WHAT IS IT
----------------
This model is an embodied, agent-based neural net. Traditionally, connectionist computational models assume that nodes in a neural net are activated instantaneously by the firing of their input neurons. This assumption has come increasingly under fire, however, because it discards potentially rich temporal channels of information such as firing rates and synchronous or asynchronous firing. The goal of this model is to develop a neural network that exploits these channels by including individual action potentials that flow at varying frequencies and phases, and to embody these variables in an agent's task-oriented behavior.

The model consists of two parts:
1) The top half of the model is a neural "network" consisting of three neurons: an input neuron and two output neurons (one for walking and one for turning).
2) The bottom half of the model embodies the above neural network in an agent performing a task.

The neural network represents the structure of the agent's neural activity as it navigates its world looking for food.

The Agent:
The agent is in a world where it can see either black patches or yellow patches. Black patches are empty; yellow patches are food (or "cheese"). The agent (or "rat") must find food in order to survive. It has a baseline energy value of SUPPLY, and loses energy at the rate of METABOLISM per clock tick. When it finds food, it increases its energy by FOODVALUE. If the agent's energy value drops to 0, it dies.

The Neural Network:
The behavior of the input neuron is determined by the rat's visual field. If the rat sees only black patches in front of it, the input neuron is not excited, and fires at a slow pace of one action potential (or impulse) every eight ticks. If the rat sees a yellow patch in any of the three spaces in front of it, the input neuron is excited, and fires at a fast rate of one impulse every four ticks.
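The input neuron's rate coding and the rat's energy budget can be sketched as follows. This is an illustrative sketch, not the model's own code: the class name, the `sees_food` parameter, and the particular values of SUPPLY, METABOLISM, and FOODVALUE are all assumptions.

```python
# Illustrative sketch of the input neuron's rate coding and the rat's
# energy bookkeeping. Values for SUPPLY, METABOLISM, and FOODVALUE
# are stand-ins, not the model's actual slider settings.

SUPPLY = 100      # starting energy
METABOLISM = 1    # energy lost per clock tick
FOODVALUE = 20    # energy gained per piece of cheese

class InputNeuron:
    """Fires one impulse every 8 ticks at rest, every 4 when excited."""
    def __init__(self):
        self.count = 0

    def step(self, sees_food: bool) -> bool:
        period = 4 if sees_food else 8   # excitation doubles the firing rate
        self.count += 1
        if self.count >= period:
            self.count = 0
            return True                  # impulse emitted this tick
        return False

# The rat loses METABOLISM each tick and dies when energy reaches 0:
energy = SUPPLY
neuron = InputNeuron()
for tick in range(12):
    neuron.step(sees_food=False)
    energy -= METABOLISM
```

Note that the only signal the output neurons ever receive is the timing of these impulses; whether food is visible is communicated purely through the firing rate.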
HOW TO USE IT
----------------------
WALKFREQUENCY: This value is the count used to determine the rate at which the walk neuron increases its own charge. The higher this value, the longer it takes for the walk neuron to charge up. When the walk neuron's internal count reaches this value, it 1) increases its charge by WALKUPVALUE, and 2) resets its count to 0.

WALKTHRESHOLD: This is the charge value that the walk neuron needs to reach in order to fire. If the walk neuron's charge is at or above this threshold, then 1) the walk neuron fires, 2) the agent walks one step forward, and 3) the walk neuron's charge is reset to 0.

WALKUPVALUE: This is the value by which the walk neuron increases its charge each time its internal count reaches the frequency value.

WALKRESPONSIVENESS: This is the value by which the walk neuron changes its charge each time it receives an impulse from the input neuron. This value can be positive (excitatory - increasing the charge value) or negative (inhibitory - decreasing the charge value).

TURNFREQUENCY: This value is the count used to determine the rate at which the turn neuron increases its own charge. The higher this value, the longer it takes for the turn neuron to charge up. When the turn neuron's internal count reaches this value, it 1) increases its charge by TURNUPVALUE, and 2) resets its count to 0.

TURNTHRESHOLD: This is the charge value that the turn neuron needs to reach in order to fire. If the turn neuron's charge is at or above this threshold, then 1) the turn neuron fires, 2) the agent turns one step, and 3) the turn neuron's charge is reset to 0.

TURNUPVALUE: This is the value by which the turn neuron increases its charge each time its internal count reaches the frequency value.

TURNRESPONSIVENESS: This is the value by which the turn neuron changes its charge each time it receives an impulse from the input neuron.
This value can be positive (excitatory - increasing the charge value) or negative (inhibitory - decreasing the charge value).

DENSITY: The density of the food in the agent's cheese-world.

SUPPLY: The initial amount of energy that the agent starts with.

METABOLISM: The amount of energy the agent loses each tick.

FOODVALUE: The amount of energy the agent gets for finding a piece of cheese.

Each time an agent dies, the screen resets, and a new set of properties is randomly assigned to a new agent.

THINGS TO TRY:
----------------------------
Try to find a generalized search strategy for the rat: under what properties of its neurons will it have the best success finding food and surviving? Originally, this model had a genetic algorithm that evolved a search strategy. To make this model more interesting, we have eliminated the genetic algorithm, leaving the parameter space open for the user to explore which neural properties lead to success for the rat. There are many solutions to this problem, but they all hold within a constant ratio: the relative values of the turn and walk neurons will always be constant within some range. (HINT: You want the rat to be inhibited from turning when it is excited by seeing cheese, and you want it to move forward, when excited, to get the cheese. When it doesn't see cheese - when the input neuron is firing slowly - you want the rat to walk and turn in a spiraling search pattern.)

THINGS TO NOTICE:
----------------------------
The most important dimension of this model is the interaction between the agent and the neural network. Consider the difference between a Hebbian network, which represents strengths as weightings between inputs and outputs, and this network, where the timing of the firing frequencies and the distances between the neurons determine the parameters of successful neural properties.
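The charge-and-fire mechanics that the frequency, up-value, threshold, and responsiveness parameters control can be sketched as follows. This is a minimal sketch under illustrative assumptions; the class and parameter names are hypothetical, not the model's own code.

```python
# Illustrative sketch of one output (walk or turn) neuron's dynamics:
# an internal count drives self-charging every `frequency` ticks, each
# impulse from the input neuron adds `responsiveness` (negative values
# are inhibitory), and the neuron fires when charge reaches `threshold`.

class OutputNeuron:
    def __init__(self, frequency, upvalue, threshold, responsiveness):
        self.frequency = frequency            # ticks between self-charge steps
        self.upvalue = upvalue                # charge added per self-charge step
        self.threshold = threshold            # charge needed to fire
        self.responsiveness = responsiveness  # charge change per input impulse
        self.count = 0
        self.charge = 0

    def step(self, input_impulse: bool) -> bool:
        self.count += 1
        if self.count >= self.frequency:      # self-charging clock ticks over
            self.charge += self.upvalue
            self.count = 0
        if input_impulse:                     # excitatory or inhibitory input
            self.charge += self.responsiveness
        if self.charge >= self.threshold:     # fire: drive behavior, reset
            self.charge = 0
            return True
        return False
```

With a negative responsiveness on the turn neuron, fast input firing (cheese in view) suppresses turning while a positively responsive walk neuron drives the rat forward, which matches the hint under THINGS TO TRY.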
Building a biologically plausible neural net requires taking time and space into account as 'channels of information' in the structure of an agent's interaction with its environment.

EXTENDING THE MODEL:
------------------------------------
There are many different avenues for extending this neural net. You can try to design your own genetic algorithm, or you can introduce more neurons with different properties and try to find values for these neurons that correspond to successful behavior in the agent. As a first exercise, try making the positions of the neurons slider variables, and see how their locations affect the parameter space for a successful neural network.

REFERENCES AND CREDITS:
-----------------------
This model was originally designed as a genetic algorithm that evolved a generalized search strategy for finding randomly distributed food on a grid; it was part of a three-week research project undertaken at the Santa Fe Institute by Sean McClelland and Damon Centola. The values for the above parameters were assigned by a randomly selected 26-bit bit-string. These values determined the behavior of the rat, and thus its success in finding food. After ten different rats, with ten different randomly selected genomes, ran through the cheese-world, a weighted lottery selected pairs for crossing at randomly selected cross-over axes. The resulting offspring represented new genome structures that yielded new values for the neural parameters. Ultimately, the model evolved an agent that could search with high success, at the trained density, on a randomly distributed food grid.
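If you take up the suggestion of designing your own genetic algorithm, the procedure described above - 26-bit genomes, a fitness-weighted lottery, and crossover at a randomly selected axis - can be sketched as below. This is a hypothetical reconstruction, not the original GA code; the population size and fitness values are stand-ins.

```python
# Hypothetical sketch of the kind of GA described in the credits:
# 26-bit bit-string genomes, parent pairs drawn by a fitness-weighted
# lottery, and single-point crossover at a random axis.
import random

GENOME_BITS = 26  # bit-string length from the original project

def random_genome():
    return [random.randint(0, 1) for _ in range(GENOME_BITS)]

def crossover(a, b):
    """Cross two genomes at a randomly selected axis."""
    axis = random.randrange(1, GENOME_BITS)
    return a[:axis] + b[axis:], b[:axis] + a[axis:]

def next_generation(population, fitnesses):
    """Weighted lottery: fitter rats are more likely to be crossed."""
    offspring = []
    while len(offspring) < len(population):
        a, b = random.choices(population, weights=fitnesses, k=2)
        offspring.extend(crossover(a, b))
    return offspring[:len(population)]
```

Fitness here would be each rat's success at finding food (for example, ticks survived) measured by running it through the cheese-world before selection.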