NetLogo User Community Models

Download
If clicking does not initiate a download, try right-clicking or control-clicking and choosing "Save" or "Download". (The run link is disabled because this model uses extensions.)

WHAT IS IT?

This is a one-dimensional cellular automaton explorer.

HOW IT WORKS

Each patch gets its new state from the states of its two closest neighbors and its own state. These three values are combined into a single value by a rule.

Rules can be:
1) mutated
2) changed automatically and manually
3) taken from examples

Rules are feedforward neural networks (http://en.wikipedia.org/wiki/Feedforward_neural_network) with 1-3 hidden layers of 1-20 neurons each, plus an output layer consisting of one neuron and an input layer consisting of 3 neurons.

Activation function:
y = x / (0.1 + abs x)

Patch states, neuron output values and connection weight values are all in the range [-1, 1].

Each neuron operates as follows:
1) the weighted sum of the previous layer's outputs is calculated (each input multiplied by its incoming connection weight)
2) the sum from (1) is sent through the activation function
3) the result of (2) is compared to the neuron's threshold: if (2) > the threshold, the neuron produces an output with the value (2); otherwise the output value is 0

The only special case is the input layer of the feedforward neural network, whose neurons simply pass their values through as output.
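
A rough sketch of this per-neuron computation in NetLogo (not the model's actual code; the reporter names are made up for illustration):

;; activation function: y = x / (0.1 + abs x)
to-report activate [ x ]
  report x / (0.1 + abs x)
end

;; one neuron: weighted sum -> activation -> threshold
;; inputs and weights are lists of equal length, threshold is a number
to-report neuron-output [ inputs weights threshold ]
  let weighted-sum sum (map [ [i w] -> i * w ] inputs weights)
  let activated activate weighted-sum
  ifelse activated > threshold
    [ report activated ]
    [ report 0 ]
end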

HOW TO USE IT

BASICS:

Press go to start. Press new-rules for new rules. Switch worlds by changing the world slider. Mutate the most interesting world.

IN DEPTH:

go:
start the program

show-line:
Visualizes current row of the world with a line. Doing so may give valuable information about the nature of the observed rule.

darken:
disables color updates, making the line easier to see

refresh-color:
refresh the color values of the whole screen

new-rules:
Makes completely new rules

random-values:
random patch values

random-one:
random patch value for one patch

one?:
if it is on, random-one will be used when switching/mutating worlds; otherwise random-values determines the states of the world

mutate-rules:
mutates the rules based on the current rule. If the rule stays the same, it is mutated up to 1000 times; if it is still the same after that, a random rule will be generated

mutate-previous:
mutates the previously mutated world

world slider:
switch active world and rule

examples:
Example rules. Press initiate-example to view some of them.

setup:
Restarts the model. There should be no need to press this unless you encounter unexpected errors.

clear-turtles:
Clears all turtles. This is useful when the neural network visualization fails for some reason.

get/set:
get the current rule into, or set the current rule from, the rule input field.

patch-state:
the current value of the patch where the mouse is located

continue?:
whether to continue when the bottom of the screen is reached

visualize-network:
Visualizes the currently used neural network. The network is colored according to threshold and weight values: values above 0 are green and values below 0 are blue, becoming whiter as the absolute value increases. Neurons are also shifted on the y-axis according to their threshold values.

NetLogo colors explained:
http://ccl.northwestern.edu/netlogo/docs/programming.html#colors

NetLogo math explained:
http://ccl.northwestern.edu/netlogo/docs/programming.html#math

vision-level and color-offset:
patches get their color from:
state * (70 + color-offset) * 10 ^ vision-level
Using a different vision level may reveal the true nature of some patterns. It may also reveal patterns that emerge only at some vision levels and stay completely hidden at others. Patterns that look similar at one vision level may not be similar at all at another, and vice versa.
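
A minimal sketch of this coloring, assuming each patch stores its state in a patches-own variable called state (the model's actual variable name may differ); NetLogo wraps color values outside 0-140 back into range:

ask patches [
  set pcolor state * (70 + color-offset) * 10 ^ vision-level
]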

unit-change-strength:
the absolute change applied to a single neuron threshold or connection weight during mutation, drawn from an exponential distribution

%-of-units-to-change:
the percentage of units to change in a single layer of the neural network, drawn from an exponential distribution. Here "single layer" refers to either a weight layer or a threshold layer.
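
As a hedged illustration of such an exponentially distributed change applied to one unit (using NetLogo's random-exponential; the model's own mutation code may differ):

;; sketch: mutate a single threshold or weight value
;; the change magnitude is exponentially distributed, its sign is random,
;; and the result is kept inside [-1, 1]
to-report mutate-unit [ value ]
  let change random-exponential unit-change-strength
  let signed-change ifelse-value (random 2 = 0) [ change ] [ (- change) ]
  report max (list -1 (min (list 1 (value + signed-change))))
end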

table:length data gives a valuable quick overview of a rule. Different values make it possible to predict something about the nature of a rule.

The table ditch point determines the algorithm choice. If table:length data > ditch-table, the tables are no longer filled. The data already stored will still be used when possible. This results in a minor slowdown, but the overall benefit seems to outweigh ditching table usage completely.

In this model a table with one million entries takes about 122 MB of memory.

fps:
frames per second

operation-on-rule:
all values in the rule field (neuron thresholds and connection weights) are sent through the "formula". The formula must be a valid NetLogo math expression. See http://ccl.northwestern.edu/netlogo/docs/dictionary.html#mathematicalgroup for help.
In the formula:
z is the current value from the feedforward neural network
a, b and c are constants

All operations are done on the neural network that is in the rule field. After each operation all the rules will be replaced with the resulting rule and mutations of it.

omit-zero:
when omit-zero is "on", values of 0 are left unchanged. For example, with the formula z + 0.1, a value of 0 stays 0, while non-zero values become z + 0.1.
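
A minimal sketch of mapping a formula such as z + 0.1 over a list of rule values with omit-zero handling (rule-values and the hard-coded formula are assumptions for illustration):

;; sketch: apply the formula z + 0.1 to every threshold/weight value,
;; leaving zeros untouched when omit-zero is on
to-report apply-formula [ rule-values ]
  report map [ z ->
    ifelse-value (omit-zero and z = 0)
      [ 0 ]
      [ z + 0.1 ]
  ] rule-values
end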

restrict-to-range?:
Patch states, neuron output values and connection weight values are all in the range [-1, 1]. When restrict-to-range? is "on", automatic operations restrict the values resulting from operation-on-rule to that range. When restrict-to-range? is "off", the values are looped within that range instead.

restrict-to-range input min max
observer> show restrict-to-range 1.01 -1 1
observer: 1
observer> show restrict-to-range 0.56347347 -1 1
observer: 0.56347347
observer> show restrict-to-range -50000000000 -1 1
observer: -1

loop-in-range input min max
observer> show loop-in-range 0.5 -1 1
observer: 0.5
observer> show loop-in-range 1.1 -1 1
observer: -0.8999999999999999
observer> show loop-in-range 1.2 -1 1
observer: -0.8
observer> show loop-in-range 2 -1 1
observer: 0
observer> show loop-in-range -400.335 -1 1
observer: -0.33499999999997954
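
restrict-to-range and loop-in-range are part of the model; the reporters below are only a hedged reconstruction of what they might look like, based on the outputs above:

;; sketch: clamp a value into [min-val, max-val]
to-report restrict-to-range [ x min-val max-val ]
  report max (list min-val (min (list max-val x)))
end

;; sketch: wrap a value around into [min-val, max-val)
to-report loop-in-range [ x min-val max-val ]
  let span max-val - min-val
  report min-val + ((x - min-val) mod span)
end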

zero-border:
If it is "off" all values will be restricted/looped in range [-1,1]. If it is on then negative values will be restricted/looped in range [-1,0] and positive in range [0,1].

Examples with operation-on-rule (after loading the model with default settings for formula and related fields):
1) initiate-example 25
2) start the model with go (in order to see the transition)
3) press operation-on-rule

or
1) initiate-example 3
2) start the model with go
3) press operation-on-rule 5 times
4) press operation-on-rule 6 more times
5) press operation-on-rule 3 more times

THINGS TO NOTICE

Notice how very simple neural networks can produce very complicated behaviour. Try visualize-network on example rules 12 and 13.

THINGS TO TRY

Play with color-offset and vision-level while the model is running to understand how exactly they work.

NETLOGO FEATURES

Each patch on the current row checks whether the table contains a key corresponding to the states of its two closest neighbors and its own state, in the form of a three-element list (for example [0.1513513613681 -0.30628268623 0]), to avoid recalculating the value with the neural network. The tables grow very large, yet this lookup does not become any slower.
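
A rough sketch of this memoization pattern with the table extension (evaluate-rule here is a hypothetical stand-in for the model's neural-network evaluation):

extensions [ table ]
globals [ data ]   ;; data is the lookup table mentioned above

to setup-table
  set data table:make
end

;; sketch: report the new state for a 3-element key [left self right],
;; computing it only when it is not in the table yet
to-report cached-state [ key ]
  if table:has-key? data key [ report table:get data key ]
  let result evaluate-rule key
  table:put data key result
  report result
end

;; hypothetical stand-in for the model's neural-network rule
to-report evaluate-rule [ key ]
  let s sum key
  report s / (0.1 + abs s)   ;; placeholder only
end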

RELATED MODELS

2d totalistic CA explorer
http://ccl.northwestern.edu/netlogo/models/community/totalistic2dCA
Netlogo models library:
Cellular automata group
Artificial Neural Net
