## NetLogo User Community Models

## 1dCAexplorer

by suslik (Submitted: 02/15/2011)
This is a one-dimensional cellular automaton explorer.
Each patch gets its new state from the states of its two closest neighbors and its own state. These three values are combined into a single value by a rule.
Rules are feedforward neural networks (http://en.wikipedia.org/wiki/Feedforward_neural_network) with an input layer of 3 neurons, 1-3 hidden layers of 1-20 neurons each, and an output layer consisting of a single neuron.
Activation function:
Patch states, neuron output values and connection weight values all lie in the range [-1, 1].
Each neuron operates as follows:
The only special case is the input layer of the feedforward neural network, whose neurons simply pass their input through as output.
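The description above can be sketched in Python. This is a minimal illustration, not the model's actual code: the activation function is not named in the text, so tanh is assumed here because it keeps outputs in the stated [-1, 1] range, and `make_network` / `evaluate` are hypothetical helper names.

```python
import math
import random

def make_network(hidden_sizes):
    """Random weights for a net with 3 inputs, the given hidden layer
    sizes (each 1-20 in the model), and one output neuron.  Weights are
    drawn from [-1, 1], matching the range stated above."""
    sizes = [3] + list(hidden_sizes) + [1]
    return [[[random.uniform(-1, 1) for _ in range(sizes[i])]
             for _ in range(sizes[i + 1])]
            for i in range(len(sizes) - 1)]

def evaluate(network, inputs):
    """Feed the three patch states (left neighbor, self, right neighbor)
    forward.  Input-layer neurons just pass values through; every other
    neuron sums its weighted inputs and applies tanh (assumed)."""
    values = list(inputs)
    for layer in network:
        values = [math.tanh(sum(w * v for w, v in zip(weights, values)))
                  for weights in layer]
    return values[0]

random.seed(1)
net = make_network([5, 5])                 # 2 hidden layers of 5 neurons
new_state = evaluate(net, [0.1, -0.3, 0.0])
```

Because tanh maps any sum back into (-1, 1), repeated rule applications can never push a patch state out of range.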
Press go to start. Press new-rules for new rules. Switch worlds by changing the world slider. Mutate the most interesting world.
go:
show-line:
darken:
refresh-color:
new-rules:
random-values:
random-one:
one?
mutate-rules:
mutate-previous:
world slider:
examples:
setup:
clear-turtles:
get/set:
patch-state:
continue?:
visualize-network:
NetLogo colors explained:
NetLogo math explained:
vision-level and color-offset:
unit-change-strength:
%-of-units-to-change:
table:length data is a valuable input for a quick overview of a rule. Different values enable making some predictions about the nature of a rule.
The table ditch point determines the algorithm choice: once table:length data > ditch-table, no new entries are added to the table. Existing entries are still used where possible. This causes a minor slowdown, but the overall benefit seems to outweigh abandoning table usage completely.
In this case a table with 1 million entries occupies about 122 MB of memory (on the order of 128 bytes per entry).
fps:
operation-on-rule:
All operations are applied to the neural network in the rule field. After each operation, all rules are replaced with the resulting rule and mutations of it.
omit-zero:
restrict-to-range?:
restrict-to-range input min max
loop-in-range input min max
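The two reporters above can be sketched as follows; this is an assumption about their semantics based on their names (clamping versus wrapping), not code taken from the model:

```python
def restrict_to_range(x, lo, hi):
    """Clamp x into [lo, hi] (assumed meaning of restrict-to-range)."""
    return max(lo, min(hi, x))

def loop_in_range(x, lo, hi):
    """Wrap x around into [lo, hi) (assumed meaning of loop-in-range)."""
    span = hi - lo
    return lo + ((x - lo) % span)
```

For example, with the model's state range [-1, 1], an out-of-range value 1.5 would clamp to 1 but wrap to -0.5.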
zero-border:
Examples with operation-on-rule (after loading the model with default settings for formula and related fields):
or
Notice how very simple neural networks can produce very complicated behaviour. Try visualize-network on example rules 12 and 13.
Play with color-offset and vision-level while the model is running to understand how exactly they work.
Each patch on the current row checks whether the table contains a key corresponding to the states of its two closest neighbors and its own state, in the form of a 3-element list (for example [0.1513513613681 -0.30628268623 0]), to avoid recalculating the value with the neural network. Tables grow very large, yet this lookup does not become any slower.
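The memoization scheme above can be sketched with a Python dict standing in for the NetLogo table (dict lookup stays O(1) as the table grows, matching the observation that lookups do not slow down). `lookup_state` and the stand-in rule are hypothetical names, and the ditch-table cutoff mirrors the behaviour described earlier:

```python
def lookup_state(table, neighborhood, rule, ditch_table=1_000_000):
    """Return the new state for a 3-element neighborhood tuple, caching
    results.  Past `ditch_table` entries, the table stops growing but
    existing entries are still consulted."""
    key = tuple(neighborhood)
    if key in table:
        return table[key]
    value = rule(*key)
    if len(table) < ditch_table:
        table[key] = value
    return value

cache = {}
rule = lambda a, b, c: (a + b + c) / 3   # stand-in rule, not the model's
v1 = lookup_state(cache, (0.1513513613681, -0.30628268623, 0), rule)
v2 = lookup_state(cache, (0.1513513613681, -0.30628268623, 0), rule)
```

The second call hits the cache, so the (possibly expensive) rule runs only once per distinct neighborhood.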
2d totalistic CA explorer
