NetLogo User Community Models
## WHAT IS IT?
The model implements a spreading-activation algorithm over networks of varying topologies. In particular, it displays the evolution of the correlation between activation and two micro-level network properties: degree and clustering coefficient. This spreading-activation algorithm has been used in prior psycholinguistic research to explain human behavior on a wide variety of tasks involving speech perception and production.
However, this prior research has focused primarily on predictive models, examining the dispersion of activation at the conclusion of a preselected number of time steps. Our present goal is to explain and describe the interaction between the network's topology and the spreading-activation algorithm, rather than to measure the algorithm's performance on specific complex networks and compare its output to human behavior. The NetLogo implementation, and the broader paradigm of agent-based modeling, puts the system's state space and its dynamics front and center.
## HOW IT WORKS
There are only two kinds of agents, those used in classic network studies: nodes and links. At setup, one node (the "target node") receives 100 points of activation and each other node receives 0. "Random" networks are implemented by the Gilbert (1959) model. "Small World" networks are implemented by the Watts-Strogatz (1998) model. "Preferential Attachment" networks are implemented by the Barabási-Albert (1999) model. Users may also load in their own networks using files containing the adjacency matrix.
Each tick, the system begins by calculating the amount of activation each node will emit that tick. Once that calculation is completed, nodes then split that amount evenly among their neighbors. If the node has no neighbors, the activation disappears from the system. Finally, each node decays in activation. For details, see Vitevitch, Ercal, & Adagarla (2011) and Siew (2019).
The model converges when the summed change in activation over the course of a tick is less than 1e-10, defined as "epsilon" in the code.
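The per-tick update described above can be summarized in a language-agnostic sketch. The following Python snippet is purely illustrative (the model itself is written in NetLogo); the toy graph and parameter values are made up for the example:

```python
def tick(adj, activation, retention, decay):
    """One tick of spreading activation.

    adj: dict mapping node -> list of neighbor nodes
    activation: dict mapping node -> current activation
    Returns (new_activation, total_change).
    """
    # Each node first computes how much it will emit this tick.
    emitted = {n: activation[n] * (1 - retention) for n in adj}
    new = {n: activation[n] * retention for n in adj}
    # Emitted activation is split evenly among neighbors;
    # a node with no neighbors loses its emission from the system.
    for n, out in emitted.items():
        if adj[n]:
            share = out / len(adj[n])
            for m in adj[n]:
                new[m] += share
    # Decay is applied only after all spreading is done.
    new = {n: a * (1 - decay) for n, a in new.items()}
    total_change = sum(abs(new[n] - activation[n]) for n in adj)
    return new, total_change

# Toy 3-node path graph 0 - 1 - 2, with all activation on node 0.
adj = {0: [1], 1: [0, 2], 2: [1]}
act = {0: 100.0, 1: 0.0, 2: 0.0}
act, change = tick(adj, act, retention=0.5, decay=0.0)
```

With decay = 0 and no isolated nodes, the total activation in the system is conserved; only its distribution changes from tick to tick.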
## HOW TO USE IT
Users can use the 'filename' input to interact with the "Read Network" setup option and the "Save Network" button. When using this option, all parameters are ignored; the network is simply a faithful copy of the adjacency matrix stored in the file. See the "Reading and Writing Files" section for more information.
The population slider determines the number of nodes in the network.
The neighborhood-size slider is inert in the random network. In the small-world network, it describes the number of nodes (in each direction) that each node will be neighbors with, prior to rewiring. In the preferential attachment network, it determines the number of nodes that each node will attach to when introduced to the network.
In the random network, the p slider determines the probability that any given pair of nodes is connected by a link. In the small-world network, it determines the probability that any given link is rewired. In the preferential attachment network, it is inert.
The retention slider determines the amount of activation each node retains on each tick. For example, on the first tick, the target node multiplies its 100 points of activation by the retention parameter. It then divides the remaining points equally among its neighbors.
The decay parameter determines the amount of activation the system will leak over time. Each tick, after all spreading actions occur, each node sets its activation value to its current activation value multiplied by (1 - decay).
The first plot measures the total change in the distribution of activation over time. Each node measures its change as abs(activation at beginning of tick - activation at end of tick). The total change is the sum of the change measured by each node. When total change reaches (effectively) 0, the model has converged.
The second plot measures the correlation of activation with two micro-level network metrics: degree and C(x). At convergence, activation and degree are perfectly correlated, with high-degree nodes holding more activation than low-degree nodes.
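For reference, the two predictors can be computed directly from the adjacency structure. The following Python sketch is illustrative only (the model itself uses the stats extension for correlations), and the toy graph is made up:

```python
from itertools import combinations

def degree(adj, n):
    return len(adj[n])

def clustering(adj, n):
    """Local clustering coefficient C(x): the fraction of a node's
    neighbor pairs that are themselves linked."""
    nbrs = adj[n]
    if len(nbrs) < 2:
        return 0.0
    links = sum(1 for a, b in combinations(nbrs, 2) if b in adj[a])
    return links / (len(nbrs) * (len(nbrs) - 1) / 2)

def pearson(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy graph: nodes 0, 1, 2 form a triangle; node 3 hangs off node 2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```

In this toy graph, nodes 0 and 1 have C(x) = 1 (their two neighbors are linked), node 2 has C(x) = 1/3, and node 3, having a single neighbor, gets C(x) = 0 by convention.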
## READING AND WRITING FILES
The model allows users to read networks from adjacency matrices using the nw:load-matrix command. The corresponding adjacency matrix must be in the same folder as the NetLogo model, unless the user has set a different working directory using the Command Center. To load the file, the user must enter its name into the Input object in the Interface titled 'filename'. The extension (.txt) must be included.
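As an illustration of the expected input, a small adjacency matrix file for a 3-node path graph might look like the following (the filename 'path3.txt' is hypothetical; rows are whitespace-separated, one per line, and the matrix should be symmetric for an undirected network; see the nw documentation linked below for exact formatting details):

```
0 1 0
1 0 1
0 1 0
```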
The model also allows the user to save generated networks using nw:save-matrix. The file will be created in the same folder as the NetLogo model, or in the working directory if this has been changed by the user. This allows the user to perform multiple runs on the same networks, even using BehaviorSpace.
For technical notes, see [the documentation for the nw extension](https://ccl.northwestern.edu/netlogo/docs/nw.html).
## THINGS TO NOTICE
First, note that the underlying spreading-activation algorithm is deterministic. This means that, holding the spreading-activation parameters constant, all between-run variance is driven entirely by the process of network generation.
The most notable finding is that, for any given (connected) topology, there is a single attractor state whose basin of attraction is the entire state space of the system. In that state, each node's activation value is fully described by a positive linear function of that node's degree. This entails that the distribution of activation is perfectly described by the network's degree distribution. In other words, neither C(x) nor the network's higher-level topological properties have any impact on the convergence point.
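This claim is easy to check numerically. The sketch below is illustrative Python rather than the model's NetLogo code, with a made-up connected graph and parameter values; it iterates the update rule with decay = 0 until the total change falls below epsilon, at which point activation divided by degree is the same constant for every node:

```python
def spread_to_convergence(adj, start, retention, epsilon=1e-10):
    """Iterate spreading activation (decay = 0) until the summed
    per-tick change drops below epsilon; return final activations."""
    act = {n: 100.0 if n == start else 0.0 for n in adj}
    while True:
        new = {n: act[n] * retention for n in adj}
        for n in adj:
            # Split the emitted share evenly among neighbors.
            for m in adj[n]:
                new[m] += act[n] * (1 - retention) / len(adj[n])
        change = sum(abs(new[n] - act[n]) for n in adj)
        act = new
        if change < epsilon:
            return act

# Connected graph with unequal degrees: a triangle with a tail.
# Degrees are 2, 2, 3, 1 (sum 8), so the attractor assigns
# 100 * degree / 8 to each node: 25, 25, 37.5, 12.5.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
act = spread_to_convergence(adj, start=0, retention=0.5)
ratios = [act[n] / len(adj[n]) for n in adj]  # all (nearly) equal
```

Note that the retention value never appears in the description of the fixed point, matching the observation below that the attractor is independent of retention.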
Moreover, the system smoothly approaches this point without fluctuations, with rare exceptions when retention is especially low. The system's convergence is visualized in the Activation Values Predictors chart, where the attractor state is represented by a correlation of 1 between activation value and degree. Convergence is especially quick in dense networks, and the steady path toward it is the main driver of system dynamics at every point. This is in conflict with prior research, which has claimed that, in substantial sections of the state space, high values for both degree and C(x) have robust negative causal impacts on a node's activation value.
Note that this description of the state space is conditional on the decay parameter being set to 0, as is typical in this research. That will be a running assumption in our discussion here. Note also that this attractor state can be fully described without referencing the retention value, and is therefore independent of it. Because the basin of attraction is the entire state space, the convergence point is also independent of the initial distribution of activation.
Further, note that there is significant variation in the between-run effects of C(x), even holding the network-construction method constant. This implies that the direction and strength of the effect of C(x) depend on the network's particulars, and cannot be reduced to facts about the construction method. For example, holding p constant, different random networks will have different trajectories for the effect of C(x). This is because different networks will have different correlations between C(x) and degree, which serves as a confound.
Finally, note that when using bipartite networks, each node necessarily has C(x) = 0. Thus, the C(x) predictor is not plotted.
## THINGS TO TRY
The effect of network structure on resting-state activation values is most clear when the population divides 100 evenly. This is simply because the initial state arbitrarily assigns 100 units of activation to one node. These 100 units divide up evenly on regular networks, such as a ring or complete network.
Create a complete network using the random model with p = 1; create a ring using the small-world model with p = 0. Vary the topology slightly by offsetting p from 0 (on the ring) or from 1 (on the complete network), and observe the effects on the dynamics and endpoint.
Play around with the "labels?" switch and choose a preferred visualization. It is recommended that labels be present when the population is low, and absent when the population is high.
Load in complex networks of your choice (see nw:load-matrix for usage). Note that the target node will always be node 0, represented by the first line of the adjacency matrix.
Save networks generated here, then load them in to examine the same network under many parameters.
## EXTENDING THE MODEL
The primary function of this implementation of the spreading-activation algorithm is not to develop a novel model, but rather to use the agent-based modeling framework to more closely examine claims that have been made using other implementations. Thus, extending the present model is not especially fruitful. Rather, we should analyze whether the dynamics of the current model can explain human behavior. If not, what is needed is a novel model of the task in question, not an extension of the present model.
There are some small extensions of the model that do not change the fundamental assumptions. For example, new topologies could be used to further explore how network structure interacts with spreading activation. These new topologies could include features common in the Network Science literature but not implemented presently, including links that are one-directional and/or weighted.
Additionally, the model could be extended so that the user has more control over the initial distribution of activation. While the interface does not presently support custom initial distributions, they can be set up using the Command Center. For example, the following code "resets" the activation values, with the new initialization splitting the 100 points of activation evenly between the first two nodes:
ask turtles [ set activation 0 ]
ask turtles with [ who < 2 ] [ set activation 50 ]
## NETLOGO FEATURES
The model uses the nw extension to create, read, and write networks, and it uses the stats extension to measure correlations.
## RELATED LITERATURE
This model furthers research on spreading activation first put forward in the domain of Cognitive Network Science. This NetLogo model implements a traditional spreading-activation algorithm first described in Vitevitch, Ercal, & Adagarla (2011) and implemented in the R package {spreadr} (Siew, 2019). This previous research appeals to this spreading-activation algorithm to explain the effects of network-level metrics on human behavior. Specifically, it has been argued that the structure of lexical similarity networks, which are constructed using the edit distance metric (Vitevitch, 2008; Arbesman, Strogatz, & Vitevitch, 2010), can affect speech perception and production through various network-level properties, and that the spreading-activation mechanism can explain several of these effects. For a review, see Vitevitch (2021).
It is a well-known fact that speech perception is facilitated when the stimulus word is relatively distinct from other words (Luce & Pisoni, 1998; Vitevitch, Stamer, & Sereno, 2008). For example, a word like "back," which has many similar-sounding words, is likely to be harder to perceive than a word such as "bag," which has relatively few similar-sounding words. This is traditionally interpreted as evidence that mental representations of lexical items compete with one another during spoken word recognition, possibly via inhibitory links. In terms of lexical similarity networks, this means that having a high degree impedes processing.
Further, Chan & Vitevitch (2009) found that words with a low clustering coefficient, or C(x), were more easily recognized, even when controlling for degree and other relevant psycholinguistic variables. They proposed a verbal model whereby words with low C(x), relative to words with high C(x), stood out more prominently relative to competing neighbor nodes due to the quick diffusion of activation beyond the local neighborhood. Chan & Vitevitch (2010) found a parallel effect on speech production.
Vitevitch, Ercal, & Adagarla (2011) argued that a spreading-activation algorithm explained these effects. They did so by isolating the two-hop neighborhoods of the nodes representing the stimuli in the experiments of Chan & Vitevitch (2009) and simulating spreading activation using those nodes as the target nodes. These findings are contested by simple observations using the present implementation of the spreading-activation mechanism.
Siew (2019) implemented the same spreading-activation algorithm in an R package titled spreadr. This NetLogo model allows the user to more closely examine what's going on under the hood of these algorithms.
## RELATED MODELS
### Network Related
### Diffusion Related
## CREDITS AND REFERENCES
Arbesman, S., Strogatz, S. H., & Vitevitch, M. S. (2010). The structure of phonological networks across multiple languages. International Journal of Bifurcation and Chaos, 20(03), 679-685.
Barabási, A. L., & Albert, R. (1999). Emergence of scaling in random networks. Science, 286(5439), 509-512.
Chan, K. Y., & Vitevitch, M. S. (2009). The influence of the phonological neighborhood clustering coefficient on spoken word recognition. Journal of Experimental Psychology: Human Perception and Performance, 35(6), 1934.
Chan, K. Y., & Vitevitch, M. S. (2010). Network structure influences speech production. Cognitive Science, 34(4), 685-697.
Gilbert, E. N. (1959). Random graphs. The Annals of Mathematical Statistics, 30(4), 1141-1144.
Luce, P. A., & Pisoni, D. B. (1998). Recognizing spoken words: The neighborhood activation model. Ear and Hearing, 19(1), 1.
Siew, C. S. (2019). spreadr: An R package to simulate spreading activation in a network. Behavior Research Methods, 51(2), 910-929.
Vitevitch, M. S. (2008). What can graph theory tell us about word learning and lexical retrieval? Journal of Speech, Language, and Hearing Research, 51(2), 408-422.
Vitevitch, M. S., Ercal, G., & Adagarla, B. (2011). Simulating retrieval from a highly clustered network: Implications for spoken word recognition. Frontiers in Psychology, 2, 369.
Vitevitch, M. S., Stamer, M. K., & Sereno, J. A. (2008). Word length and lexical competition: Longer is the same as shorter. Language and Speech, 51(4), 361-383.
Watts, D. J., & Strogatz, S. H. (1998). Collective dynamics of 'small-world' networks. Nature, 393(6684), 440-442.
All correspondence related to this model should be written to LeoNiehorsterCook@gmail.com. 