This model is designed as a "thought experiment" to shed light on the ongoing debate between two theories of learning, constructivism and social constructivism. In it, agents "learn" through playing a game either as individuals, social interactors, or both.
This model of "learning through playing" was created with the following objectives:
to demonstrate the viability of agent-based modeling (ABM) for examining socio/developmental-psychological phenomena;
to illustrate the potential of ABM as a platform enabling discourse and collaboration between psychologists with different theoretical commitments;
to visualize the complementarity of Piagetian and Vygotskiian explanations of how people learn.
The design problem for this model-based thought experiment was to create a single environment in which we could simulate both “Piagetian” and “Vygotskiian” learning.
We chose to model a game in which contestants all stand behind a line and each rolls a marble, trying to land it as close as possible to a target line some "yards" away (30 NetLogo patches). The players stand in a row and each roll a marble at the target line. Some players undershoot the line; some overshoot it. Players collect their marbles, adjust the force of their roll, and, on a subsequent trial, improve on their first; they have "learned" as individuals.
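The mechanics of a single attempt can be sketched as follows. This is a hypothetical Python translation of the game described above, not the model's actual NetLogo code; the names `attempt` and `TARGET` and the exact noise rule are assumptions:

```python
import random

TARGET = 30  # patches from the throwing line to the target line

def attempt(force, move_error):
    """Roll a marble with a given force, perturbed by execution noise.

    Returns the landing position and the signed error relative to the
    target (positive = overshoot, negative = undershoot). Hypothetical
    sketch of the mechanics described above, not the model's code."""
    landing = force + random.uniform(-move_error, move_error)
    return landing, landing - TARGET

# With no noise, a player who rolls with force 35 overshoots by 5 patches:
landing, error = attempt(35, 0)  # landing == 35.0, error == 5.0
```

The signed error is the "feedback" a learning strategy can act on: a positive error tells the player to roll more softly next time, a negative one to roll harder.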
To show the difference between the “Piagetian” and “Vygotskiian” perspectives, we focus on the differential emphases they put on the contribution of the social milieu to individual learning.
We simulated four learning strategies:
“Random”: a control condition, in which players’ achievement does not inform their performance on subsequent attempts, unless they are exactly on target.
“Piagetian”: players learn only from their own past attempts.
“Vygotskiian”: players learn only by watching other players nearby, not from their own attempts.
“Piagetian–Vygotskiian”: players learn from both their own and a neighbor’s performance.
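The four update rules above might be sketched like this. This is a hypothetical Python rendering; the function name, the imitation rule, the random range, and the exact ZPD comparison are assumptions based on the descriptions, not the model's actual code:

```python
import random

def next_force(strategy, my_force, my_error, neighbors, zpd):
    """Pick the force for a player's next attempt under each learning strategy.

    `my_error` is the signed distance from the target on the last attempt;
    `neighbors` is a list of (force, error) pairs for visible neighbors.
    The mechanics are illustrative guesses, not the model's actual code."""
    best = min(neighbors, key=lambda fe: abs(fe[1])) if neighbors else None

    if strategy == "Random":
        # Achievement does not inform the next attempt unless exactly on target.
        return my_force if my_error == 0 else random.uniform(0, 60)  # assumed range
    if strategy == "Piagetian":
        # Learn only from one's own last attempt: correct by the signed error.
        return my_force - my_error
    if strategy == "Vygotskiian":
        # Learn only from a nearby neighbor, and only when the gap between the
        # two performances falls within the zone of proximal development (ZPD).
        if best and abs(abs(best[1]) - abs(my_error)) <= zpd:
            return best[0]
        return my_force  # no usable neighbor: no learning on this attempt
    if strategy == "Piagetian-Vygotskiian":
        # Both sources are available; imitate the neighbor only when the
        # neighbor did better and is within the ZPD, else self-correct.
        if best and abs(best[1]) < abs(my_error) and abs(abs(best[1]) - abs(my_error)) <= zpd:
            return best[0]
        return my_force - my_error
    raise ValueError(f"unknown strategy: {strategy}")
```

Note how the "Piagetian-Vygotskiian" branch reduces to one of the other two on every attempt, which is why both sources of information can contribute over the course of a run.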
To run this simulation, press SETUP and then GO. To execute the simulation one attempt at a time, press GO ONCE.
The NUMBER-OF-PLAYERS slider controls how many players there are in the game.
You can change the number of attempts that each player makes in a simulation run using the ATTEMPTS-PER-RUN slider.
To run the simulation under a particular learning mode (Random/Piagetian/Vygotskiian/Piagetian-Vygotskiian), use the STRATEGY chooser and select the strategy from its drop-down menu. Make sure the switch RANDOMIZE-STRATEGY-EACH-RUN? is set to "Off."
Adjust the value of the ZPD slider to set the maximum difference between a player's score and its selected neighbor's score that still allows the player to learn from that neighbor (applies under the Vygotskiian and Piagetian-Vygotskiian strategies).
Adjust the value of the MOVE-ERROR slider to set the level of "noise" in the players' perception and execution.
Adjust the value of the #-VYGOTSKIIAN-NEIGHBORS slider to set the number of neighbors that players see under the Vygotskiian and Piagetian-Vygotskiian strategies.
If the TRAILS? switch is "On", each player will leave a colored trail behind when they make an attempt, making it easier to see where they end up.
Set the RANDOMIZE-STRATEGY-EACH-RUN? switch to "On" in order to view multiple runs under the different conditions. This is helpful for comparing outcomes across conditions. Note that the target line, in the middle of the view, changes color to reflect the condition the simulation is running under; these are the same colors used in the graph and histogram.
If the STOP-AFTER-EACH-RUN? switch is set to "On", the simulation will stop after ATTEMPTS-PER-RUN attempts have been made.
Note that the learning process involves “feedback loops.” That is, a player’s learning—the individual “output” of a single attempt—constitutes “input” for the subsequent attempt. In the “Piagetian” condition, this is a player-specific internal loop, and in the “Vygotskiian” condition one person’s output may be another person’s input on a subsequent attempt, and so on.
Note also that over the course of a “Piagetian–Vygotskiian” run of the simulation, players might learn on one attempt from their own performance and on another from the performance of a neighbor. Both sources of information are simultaneously available for each player to act upon.
The combined “Piagetian–Vygotskiian” strategy tends to be the best one, but whether the Piagetian learning is greater than the Vygotskiian learning or vice versa depends on combinations of the settings of the parameters #-VYGOTSKIIAN-NEIGHBORS, ZPD, and MOVE-ERROR. Try to find some settings under which Piagetian works better and some settings under which Vygotskiian works better. Can you explain where the difference comes from?
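One way to search for such settings systematically is to enumerate every combination of the three parameters and run each condition under both strategies (in NetLogo itself, the built-in BehaviorSpace tool serves this purpose). A minimal sketch of such a sweep, with illustrative values rather than the model's defaults:

```python
import itertools

# Hypothetical parameter sweep; the values below are illustrative, not
# the model's defaults. Each condition would be run under both the
# Piagetian and the Vygotskiian strategy and the mean scores compared.
vygotskiian_neighbors = [1, 3, 5]
zpd_values = [2, 6, 12]
move_errors = [0, 2, 4]

runs = list(itertools.product(vygotskiian_neighbors, zpd_values, move_errors))
# 3 x 3 x 3 = 27 conditions
```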
Try to implement a modified version of the “Vygotskiian” strategy in which better-performing players adjust their next move to fall within the ZPD of a less well performing neighbor, deliberately making a play that is worse than they know how to make, in order to help that neighbor learn to play better.
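A minimal sketch of this "scaffolding" variant, assuming the helper knows the target distance and a weaker neighbor's last error (all names and mechanics here are hypothetical, not part of the model):

```python
TARGET = 30  # patches to the target line (assumed)

def helping_force(my_force, my_error, neighbor_error, zpd):
    """A better-performing player deliberately plays within the ZPD of a
    weaker neighbor so the neighbor can learn from the demonstration.
    All names and mechanics here are hypothetical."""
    if abs(neighbor_error) - abs(my_error) <= zpd:
        # The neighbor can already learn from my best play: self-correct.
        return my_force - my_error
    # Otherwise aim for an error just inside the neighbor's ZPD: a throw
    # worse than I know how to make, but close enough to be imitable.
    return TARGET + (abs(neighbor_error) - zpd)
```

For example, a player who is already near the target but whose neighbor is 20 patches off with a ZPD of 5 would aim to land 15 patches past the line, sacrificing their own score to bring a learnable demonstration within the neighbor's reach.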
This model is a modernized version of the model presented in:
The original model is available at: http://ccl.sesp.northwestern.edu/research/conferences/JPS2005/JPS2005.nlogo
For more about the additions suggested in EXTENDING THE MODEL, see:
If you mention this model or the NetLogo software in a publication, we ask that you include the citations below.
For the model itself:
Please cite the NetLogo software as:
Copyright 2005 Uri Wilensky.
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 3.0 License. To view a copy of this license, visit https://creativecommons.org/licenses/by-nc-sa/3.0/ or send a letter to Creative Commons, 559 Nathan Abbott Way, Stanford, California 94305, USA.
Commercial licenses are also available. To inquire about commercial licenses, please contact Uri Wilensky at email@example.com.