NetLogo User Community Models

(The run link is disabled for this model because it was made in a version prior to NetLogo 6.0, which NetLogo Web requires.)


WHAT IS IT?

This model is a multiplayer version of the iterated prisoner's dilemma. It is intended to explore the strategic implications that emerge when the world consists entirely of prisoner's-dilemma-like interactions. If you are unfamiliar with the basic concepts of the prisoner's dilemma or the iterated prisoner's dilemma, please refer to the PD BASIC and PD TWO PERSON ITERATED models found in the PRISONER'S DILEMMA suite.


HOW IT WORKS

The PD N PERSON ITERATED model demonstrates an interesting concept: when interacting with someone over time in a prisoner's dilemma scenario, it is possible to tune your strategy to do well against theirs. Each possible strategy has unique strengths and weaknesses that appear over the course of the game. For instance, ALWAYS DEFECT does best of any strategy against RANDOM, but does poorly against itself. TIT-FOR-TAT does poorly against RANDOM, but well against itself.

This makes it difficult to determine a single "best" strategy. One approach is to create a world with multiple agents playing a variety of strategies in repeated prisoner's dilemma situations. This model does just that. Turtles with different strategies wander around randomly until they find another turtle to play with. (Note that each turtle remembers its last interaction with each other turtle. While some strategies don't make use of this information, others do.)

When two turtles interact, they display their respective payoffs as labels.

Each turtle's payoff for each round is determined as follows:

                | Partner's Action
Turtle's Action |   C       D
       C        |   3       0
       D        |   5       1

(C = Cooperate, D = Defect)

(Note: This way of determining payoff is the opposite of how it was done in the PD BASIC model. In PD BASIC, you were awarded something bad: jail time. In this model, something good is awarded: money.)
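In code, the payoff rule above amounts to a four-entry lookup table. The following Python sketch is illustrative only (the model itself is written in NetLogo, and these names are not its actual procedures):

```python
# Payoff matrix from the table above, from the focal turtle's point of view.
# Keys are (my_action, partner_action); C = cooperate, D = defect.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def payoff(my_action, partner_action):
    """Return the focal player's payoff for one round."""
    return PAYOFF[(my_action, partner_action)]

# Mutual cooperation beats mutual defection, but defecting against a
# cooperator pays best of all -- the heart of the dilemma.
print(payoff("C", "C"))  # 3
print(payoff("D", "C"))  # 5
```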



HOW TO USE IT

SETUP: Sets up the world to begin playing the multi-person iterated prisoner's dilemma. The number of turtles and their strategies are determined by the slider values.

GO: Have the turtles walk around the world and interact.

GO ONCE: Same as GO except the turtles only take one step.

NEW ROUND: Calculates new population sizes for each strategy according to how the strategies have performed in the game. First stop the simulation, then press NEW ROUND; this sets new population sizes according to a simple evolutionary equation:

new-amount(i) = total-population * old-amount(i) * avg-payoff(i) / SUM over all strategies j of (old-amount(j) * avg-payoff(j))

New population sizes are rounded to the nearest integer.
Note: It is possible for one strategy to end up with more turtles than the maximum on its slider. Be careful with this.
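The population update can be sketched in Python as follows; the function and variable names are illustrative, not the model's actual NetLogo procedures:

```python
def new_populations(old_amounts, avg_payoffs):
    """Discrete replicator-style update: each strategy's share of the
    (unchanged) total population is proportional to
    old_amount * avg_payoff. Results are rounded to the nearest
    integer, as in the model, so the rounded counts may not sum
    exactly to the old total."""
    total = sum(old_amounts)
    weighted = [n * f for n, f in zip(old_amounts, avg_payoffs)]
    denom = sum(weighted)
    return [round(total * w / denom) for w in weighted]

# Example: two strategies with 10 turtles each; the one averaging
# 3 points per interaction grows at the expense of the one averaging 1.
print(new_populations([10, 10], [3.0, 1.0]))  # [15, 5]
```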


MAXTICKS - The maximum number of games to be played between turtles before the simulation stops. You can set this to any positive(!) integer. The only limit is the one set by the NetLogo programming language (2^53, about 9 quadrillion).

DO-POPPLOT - Plots the population sizes on demand, whenever you need them.

RESET PLOT - Does what it says. Resets the population plot.

TEST POPULATION FOR DIFFERENT VALUES OF TREMBLINGHAND - Simulates the population once for every possible value of TREMBLINGHAND-PROB. It produces output that is convenient to export to a spreadsheet.

TEST POPULATION FOR DIFFERENT VALUES OF MISTAKEHEARING - Simulates the population once for every possible value of MISTAKEHEARING-PROB. It produces output that is convenient to export to a spreadsheet.


TREMBLINGHAND-PROB - The probability that a turtle makes a mistake when choosing between cooperating and defecting. Note that UNFORGIVING cannot make mistakes; you can change this in the code if you like.

MISTAKEHEARING-PROB - The probability that a turtle misobserves what its opponent chose last time. The turtle still gets the points it should, but it thinks the opponent defected when it really cooperated, or the other way around. With MISTAKEHEARING-PROB greater than zero, tit-for-tat strategies, for example, can fall into vicious revenge cycles when one of them mistakes an opponent's cooperation for defection.
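The difference between the two error types can be sketched as follows (an illustrative Python rendering, not the model's NetLogo code): trembling hand flips the action that is actually played and scored, while mistake-hearing flips only the move the observer records in its partner history.

```python
import random

FLIP = {"C": "D", "D": "C"}

def play_with_noise(intended_action, trembling_prob):
    """Trembling hand: with some probability the turtle plays the
    opposite of what its strategy intended. Payoffs are based on
    this (possibly flipped) action."""
    if random.random() < trembling_prob:
        return FLIP[intended_action]
    return intended_action

def observe_with_noise(actual_action, mishearing_prob):
    """Mistake-hearing: payoffs use the actual action, but with some
    probability the observer records the opposite move in its partner
    history, which can poison its future choices."""
    if random.random() < mishearing_prob:
        return FLIP[actual_action]
    return actual_action

# With zero noise both functions are the identity; with probability 1
# they always flip the move.
print(play_with_noise("C", 0.0))     # C
print(observe_with_noise("D", 1.0))  # C
```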

N-STRATEGY: Multiple sliders exist with the prefix N- followed by a strategy name (e.g., N-COOPERATE). Each determines how many turtles will be created that use that strategy. The strategies are described below:


RANDOM - randomly cooperate or defect

COOPERATE - always cooperate

DEFECT - always defect

TIT-FOR-TAT - If an opponent cooperates on this interaction, cooperate on the next interaction with them. If an opponent defects on this interaction, defect on the next interaction with them. Initially cooperate.

UNFORGIVING - Cooperate until an opponent defects once, then always defect in each interaction with them.

UNKNOWN - This strategy is included to help you try your own strategies. It currently defaults to Tit-for-Tat.

TWO-TITS-FOR-TAT - Cooperates unless the opponent has defected in either of the previous two interactions.

THREE-TITS-FOR-TAT - Cooperates unless the opponent has defected in any of the previous three interactions.

TIT-FOR-TWO-TATS - Cooperates if the opponent has cooperated in at least one of the previous two interactions. Starts with cooperation.

TIT-FOR-THREE-TATS - Cooperates if the opponent has cooperated in at least one of the previous three interactions. Starts with cooperation.

MAJORITY - Cooperates if the opponent has cooperated in the majority of the previous three interactions. Starts with cooperation.
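Each strategy above can be viewed as a function from the opponent's recent moves (most recent first) to an action. The following Python sketches are hypothetical renderings under that assumption, not the model's NetLogo procedures:

```python
# Histories are lists of the opponent's past moves, most recent first;
# "C" = cooperate, "D" = defect.

def tit_for_tat(opponent_history):
    """Cooperate initially; otherwise copy the opponent's last move."""
    return opponent_history[0] if opponent_history else "C"

def two_tits_for_tat(opponent_history):
    """Defect if the opponent defected in either of the last two rounds."""
    return "D" if "D" in opponent_history[:2] else "C"

def tit_for_two_tats(opponent_history):
    """Cooperate if the opponent cooperated in at least one of the last
    two rounds; start with cooperation."""
    if not opponent_history:
        return "C"
    return "C" if "C" in opponent_history[:2] else "D"

def majority(opponent_history):
    """Cooperate if the opponent cooperated in the majority of the last
    three rounds; start with cooperation."""
    recent = opponent_history[:3]
    if not recent:
        return "C"
    return "C" if recent.count("C") * 2 > len(recent) else "D"

# The opponent defected last round and cooperated the round before:
print(tit_for_tat(["D", "C"]))       # D
print(tit_for_two_tats(["D", "C"]))  # C
```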


AVERAGE-PAYOFF - The average payoff of each strategy in an interaction vs. the number of iterations. This is a good indicator of how well a strategy is doing relative to the maximum possible average of 5 points per interaction.

POPULATION SIZES - If you use Do-evolution, this plots the population sizes after every round.


THINGS TO NOTICE

Set the number of players for each strategy to be equal. For which strategy does the average payoff seem to be highest? Do you think this strategy is always the best to use, or are there situations where another strategy will yield a higher average payoff?

Set N-COOPERATE to be high, set N-DEFECT equal to N-COOPERATE, and set all other players to 0. Which strategy will yield the higher average payoff?

Set N-TIT-FOR-TAT to be high, set N-DEFECT equal to N-TIT-FOR-TAT, and set all other players to 0. Which strategy will yield the higher average payoff? What do you notice about the average payoff for tit-for-tat players and defect players as the iterations increase? Why do you suppose this change occurs?

Set N-TIT-FOR-TAT equal to N-COOPERATE. Set all other players to 0. Which strategy will yield the higher average payoff? Why do you suppose that one strategy leads to a higher or equal payoff?


THINGS TO TRY

1. Observe the results of running the model with a variety of populations and population sizes. For example, can you get COOPERATE's average payoff to be higher than DEFECT's? Can you get TIT-FOR-TAT's average payoff higher than COOPERATE's? What do these experiments suggest about an optimal strategy?

2. Currently the UNKNOWN strategy defaults to TIT-FOR-TAT. Modify the UNKNOWN and UNKNOWN-HISTORY-UPDATE procedures to execute a strategy of your own creation. Test it in a variety of populations. Analyze its strengths and weaknesses. Keep trying to improve it.

3. Relate your observations from this model to real life events. Where might you find yourself in a similar situation? How might the knowledge obtained from the model influence your actions in such a situation? Why?

4. Below the playground you will find TREMBLINGHAND-PROB, the probability of making the wrong move. Try changing it with the tit-for-tat strategies and see what happens. Also see MISTAKEHEARING-PROB, the probability of misperceiving what the opponent did. What is the difference between the two?


EXTENDING THE MODEL

Relative payoff table - Create a table that displays the average payoff of each strategy when interacting with each of the other strategies.

Complex strategies using lists of lists - The strategies defined here are relatively simple; some would even say naive. Create a strategy that uses the PARTNER-HISTORY variable to store a list of history information pertaining to past interactions with each turtle. (The model currently stores history for up to 4 interactions.)
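One possible shape for such a bounded per-partner history store is sketched below in Python; the class and method names are hypothetical, not the model's NetLogo representation:

```python
from collections import defaultdict, deque

class PartnerHistory:
    """Keeps the last `maxlen` observed moves for each partner,
    most recent first, mirroring the bounded history in the model."""
    def __init__(self, maxlen=4):
        self.maxlen = maxlen
        self._histories = defaultdict(lambda: deque(maxlen=maxlen))

    def record(self, partner_id, move):
        # appendleft keeps index 0 as the most recent interaction;
        # the deque silently drops the oldest move past maxlen.
        self._histories[partner_id].appendleft(move)

    def recent(self, partner_id):
        return list(self._histories[partner_id])

h = PartnerHistory(maxlen=4)
for move in ["C", "C", "D", "C", "D"]:
    h.record(7, move)
print(h.recent(7))  # ['D', 'C', 'D', 'C'] -- the oldest 'C' was dropped
```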

Spatial Relations - Allow turtles to choose not to interact with a partner. Allow turtles to choose to stay with a partner.

Environmental resources - Include an environmental (patch) resource and incorporate it into the interactions.


NETLOGO FEATURES

Note the use of the TO-REPORT primitive in the CALC-SCORE procedure to return a number.

Note the use of lists and turtle IDs to keep a running history of interactions in the PARTNER-HISTORY turtle variable.

Note how agentsets that will be used repeatedly are stored when created and reused to increase speed.


RELATED MODELS

PD Basic

PD Two Person Iterated

PD Basic Evolutionary


CREDITS AND REFERENCES

Copyright 2002 Uri Wilensky. All rights reserved.

Permission to use, modify or redistribute this model is hereby granted, provided that both of the following requirements are followed:
a) this copyright notice is included.
b) this model will not be redistributed for profit without permission from Uri Wilensky. Contact Uri Wilensky for appropriate licenses for redistribution for profit.

This model was created as part of the projects: PARTICIPATORY SIMULATIONS: NETWORK-BASED DESIGN FOR SYSTEMS LEARNING IN CLASSROOMS and/or INTEGRATED SIMULATION AND MODELING ENVIRONMENT. The project gratefully acknowledges the support of the National Science Foundation (REPP & ROLE programs) -- grant numbers REC #9814682 and REC-0126227.

This model and this guide were expanded by Lasse Lindqvist in 2012. I release all my edits and improvements to the public domain. Note that the original copyright of Uri Wilensky still stands.

The original version can be found in the NetLogo Models Library under Social Science - (unverified) - Prisoner's Dilemma, by choosing PD N-Person Iterated.
