NetLogo User Community Models


## WHAT IS IT?

This model is derived from and inspired by Bowles and Gintis, *A Cooperative Species: Human Reciprocity and Its Evolution* (2013: 64-66).

This model contains two games: an iterated Prisoner's Dilemma game, and a Public Goods game played in N_groups groups, each of size size_n. Generally speaking, the purpose of the model is to see under what conditions "cooperation" (or "altruism") will prevail among self-interested agents. It involves the concepts of multi-level (group) selection and inclusive fitness. Each turtle faces a choice: CONTRIBUTE or NOT-CONTRIBUTE (equivalently, COOPERATE or NOT-COOPERATE). Initially, every turtle behaves as if all turtles contributed in the previous round, so all agents contribute except defectors. Turtles are then randomly placed into groups of size size_n and play a Public Goods game with the other members of their group. After the first round, a turtle CONTRIBUTEs only if a 'sufficient' number of other turtles contributed in the previous round.

A 'sufficient' number is determined by the agent's threshold (the t_threshold variable), which is randomly selected between 0 and size_n (the size of the group). An agent with a threshold of 5, for example, cooperates only if 5 other agents in its group cooperated in the previous round. This simulates conditional preferences to comply with a social norm. A percentage of initial defectors can be set. A defector has a threshold of size_n + 1, which means it will never cooperate, because doing so would require more cooperators in the group than there are agents in the group.
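A minimal NetLogo sketch of this threshold rule (the variables group-id, cooperator?, and cooperated-last-round? are illustrative assumptions, not necessarily the model's actual identifiers; t_threshold is from the description above):

```
turtles-own [ group-id t_threshold cooperator? cooperated-last-round? ]

; turtle procedure: cooperate only if at least t_threshold group-mates
; cooperated in the previous round; a defector's threshold of size_n + 1
; can never be met, so it never cooperates
to decide-action
  let coop-count count other turtles with
    [ group-id = [group-id] of myself and cooperated-last-round? ]
  set cooperator? (coop-count >= t_threshold)
end
```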

In the Prisoner's Dilemma game, each agent interacts with one other agent. This partner is either selected randomly from the population (making the likelihood of meeting a cooperator proportional to the percentage of cooperators) or determined by fixed odds, using the PD_assortment chooser. If the odds are "fixed", then you must set the likelihood that altruists/cooperators interact with other cooperators, and the likelihood that defectors/non-altruists interact with cooperators.
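A minimal sketch of how such partner selection could work, assuming hypothetical slider names p-coop-meets-coop and p-defector-meets-coop for the two fixed likelihoods (PD_assortment is the chooser described above):

```
; turtle procedure: pick a partner, either at random or with fixed odds
to-report pick-partner
  if PD_assortment = "random" [ report one-of other turtles ]
  let p ifelse-value cooperator?
    [ p-coop-meets-coop ] [ p-defector-meets-coop ]
  report ifelse-value (random-float 1 < p)
    [ one-of other turtles with [cooperator?] ]
    [ one-of other turtles with [not cooperator?] ]
end
```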

Other notes:
C = the cost of cooperation to the agent (self)
B = the benefit of cooperation to the agent's neighbor (other)
Agents derive a benefit only if others in their group cooperate, and they always pay the cost of cooperating with others. "Earnings" are the accumulated payoffs of each agent.

Below are some conditions under which cooperation is expected to prevail, although they are not tested explicitly in this model:
p > c/b, where p is the probability of meeting again
q > c/b, where q is the probability that one's reputation will become known
k < b/c, where k is the number of cooperating neighbors

## HOW IT WORKS
The model has 3 basic steps:
1. PLAY the game (either Public Goods or Prisoner's Dilemma) and collect payoffs.
2. LEARN (adapt). This is modeled by the Replicator_Dynamics switch and chooser.
3. MOVE (i.e., "re-assort" into different groups, if this option is switched on). See the sketch below.
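A minimal sketch of what the main loop might look like in NetLogo (the procedure names play-game, learn, and regroup, and the Move? switch, are illustrative assumptions, not the model's actual identifiers):

```
to go
  play-game              ; step 1: play Public Goods or Prisoner's Dilemma, collect payoffs
  learn                  ; step 2: adapt via the selected Replicator_Dynamics option
  if Move? [ regroup ]   ; step 3: re-assort turtles into different groups
  tick
end
```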

## REPLICATOR DYNAMICS AND OTHER SETTINGS
The Replicator_Dynamics algorithms change the distribution of cooperators and defectors in the population. I included these primarily as predictive devices. They are currently specified at the global level and are not derived from agent interactions. Basically, they predict the expected proportions of cooperators and defectors; if a greater number of cooperators is predicted, for example, that number of defectors with the smallest payoffs (or earnings; this can be changed) are replaced by cooperators.

1. The "Replicator Equation" is as follows:
Let Pr(i) = the proportion of strategy i, and let $(i) = the payoff of strategy i.
The 'weight' of each strategy i is given by w(i) = Pr(i) * $(i). This weight becomes the numerator in a ratio giving the new proportion of strategy i in the population at time t+1: Pr(i)t+1 = w(i) / (the sum of the weights of all strategies).
The "strategies" here are just 2: cooperate or defect. The idea of a replicator equation (or 'genetic algorithm') in general is that it combines two forces: a) people blindly imitate the most prevalent or popular strategies, and b) people can also choose optimal strategies. Here, a strategy's growth is a function of both its proportion and its relative payoff.

2. The "Relative Payoff" algorithm is derived from Bowles and Gintis. The idea is that the probability of switching to the other strategy is proportional to the difference between the *mean* payoffs of the two strategies (cooperate and defect). Agents switch only if the mean payoff of the other strategy is larger. Currently, I have programmed two versions of this, but use only the first.

(i). Qij = B * ($j - $i), where Qij is the probability of an individual switching from strategy i to strategy j, and $i and $j are the mean payoffs of strategies i and j, respectively. In this model, each agent switches with probability Qij.
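A minimal sketch of rule (i) in NetLogo, using an assumed slider beta to stand in for the scaling parameter "B" in the formula (so it is not confused with the benefit B); the turtle variables are those from the sketches above:

```
; each agent switches with probability beta * (mean payoff of the other
; strategy - mean payoff of its own), whenever that difference is positive
to update-relative-payoff
  let pay-coop mean [payoff] of turtles with [cooperator?]
  let pay-def  mean [payoff] of turtles with [not cooperator?]
  ask turtles [
    let gap ifelse-value cooperator?
      [ pay-def - pay-coop ] [ pay-coop - pay-def ]
    if gap > 0 and random-float 1 < beta * gap
      [ set cooperator? not cooperator? ]
  ]
end
```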

(ii). Pr(i)t+1 = Pr(i) - a * Pr(i) * (1 - Pr(i)) * B * ($j - $i), where a is set to unity. This is a system-level prediction of change, not a rule acting at the level of each agent. Notice the parameters "a" and "B."

The problem with both versions is that the parameter "B" must be set somewhat arbitrarily, and small enough that the respective probabilities stay below 1.

3. The "Variance Ratio" algorithm states that:
change in Pr(altruists) = (b - c) * var(pj) - c * Avar(pij), where b = benefits, c = costs, var(pj) = the between-group variance, and Avar(pij) = the weighted-average within-group variance.
According to Bowles and Gintis, the ratio of the between-group variation (of altruists) to the total variation (the weighted-average within-group variation plus the between-group variation) must be greater than the ratio c/b for evolution to favor altruism. This ratio is also equivalent to P(A|A) - P(A|N): the probability of being paired with an altruist conditional on being an altruist, minus that probability conditional on being a non-altruist. The problem with utilizing this is that it slows the model down, almost to a halt.
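A minimal sketch of how the variance decomposition could be computed in NetLogo (group-id is an illustrative name; B and C are the benefit and cost parameters described above). Scanning every group each round like this is what makes the check expensive:

```
; report true when between-group variance / total variance exceeds c/b
to-report altruism-favored?
  let n count turtles
  let p-bar (count turtles with [cooperator?]) / n
  let v-between 0
  let v-within 0
  foreach remove-duplicates [group-id] of turtles [ g ->
    let members turtles with [group-id = g]
    let w (count members) / n                         ; group weight
    let p-g (count members with [cooperator?]) / (count members)
    set v-between v-between + w * ((p-g - p-bar) ^ 2)
    set v-within  v-within  + w * p-g * (1 - p-g)     ; within-group (Bernoulli) variance
  ]
  report v-between / (v-between + v-within) > (C / B)
end
```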

4. There is also the "Imitation" strategy. Currently, this is very preliminary. I adopt a simple approach, in which agents look to their 4 closest neighbors and copy the most successful strategy among them. This leads to cascades of homogeneity due to the network topology.
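A minimal sketch of that imitation rule, assuming for illustration that a turtle switches only when the best of its 4 nearest turtles outperforms its own payoff:

```
; copy the strategy of the highest-payoff turtle among the 4 nearest turtles
to update-imitation
  ask turtles [
    let watched min-n-of 4 other turtles [ distance myself ]
    let best max-one-of watched [payoff]
    if [payoff] of best > payoff
      [ set cooperator? [cooperator?] of best ]
  ]
end
```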

5. Another way to model population changes more directly is by switching on "Starvation?" and/or "Kill_Defectors?". The idea behind each is simple: accumulated resources have to be consumed. The consumption level is set to (B / size_n) / 2 and is deducted from Earnings each round. "Starvation?" lets agents die if their total earnings fall below zero. "Kill_Defectors?" is based on the obvious (and usually ignored) recognition that so-called defectors or free riders cannot survive in isolation. Nobody can survive in isolation! A society of completely non-altruistic, non-cooperative individuals is therefore impossible (and maybe even an oxymoron), because non-cooperators are not self-sufficient. Thus, this parameter says that defectors who cannot find cooperators to interact with will die.
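A minimal sketch of the consumption step (earnings, B, size_n, and Starvation? follow the description above; the procedure name is illustrative):

```
turtles-own [ earnings ]

; deduct per-round consumption; starved agents die if Starvation? is on
to consume
  ask turtles [
    set earnings earnings - (B / size_n) / 2
    if Starvation? and earnings < 0 [ die ]
  ]
end
```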

6. Finally, "contrite" is taken from Bowles and Gintis. It says that if an agent defects by mistake, it will unconditionally cooperate for the next two rounds.
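A minimal sketch of the contrite rule, assuming a hypothetical counter contrite-rounds and a mistaken-defection? flag set wherever the model generates mistaken defections:

```
turtles-own [ contrite-rounds mistaken-defection? ]

; turtle procedure: after a mistaken defection, cooperate unconditionally
; for the next two rounds
to update-contrition
  if mistaken-defection? [ set contrite-rounds 2 ]
  if contrite-rounds > 0 [
    set cooperator? true
    set contrite-rounds contrite-rounds - 1
  ]
end
```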

## THINGS TO TRY
The "Variance Ratio" prediction can be tested. This replicator dynamic CANNOT be used in the Prisoner's Dilemma game since there are no groups. Instead, set replicator dynamics to either "Replicator Equation" or "Relative Payoff" and switch to PD_assortment = "fixed" and set the probabilities of cooperators and non-cooperators interacting with cooperators (respectively) so that the difference between them is more than the ratio of c/b. This is equivalent to forming groups. Then run the Pairwise Prisoner's Dilemma Game.

For the "Imitation" replicator dynamics algorithm, the turtles imitate/watch the 4 closest turtles to them, but interact randomly with turtles across the whole social space. It may be interesting to see whether the results are affected at all by restricting the context of observation to the context of action.

## CREDITS AND REFERENCES

Bowles, S., & Gintis, H. (2013). *A Cooperative Species: Human Reciprocity and Its Evolution*. Princeton University Press, pp. 64-66.
