NetLogo User Community Models



by Klaus G. Troitzsch (Submitted: 01/22/2018)


Download TheftNorm


This model is an attempt at modelling the emergence of several norms from criticism uttered by agents which feel offended by theft and by begging for alms. It uses some but not all of the features described by Luis Gustavo Nardin, Giulia Andrighetto, Rosaria Conte and Mario Paolucci in their IntERS model, and most of the features described by Klaus G. Troitzsch in his EONOERS model. This means that the agents representing the human actors follow emerging norms and calculate the salience of these norms according to the number of invocations of these emerging norms they receive from others.


**Normative board**
Agents have a normative board: a memory that stores critical remarks and retaliations, both from agents which felt offended by theft or alms-begging and from agents which punished theft or refused to give alms. These utterances are invocations of norms which are about to emerge. From the remembered invocations, agents calculate the salience of the respective emerging norms and use it to decide which of their possible actions to apply in a given situation. This works as follows:

* Events such as theft, alms-begging, refusal of alms or theft punishment trigger messages to agents in the neighbourhood. Each of these messages contains a reference to an emerging norm (usually the norm or norms which is or are followed or violated by the action). A recipient of such a message updates those of its memory entries which refer to the specified emerging norm, according to whether the action was performed by itself or by another agent and whether the message contains a punishment or a sanction. This is implemented in a series of `update-` procedures, which combine the invocation and the memory update.

* The calculation of an action probability consists of two steps: the individual drive to steal or to beg is determined by the agent's relative position on the wealth rank list of all agents (the poorer the agent, the higher the drive). Likewise, the individual drive to punish theft or to give or refuse alms is also determined by this rank, but the other way round (the richer the agent, the higher the drive to help or to punish). The normative drive depends on former experience with an agent's own actions and on observations of other agents' normative behaviour in the neighbourhood (see pp. 33-39).

* When an agent has to choose between different action options (including the option of doing nothing) it calculates a probability (`calculate-propensity-from` for actions controlled by only one norm, `propensity-to` for actions controlled by two conflicting norms) for each of the action options. This probability in turn depends on a rationally calculated "individual drive" (implemented in the procedure `calculate-my-individual-drive-to` with respect to the action named as the procedure's argument) and a "normative drive" which is calculated from the salience of the norm or norms involved in the action option. The salience of the norm, in turn, is calculated according to a complicated formula (implemented in the procedure `calculate-my-normative-drive-to`) which combines the memory of
  * cases when this agent abided by the respective norm or violated it,
  * cases when this agent observed norm compliances and violations in its neighbourhood,
  * cases when this agent received punishments or sanctions, and
  * cases when this agent heard about, applied or personally received invocations of the respective norm.
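A minimal sketch of how such a memory could be turned into a salience value. This is hypothetical Python, not the model's NetLogo code; the event categories follow the list above, but the weights and the squashing into [0, 1] are made up for illustration, since the model's actual formula in `calculate-my-normative-drive-to` is more elaborate:

```python
# Hypothetical weights: norm-supporting evidence counts positively,
# norm-violating evidence negatively.
WEIGHTS = {
    "own_compliance": 1.0,        # this agent abided by the norm
    "own_violation": -1.0,        # this agent violated the norm
    "observed_compliance": 1.0,   # compliance seen in the neighbourhood
    "observed_violation": -1.0,   # violation seen in the neighbourhood
    "punishment_received": 1.0,   # punishments and sanctions received
    "invocation_heard": 1.0,      # invocations heard, applied or received
}

def salience(counters):
    """Map the memory counters of one emerging norm into [0, 1]."""
    numerator = sum(WEIGHTS[k] * v for k, v in counters.items())
    denominator = sum(abs(WEIGHTS[k]) * v for k, v in counters.items())
    if denominator == 0.0:
        return 0.5  # empty memory: no evidence either way
    return 0.5 * (1.0 + numerator / denominator)
```

With this toy formula, a memory holding only compliances yields salience 1.0, only violations yields 0.0, and balanced evidence yields 0.5.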

The **"logic of action"** mentioned above can be described as follows:

Immediately after the initialisation, all persons are scheduled to decide at 06:00 of the first day whether they are going to steal or to beg alms, and to repeat this decision after a uniformly distributed delay whose maximum duration can be changed with the slider `MAX-DAYS-BETWEEN-ACTIONS` (currently there are no weekends).
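This event-oriented scheduling can be sketched as a discrete-event loop (an illustrative Python rendering, not the model's NetLogo/time-extension code; the slider value is assumed):

```python
import heapq
import random

MAX_DAYS_BETWEEN_ACTIONS = 5  # assumed slider value

def schedule(agent_ids, horizon_days):
    """Every agent first acts at 06:00 of day 1; each action re-schedules
    itself after a uniformly distributed delay (no weekends)."""
    queue = [(1.25, i) for i in agent_ids]  # 06:00 of day 1 = 1 + 6/24 days
    heapq.heapify(queue)
    events = []
    while queue and queue[0][0] <= horizon_days:
        time, agent = heapq.heappop(queue)
        events.append((time, agent))  # here the agent would decide: steal, beg, or nothing
        delay = random.uniform(0.0, MAX_DAYS_BETWEEN_ACTIONS)
        heapq.heappush(queue, (time + delay, agent))
    return events
```

The priority queue keeps events ordered by simulated time, so agents act asynchronously rather than in lockstep rounds.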


Change all the sliders and switches that you want to change and press `setup`.

The model will run for `max-months` simulated months (and can only be stopped with the help of `Tools`->`Halt`).

The two groups of plots show what their headlines suggest. The last column of plots in the top right corner contains histograms of action propensities; the two triangular blocks in the middle show the correlations of action propensities (above the diagonal) and of norm saliences (below the diagonal). The diagrams below these scatterplots show the wealth distribution, both as a histogram changing over time and as a plot of the deciles of this distribution over time. The plots in the lower right corner show how many thefts and begging acts occur over time (moving averages and totals).

The calculation of action propensities consists of two steps: the individual drive of agents to steal or to ask for alms is determined by their individual ranks on the wealth variable. The individual drive of agents to punish thieves or to give or refuse alms is also determined by this rank (the richer the agent, the greater the propensity to give and to punish theft, and the smaller the propensity to refuse). The normative drive depends on former experience with an agent's own actions and on observations of other agents' normative behaviour in the neighbourhood (see pp. 33-39).
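The two-step combination can be sketched as a weighted mix (hypothetical Python; the linear form and the rank scaling are illustrative assumptions, the model's `calculate-propensity-from` and `propensity-to` are richer):

```python
def action_propensity(wealth_rank, n_agents, norm_salience, ndw=0.5):
    """Mix individual and normative drive; the two weights sum to 1.0,
    as with the NDW slider. Rank 0 = poorest agent."""
    # Poorer agents have a higher individual drive to steal or beg:
    individual_drive = 1.0 - wealth_rank / (n_agents - 1)
    return (1.0 - ndw) * individual_drive + ndw * norm_salience
```

For the drive to punish or to give alms the rank term would simply be reversed (richer agents have the higher drive).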

The following sliders control global variables:

* `LISTENERS` defines how many nearest neighbours of an agent are aware of its actions.
* `RANGE` defines the maximum distance within which an agent will steal or ask for alms.
* `RADIUS` is the distance within which agents can move per day.
* `NDW` is the weight of the normative drive for calculating the decision probability; the sum of the two weights for individual drive and normative drive is 1.0.
* `DISCOUNT` determines the weight of older memory contents (whenever a new event arrives, the number of all previous events is multiplied by `DISCOUNT` before 1 is added for the current event; thus with `DISCOUNT = 1.0` all events have the same weight independent of their age).
* `BACKGROUND` defines the state of the memories of the agents at the time of initialisation; if this is 0, all memories are empty at the beginning, if it is positive, all agents' memories are filled with `BACKGROUND` anti-theft norm-invocations, if it is negative, they are filled with the same number of pro-theft (or: no-private-property) norm-invocations.
* `MAX-DAYS-BETWEEN-ACTIONS` defines how often agents will take actions (see below).
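The effect of the `DISCOUNT` slider can be illustrated with a short sketch (Python, not the model's NetLogo code):

```python
def record_event(count, discount):
    """On each new event: discount all previous events, then add 1 for the new one."""
    return count * discount + 1.0

def memory_after(n_events, discount):
    """Memory count after a stream of n_events identical events."""
    count = 0.0
    for _ in range(n_events):
        count = record_event(count, discount)
    return count
```

With `DISCOUNT = 1.0` the count simply equals the number of events; with `DISCOUNT < 1.0` it is a geometric series converging to 1/(1 - DISCOUNT), so old events fade from memory.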


The model keeps a database of all theft and begging events and writes files for analysis.

The model calculates the main parameters of the norm salience distributions (mean and standard deviation).


The formula for calculating salience could be simplified. As the weighting coefficients are multiples of 0.33, one could replace them with the corresponding multiples of 1.0 and adjust the `numerator` and the `denominator` accordingly. The formula is claimed in Deliverable 3.1 to have been extracted from Cialdini, R. B., Kallgren, C. A., Reno, R. R. (1990). A focus theory of normative conduct: A theoretical refinement and reevaluation of the role of norms in human behavior. Advances in Experimental Social Psychology 24: 201–234. But it seems that only the terms used in this formula were extracted from this source, which does not say anything about the weights.
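That this simplification leaves the salience unchanged can be checked numerically: scaling all weights by a common factor scales numerator and denominator alike, so their ratio is invariant. The weights and counts below are made up for illustration:

```python
def salience_ratio(weights, counts):
    """numerator / denominator of a (hypothetical) weighted-sum salience formula."""
    numerator = sum(w * c for w, c in zip(weights, counts))
    denominator = sum(abs(w) * c for w, c in zip(weights, counts))
    return numerator / denominator

counts = [3.0, 1.0, 2.0]
original = [0.33, -0.66, 0.99]   # multiples of 0.33, as in the model
simplified = [1.0, -2.0, 3.0]    # the same multiples of 1.0
```

Both weight vectors yield the same ratio (up to floating-point error), so the 0.33 factor carries no information.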


The model can be used to find out which of the input parameters guiding the behaviour of the agents have the greatest impact on the various output parameters (see the reference cited at the end of this description). A few experiments are provided in BehaviorSpace.


The model could be extended by endowing the agents with even more learning capabilities. Agents could then optimise their behaviour in reaction to punishments and sanctions.


Nothing special. The code uses recursive functions to determine to which family an extorter belongs and to write up the hierarchy in the output window. This version makes use of the time extension (which needs to be present either in the directory where the model resides or in the `extensions` directory of the NetLogo installation; perhaps the model runs only in NetLogo 5!). The initial wealth distribution is a beta distribution (converted from NetLogo's gamma function).
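The conversion of gamma variates into a beta-distributed initial wealth relies on a standard identity: if X ~ Gamma(α) and Y ~ Gamma(β) are independent, then X/(X + Y) ~ Beta(α, β). A Python sketch of the idiom (the same construction works with NetLogo's `random-gamma` primitive; the parameter values in the test are arbitrary):

```python
import random

def random_beta(alpha, beta):
    """Draw a Beta(alpha, beta) variate from two independent gamma variates:
    X ~ Gamma(alpha), Y ~ Gamma(beta)  =>  X / (X + Y) ~ Beta(alpha, beta)."""
    x = random.gammavariate(alpha, 1.0)
    y = random.gammavariate(beta, 1.0)
    return x / (x + y)
```

The result always lies in (0, 1), and its mean is α/(α + β), which makes it convenient for bounded, skewed quantities such as an initial wealth distribution.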


This model uses some of the features of models of the GLODERS project:

* is a period oriented version with constant action probabilities for all agent-types
* is a period oriented version with norm oriented behaviour of all agent types, much like in this version
* is an event oriented version with norm oriented behaviour of all agent types, much like in this version


The research leading to the features adopted from the GLODERS project has received funding from the European Union Seventh Framework Programme (FP7/2007--2013) under grant agreement no. 315874 (Global Dynamics of Extortion Racket Systems, GLODERS). These results reflect only the author’s views, and the European Union is not liable for any use that may be made of the information contained therein. The author thanks his colleagues in this and earlier projects for fruitful discussions over many years.

The model owes a lot to the Nardin et al. discussion paper and to the discussions within the GLODERS project. The GLODERS project used earlier versions of this model as a proof of concept and a quick and dirty prototype of the final GLODERS simulator.

This model is described in more detail in Klaus G. Troitzsch: Can lawlike rules emerge without the intervention of legislators? (submitted to Frontiers in Evolutionary Sociology and Biosociology).


Cite this model with the URL


Written by Klaus G. Troitzsch 2013-2016. The model may be used and extended if the source is quoted and a note is sent to
