NetLogo User Community Models



by Dylan Evans (Submitted: 04/14/2003)

[screen shot]

Download OptimismAISB2


WHAT IS IT?

This project is inspired by the phenomenon of 'motivational bias'. It shows how the principle of maximum expected utility (MEU) can, in certain types of environment, be outperformed by 'biased' decision rules.

In this model, three types of agent live in a 2D gridworld consisting of 441 (21 x 21) patches, each of which represents an 'opportunity'. Each opportunity has a probability of success (p, ranging from 0 to 1), a benefit for success (b, ranging from 0.0001 to 10 energy points) and a cost of failure (c, ranging from 0.0001 to 10 energy points). The colour of each patch is determined by its probability of success, with darker patches representing more difficult opportunities.
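The gridworld described above can be sketched in a few lines of Python. This is an illustrative sketch only: the actual model is written in NetLogo, and the assumption that p, b and c are drawn uniformly within their ranges is not stated in the text.

```python
import random

def new_opportunity():
    """One patch, i.e. one 'opportunity', with the ranges given above.
    (Uniform draws are an assumption; the text only gives the ranges.)"""
    return {
        "p": random.uniform(0.0, 1.0),      # probability of success
        "b": random.uniform(0.0001, 10.0),  # benefit for success (energy points)
        "c": random.uniform(0.0001, 10.0),  # cost of failure (energy points)
    }

# A 21 x 21 gridworld of 441 opportunities.
grid = [[new_opportunity() for _ in range(21)] for _ in range(21)]
```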

Agents have only one goal: to maximise their energy points. In other words, their utility function is a linear function of their energy level. Agents have some knowledge of the probability of success (p), the benefit for success (b) and the cost of failure (c) for each opportunity they face; these values are properties of the patch the agent occupies at that moment. The level of noise affecting the agents' knowledge of these values can be set by means of the sliders on the interface.

The error can vary from 0 (perfect information) to 10 (great uncertainty). This error sets the standard deviation of a normal distribution whose mean is the true value of c, b or p for the patch; a random number drawn from this distribution determines the agent's guess about that value. There are two error sliders: one affects knowledge of p, the other affects the agents' knowledge of c and b.
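The noise mechanism just described can be sketched as follows (a Python sketch under the description above; whether the model clips out-of-range estimates back into legal ranges is not stated, so no clipping is done here):

```python
import random

def perceive(opportunity, error_p, error_cb):
    """An agent's noisy estimates: each guess is drawn from a normal
    distribution whose mean is the true value and whose standard
    deviation is the relevant error setting (0 = perfect information)."""
    return {
        "p": random.gauss(opportunity["p"], error_p),
        "b": random.gauss(opportunity["b"], error_cb),
        "c": random.gauss(opportunity["c"], error_cb),
    }
```

With both sliders at 0 the estimates equal the true values exactly, which is why zero error corresponds to perfect information.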

Agents do not move, but since each patch is updated each turn, each agent is presented with a different opportunity at every time step. Each turn, every agent must decide whether or not to 'play' the opportunity it faces. This decision is made according to the agent's 'decision rule'. There are three types of agent, each with a different decision rule:

- The RATIONAL agent (BLUE) uses the principle of maximum expected utility: it plays only when the expected utility of playing is greater than that of not playing.
- The OPTIMISTIC agent (YELLOW) also uses the principle of maximum expected utility, but uses a biased estimate of p (its estimate of p is multiplied by its estimate of b divided by its estimate of c).
- The EMOTIONAL agent (RED) plays whenever it estimates b to be more than twice c, and does not play whenever it estimates b to be less than half c. When it estimates that b and c are similar, its chance of playing is proportional to its estimate of p.
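The three rules can be written out explicitly. For the MEU rule, the expected utility of playing is p*b - (1-p)*c, and not playing has utility 0. This is a Python sketch of the rules as described above, applied to an agent's estimates of p, b and c; it is not the model's actual NetLogo code.

```python
import random

def rational_plays(p, b, c):
    """MEU: play iff expected utility of playing, p*b - (1-p)*c, beats 0."""
    return p * b - (1 - p) * c > 0

def optimistic_plays(p, b, c):
    """Same MEU rule, but with the estimate of p inflated by b / c
    (the motivational bias; the biased value may exceed 1)."""
    biased_p = p * (b / c)
    return biased_p * b - (1 - biased_p) * c > 0

def emotional_plays(p, b, c):
    """Threshold rule: play if b > 2c, refuse if b < c/2,
    otherwise play with probability equal to the estimate of p."""
    if b > 2 * c:
        return True
    if b < c / 2:
        return False
    return random.random() < p
```

For example, with estimates p = 0.3, b = 4, c = 2, the rational agent refuses (expected utility -0.2) while the optimistic agent plays (biased p = 0.6 gives expected utility +1.6).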

If an agent decides to play an opportunity, its chance of success is determined by the probability of success associated with that opportunity. If it plays and succeeds, its energy level is increased by the benefit for success; if it plays and fails, its energy level is decreased by the cost of failure. If an agent does not play, its energy level remains the same for that turn. Agents start with zero energy. Agents never die, and there is no reproduction.
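A single turn for one agent then resolves as follows (again an illustrative Python sketch of the rules above):

```python
import random

def play_turn(energy, opportunity, plays):
    """Resolve one turn: playing succeeds with probability p (+b energy)
    or fails (-c energy); not playing leaves energy unchanged."""
    if not plays:
        return energy
    if random.random() < opportunity["p"]:
        return energy + opportunity["b"]
    return energy - opportunity["c"]
```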


HOW TO USE IT

Click the SETUP button to set up a collection of 50 agents of each type.

Click the GO button to start the simulation. The simulation runs for 500 time steps and then stops.

The ONE-STEP button allows you to step through the simulation one turn at a time.

The ERRORP slider controls the amount of noise that affects the agents' knowledge of p. When ERRORP is set to zero, agents have perfect information about p.

The ERRORCB slider controls the amount of noise that affects the agents' knowledge of c and b. When ERRORCB is set to zero, agents have perfect information about c and b.

The DISPLAY switch allows you to turn off the graphics screen, which makes the model run faster. This is useful when using the FULLRUN button.

The FULLRUN button collects performance data for all the agents under all conditions. It steps ERRORCB from 0 to 10 in increments of 1, and for each value of ERRORCB it steps ERRORP from 0 to 10, also in increments of 1. For every combination of ERRORCB and ERRORP, 10 runs of 500 time steps each are performed, and for each run the final average energy of all the agents is printed in the command centre.
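The FULLRUN sweep amounts to three nested loops. Sketched in Python, where `run_model` is a hypothetical stand-in for one 500-step simulation returning the final average energies:

```python
def full_run(run_model, runs_per_cell=10, steps=500, max_error=10):
    """Sweep ERRORCB and ERRORP from 0 to max_error in steps of 1,
    performing runs_per_cell runs of `steps` time steps for each pair."""
    results = []
    for error_cb in range(max_error + 1):
        for error_p in range(max_error + 1):
            for _ in range(runs_per_cell):
                results.append((error_cb, error_p,
                                run_model(error_p, error_cb, steps)))
    return results
```

With the defaults this produces 11 x 11 x 10 = 1210 runs in total, which is why turning the DISPLAY switch off is useful here.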

When the FULLRUN procedure terminates, the results may be saved in a text file by selecting 'Export Output' from the FILE menu. This text file can then be imported into an Excel spreadsheet and graphed.


THINGS TO NOTICE

With low levels of ERRORP and ERRORCB, the rational agents do best. This is precisely what you would expect from game theory: the principle of maximum expected utility is designed to give optimal performance in a world of perfect information.

With high levels of ERRORP and ERRORCB, all the agents do poorly. The rational agents do slightly better, but the difference is not significant.

With a high level of ERRORP and a low level of ERRORCB, the emotional agents and the optimistic agents outperform the rational agents. Can you work out WHY this happens?


THINGS TO TRY

Try different values for the ERRORP and ERRORCB sliders. What are the critical thresholds at which rational agents are no longer the best?


EXTENDING THE MODEL

Try creating more types of agent with different decision rules. Can you find any decision rules that do better than the rational agents (i.e. the principle of maximum expected utility)? Under what conditions does the new decision rule outperform the rational agents?

Can you find a decision rule that does better than the rational agent if and only if ERRORP is low but ERRORCB is high?


CREDITS AND REFERENCES

This model was created by Annerieke Heuvelink, Daniel Nettle and Dylan Evans.

For a paper discussing this model in more detail, visit
