## NetLogo User Community Models

## WHAT IS IT?

This model is derived from and inspired by:
Centola, Damon, Robb Willer, and Michael Macy. “The Emperor’s Dilemma: A Computational Model of Self‐Enforcing Norms.” American Journal of Sociology 110, no. 4 (January 1, 2005): 1009–1040.

The basic idea is to test not only under what conditions people will comply with norms they privately disbelieve (i.e. 'FALSE COMPLIANCE'), but also when they will actively enforce them (i.e. 'FALSE ENFORCEMENT').

EMPEROR'S DILEMMA ROUTINE:
1. Agents observe their neighbors' compliance and enforcement.
2. Each agent then makes two decisions:
   (i) whether to comply with the norm, and
   (ii) whether to enforce the norm.

## HOW IT WORKS
Below are the main agent variables:
- Beliefs: B = 1 if the agent believes; B = -1 if it does not believe.
- Strength (of belief): varies from 0 to 1.
- Compliance: 1 if the agent complies with the norm, and 0 if the agent does not comply.
- Enforcement: 1 if the agent enforces the norm, -1 if it enforces deviance, and 0 if the agent doesn't enforce at all.
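As an illustrative sketch only (the model itself is written in NetLogo; these names and types are assumptions, not the model's actual code), the agent state described above can be represented like this:

```python
from dataclasses import dataclass

@dataclass
class Agent:
    belief: int       # B = 1 (believer) or B = -1 (disbeliever)
    strength: float   # strength of belief, in [0, 1]
    compliance: int   # 1 = complies with the norm, 0 = does not comply
    enforcement: int  # 1 = enforces the norm, -1 = enforces deviance, 0 = no enforcement

# A disbeliever with weak conviction who falsely complies but enforces nothing:
a = Agent(belief=-1, strength=0.19, compliance=1, enforcement=0)
```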

First, agents must decide whether to comply with the norm. In this model, a disbeliever complies if the proportion of neighbors enforcing compliance is greater than the strength of the disbeliever's belief. For example, if the strength of disbelief is .5, but 60% of an agent's neighbors are enforcing compliance, then this agent will also comply. Thus, compliance depends on the question of enforcement, described next.
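A minimal sketch of this compliance rule (illustrative Python, not the model's NetLogo code; the function and parameter names are assumptions):

```python
def complies(strength_of_belief, neighbors_enforcing, n_neighbors):
    """A disbeliever complies when the proportion of neighbors enforcing
    compliance exceeds the strength of its own belief."""
    proportion_enforcing = neighbors_enforcing / n_neighbors
    return proportion_enforcing > strength_of_belief

# The example from the text: strength of disbelief 0.5, 60% of
# neighbors enforcing compliance -> the agent falsely complies.
print(complies(0.5, 6, 10))   # True
print(complies(0.7, 6, 10))   # False: conviction outweighs pressure
```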

Second, agents make a decision to enforce based on whether those around them are complying with their private norms. In this model, the "Need to Enforce" is inversely related to the proportion of neighbors complying with their private beliefs! This is a somewhat strange assumption. In practice, it means that an isolated individual constituting an extreme minority is more likely to impose his or her beliefs on others when nobody else believes them. A more plausible approach is that groups are more likely to enforce norms on minorities, but that will wait for another model. Because of this assumption, enforcement drops whenever full compliance is achieved, causing the system to swing back towards non-compliance.
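One way to sketch this inverse relationship (illustrative only; the exact functional form used in the model may differ):

```python
def need_to_enforce(neighbors_complying_privately, n_neighbors):
    """Enforcement pressure falls as more neighbors already comply with
    their private beliefs -- an isolated dissenter feels it most, and
    it drops away entirely at full compliance."""
    proportion_complying = neighbors_complying_privately / n_neighbors
    return 1.0 - proportion_complying

print(need_to_enforce(0, 10))   # 1.0: lone dissenter, maximum pressure
print(need_to_enforce(10, 10))  # 0.0: full compliance, enforcement collapses
```

This makes the oscillation described above visible: once full compliance is reached, the need to enforce hits zero and the system can swing back toward non-compliance.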

Note that the parameter "K" refers to the Cost of Enforcement.

The "rewiring probability" here plays the same role as in the "Small Worlds" algorithm, but instead of rewiring links, each of an agent's neighbors is checked against the rewiring probability and, if a random draw falls below it, a randomly chosen agent replaces that neighbor in the agentset N_Neighbors. The "Small Worlds" parameter sets up small-world links. At the limit, where rewiring probability = 1, the small-worlds condition is the same as the global condition.
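A rough sketch of this neighbor rewiring (illustrative Python; in the actual model this operates on NetLogo agentsets, and the function name is an assumption):

```python
import random

def rewire_neighbors(agent, local_neighbors, all_agents, rewiring_probability):
    """For each local neighbor, with the given probability swap it for a
    randomly chosen agent from the whole population. At probability 1,
    every neighbor is random, reproducing the global condition."""
    rewired = []
    for neighbor in local_neighbors:
        if random.random() < rewiring_probability:
            candidates = [a for a in all_agents if a != agent]
            rewired.append(random.choice(candidates))
        else:
            rewired.append(neighbor)
    return rewired
```

With probability 0 the lattice neighbors are kept unchanged (the purely local condition); with probability 1 every neighbor is drawn at random from the population (the global condition).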

## Other Assumptions

- No hypocritical enforcement: agents can only enforce compliance if they also comply, and can only enforce deviance if they have deviated.

- Initial condition is that agents conform to their own private beliefs and no one enforces anything.

- In the article, the strength of belief, S, of disbelievers ranges over 0 < S <= .38; the mean of their conviction is 0.19.

## THINGS TO TRY

Change the "Conversion" setting. Conversions allow the beliefs of agents to change according to a stochastic process.

The parameter K (cost of enforcement) was fixed in the article cited above. It turns out that the interesting results do not obtain when the cost is varied.

There are several conditions in which this model can be run:

- Global: Each agent can interact with any other agent.

- Local: Each agent can interact only with its N closest neighbors, where N is set by "Influence_Range." There are two local conditions: i) Clustered, or ii) Random. These refer to whether the BELIEVERS are initially clustered or randomly distributed.

The interesting finding of this article is that cascades of false compliance and false enforcement will only be generated if the BELIEVERS (i.e. zealots) are initially clustered together, and agents can only interact locally (i.e. they lack global or outside information).

## CREDITS AND REFERENCES

Centola, Damon, Robb Willer, and Michael Macy. “The Emperor’s Dilemma: A Computational Model of Self‐Enforcing Norms.” American Journal of Sociology 110, no. 4 (January 1, 2005): 1009–1040.