## WHAT IS IT?
JAMM stands for Jobs Assignment Multiagents Model. It depicts a system where projects have to be dispatched to a community of Solution Design Specialists through a set of Account Managers, but it can reasonably represent many situations in which a number of sources generate a workload for a distributed community of intelligent servants to process.

In the model, Account Managers generate jobs with a given random distribution of generation times and publish them on a dashboard where they are visible to all the Solution Design Specialists. The specialists have to pick up as many jobs as they can and work on them until they are accomplished. Solution Design Specialists aim to maximize their productivity, i.e. the amount of ACV (Annual Contract Value) their jobs will bring to their company, and can dynamically select among several strategies to reach this goal. They also learn from past experience, observe the results of their fellow specialists and may copy them. They can also persistently team up with the same Account Managers if that produces better results.

JAMM employs probability distributions to mimic some stochastic processes, such as the generation of new jobs, the value of the contracts (ACV) that jobs will eventually bring if won, and the workload each job requires. These distributions are defined according to a real data set from an actual CRM system.
## HOW IT WORKS
There are three breeds of agents: Account Managers, Jobs and Solution Design Specialists.
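For readers looking at the Code tab, a minimal sketch of how these three breeds might be declared is shown below. The plural/singular names follow the abbreviations used in this description; the actual declarations in the model may differ.

```
;; sketch of the three breeds, using the names from this description
breed [amgrs amgr]          ;; Account Managers: generate and publish jobs
breed [jobs job]            ;; Jobs: passive work items carrying ACV, win probability, workload
breed [sdspclsts sdspclst]  ;; Solution Design Specialists: pick up and process jobs
```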
### Account Managers (_amgrs_)

_amgrs_ are proto-agents: they do not make decisions and do not undertake actions. They passively generate jobs according to a stochastic process described through a Gamma probability distribution whose statistical parameters, alpha and lambda, are controlled by input. In this version of the model all _amgrs_ are created identical at the beginning of the simulation, except for their job-creation statistics, which can vary within a limited range and whose values are decided randomly by the model's set-up routine. _amgrs_ form the external ring of white persons on the main screen. An _amgr_'s shape grows in size with the ACV it wins. Their number is proportional to the number of specialists, through a multiplier factor decided by input.
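As a rough illustration of the generation process, an _amgr_ could draw its waiting time until the next job from NetLogo's `random-gamma` primitive. The variable and procedure names below are assumptions for this sketch, not the model's actual identifiers.

```
amgrs-own [ gamma-alpha gamma-lambda next-job-tick ]

to maybe-generate-job  ;; amgr procedure, called once per tick
  if ticks >= next-job-tick [
    hatch-jobs 1 [
      set shape "circle"
      set color white
      ;; job attributes (ACV, win probability, expiration, workload) are initialised here
    ]
    ;; draw the inter-arrival time (in working days) to the next job
    set next-job-tick ticks + random-gamma gamma-alpha gamma-lambda
  ]
end
```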
### Jobs (_jobs_)

_jobs_ are randomly generated by _amgrs_ and placed on the screen as a white circle over the _amgr_ who created them. Their main attributes are:

- ACV: the annual contract value, randomly distributed according to a skewed Student's t distribution with parameters drawn from the real data set.
- Win probability: randomly distributed according to a normal distribution with given mean and standard deviation (0.3 and 0.1 respectively).
- Expiration date: the number of days until the date the job must be completed and submitted. It is randomly distributed between 7 and 60 days, which reflects typical real figures from the business.
- Workload: the quantity of man-days it takes to process the job. Note that in the current version of the model only one specialist can work on a job, therefore Workload and Expiration date are related to each other, and Workload cannot exceed the specialist's carrying capacity.

A _job_'s size is proportional to its ACV value, so bigger circles could potentially mean bigger wins (not yet graphically implemented in this version). _jobs_ are proto-agents too, as they do not make decisions and do not undertake actions. All they do is fulfil their fate: once their expiration counter approaches zero, the _job_ may either have reached the centre of the screen (if a _sdspclst_ has picked it up) or it may still be with its _amgr_, in which case it turns into a No Bid. No Bids are those jobs that have been allowed to expire because of a lack of resources to support them. _jobs_ that reach the centre turn into Won or Lost according to their win probability. All expired _jobs_ update the general performance indicators of the model and "die". _jobs_ waiting for a _sdspclst_ on the dashboard degrade their win probability, which decreases in proportion to the time left to expire. This reflects the fact that _sdspclsts_ will have to rush to do quickly what would have needed more time to complete, and the final quality will suffer.
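A hedged sketch of how these attributes could be set up and degraded is given below. The normal parameters (0.3, 0.1) and the 7-60 day expiration window come from the description above; everything else (the variable names, the clamping of the probability and the exact decay rule) is an assumption.

```
jobs-own [ acv win-prob expiration workload ]

to init-job  ;; job procedure, run right after hatching
  set win-prob random-normal 0.3 0.1                       ;; mean 0.3, std dev 0.1
  set win-prob max (list 0.01 (min (list 0.99 win-prob)))  ;; keep it a valid probability
  set expiration 7 + random 54                             ;; 7 to 60 days to the due date
  ;; acv and workload would be drawn from distributions fitted to the CRM data set
end

to degrade-if-waiting  ;; job procedure, run each tick while still on the dashboard
  set expiration expiration - 1
  set win-prob win-prob * 0.98  ;; one possible decay rule; the model's actual formula may differ
  if expiration <= 0 [
    ;; here the model would update the No Bid counters before the job dies
    die
  ]
end
```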
### Solution Design Specialists (_sdspclsts_)

_sdspclsts_ are the only true agents of the model. Their quantity is controlled by input through a global variable. They arrange themselves to form the internal ring of red persons on the main screen. A _sdspclst_ is almost always busy processing jobs, but it has a finite capacity (controlled by input), so it can only take jobs until it is full. As _jobs_ come with discrete workload requirements, _sdspclsts_ will always have some residual free capacity, which may or may not be enough to take another job. Capacity gets gradually freed as jobs are completed.

_sdspclsts_ continuously check the dashboard to see whether any new job is available for them to pick up. If they have to choose among more than one job, they do so according to the criteria of their current strategy. This rule set is randomly selected upon a _sdspclst_'s creation, and it may then change over time. In fact, _sdspclsts_ will try to change strategy after a performance evaluation period (consisting of K _jobs_ submitted, where K is controlled by input) if they see their performance worsening. There are several strategies (numbered 0 to 8) a _sdspclst_ can randomly choose from. The actually available strategies are enabled or disabled by the interface switches, and it is possible to start with few strategies and have new ones kick in during the model execution.

In future versions of JAMM, _sdspclsts_ will increase their skills as their _jobs_ turn into wins, and their skill level will improve the win probability of their next _jobs_. The current model ensures that a _sdspclst_ picking up _jobs_ from _amgrs_ it successfully worked with in the past increases the new _jobs_' win probability by a fixed amount: this reflects the "team factor" whereby two people who have won together raise each other's performance. The strategies are:
#### Strategies 0, 1 and 2

Strategies 0, 1 and 2 tell the _sdspclst_ to pick the first available job having the highest ACV, the highest win probability or the lowest workload, respectively.

#### Strategy 3

Strategy 3 tells the _sdspclst_ to pick one random job from those waiting to be assigned.

#### Strategy 4

Strategy 4 is about copying the best strategy of another random _sdspclst_ that is performing better in terms of win rate.

#### Strategy 5

Strategy 5 tells the _sdspclst_ to pick a job from an _amgr_ it successfully worked with in the past. Note that the win probability of a job is increased by a given factor when Strategy 5 is employed: the factor depends on the previous win probability of the _sdspclst_.

#### Strategy 6

_sdspclsts_ employing Strategy 6 follow a First In First Served (FIFS) policy to select their next job.

#### Strategy 7

_sdspclsts_ employing Strategy 7 follow a Last In First Served (LIFS) policy to select their next job.

#### Strategy 8

Jobs generated by the most successful _amgrs_ are given priority during the job pick-up phase. The most successful _amgrs_ are ranked in terms of their ACV won to date.
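To make the selection rules concrete, the sketch below shows how a few of them could be written as a NetLogo reporter, reusing the hypothetical attribute names from the job sketch above. Again, all identifiers are assumptions; the model's actual code may differ.

```
sdspclsts-own [ strategy ]  ;; current strategy number (0 to 8)

to-report choose-job [candidates]  ;; sdspclst reporter; candidates = jobs waiting on the dashboard
  if strategy = 0 [ report max-one-of candidates [acv] ]       ;; highest ACV
  if strategy = 1 [ report max-one-of candidates [win-prob] ]  ;; highest win probability
  if strategy = 2 [ report min-one-of candidates [workload] ]  ;; lowest workload
  if strategy = 3 [ report one-of candidates ]                 ;; random pick
  if strategy = 6 [ report min-one-of candidates [who] ]       ;; FIFS: oldest job first (lowest who number)
  if strategy = 7 [ report max-one-of candidates [who] ]       ;; LIFS: newest job first (highest who number)
  report one-of candidates  ;; fallback for strategies 4, 5 and 8, which need extra bookkeeping
end
```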
### Interaction with the environment

Interaction with the environment happens by means of the _jobs_, which enter the system through the _amgrs_, are processed by the _sdspclsts_ and exit the system when completed, turning into Won, Lost or No Bid.
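Putting the pieces together, the daily cycle could look roughly like the sketch below, where each procedure name is a hypothetical placeholder for the model's actual routines.

```
to go
  ask amgrs [ maybe-generate-job ]              ;; new jobs enter the system and appear on the dashboard
  ask sdspclsts [ pick-jobs-if-capacity-left ]  ;; specialists choose jobs according to their strategy
  ask sdspclsts [ work-on-assigned-jobs ]       ;; work reduces the remaining workload of picked jobs
  ask jobs [ resolve-if-due ]                   ;; jobs exit as Won, Lost or No Bid and update the indicators
  tick                                          ;; 1 tick = 1 working day
end
```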
## HOW TO USE IT
The interface provides the user with some input controllers and some output observables.

Five sliders control the quantity of _sdspclsts_, the quantity of _amgrs_ (through a multiplier factor), the amount of workload a _sdspclst_ can carry, the number of _jobs_ a _sdspclst_ has to process before it checks its strategy performance again (and possibly tries to change strategy), and the minimum win probability a job must have to get published (this reflects the minimum qualification criteria: deals with a low win probability are not worth the resources required to process the projects). The _amgrs_' Gamma statistics can be set by input too. A set of switches, one per strategy, enables or disables the strategies _sdspclsts_ can employ when looking for new _jobs_ in the waiting list. Each strategy can be enabled or disabled at any point in time during the simulation to observe the effects on the dynamics of the system.

The main screen shows in real time how _jobs_ appear in the system and which _sdspclsts_ pick them up. Both _amgrs_ and _sdspclsts_ grow in size when their ACV performance is higher than the average of their fellow agents. When picked up by a _sdspclst_, _jobs_ travel towards the green square at the centre of the screen, where they disappear. It is assumed that a _job_ reaching the green square is a completed one and gets passed on to the _amgr_ who published it, who submits it to his/her customer.

On the right-hand side there are a number of plots showing the evolution of some performance indicators, both global ones (like global throughput) and individual ones. Individual ACV win rates are shown in two versions: an all-time version and a recent version; the latter shows the individual ACV win rate over the last K accomplished jobs. One of these graphs shows the timeline of strategies adopted by each _sdspclst_ at any given time and when such strategies change (note that this graph is limited to the first 5 _sdspclsts_ for graphical reasons; future versions will show more). A histogram shows which strategies are active and how many _sdspclsts_ are using them at any given time. Another plot dynamically shows the strategies' rank in terms of ACV won.

In this model 1 tick equals 1 working day.
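One way the switches could feed into the strategy re-selection step is sketched below; the switch names are assumptions and may not match the model's actual interface globals.

```
;; collect the strategy numbers whose interface switch is currently on
to-report enabled-strategies
  let result []
  if strategy-0-on? [ set result lput 0 result ]
  if strategy-1-on? [ set result lput 1 result ]
  if strategy-2-on? [ set result lput 2 result ]
  ;; ...one line per remaining switch, up to strategy 8...
  report result
end
```

A _sdspclst_ re-evaluating its performance would then draw its next strategy from `enabled-strategies` rather than from the full set.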
## THINGS TO NOTICE
As more strategies get activated and deactivated you will see bursts of strategy changes among the _sdspclsts_. Global performance parameters are always better when _sdspclsts_ have more strategies to select from than when there is a single strategy only. In other words, enabling bottom-up decision making results in better global and individual performances. It is also useful to notice how throughput increases when _sdspclsts_ have a reduced carrying capacity. The current version of the model limits the number of _sdspclsts_ to a maximum of 5 due to limitations in the graphical output windows. The model, however, can easily support many more _sdspclsts_ (and hence _amgrs_ too) if one accepts that all graphical monitors are limited to the first 5 _sdspclsts_.
## THINGS TO TRY
Progressively enable more strategies through the strategy switches on the interface panel. Then progressively disable strategies and observe how performances change, especially the recent ones, where changes have quick, visible effects. Notice how strategies that have been disabled at run time are still available to some _sdspclsts_ if they served them well in the past, as specialists keep a memory of their past performance and learn from experience which strategies produce the best results in terms of ACV.
## EXTENDING THE MODEL
A possible extension of the model would be to have heterogeneous agents in terms of statistics. For example, _amgrs_ could have different job-generation statistics, so that some would generate jobs more frequently, or bigger ones, or jobs with higher or lower win probabilities, and _sdspclsts_ could start with different skill levels. Further diversification could be achieved by dividing _jobs_ and _sdspclsts_' skills into different knowledge domains, e.g. A, B and C; a _sdspclst_ with high skill in, say, domain B would be incentivised to pick a _job_ of that domain. _sdspclsts_ know their best-performing strategy up to a given moment and tend to revert to it if the current one is not performing well; however, true reinforcement learning is yet to be implemented. Likewise, implementing strategy evolution through a genetic algorithm is something to try.
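As a starting point for the first of these extensions, one could simply randomise each _amgr_'s generation parameters at setup, for example as below. The parameter ranges are purely illustrative and reuse the hypothetical `gamma-alpha` / `gamma-lambda` variables from the sketch above.

```
to setup-heterogeneous-amgrs
  ask amgrs [
    set gamma-alpha 1 + random-float 4        ;; some amgrs generate jobs much more often than others
    set gamma-lambda 0.1 + random-float 0.9
  ]
end
```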
## NETLOGO FEATURES
## RELATED MODELS
The El Farol model is related to JAMM with regard to the idea of dynamic strategy selection.
## CREDITS AND REFERENCES