NetLogo User Community Models



by Eliora Henzler (Submitted: 01/31/2018)

[screen shot]

Download NETLOGO_FINALV6 (5)


On Campus Recruitment (OCR) is the service managed by Penn Career Services that enables organizations to come to campus to interview students for post-graduate jobs and summer internships. It schedules thousands of interviews per year and is the primary source of employment at Penn: 30% of students are employed through the process, and the top 10 employers, corporate employers with predictable hiring patterns such as banks and consultancies, each hire over 20 students.

In reality, OCR begins with the information session, in which organizations present themselves and interact with potential candidates for the first time. Interested candidates then submit an application through Career Services by a certain date, and the organization screens all applications after this date. Organizations then decide which candidates to invite to an interview. Following this first interview, there are often several more rounds of interviews, after which employers decide whether or not to hire each candidate. The process lasts approximately 30 days.

This model explores the efficiency of OCR as a process of allocating candidates with set skills, motivation, and interest to a limited number of jobs with set requirements and desirability. The conclusions of the model depend on the measure of efficiency. In this first model, efficiency is measured with net value satisfaction. The ideal model would allocate the best-fit candidates to the maximum number of jobs.


Candidates compete for jobs in successive interactions with jobs that have thresholds for successful applications and interviews (t_i_match). Candidates begin in the home patches to the left and have fixed skills, motivation, energy, and interests. Jobs begin in the interview patches to the right and have fixed skill requirements and a desirability that ranges from 1 to the number of jobs recruiting.


Candidates decide from their home patches whether to attend the organization’s information sessions, resulting in an 80% chance that their interest and motivation increase, and a 20% chance that they decrease. Candidates then decide to apply to one job based on job preferences that combine their interest, the job’s desirability, and a self-assessment of their competence for the job.
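In NetLogo, the information-session step could be sketched roughly as follows. The procedure name and the size of the increments are assumptions for illustration; only the 80/20 split comes from the description above.

```netlogo
; Hypothetical candidate procedure for the information-session decision.
; 80% of attendees gain interest and motivation; the other 20% lose some.
to attend-info-session  ; candidate procedure
  ifelse random-float 1 < 0.8
    [ set interest interest + 1
      set motivation motivation + 1 ]
    [ set interest interest - 1
      set motivation motivation - 1 ]
end
```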


Candidates have an index that includes their skills and motivation. The index can be weighted more heavily on skills or motivation (because in reality, skills mostly affect the application while motivation mostly affects the interview in which candidates meet employers and can display motivation).
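A weighted index of this kind could be written as a NetLogo reporter along these lines. The chooser name INDEX-WEIGHT comes from the interface description below; the specific 0.7/0.3 weights are illustrative assumptions, not the model's actual values.

```netlogo
; Hypothetical reporter combining skills and motivation into one index,
; weighted according to the INDEX-WEIGHT chooser. Weights are assumed.
to-report candidate-index  ; candidate reporter
  ifelse index-weight = "SKILLS"
    [ report 0.7 * skills + 0.3 * motivation ]
    [ report 0.3 * skills + 0.7 * motivation ]
end
```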


Jobs screen applications, then decide which candidates to invite to interview. The success of an application depends on the match of skills between applicant and job. Jobs look at the skills match, invite the 40% of candidates with the appropriate skills to interview, rank those applicants from highest to lowest skills match, and hire the single candidate with the highest skills match whose index exceeds the t_i_match threshold.
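The screening step could be sketched as a job-side procedure like the following. Here `applicants`, `skills-match`, and `hire` are assumed names standing in for the model's actual agentset, reporter, and procedure; only t_i_match and the 40% cutoff come from the description.

```netlogo
; Hypothetical job procedure: invite the top 40% of applicants by skills
; match, then hire the best-matching one whose index clears t_i_match.
to screen-applications  ; job procedure
  let invited max-n-of (ceiling (0.4 * count applicants)) applicants
                       [ skills-match myself ]
  let best max-one-of invited [ skills-match myself ]
  if best != nobody and [candidate-index] of best > t_i_match
    [ hire best ]
end
```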

In the first round, candidates prefer to apply to the most desirable jobs. As these fill up, however, some candidates are not hired and some jobs receive no applications. The model then runs through the code again, with candidates applying to the jobs that have not yet hired anyone, and those jobs screening the applicants who have not yet been hired. Once hired, candidates move to the patch of their employer. Candidates who have not been hired return to the home patches with lower energy.

The model runs until every job has hired an applicant, or until applicants no longer have the energy to repeat the recruitment process, having been rejected from a certain number of jobs that varies from candidate to candidate. As there are more applicants than jobs, some candidates are left without employment. The number of ticks represents the number of rounds of application and interview between candidates and jobs.
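The stopping rule just described could be expressed in the go procedure roughly as below. `hired-candidate`, `hired?`, and `run-recruitment-round` are assumed names; the two stop conditions mirror the text.

```netlogo
; Hypothetical go procedure: stop when every job has hired, or when no
; unhired candidate has energy left to try again.
to go
  if all? jobs [ hired-candidate != nobody ]
     or not any? candidates with [ energy > 0 and not hired? ]
    [ stop ]
  run-recruitment-round  ; one round of applications and interviews
  tick
end
```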

To evaluate efficiency we look at net value satisfaction of candidates and employers, and aggregate satisfaction, which represents a positive externality of the process to society as employment increases social welfare. The model is more efficient for the same aggregate satisfaction if more people are getting employed as a result. For example, a model that allocates 60 skilled candidates to jobs is more efficient on the societal scale than a model that allocates 30 skilled candidates to jobs.
The satisfaction of candidates is a function of their employment. If candidates are rejected, their satisfaction is their negative energy expenditure. The more effort they put into the process, i.e., if they attended the information session, applied, and interviewed, the stronger the pain of rejection and the lower their satisfaction. If candidates are hired, their satisfaction consists of their interest in the job and the job’s desirability.
The satisfaction of employers is a function of the caliber of their candidate, i.e., their index.
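These two satisfaction measures could be sketched as reporters like the following. `my-job`, `energy-spent`, and `hired-candidate` are assumed variable names, and the arithmetic is illustrative; only the structure (rejection costs effort, hiring combines interest and desirability, employers value the index) comes from the description.

```netlogo
; Hypothetical satisfaction reporters matching the description above.
to-report candidate-satisfaction  ; candidate reporter
  ifelse hired?
    [ report interest + [desirability] of my-job ]
    [ report 0 - energy-spent ]  ; rejection costs the effort expended
end

to-report employer-satisfaction  ; job reporter
  report [candidate-index] of hired-candidate
end
```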


Set the NUMBER-OF-CANDIDATES and NUMBER-OF-JOBS sliders to decide how many students will participate in OCR and how many positions are available. Press SETUP to create the home patches and the OCR interview space.
Choose from the INDEX-WEIGHT chooser either MOTIVATION or SKILLS to decide which factor will have a higher influence in the selection process.
You can randomize candidates’ decision to apply to a job by setting the RANDOM-JOB-SELECTION switch to on.
The SATISFACTION plot shows candidate, employer, and aggregate satisfaction.
Press GO to run the recruitment process.


Does the relative number of candidates to jobs influence the satisfaction of candidates and employers?
Many procedures are embedded in the code; this is a model for observation rather than manipulation. For instance, the code could be modified to remove information-session attendance altogether, or to manually change the weights of the different elements making up the indexes.


Change NUMBER-OF-JOBS while leaving the NUMBER-OF-CANDIDATES constant and observe how this affects aggregate satisfaction and candidates hired.
Switch RANDOM-JOB-SELECTION to on and observe how this affects aggregate satisfaction for the same number of candidates and jobs.
Run the model with different options in INDEX-WEIGHT and see how the results differ.

In reality, candidates apply to multiple jobs, and jobs hire multiple candidates, which makes the recruitment process more complex.


Most importantly, in reality candidates have far more attributes than skills, motivation, and interest. According to Career Services, the two most important attributes are skills and motivation, which is related to interest. Candidates also come from particular socio-economic backgrounds, however, which determine the number of connections they have in the industry they are applying to; connections weigh heavily in the application process, as candidates with connections can be admitted directly to an interview.

In an extension of this model, we would incorporate candidate demographics, including country of origin (because certain organizations refuse to employ foreign students if they must sponsor their visa), ethnicity and gender (because organizations have diversity programs that benefit ethnic minorities, women, and non-heterosexual candidates), connections and ability to network (because a personal tie with someone in the company trumps many of these other variables - we could use LinkedIn data here).

In this model we randomize the required skills for jobs, whereas in reality job requirements usually exceed candidates’ skills. Furthermore, requirements are a function of desirability: jobs with higher desirability are more competitive and have higher requirements.


A more robust version of this model would have the index weights, currently in the form of a chooser, as candidate strategies, to evaluate whether there is an optimal candidate strategy.

We would also incorporate a learning component, which allows candidates that were not employed in the first round of recruitment to learn from their mistakes and perform better in the second round. Furthermore, students that have been through OCR more than once are better positioned to be hired, as are students who have prepared thoroughly for their interviews. We would create different agent sets within candidates with more or less experience and the ability to learn.


Satisfaction is in part a function of motivation, which can increase or decrease throughout OCR depending on individual dispositions. Candidates who have experienced rejection can have increased motivation with an “I will do better next time” outlook, thereby performing better in a subsequent round, or feel defeated with an “I will never succeed” outlook and have decreased motivation, which will hinder their performance.

Efficiency can be calculated in different ways. We currently evaluate efficiency in absolute terms with net satisfaction, but a comparative component could be added by comparing satisfaction levels between regular OCR and randomized OCR. There could also be a social welfare component distinct from aggregate satisfaction.


An interesting feature is the skill list of candidates and jobs: it is simplified to the presence or absence of skills, rather than skill levels.


In terms of turtle strategy in a transactional setting between agents, related models to look at are El Farol and Minority Game, in which agents are at a disadvantage when other agents adopt the same behavior, similar to the application process.
