NetLogo User Community Models


## WHAT IS IT?
An abstract implementation of a local search algorithm: first-choice gradient ascent (FCGA), with several variations.

Gradient descent is a first-order optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
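
In symbols, if f is the objective and gamma > 0 is the step size, one gradient-ascent update is

    x_{n+1} = x_n + \gamma \nabla f(x_n)

while gradient descent uses a minus sign in place of the plus. The first-choice variant in this model replaces the explicit gradient step with a randomly generated neighboring solution that is kept only if it does not worsen the objective.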

Local search algorithms can be used to approximate solutions to NP-hard and NP-complete problems.

I use two examples to show how to use my abstract gradient ascent implementations.

## HOW IT WORKS

Essentially, you must define:
1) a random solution generation function
2) a solution evaluation function
3) a neighboring solution generation function
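
For a toy one-dimensional problem (maximizing f(x) = -(x - 3)^2), these three pieces could be supplied as anonymous reporters. The names below are illustrative only, not the model's actual code:

    ;; Illustrative only (these lines would live inside a setup procedure):
    let random-solution [ -> random-float 10 ]               ;; 1) random solution generator
    let evaluate [ x -> 0 - ((x - 3) ^ 2) ]                  ;; 2) objective function to maximize
    let neighbor [ x -> x + (random-float 0.2) - 0.1 ]       ;; 3) small random perturbation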

Using a form of stochastic local search called first-choice gradient ascent, the model searches the state space of Rook Jumping Mazes (RJMs), or climbs a hill landscape, for a given number of iterations, and then prints/displays the best maze or highest peak found according to the specified objective function.

Given a number of iterations from the user, first generate a random solution and evaluate it according to the objective function. Then, for the given number of iterations:

1) Move to a neighboring solution state.
2) Re-evaluate the objective function on this new solution.
3) If the objective function has not decreased (worsened), accept the change, and remember this solution if its evaluation is the best (maximum) seen so far. Otherwise, reject the change and revert to the previous solution.

Finally, print/display the solution with the best (maximum) objective function evaluation found.

    let j be chosen at random, and jbest ← j
    for the given number of iterations:
        j' ← step(j)
        if e(j') ≥ e(j):
            j ← j'
            if e(j) > e(jbest):
                jbest ← j
    return jbest
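
A minimal NetLogo sketch of this loop, assuming the three pieces above are passed in as anonymous reporters; the reporter name fcga and the variable names are illustrative, not necessarily what the model itself uses:

    ;; Sketch of first-choice gradient ascent; names are illustrative.
    to-report fcga [random-solution evaluate neighbor iterations]
      let j runresult random-solution          ;; start from a random solution
      let e-j (runresult evaluate j)
      let jbest j
      let e-best e-j
      repeat iterations [
        let j-new (runresult neighbor j)       ;; step to a neighboring solution
        let e-new (runresult evaluate j-new)
        if e-new >= e-j [                      ;; accept non-worsening moves
          set j j-new
          set e-j e-new
          if e-j > e-best [                    ;; remember the best solution seen
            set jbest j
            set e-best e-j
          ]
        ]
      ]
      report jbest
    end

With the toy pieces shown earlier, `fcga [-> random-float 10] [x -> 0 - ((x - 3) ^ 2)] [x -> x + (random-float 0.2) - 0.1] 1000` should typically report a value of x close to 3.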

## HOW TO USE IT

Adjust the sliders in the UI, then click the setup button for the problem example you want to run.

## THINGS TO NOTICE

As the number of iterations increases, the solution's evaluation tends to increase. As the number of restarts increases relative to iterations, the result looks more like the best of a set of randomly guessed states (consider the extreme case where restarts equals iterations).
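
One way to picture the restart behavior is as an outer loop around the search. This sketch reuses the hypothetical fcga reporter from above; the model's own restart code may differ:

    ;; Random-restart wrapper (illustrative): run fcga several times from
    ;; fresh random solutions and keep the best result overall.
    to-report fcga-with-restarts [random-solution evaluate neighbor iterations restarts]
      let best runresult random-solution
      repeat restarts [
        let candidate fcga random-solution evaluate neighbor iterations
        if (runresult evaluate candidate) > (runresult evaluate best) [
          set best candidate
        ]
      ]
      report best
    end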

## EXTENDING THE MODEL
Try implementing an abstract A* search.

## NETLOGO FEATURES

I use the matrix extension for the RJM example.
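
As a rough illustration of the matrix-extension primitives involved (the grid size and values here are made up, and the model's actual maze code is more involved):

    extensions [matrix]

    ;; Illustrative only: a maze grid stored with the matrix extension.
    to demo-matrix
      let maze matrix:make-constant 5 5 1   ;; 5 x 5 grid, every cell set to 1
      matrix:set maze 2 3 4                 ;; put a jump of length 4 at row 2, column 3
      show matrix:get maze 2 3              ;; shows 4
    end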

## RELATED MODELS

A* model

## CREDITS AND REFERENCES

Matthew Saponaro
