NetLogo User Community Models
## WHAT IS IT?
Gradient descent is a first-order optimization algorithm. To find a local minimum of a function using gradient descent, one takes steps proportional to the negative of the gradient (or of the approximate gradient) of the function at the current point. If instead one takes steps proportional to the positive of the gradient, one approaches a local maximum of that function; the procedure is then known as gradient ascent.
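The update rule described above can be sketched in a few lines. This is an illustrative Python example, not code from this model: it runs numeric gradient ascent on the simple concave function f(x) = -(x - 3)^2, whose maximum is at x = 3; the step size and starting point are arbitrary choices.

```python
def gradient_ascent(grad, x, alpha=0.1, steps=100):
    """Repeatedly step in the direction of the positive gradient."""
    for _ in range(steps):
        x = x + alpha * grad(x)  # step proportional to the gradient
    return x

# f(x) = -(x - 3)^2  =>  f'(x) = -2 * (x - 3)
x_max = gradient_ascent(lambda x: -2.0 * (x - 3.0), x=0.0)
print(round(x_max, 3))  # approaches the local (here global) maximum at 3
```

Stepping along the negative of the same gradient would instead descend toward a local minimum, which is the gradient descent case.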
Local search algorithms can be used to approximate solutions to NP-hard/complete problems.
I use two examples to show how to use my abstract gradient ascent implementations.
## HOW IT WORKS
Essentially, you must define a solution (state) representation, a way to move to a neighboring state, and an objective function to maximize.

Using a form of stochastic local search called first-choice gradient ascent, the model searches the state space of Rook Jumping Mazes (RJMs), or of a hill-climbing landscape, for a given number of iterations, then prints/displays the best maze (or highest peak) found according to the specified objective function.
Given a number of iterations from the user, first generate a random solution j, evaluate it, and remember it as the best so far (jbest ← j). Then, for the given number of iterations:

Move to a randomly chosen neighboring solution state.

If the objective function has not decreased (worsened), accept the change, and furthermore remember this solution as jbest if its evaluation is the best (maximum) evaluated so far. Otherwise, reject the change and revert to the previous solution.
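The loop above can be sketched as follows. This is an illustrative Python version, not the model's NetLogo code: the objective (a single peak at (5, 7) on an integer grid) and the neighbor move are made-up stand-ins for the model's maze or terrain.

```python
import random

def objective(state):
    """Illustrative landscape: one peak, at (5, 7)."""
    x, y = state
    return -((x - 5) ** 2 + (y - 7) ** 2)

def random_neighbor(state):
    """Move one step in a random compass direction."""
    x, y = state
    dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
    return (x + dx, y + dy)

def first_choice_ascent(iterations, seed=0):
    random.seed(seed)
    j = (random.randint(0, 20), random.randint(0, 20))  # random start
    current_eval = objective(j)
    jbest, best_eval = j, current_eval                  # jbest <- j
    for _ in range(iterations):
        candidate = random_neighbor(j)
        e = objective(candidate)
        if e >= current_eval:          # not worse: accept the change
            j, current_eval = candidate, e
            if e > best_eval:          # best so far: remember it
                jbest, best_eval = candidate, e
        # otherwise reject: j stays at the previous solution
    return jbest, best_eval

best_state, best_value = first_choice_ascent(1000)
print(best_state, best_value)
```

On this single-peaked landscape the accepted moves can only improve the evaluation, so with enough iterations the search settles on the peak; on rugged landscapes like the RJM state space it can instead get stuck at a local maximum, which is why restarts help.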
## HOW TO USE IT
Adjust the sliders in the UI, then click setup for the appropriate problem example.
## THINGS TO NOTICE
As the number of iterations increases, the solution's evaluation increases. As the number of restarts increases, the result looks more like the best of a set of randomly guessed states: in the extreme case restarts = iterations, each restart performs essentially no climbing, so the search reduces to random guessing.
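A small sketch makes the restarts observation concrete. This is illustrative Python, not the model's NetLogo code, and the objective and state space are made-up stand-ins: when the iteration budget is split evenly across restarts and restarts = iterations, each restart gets at most one step, so the search degenerates into keeping the best of `restarts` random guesses.

```python
import random

def evaluate(state):
    """Illustrative objective: peak at (10, 10)."""
    return -sum((s - 10) ** 2 for s in state)

def random_state():
    return tuple(random.randint(0, 20) for _ in range(2))

def restart_search(total_iterations, restarts, seed=0):
    random.seed(seed)
    per_restart = total_iterations // restarts  # budget per restart
    best, best_eval = None, float("-inf")
    for _ in range(restarts):
        j = random_state()
        e = evaluate(j)
        for _ in range(per_restart):
            # first-choice step: accept a random neighbor if not worse
            x, y = j
            dx, dy = random.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            cand = (x + dx, y + dy)
            ce = evaluate(cand)
            if ce >= e:
                j, e = cand, ce
        if e > best_eval:               # keep the best across restarts
            best, best_eval = j, e
    return best, best_eval

# restarts = iterations: one step per restart, i.e. mostly random guessing
best_state, best_eval = restart_search(1000, 1000)
print(best_state, best_eval)
```

With few restarts and many iterations per restart, the same function behaves like the plain climber; the interesting trade-off is in between, where restarts escape local maxima while still leaving time to climb.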
## EXTENDING THE MODEL
## NETLOGO FEATURES
I use the matrix extension for the RJM example.
## RELATED MODELS
## CREDITS AND REFERENCES