Local Search Algorithms and Optimization Problems
Local search methods work on complete-state formulations. They keep only a small number of nodes in memory.
Local search is useful for solving optimization problems:
o It is often easy to find a solution
o But it is hard to find the best solution
Algorithm goal: find the optimal configuration (e.g., TSP; a sketch follows below).
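For instance, here is a minimal 2-opt local-search sketch for TSP in Python; the random city coordinates, the tour representation, and the iteration budget are assumptions made for illustration, not part of the notes.

import math
import random

def tour_length(tour, cities):
    # Total length of the closed tour visiting the cities in the given order.
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt(cities, iterations=20000):
    # Any random permutation is already a valid tour (easy to find a solution) ...
    tour = list(range(len(cities)))
    random.shuffle(tour)
    best = tour_length(tour, cities)
    # ... but we only keep moves that shorten it, so the result is merely a local optimum.
    for _ in range(iterations):
        i, j = sorted(random.sample(range(len(tour)), 2))
        candidate = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse one segment (2-opt move)
        length = tour_length(candidate, cities)
        if length < best:
            tour, best = candidate, length
    return tour, best

cities = [(random.random(), random.random()) for _ in range(20)]
tour, length = two_opt(cities)
print("locally optimal tour length: %.3f" % length)

Any random permutation is already a valid tour, but because the loop keeps only improving moves it typically ends at a local optimum rather than the shortest possible tour.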
Common local search methods:
• Hill climbing
• Gradient descent
• Simulated annealing
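As a concrete illustration of the last of these, here is a minimal simulated-annealing sketch; the one-dimensional objective, the neighborhood, and the cooling schedule are illustrative assumptions.

import math
import random

def simulated_annealing(f, x0, temperature=1.0, cooling=0.995, iterations=5000):
    # Maximize f by random local moves; downhill moves are accepted with
    # probability exp(delta / temperature), which shrinks as the system cools.
    x, best = x0, x0
    for _ in range(iterations):
        candidate = x + random.uniform(-0.5, 0.5)          # random nearby state
        delta = f(candidate) - f(x)
        if delta > 0 or random.random() < math.exp(delta / temperature):
            x = candidate
        if f(x) > f(best):
            best = x
        temperature *= cooling                             # gradually cool down
    return best

bumpy = lambda x: -x * x + 3.0 * math.sin(5.0 * x)          # several local maxima
print(simulated_annealing(bumpy, x0=4.0))

Unlike plain hill climbing, it occasionally accepts downhill moves, which lets it escape shallow local maxima.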
• For some problems, the state description contains all of the information relevant to a solution; the path to the solution is unimportant.
• Examples:
o map coloring
o 8-queens
• Start with a state configuration that violates some of the constraints for being a solution, and make gradual modifications to eliminate the violations (a sketch of this repair loop follows below).
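A minimal sketch of such a repair loop for map coloring, assuming a small toy map (the familiar Australia regions) and three colors; the data and step limit are made up for the example.

import random

# Toy map: regions and which pairs share a border.
NEIGHBORS = {
    "WA": ["NT", "SA"], "NT": ["WA", "SA", "Q"], "Q": ["NT", "SA", "NSW"],
    "SA": ["WA", "NT", "Q", "NSW", "V"], "NSW": ["Q", "SA", "V"], "V": ["SA", "NSW"], "T": [],
}
COLORS = ["red", "green", "blue"]

def conflicts(region, color, assignment):
    # Number of neighbors currently sharing this color.
    return sum(assignment[n] == color for n in NEIGHBORS[region])

def repair_coloring(max_steps=1000):
    # Start from a random complete assignment that probably violates constraints ...
    assignment = {r: random.choice(COLORS) for r in NEIGHBORS}
    for _ in range(max_steps):
        conflicted = [r for r in NEIGHBORS if conflicts(r, assignment[r], assignment) > 0]
        if not conflicted:
            return assignment                  # no violations left: a valid coloring
        region = random.choice(conflicted)
        # ... and repair it by recoloring a conflicted region with its least-conflicting color.
        assignment[region] = min(COLORS, key=lambda c: conflicts(region, c, assignment))
    return None

print(repair_coloring())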
• One way to visualize iterative improvement algorithms is to imagine every possible state laid out on a landscape, with the height of each state corresponding to its goodness. Optimal solutions appear as the highest points. Iterative improvement works by moving around on the landscape, seeking out the peaks by looking only at the local vicinity.
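A minimal hill-climbing sketch of this picture, assuming a toy landscape of integer states whose goodness function is invented for illustration:

import math

def hill_climb(goodness, start, neighbors):
    # Repeatedly move to the best neighboring state; stop when no neighbor is higher.
    current = start
    while True:
        best = max(neighbors(current), key=goodness, default=current)
        if goodness(best) <= goodness(current):
            return current                     # a peak (possibly only a local maximum)
        current = best

# Toy landscape: states are the integers 0..99, heights given by a bumpy function.
goodness = lambda x: math.sin(x / 7.0) + math.cos(x / 3.0)
neighbors = lambda x: [n for n in (x - 1, x + 1) if 0 <= n <= 99]
print(hill_climb(goodness, start=50, neighbors=neighbors))

The search stops as soon as every neighbor is lower, so it may return a local peak rather than the globally best state.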
Iterative improvement
In many optimization problems, the path is irrelevant; the goal state itself is the solution. An example of such a problem is finding configurations that satisfy constraints (e.g., n-queens).
Algorithm:
– Start with a complete candidate solution
– Gradually improve it towards a good solution
Example: n-queens
Goal: Put n chess queens on an n x n board, with no two queens on the same row, column, or diagonal.
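A minimal hill-climbing sketch for this goal, representing a state as one queen per column and repeatedly moving the queen in a randomly chosen column to its least-conflicting row; the step limit and helper names are assumptions for illustration.

import random

def attacks(state):
    # Number of attacking pairs; state[c] is the row of the queen in column c,
    # so queens are already on distinct columns.
    n = len(state)
    return sum(1 for a in range(n) for b in range(a + 1, n)
               if state[a] == state[b] or abs(state[a] - state[b]) == b - a)

def n_queens(n=8, max_steps=10000):
    state = [random.randrange(n) for _ in range(n)]        # random complete configuration
    for _ in range(max_steps):
        if attacks(state) == 0:
            return state                                   # goal: no two queens attack
        col = random.randrange(n)
        # Move this column's queen to the row that minimizes the remaining conflicts.
        state[col] = min(range(n),
                         key=lambda row: attacks(state[:col] + [row] + state[col + 1:]))
    return None

print(n_queens())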