Notes - Artificial Intelligence MT24, Local search


Flashcards

What is the main way that an optimisation problem differs from a general search problem?


An optimisation problem doesn’t require a solution path, just a (goal) state to be found.

@Define the components of an optimisation problem.


  • A (usually large) state space $S$
  • A goal test
  • An objective function $f(n)$

Then the goal is:

\[\begin{aligned} &\max_n f(n) \\ &\text{ s.t. }n \in S, \text{ and } n \text{ is a goal state} \end{aligned}\]

What is the main idea behind local search?


  • Start from some state
  • Repeatedly move to neighbour states with better objective value

@Describe a basic hill-climbing algorithm.


function HillClimbing(problem) returns (local maximum):
	current = problem.InitialState

	while True:
		neighbour = highest-valued successor of current

		if neighbour.Value <= current.Value:
			return current

		current = neighbour
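The pseudocode above can be sketched as runnable Python; the toy objective and successor function here are illustrative assumptions, not part of the notes:

```python
# Hill climbing on a toy problem: maximise f(x) = -(x - 3)^2 over the integers.
# The objective f and successor function step are illustrative assumptions.

def hill_climbing(initial, objective, neighbours):
    current = initial
    while True:
        # Highest-valued successor of the current state
        best = max(neighbours(current), key=objective)
        if objective(best) <= objective(current):
            return current  # local maximum (or edge of a plateau)
        current = best

f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]

print(hill_climbing(0, f, step))  # climbs 0 -> 1 -> 2 -> 3
```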

What successor function could be used for local search on the TSP?


Remove two edges of the tour and reconnect their endpoints the other way round (a 2-opt move), which reverses the segment of the tour between them.
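Such a 2-opt-style successor can be sketched as follows; the list-of-cities tour representation is an assumption for illustration:

```python
# 2-opt successor for the TSP: remove the edges entering position i and
# leaving position j, then reconnect by reversing the segment tour[i..j].
# Illustrative sketch; the tour is a list of city indices.

def two_opt_successor(tour, i, j):
    # Reverse the segment between positions i and j (inclusive).
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

tour = [0, 1, 2, 3, 4]
print(two_opt_successor(tour, 1, 3))  # [0, 3, 2, 1, 4]
```

Enumerating this move over all pairs `i < j` gives the full neighbourhood of a tour.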

@Define a plateau in the context of optimisation.


A flat region of the loss landscape, where neighbouring states all have (roughly) the same value, so there is no uphill direction to follow.

@Define a ridge in the context of optimisation.


A region of the loss landscape where the loss decreases in only a single direction, while every other direction appears to increase it.

@Define first choice hill climbing.


Randomly generate successors until one better than the current state is found.

@Define stochastic hill-climbing.


Pick randomly from among the uphill moves, possibly with probability proportional to the steepness of the ascent.
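A minimal sketch, weighting uphill moves by their improvement (the objective and successor function are illustrative assumptions):

```python
import random

# Stochastic hill climbing: choose among the uphill moves with probability
# proportional to the improvement in the objective. Illustrative sketch.

def stochastic_hill_climbing(initial, objective, neighbours, rng=random):
    current = initial
    while True:
        uphill = [n for n in neighbours(current)
                  if objective(n) > objective(current)]
        if not uphill:
            return current  # no uphill move: local maximum
        weights = [objective(n) - objective(current) for n in uphill]
        current = rng.choices(uphill, weights=weights)[0]

f = lambda x: -(x - 3) ** 2
step = lambda x: [x - 1, x + 1]
print(stochastic_hill_climbing(0, f, step))  # reaches 3
```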

Describe the local-beam search @algorithm, and highlight why this is different to searching in parallel.


  • Start with $k$ states in the frontier
  • Repeatedly:
    • Generate successors of all $k$ states
    • If any successor is the goal, halt
    • Otherwise, select the best $k$ successors to update the frontier

This is different from running $k$ searches in parallel, since the $k$ chosen successors don’t have to come one from each of the $k$ predecessors independently — the beam can concentrate on the most promising states.
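The steps above can be sketched as runnable Python (the toy objective, successor function, and goal test are illustrative assumptions):

```python
# Local beam search with beam width k: pool the successors of all beam
# states and keep the k best overall. Illustrative sketch.

def local_beam_search(starts, objective, neighbours, is_goal, max_steps=100):
    beam = list(starts)
    for _ in range(max_steps):
        # Generate successors of all states in the beam
        successors = [n for s in beam for n in neighbours(s)]
        for s in successors:
            if is_goal(s):
                return s
        # The best k successors may all descend from one beam state --
        # unlike k independent parallel searches.
        beam = sorted(successors, key=objective, reverse=True)[:len(beam)]
    return max(beam, key=objective)

f = lambda x: -(x - 7) ** 2
step = lambda x: [x - 1, x + 1]
print(local_beam_search([0, 20], f, step, lambda x: x == 7))  # finds 7
```

Note how the start at 20 is quickly dropped from the beam: both slots are spent near the more promising region.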

@Define stochastic local beam search.


Like local beam search, but pick the $k$ successors randomly, with probability depending on their objective value.

Describe the technique of random restarts in hill climbing.


Conduct a series of hill-climbing searches from randomly generated initial states, keeping the best result found.
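A minimal sketch, assuming a toy objective with two peaks so that a single climb can get stuck (the objective, state range, and restart count are illustrative assumptions):

```python
import random

# Random-restart hill climbing: run hill climbing from several random
# initial states and keep the best local maximum found. Illustrative sketch.

def hill_climb(x, f):
    while True:
        best = max([x - 1, x + 1], key=f)
        if f(best) <= f(x):
            return x  # local maximum
        x = best

def random_restart(f, restarts, rng):
    results = [hill_climb(rng.randint(-20, 20), f) for _ in range(restarts)]
    return max(results, key=f)

# Two peaks: a local maximum at x = -10 (value 5), the global one at x = 10 (value 9).
f = lambda x: max(5 - abs(x + 10), 9 - abs(x - 10))
print(random_restart(f, 10, random.Random(0)))
```

A single climb starting left of the valley ends at the inferior peak at $-10$; restarts make finding the global maximum at $10$ very likely.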

Describe the simulated annealing @algorithm.


  • When finding the successor of a state, pick a random move
  • If the random move improves the objective, accept it
  • Otherwise, accept the move with some probability dependent on the badness of the move and the “temperature schedule”.
function SimulatedAnnealing(problem, N) returns (a state):
	current = problem.InitialState

	for t = 1 to N:
		T = temperatureSchedule(t)
		next = select random successor of current
		delta = next.Value - current.Value

		if delta > 0:
			current = next
		else:
			current = next with probability e^(delta / T)

	return current
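As a runnable sketch of the pseudocode (the toy objective, successor function, and the exponential cooling schedule are illustrative assumptions):

```python
import math
import random

# Simulated annealing on a toy problem: maximise f(x) = -(x - 5)^2.
# Objective, successors, and cooling schedule are illustrative assumptions.

def simulated_annealing(initial, objective, neighbours, schedule, steps, rng):
    current = initial
    for t in range(1, steps + 1):
        T = schedule(t)
        nxt = rng.choice(neighbours(current))
        delta = objective(nxt) - objective(current)
        # Always accept improvements; accept bad moves with probability e^(delta/T),
        # which shrinks as the temperature T falls.
        if delta > 0 or rng.random() < math.exp(delta / T):
            current = nxt
    return current

f = lambda x: -(x - 5) ** 2
step = lambda x: [x - 1, x + 1]
cooling = lambda t: 20 * 0.95 ** t  # exponential temperature schedule
print(simulated_annealing(0, f, step, cooling, steps=200, rng=random.Random(0)))
```

Early on (high $T$) the walk accepts many bad moves and explores; as $T$ decays it behaves like greedy hill climbing and settles near the maximum.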


