AIMA - Solving Problems by Searching


In which we see how an agent can find a sequence of actions that achieves goals when no single action will do.

If you imagine a vacuuming robot whose performance is measured by the cleanliness of the floor at each time step, it’s easy to work out what to do at any given moment. If the floor is dirty, you suck up the dirt; if the floor is clean, you move to a new square and repeat the process. Each immediate action has a value you can determine, since it’s simple to see how each choice will impact the performance measure.

But what do you do when you have several options for action whose value is unknown? How do you decide what steps to take? An example would be trying to drive from one town to another: there are many different paths you can take, and it’s not clear which one will put you in the best position.

Searching is the solution to this – exploring different combinations of actions in order to find the one that works best (or to find any solution at all in a reasonable amount of time).

There are two main types of search:

  • Uninformed searches, which have no knowledge other than the description of the problem.
  • Informed searches, which can make use of specialised information (heuristics) to better choose what to search first.

Flashcards

2021-04-05

What is goal formulation?


Finding a goal that will help an agent maximise its performance measure.

What is problem formulation?


The process of deciding what states and actions should be considered in order to reach a goal.

In the context of problem solving, what is a search?


The process of looking for a sequence of actions that reaches the goal.

In the context of problem solving, what is the input to a search algorithm?


A problem.

In the context of problem solving, what is the output of a search algorithm?


An action sequence.

What 5 things do you need to define a problem formally?


  1. Initial state
  2. Possible actions
  3. A transition model
  4. Goal test
  5. Path cost
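
As a concrete illustration, the five components can be sketched as a Python class for the two-square vacuum world from the introduction. This is a hypothetical sketch (the class and method names are made up, not from the book’s code):

```python
class VacuumProblem:
    """Two-square vacuum world: state = (robot position, dirt at A, dirt at B)."""

    def initial_state(self):
        return ("A", True, True)            # 1. initial state: robot at A, both squares dirty

    def actions(self, state):
        return ["Left", "Right", "Suck"]    # 2. actions applicable in every state

    def result(self, state, action):
        # 3. transition model: what each action does in each state
        pos, dirt_a, dirt_b = state
        if action == "Left":
            return ("A", dirt_a, dirt_b)
        if action == "Right":
            return ("B", dirt_a, dirt_b)
        if action == "Suck":
            if pos == "A":
                return (pos, False, dirt_b)
            return (pos, dirt_a, False)

    def goal_test(self, state):
        return not state[1] and not state[2]  # 4. goal test: no dirt anywhere

    def step_cost(self, state, action, result):
        return 1                              # 5. path cost, built from step costs c(s, a, s')
```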

What does it mean for an action to be applicable in $s$?


The action can be carried out in the state $s$.

What is a transition model?


A description of what an action in a certain state will do.

What is a successor state?


Any state that is reachable from a given state by an applicable action.

What is state space?


The set of all states reachable from the initial state.

What are the nodes and edges in a state space graph?


  • Nodes: States
  • Edges: Actions

What is a path in state space?


A sequence of states connected by a sequence of actions, leading from one state to another.

When formulating a problem, what is the goal test?


Something that determines whether a given state achieves the goal.

What is the path cost in state space?


A function that gives a numerical cost to each path.

What is the step cost in state space?


The cost of taking a single action.

\[c(s, a, s')\]

Can you explain this notation?


It’s the step cost of taking action $a$ in state $s$ to reach state $s'$.

In the context of problem solving and state space, what is the optimal solution?


The solution with the lowest path cost.

What is a search tree?


The tree formed as you look for possible solutions to a problem.

What is at the root of a search tree?


The initial state.

What is along each branch of a search tree?


An action.

What does each node of a search tree correspond to?


A state.

In building a search tree, what is “expanding”?


Generating all the next possible states by applying all possible actions to the current state.

What are all the leaf nodes available for expansion in a search tree called?


The frontier.

What is the frontier in a search tree?


All the leaf nodes available for expansion.

What distinguishes different search algorithms?


The search strategy.

What is the search strategy of a search algorithm?


How it picks what node it will expand next.

Why can the search tree for a finite set of states sometimes be infinite?


Because there can be loops in the paths.

What causes loopy and redundant paths?


Repeated states.

Finish the quote: “algorithms that forget…”?


“algorithms that forget their history are doomed to repeat it”

Why is a graph search better than a tree search in problem solving?


A graph search uses an explored set to eliminate redundant paths.

What is the ideal data structure for the frontier of a searching algorithm?


A queue.

What 4 ways can you measure problem-solving performance?


  • Completeness
  • Optimality
  • Time complexity
  • Space complexity

2021-04-06

What does it mean for an algorithm to be complete?


It is guaranteed to find a solution when one exists.

What does $b$ represent when discussing the complexity of a searching algorithm?


The branching factor, the maximum number of successors of a particular node.

What does $d$ represent when discussing the complexity of a searching algorithm?


The depth of the shallowest goal node.

What does $m$ represent when discussing the complexity of a searching algorithm?


The maximum length of any path in the state space.

How is time complexity measured for a searching algorithm?


The number of nodes generated during the search.

How is space complexity measured for a searching algorithm?


The number of nodes stored in memory.

What kind of world representation do searching methods use for solving problems?


An atomic representation.

What is an uninformed search?


A search whose strategy uses no additional information other than the problem definition.

How does breadth-first search choose which node to expand next?


It picks the shallowest unexpanded node.

What data structure for unexpanded nodes can be used for breadth-first search?


A FIFO queue.
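
A minimal sketch of breadth-first graph search with a FIFO frontier and an explored set, on a made-up adjacency-list graph (the graph and function names are assumptions for illustration):

```python
from collections import deque

def breadth_first_search(graph, start, goal):
    """Return a list of nodes from start to goal, or None if unreachable."""
    if start == goal:
        return [start]
    frontier = deque([[start]])      # FIFO queue of paths
    explored = {start}               # explored set eliminates redundant paths
    while frontier:
        path = frontier.popleft()    # shallowest unexpanded node first
        for neighbour in graph[path[-1]]:
            if neighbour not in explored:
                if neighbour == goal:
                    return path + [neighbour]
                explored.add(neighbour)
                frontier.append(path + [neighbour])
    return None
```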

Why is breadth-first search not optimal despite always finding the path to the shallowest goal node?


Because a deeper path may have a lower path cost.

In what common scenario is breadth-first search optimal?


When all actions have the same step cost.

In terms of the branching factor $b$ and the shallowest goal node depth $d$, what is the time complexity of breadth-first search?


\[O(b^d)\]

For exponential complexity in algorithms, what is usually the limiting factor: time or space?


Space.

Instead of expanding the shallowest node, what small modification could you make to breadth-first search in order to make it optimal?


Expanding the node with the lowest path cost.

What is the notation for the path cost of some node?


\[g(n)\]

When talking about paths through state space, what does $C$ represent?


The path cost.

When talking about paths through state space, what does $C^{*}$ represent?


The optimal path cost.

In what order does uniform-cost search expand nodes?


In the order of smallest path cost.

When is uniform-cost search complete?


When the cost of every potential action exceeds some small positive constant $\epsilon$.
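
The expansion order can be sketched with a priority queue keyed on the path cost $g(n)$, assuming a weighted graph of the form `{node: [(neighbour, step_cost), ...]}` (a hypothetical shape for illustration):

```python
import heapq

def uniform_cost_search(graph, start, goal):
    """Return (path cost, path) for the cheapest path, or None if unreachable."""
    frontier = [(0, start, [start])]   # priority queue ordered by g(n)
    best_g = {start: 0}
    while frontier:
        g, node, path = heapq.heappop(frontier)  # lowest path cost first
        if node == goal:               # goal test on expansion, not generation
            return g, path
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier, (new_g, neighbour, path + [neighbour]))
    return None
```

Note that, unlike breadth-first search, the goal test happens when a node is expanded rather than when it is generated, so a cheaper path to the goal can still be found later.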

How does depth-first search choose which node to expand next?


It picks the deepest unexpanded node.

What data structure for unexpanded nodes can be used for depth-first search?


A LIFO queue.
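
Depth-first graph search can be sketched the same way, swapping the FIFO queue for a plain Python list used as a LIFO stack (again a made-up example graph):

```python
def depth_first_search(graph, start, goal):
    """Return some path from start to goal, or None if unreachable."""
    frontier = [[start]]            # a list used as a LIFO stack
    explored = set()
    while frontier:
        path = frontier.pop()       # deepest unexpanded node first
        node = path[-1]
        if node == goal:
            return path
        if node in explored:
            continue
        explored.add(node)
        for neighbour in graph[node]:
            if neighbour not in explored:
                frontier.append(path + [neighbour])
    return None
```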

In terms of the branching factor $b$ and the maximum depth of any node $m$, what is the worst-case time complexity of depth-first search?


\[O(b^m)\]

In terms of the branching factor $b$ and the maximum depth of any node $m$, what is the space complexity of depth-first search for a graph?


\[O(b^m)\]

In terms of the branching factor $b$ and the maximum depth of any node $m$, what is the space complexity of depth-first search for a tree?


\[O(bm)\]

What advantage does depth-first search have over breadth-first search?


Depth-first search doesn’t use an exponential amount of memory for a tree search.

Why is depth-first search incomplete for infinite state spaces?


Because it can keep following an infinite path forever.

How can you overcome the limitation of depth-first search in infinite state spaces?


Using a depth-limited search.

What does $\mathcal{L}$ represent in depth-limited search?


The depth limit.

When is a depth-limited search incomplete?


When $\mathcal{L} < d$.

A depth-first search is just a depth-limited search where $\mathcal{L} = \infty$.

What is the diameter of a graph?


The maximum number of steps required to get from one node to any other.

What property of a state space (a graph) could you use as $\mathcal{L}$ for a depth-limited search?


The diameter of the graph.

What is an iterative deepening search?


Where you run repeated depth-limited searches, increasing the limit each time, until you find a goal.
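
A sketch of iterative deepening as repeated depth-limited searches; the recursive helper and the `max_limit` cap are assumptions for illustration:

```python
def depth_limited_search(graph, node, goal, limit, path=None):
    """Depth-first search that stops recursing at the given depth limit."""
    path = path or [node]
    if node == goal:
        return path
    if limit == 0:
        return None
    for neighbour in graph[node]:
        if neighbour not in path:            # avoid loopy paths
            found = depth_limited_search(graph, neighbour, goal, limit - 1,
                                         path + [neighbour])
            if found:
                return found
    return None

def iterative_deepening_search(graph, start, goal, max_limit=50):
    for limit in range(max_limit + 1):       # L = 0, 1, 2, ...
        result = depth_limited_search(graph, start, goal, limit)
        if result:
            return result
    return None
```

Because each iteration restarts from the root, it finds the shallowest goal like breadth-first search but only stores the current path.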

What is the advantage of iterative deepening over a breadth-first search?


You get completeness with less memory usage.

What is the best search algorithm to use when the search space is large and the depth of the goal is not known?


Iterative deepening.

What is bidirectional search?


Where you run two different searches, one going forwards from the start and one going backwards from the goal.

In terms of the branching factor $b$ and the depth of the goal $d$, what is the time complexity for a bidirectional search?


\[O(b^{\frac{d}{2}})\]

What is the requirement for a bidirectional search to be applicable?


You have to know how to search backwards.

What is an informed search?


A search whose strategy uses additional information beyond the problem definition.

What is the notation for an evaluation function of a node in an informed search?


\[f(n)\]

What is the notation for a heuristic function that evaluates a node?


\[h(n)\]

How does a greedy best-first search choose which node to expand next?


It picks the one with the lowest $h(n)$.

What is a heuristic function?


An estimate of the cost of the cheapest path from a node to a goal state.

What’s an example of a heuristic function for route finding?


The Euclidean distance from one point to another.

What is the evaluation function $f(n)$ for a greedy best-first search?


\[f(n) = h(n)\]

What is the evaluation function $f(n)$ for $A^{*}$ search?


\[f(n) = g(n) + h(n)\]

How does $A^{*}$ choose which node to expand next?


It picks the one with the lowest combination of path cost and the heuristic.
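
A sketch of $A^{*}$ graph search, ordering the frontier by $f(n) = g(n) + h(n)$; the graph shape `{node: [(neighbour, step_cost), ...]}` and the heuristic-as-dictionary are assumptions for illustration:

```python
import heapq

def a_star_search(graph, start, goal, h):
    """Return (path cost, path) for the cheapest path, or None if unreachable."""
    frontier = [(h[start], 0, start, [start])]  # ordered by f(n) = g(n) + h(n)
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return g, path
        for neighbour, cost in graph[node]:
            new_g = g + cost
            if new_g < best_g.get(neighbour, float("inf")):
                best_g[neighbour] = new_g
                heapq.heappush(frontier,
                               (new_g + h[neighbour], new_g, neighbour,
                                path + [neighbour]))
    return None
```

Setting every heuristic value to zero makes this behave like uniform-cost search, which is the similarity the next card points at.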

How are uniform-cost searches and $A^{*}$ searches similar?


Both order expansions by an evaluation function: $A^{*}$ uses $g(n) + h(n)$, while uniform-cost search uses just $g(n)$.

What does it mean for a heuristic $h(n)$ to be admissible?


It never overestimates the cost to reach the goal.

What does it mean for a heuristic $h(n)$ to be consistent?


Its estimate at a node $n$ is always less than or equal to the step cost of reaching a successor $n'$ plus the successor's estimate $h(n')$.

If $h(n)$ is a heuristic function and $c(s, a, s')$ is the step cost of reaching state $s'$ by taking action $a$ in state $s$, how could you write the condition that $h(n)$ is consistent?


\[h(n) \le c(s, a, s') + h(n')\]
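
This condition can be checked edge by edge; a small sketch, assuming a weighted graph of the form `{node: [(successor, step_cost), ...]}` and a heuristic given as a dictionary (both hypothetical shapes):

```python
def is_consistent(graph, h):
    """True if h(n) <= c(n, a, n') + h(n') holds on every edge."""
    return all(h[node] <= cost + h[successor]
               for node, edges in graph.items()
               for successor, cost in edges)
```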

How could you think about heuristic consistency as a triangle inequality?


  • $h(n)$ is one side of the triangle
  • $h(n')$ is a second side
  • $c(s, a, s')$ is the third side

It’s impossible to form a triangle when $h(n)$ is not consistent, because one side would be longer than the other two combined.

When is $A^{*}$ optimal for a tree-search?


If the heuristic $h(n)$ is admissible.

When is $A^{*}$ optimal for a graph-search?


If the heuristic $h(n)$ is consistent.

What is pruning?


Removing possibilities from consideration without having to examine them.

What is the absolute error $\Delta$ of a heuristic $h$ with actual cost $h^{*}$?


\[\Delta = h^{*} - h\]

What is the relative error $\epsilon$ of a heuristic $h$ with actual cost $h^{*}$?


\[\epsilon = \frac{h^{*} - h}{h^{*}}\]

What is the main drawback of $A^{*}$?


It uses memory exponentially.

What does $IDA^{*}$ mean?


Iterative-deepening $A^{*}$.

Instead of using a depth limit $\mathcal{L}$ for iterative deepening, what does $IDA^{*}$ use?


An $f(n)$ cost cutoff.

Why would you use $IDA^{*}$ over $A^{*}$?


Because you want space to be bounded.

What does $RBFS$ mean?


Recursive best-first search.

How does $RBFS$ work?


It recursively looks at the most promising nodes while remembering the second-best alternative.

How do $MA^{*}$ and $SMA^{*}$ modify $A^{*}$ to bound memory usage?


Once memory limits have been reached, they “forget” the worst alternatives.

Why can memory limitations make a problem intractable from the view of computation time?


Because you can’t reuse information and have to keep regenerating it.

What is one way you can measure the performance of a heuristic?


The effective branching factor $b^{*}$.

What is the effective branching factor $b^{*}$ of a heuristic?


The average number of successors generated by a typical node during the search.

What should a good heuristic do to the effective branching factor?


Decrease it as much as possible.

Given a group of consistent heuristics, why should you choose the one with the largest estimates?


Because it provides a more accurate estimate of the actual path cost, so the search expands fewer nodes.

What is a relaxed problem?


A problem with fewer restrictions on actions.

What is true about the state-space graph of a relaxed problem?


It is a supergraph of the actual problem.

Why are relaxed problems useful?


They let you come up with useful heuristics.

Given consistent heuristics $h_1, h_2, \ldots, h_m$, how could you combine them into the best heuristic function?


\[h(n) = \max\{h_1(n), h_2(n), \ldots, h_m(n)\}\]
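
As a sketch, the pointwise maximum can be built as a higher-order function (the name `combined_heuristic` is made up for illustration):

```python
def combined_heuristic(heuristics):
    """Combine heuristic functions by taking their pointwise maximum."""
    return lambda n: max(h(n) for h in heuristics)
```

The maximum of consistent heuristics is itself consistent, and it dominates each individual heuristic.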

What are pattern databases?


Precomputed solutions to subproblems that can be used as a heuristic.



