Search
Search Problems
A search problem consists of:
- a state space, e.g. an (x, y) location
- a successor function (with actions and costs), e.g. update the location
- a start state and a goal test, e.g. (x, y) == END
A solution is a sequence of actions (a plan) which transforms the start state to a goal state.
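As a concrete illustration, here is a minimal sketch of those three pieces for the (x, y) grid example; the class and method names (`GridSearchProblem`, `get_successors`, and so on) are hypothetical, not from any particular library.

```python
class GridSearchProblem:
    """Hypothetical (x, y) grid world: states are cells, actions are unit moves."""

    def __init__(self, start, end, width, height):
        self.start = start                   # start state
        self.end = end                       # used by the goal test
        self.width, self.height = width, height

    def get_start_state(self):
        return self.start

    def is_goal(self, state):
        return state == self.end             # goal test: (x, y) == END

    def get_successors(self, state):
        # Successor function: yields (action, next_state, step_cost) triples.
        x, y = state
        for action, (dx, dy) in {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}.items():
            nx, ny = x + dx, y + dy
            if 0 <= nx < self.width and 0 <= ny < self.height:
                yield action, (nx, ny), 1
```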
State Space Graphs
Nodes are states, arcs are successors; each state appears only once.
Search Trees
- Start state: root node
- Successors: children
- Nodes: states
For most problems, we can never actually build the whole tree.
General Tree Search
```
procedure Tree-Search(problem, strategy):
    initialize the search tree using the initial state of problem
    loop do
        if there are no candidates for expansion:
            return failure
        choose a leaf node for expansion according to strategy
        if the node contains a goal state:
            return the corresponding solution
        else:
            expand the node and add its children to the search tree
    end
```
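A Python sketch of the same loop, assuming the hypothetical `GridSearchProblem` interface sketched earlier; the strategy lives entirely in how `choose` picks a node from the fringe.

```python
def tree_search(problem, choose):
    # Each fringe entry is (state, plan), where plan is the action sequence so far.
    fringe = [(problem.get_start_state(), [])]
    while fringe:                                    # no candidates left -> failure
        state, plan = choose(fringe)                 # remove one node per the strategy
        if problem.is_goal(state):
            return plan                              # the corresponding solution
        for action, successor, _cost in problem.get_successors(state):
            fringe.append((successor, plan + [action]))
    return None

# DFS expands the most recently added node, BFS the oldest one.
# Note: pure tree search can revisit states, so it may not terminate on graphs
# with cycles; the graph-search snippets below track discovered/visited states.
dfs_plan = lambda problem: tree_search(problem, lambda fringe: fringe.pop())
bfs_plan = lambda problem: tree_search(problem, lambda fringe: fringe.pop(0))
```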
DFS
Fringe is a LIFO stack.
```python
def dfs(G, v):
    S = [v]                       # let S be a stack, seeded with the start node
    discovered = set()
    while S:
        v = S.pop()
        if v not in discovered:   # skip nodes already labeled as discovered
            discovered.add(v)
            for w in G.adjacentEdges(v):
                S.append(w)
    return discovered
```
Branching factor: $b$, maximum depth of the tree: $m$
- Time complexity: $O(b^m)$ (may have to process the whole tree)
- Space complexity: $O(bm)$ (space is good: only the current path and its unexpanded siblings)
- not optimal (returns the leftmost solution found, regardless of cost)
BFS
Fringe is a FIFO queue.
```python
from queue import Queue

frontier = Queue()                # the fringe: a FIFO queue
frontier.put(start)
visited = {}
visited[start] = True

while not frontier.empty():
    current = frontier.get()
    for next in graph.neighbors(current):
        if next not in visited:
            frontier.put(next)
            visited[next] = True
```
Branching factor: $b$, depth of the shallowest solution: $s$
- Time complexity: $O(b^s)$ (time is good: only processes nodes above the shallowest solution)
- Space complexity: $O(b^s)$ (the fringe holds roughly the last full tier)
- optimal only if all step costs are 1 (it finds the shallowest solution, not necessarily the cheapest)
Iterative Deepening
DFS space + BFS time: run DFS with increasing depth limits (1, 2, 3, ...) until a solution is found (see the sketch below).
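A minimal iterative-deepening sketch, assuming the same `graph.neighbors` interface as the BFS snippet above; `max_depth` is just an arbitrary illustrative cutoff.

```python
def depth_limited_dfs(graph, node, goal, limit):
    # Plain DFS, but refuses to go deeper than `limit` levels.
    if node == goal:
        return [node]
    if limit == 0:
        return None
    for child in graph.neighbors(node):
        path = depth_limited_dfs(graph, child, goal, limit - 1)
        if path is not None:
            return [node] + path
    return None


def iterative_deepening(graph, start, goal, max_depth=50):
    # Re-run depth-limited DFS with limits 0, 1, 2, ...:
    # DFS-like space, BFS-like time and shallowest-solution behavior.
    for limit in range(max_depth + 1):
        path = depth_limited_dfs(graph, start, goal, limit)
        if path is not None:
            return path
    return None
```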
Uniform Cost Search
Fringe is a priority queue ordered by cumulative path cost $g(n)$ (sketch below).
If the cheapest solution costs $C^*$ and every arc costs at least $\epsilon$, the effective depth is roughly $C^*/\epsilon$.
- Time complexity: $O(b^{C^*/ \epsilon})$
- Space complexity: $O(b^{C^* / \epsilon})$
- processes all nodes whose cost is less than that of the cheapest solution
- optimal (expands nodes in order of increasing path cost)
Disadvantages:
- explores in every direction
- uses no information about where the goal is
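A minimal UCS sketch under the same assumed `graph.neighbors` / `graph.cost` interface as the other snippets; it is essentially the A* loop below with the heuristic term dropped. (Tie-breaking in the heap assumes states are comparable, e.g. (x, y) tuples.)

```python
import heapq

def uniform_cost_search(graph, start, goal):
    frontier = [(0, start)]            # min-heap of (cumulative cost g(n), state)
    cost_so_far = {start: 0}
    came_from = {start: None}
    while frontier:
        cost, current = heapq.heappop(frontier)
        if current == goal:
            return came_from, cost
        if cost > cost_so_far[current]:
            continue                   # stale entry: a cheaper path was found later
        for nxt in graph.neighbors(current):
            new_cost = cost + graph.cost(current, nxt)
            if nxt not in cost_so_far or new_cost < cost_so_far[nxt]:
                cost_so_far[nxt] = new_cost
                came_from[nxt] = current
                heapq.heappush(frontier, (new_cost, nxt))
    return None, float("inf")
```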
Search Heuristics
A function that estimates how close a state is to a goal.
e.g. Manhattan distance, Euclidean distance
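For (x, y) states like the grid example above, those two heuristics are just:

```python
def manhattan(a, b):
    # Sum of axis-aligned distances between (x, y) points a and b.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def euclidean(a, b):
    # Straight-line distance between (x, y) points a and b.
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5
```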
Greedy Search
- Strategy: expand the node that appears closest to a goal state, i.e. the one with the lowest heuristic value (sketch below)
- Worst case: behaves like a badly-guided DFS
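A minimal greedy best-first sketch under the same assumed `graph.neighbors` interface: the fringe is ordered by $h(n)$ alone, ignoring path cost, which is why it is not optimal.

```python
import heapq

def greedy_search(graph, start, goal, heuristic):
    frontier = [(heuristic(goal, start), start)]   # ordered by h(n) only
    came_from = {start: None}
    while frontier:
        _, current = heapq.heappop(frontier)
        if current == goal:
            return came_from
        for nxt in graph.neighbors(current):
            if nxt not in came_from:
                came_from[nxt] = current
                heapq.heappush(frontier, (heuristic(goal, nxt), nxt))
    return None
```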
A* Search
- Strategy: order the fringe by $f(n) = g(n) + h(n)$, the path cost so far plus the heuristic estimate to the goal
- optimal with an admissible heuristic: $0 \leq h(n) \leq$ the true cheapest cost from $n$ to the goal
```python
import heapq

class PriorityQueue:
    """Minimal heapq wrapper: put() takes (item, priority), get() returns the lowest-priority item."""
    def __init__(self):
        self.elements = []
    def empty(self):
        return not self.elements
    def put(self, item, priority):
        heapq.heappush(self.elements, (priority, item))
    def get(self):
        return heapq.heappop(self.elements)[1]

frontier = PriorityQueue()
frontier.put(start, 0)
came_from = {}
cost_so_far = {}
came_from[start] = None
cost_so_far[start] = 0

while not frontier.empty():
    current = frontier.get()

    if current == goal:
        break

    for next in graph.neighbors(current):
        new_cost = cost_so_far[current] + graph.cost(current, next)
        if next not in cost_so_far or new_cost < cost_so_far[next]:
            cost_so_far[next] = new_cost
            priority = new_cost + heuristic(goal, next)   # f(n) = g(n) + h(n)
            frontier.put(next, priority)
            came_from[next] = current
```
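`came_from` only stores parent pointers; a common follow-up (not part of the original snippet) reconstructs the actual plan by walking them backwards:

```python
def reconstruct_path(came_from, start, goal):
    if goal not in came_from:
        return None                 # the goal was never reached
    path = [goal]
    while path[-1] != start:
        path.append(came_from[path[-1]])
    path.reverse()
    return path
```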
A* Graph Search
Optimal with a consistent heuristic: $h(n) \leq c(n, n') + h(n')$ for every arc $(n, n')$, i.e. the heuristic never drops by more than the arc cost along any edge (consistency implies admissibility). The `cost_so_far` check in the snippet above already gives graph-search behavior: a state is re-expanded only if a cheaper path to it is found.