Uninformed Search
25 Jan 2022
§ Motivation
Many problems in AI --- planning, reasoning, learning, etc. --- can be described as a search through a state space.
Discrete State
A discrete state is a representation of the world; discrete states are connected to one another through possible actions.
Problems in AI can then be described as finding a path from a current state to a goal state; the connections along the path describe the sequence of actions required to transition from the current state to the goal state.
Navigation as Search
Oradea
*
/ \
/ \ * Neamt
/ \ \
Zerind * \ \
/ \ \
/ \ \
/ \ Sibiu Fagaras * Iasi
Arad *---------------*---------------* \
| | \ \
| | \ \
| | \ \
| * Rimnicu Vilcea \ * Vaslui [start]
* Timisoara |\ \ /
\ | \ \ /
\ | \ \ /
\ | \ \ /
\ | \ Pitesti \ /
Lugoj [goal] * | *-------------------*-----------*-------------* Hirsova
| | / Bucharest | Urziceni \
| | / | \
Mehadia * | / | \
| | / Giurgiu * Eforie *
| |/
Dobreta *----------*
Craiova
- Finding a path from a starting location to a destination.
- In the above, from Vaslui to Lugoj (edges represent connections, the weights --- or distances --- for each edge are omitted).
- Emphasis on discrete moves (discrete waypoints such as: city to city, corner to corner).
"Puzzles as Search"
Given the following puzzle, we wish to use search to find a solution.
The actions in the game are: sliding a tile up/down/left/right into an empty space.
Initial
.---+---+---.
| 7 | 2 | 4 |
+---+---+---+
| 5 | | 6 |
+---+---+---+
| 8 | 3 | 1 |
'---+---+---'
|
.------------+------+------+------------.
| | | |
. . . .
2 down 6 left 5 right 3 up
. . . .
| | | |
v v v v
. . . .
.---+---+---. .---+---+---. .---+---+---. .---+---+---.
| 7 | | 4 | | 7 | 2 | 4 | | 7 | 2 | 4 | | 7 | 2 | 4 |
+---+---+---+ +---+---+---+ +---+---+---+ +---+---+---+
| 5 | 2 | 6 | | 5 | 6 | | | | 5 | 6 | | 5 | 3 | 6 |
+---+---+---+ +---+---+---+ +---+---+---+ +---+---+---+
| 8 | 3 | 1 | | 8 | 3 | 1 | | 8 | 3 | 1 | | 8 | | 1 |
'---+---+---' '---+---+---' '---+---+---' '---+---+---'
: : : :
| | | |
'------------'------+------'------------'
|
v
.---+---+---.
| | 1 | 2 |
+---+---+---+
| 3 | 4 | 5 |
+---+---+---+
| 6 | 7 | 8 |
'---+---+---'
Goal
In the above diagram, we see the four possible actions upon the initial state. Our objective is now to search through these descendant states in order to find the solution. The solution path for this problem corresponds to the sequence of actions that transforms the initial state into the goal state.
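As a concrete (hypothetical) encoding of this state space in Python: represent a state as a 9-tuple read row by row, with 0 standing in for the empty square, and generate successors by swapping the blank with an adjacent tile. The successors of the initial state above are exactly the four boards in the diagram.

# Hypothetical 8-puzzle successor function; states are 9-tuples read row by
# row, with 0 standing in for the empty square.
def successors(state):
    blank = state.index(0)
    row, col = divmod(blank, 3)
    moves = []
    if row > 0:
        moves.append(("down", blank - 3))   # tile above the blank slides down
    if row < 2:
        moves.append(("up", blank + 3))     # tile below the blank slides up
    if col > 0:
        moves.append(("right", blank - 1))  # tile left of the blank slides right
    if col < 2:
        moves.append(("left", blank + 1))   # tile right of the blank slides left
    result = []
    for direction, tile in moves:
        nxt = list(state)
        nxt[blank], nxt[tile] = nxt[tile], nxt[blank]
        result.append((f"{nxt[blank]} {direction}", tuple(nxt)))
    return result

initial = (7, 2, 4,
           5, 0, 6,
           8, 3, 1)
for action, succ in successors(initial):
    print(action, succ)   # "2 down", "3 up", "5 right", "6 left" with the resulting boards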
Another example of using a search is in robot motion planning, in which we create a configuration space that describes the possible range of movement of a robot and the obstacles.
§ Formulating Search Problems
State Space
The state space is a set of discrete representations/configurations (states) of the world.
A state space need not be finite; for example, in the case of Goldbach's conjecture or the Collatz conjecture, the search space is potentially infinite.
Operator
An operator is a function mapping a state to a subset of states, i.e. $o : S \to 2^S$.
An operator encodes the legal moves or actions upon a given state, and transforms a state to another --- in some cases many possible successors may exist, and in others none.
The operator can be applied recursively upon the initial state; this recursive application then generates the (reachable) state space from the initial state.
Branching Factor
The branching factor, $b$, of a problem is defined to be the average number of successors for each state.
It is often helpful to visualize the state space as a tree; as such, a given node will have $b$ children --- on average. The size of the tree (number of nodes) will grow exponentially with the depth, on the order of $b^d$ for depth $d$.
* initial state, s
/ \ branching factor = 2
/ \
/ \
/ \
/ \
* * 2 successors of s
/ \ / \
/ \ / \
* * * * 4 successors of successors
/ \ / \ / \ / \
* * * * * * * * 8 successors
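To make the exponential growth explicit: a tree with branching factor $b$ and depth $d$ contains
$$1 + b + b^2 + \dots + b^d = \frac{b^{d+1} - 1}{b - 1} \in O(b^d)$$
nodes; in the tree above, $b = 2$ and the levels contain $1, 2, 4, 8, \dots$ nodes.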
In the above, each node in the search tree represents a state in the state space; however, each node does not necessarily represent a unique state.
Node in a Search Tree
A node in a search tree represents a particular path that leads to the state corresponding to that node.
That is, there may exist multiple nodes in a search tree that correspond to the same state, yet these nodes are distinct because they represent differing paths to that state.
Goal
Goals are a subset of states in the state space, i.e. $G \subseteq S$.
For example, the set of winning states in tic-tac-toe is defined to be the subset of states which contain 3 of the same character (X or O) in a row, column, or diagonal; this subset of winning states forms the goal states in a search for tic-tac-toe.
In many problems, finding a path to a goal node --- any path --- is acceptable. In some cases, the shortest path or the path with the fewest actions is desired; in others, a least-cost path is sought for problems whose operators have nonuniform costs.
Least Cost Path
When planning a trip, some may seek to minimize the travel costs, as such one may ask themselves whether choosing to take the bus as opposed to the cab may affect costs.
Then, the total cost of the trip can be represented as a function
$$c(s_0, o_1, s_1, \dots, o_n, s_n) = \sum_{i=1}^{n} c(o_i),$$
where
- $s_i$ represents the different states (or locations) to visit on the trip,
- $o_i$ represents the operation that causes the transition from $s_{i-1}$ to $s_i$,
- $c(o_i)$ represents the cost (monetary or otherwise) of the $i$-th operation.
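As a made-up illustration of this sum: a trip consisting of a bus ride, a train ride, and a cab ride with costs $c(o_1) = 2$, $c(o_2) = 15$, and $c(o_3) = 30$ has total path cost
$$c = \sum_{i=1}^{3} c(o_i) = 2 + 15 + 30 = 47.$$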
§ Weak Methods of Uninformed Search
In many cases, it is impractical to pre-generate the entire search tree and only then begin searching for a path to a goal node.
The alternative is to start from the initial node and recursively expand and explore each node until the goal is found.
Expansion and Exploration
A node is expanded when its children are generated upon visitation.
A node is said to be explored iff the node has been goal tested.
This way, we keep only the relevant nodes in memory --- or the frontier --- instead of having to keep track of the entire tree.
Frontier (Agenda)
The frontier of an uninformed search represents the set of expanded nodes that have yet to be explored.
The set of expanded nodes in the frontier definition corresponds to the children generated upon visiting a given node.
§ A Unified Search Algorithm
First we describe a generic search algorithm that may be modified with varying data structures.
function UnifiedSearch(problem) -> Node? {
    let node := Node(state: problem.initial)
    if problem.goal_test(state: node.state)
        return node
    let frontier := OrderedSet([ node ])
    while not frontier.empty() {
        let node := frontier.pop()
        for child in problem.expand(node) {
            let successor := child.state
            if problem.goal_test(state: successor)
                return child
            frontier.push(child)
        }
    }
    return None
}
Above, we use OrderedSet as a generic data structure --- by modifying the underlying data structure we can change the behaviour of the algorithm.
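For concreteness, here is a minimal Python sketch of the same loop; the Problem object with initial, goal_test(state), and expand(node) is a hypothetical interface mirroring the pseudocode, and expand is assumed to return child nodes.

from collections import deque, namedtuple

# Search-tree node: the path can be recovered by following parent links.
Node = namedtuple("Node", ["state", "parent"])

def unified_search(problem, pop_left=True):
    # pop_left=True pops the oldest node (FIFO); pop_left=False pops the
    # newest node (LIFO). This single switch selects the search behaviour.
    node = Node(state=problem.initial, parent=None)
    if problem.goal_test(node.state):
        return node
    frontier = deque([node])
    while frontier:
        node = frontier.popleft() if pop_left else frontier.pop()
        for child in problem.expand(node):
            if problem.goal_test(child.state):
                return child
            frontier.append(child)
    return None

Swapping which end of the deque we pop from is exactly the data-structure change discussed in the next two sections.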
§ Breadth First Search (BFS)
Breadth First Search
In BFS, we expand the children of children after each sibling.
Here, we use a FIFO queue in place of OrderedSet.
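In Python terms (a toy illustration, not tied to any particular problem), the FIFO discipline looks like this:

from collections import deque

frontier = deque(["A"])        # FIFO queue of placeholder nodes
frontier.extend(["B", "C"])    # children join at the back
print(frontier.popleft())      # "A": the oldest (shallowest) node leaves first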
§ BFS Analysis
If the shallowest goal node occurs at depth $d$ with branching factor $b$, then in the worst case we must check all levels up to and including $d$, i.e. $1 + b + b^2 + \dots + b^d = O(b^d)$ nodes.
In the worst case, the frontier must store all the children at the level below the goal ($d + 1$), i.e. $O(b^{d+1})$ nodes.
Completeness
A search is complete iff the search algorithm is guaranteed to find a goal node should it exist.
BFS is complete: every goal node exists at some finite depth, and BFS will eventually explore every level up to that depth (assuming a finite branching factor).
Optimality
An algorithm is optimal if it is guaranteed to find a goal node with the minimum path cost.
In the case of equal operator costs, the goal with the least path cost will also be the shallowest goal node in the tree; as a result, BFS will find it first because it explores level by level.
§ Depth First Search (DFS)
Depth First Search
In DFS, we expand the children of children before the siblings.
In this case, we use a LIFO stack in place of OrderedSet.
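Again as a toy illustration, the LIFO discipline in Python:

frontier = ["A"]               # a plain list used as a stack
frontier.extend(["B", "C"])    # children are pushed on top
print(frontier.pop())          # "C": the newest (deepest) node leaves first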
§ DFS Analysis
Let $m$ be the maximum depth of the search tree; then the worst case occurs when the goal is on the right-most branch.
Then, the number of nodes checked will be almost the entire tree, i.e. $O(b^m)$.
At each iteration we pop a node and insert at most $b$ children, so the frontier will contain at most about $b \cdot m$ nodes, which is $O(bm)$.
DFS is only complete if the search tree is finite; if there exist branches of infinite depth, DFS may descend forever down such a branch, and a goal node is not guaranteed to lie on it.
DFS is not optimal: the first goal node it finds is not necessarily the shallowest or least-cost one (and, being incomplete, it may not find one at all).
§ BFS & DFS Analysis (Summary)
Algorithm | Time | Space | Complete | Optimal |
---|---|---|---|---|
BFS | $O(b^d)$ | $O(b^{d+1})$ | yes | yes if equal operator costs, no otherwise |
DFS | $O(b^m)$ | $O(bm)$ | yes if finite search space, no otherwise | no |
While DFS, in the worst case, can be exponentially worse than BFS in terms of time complexity, its space complexity is linear. In practice, the limiting factor is the frontier size.
§ Graph Search
There exist search trees which contain multiple paths to a given state. For example, in games where operations are reversible --- think of a Rubik's cube, where we can reverse a 90° clockwise rotation with three more 90° clockwise rotations or a single 90° anticlockwise rotation --- we can return to a given state after executing an operation by using other operators.
In these cases, we want to keep track of previously reached states in order to reduce redundancy in the search space, i.e. if the descendants of a node have already been explored, there is no need to look again --- except if a cheaper path to the node is found.
We may use a data structure like a hash table to keep track of visited states.
Remembering Visited States
Suppose that we wanted to perform a state search on the given lattice, without remembering the visited nodes,
o---o---o---o---o
| | | | |
o---o---o---o---o
| | | | |
o---o---*---o---o
| | | | |
o---o---o---o---o
| | | | |
o---o---o---o---o
Level 0
Let the circles marked o represent unvisited nodes, the circles marked * represent nodes in the frontier, and the circles marked + represent nodes that have already been expanded.
We can expect the branching factor to be $b = 4$ in the interior of the lattice; more realistically, since the graph is limited to a $5 \times 5$ grid, the average branching factor is closer to $3$ (the average node degree is $3.2$).
o---o---o---o---o o---o---*---o---o o---o---*---o---o
| | | | | | | | | | | | | | |
o---o---*---o---o o---*---+---*---o o---*---+---*---o
| | | | | | | | | | | | | | |
o---*---+---*---o o---*---+---*---o *---+---+---+---*
| | | | | | | | | | | | | | |
o---o---*---o---o o---o---*---o---o o---*---+---*---o
| | | | | | | | | | | | | | |
o---o---o---o---o o---o---o---o---o o---o---*---o---o
Level 1 Level 2 Level 3
Then on level 0, we search only 1 node; at level 1, 6; at level 2, 14; at level 3, 28.
At level $\ell$, we can expect the number of nodes searched to keep growing exponentially --- even though there are only 25 distinct states in this space.
Then, we can think of graph search as BFS with an additional check for visited states; below is a modification of UnifiedSearch() into GraphSearch().
function GraphSearch(problem) -> Node? {
    let node := Node(state: problem.initial)
    if problem.goal_test(state: node.state)
        return node
    let frontier := Queue([ node ])
    let visited := HashTable([ problem.initial ])
    while not frontier.empty() {
        let node := frontier.pop()
        for child in problem.expand(node) {
            let successor := child.state
            if problem.goal_test(state: successor)
                return child
            if not visited.contains(successor) {
                visited.push(successor)
                frontier.push(child)
            }
        }
    }
    return None
}
Note: This GraphSearch() implementation can also be converted into a depth-first graph search by changing Queue([ node ]) into Stack([ node ]).
The complexity analysis of graph search follows that of BFS.
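A compact Python rendering of the same idea (assuming the hypothetical Problem interface from earlier, and that states are hashable so they can live in a set):

from collections import deque, namedtuple

Node = namedtuple("Node", ["state", "parent"])

def graph_search(problem):
    node = Node(state=problem.initial, parent=None)
    if problem.goal_test(node.state):
        return node
    frontier = deque([node])       # FIFO queue, as in BFS
    visited = {problem.initial}    # states that have already been generated
    while frontier:
        node = frontier.popleft()
        for child in problem.expand(node):
            if problem.goal_test(child.state):
                return child
            if child.state not in visited:
                visited.add(child.state)
                frontier.append(child)
    return None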
§ Iterative Deepening
Naturally, a question we may ask ourselves is: how can we get the benefits from both DFS and BFS?
That is, is it possible to maintain a linear frontier size (DFS) while searching level-by-level (BFS)?
One method is to perform depth-limited search (DLS): let the depth limit $\ell = 0$, perform DFS down to a depth of $\ell$, and if no goal is found, let $\ell := \ell + 1$ and repeat.
function IterativeDeepening(problem) -> Node? {
    let depth := 0
    loop {
        let result := DepthLimitedSearch(problem, depth)
        if result is not Node(None)
            return result
        depth := depth + 1
    }
}

function DepthLimitedSearch(problem, depth: u64) -> Node? {
    let initial_node := Node(state: problem.initial)
    let frontier := Stack([ initial_node ])
    let result := None
    while not frontier.empty() {
        let node := frontier.pop()
        if problem.goal_test(state: node.state)
            return node
        if node.depth <= depth
            for child in problem.expand(node)
                frontier.push(child)
        else
            result := Node(None)
    }
    return result
}
Note: The above pseudocode makes distinct the reasons for failure in DLS.
The implementation returns a Node?, i.e. an optional node: either None or a node corresponding to a state. In the case that DLS has exhausted the search in the current branch to no avail --- DLS has failed --- we indicate this by returning None. If DLS has searched all nodes up to the given depth but has not yet failed, we have reached the cutoff --- in this case we return a Node(None).
In IterativeDeepening(), if we see that DLS has reached the cutoff (returned Node(None)), then we add one to the maximum depth and repeat DLS. If DLS has failed (returned None) or returned a valid node, we return that as the result.
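The same failure/cutoff distinction can be expressed in Python with a sentinel object (a sketch assuming the usual hypothetical Problem interface, and that expand produces child nodes whose depth field is one more than their parent's):

from collections import namedtuple

Node = namedtuple("Node", ["state", "parent", "depth"])
CUTOFF = object()   # sentinel playing the role of Node(None)

def depth_limited_search(problem, limit):
    frontier = [Node(state=problem.initial, parent=None, depth=0)]   # LIFO stack
    result = None
    while frontier:
        node = frontier.pop()
        if problem.goal_test(node.state):
            return node
        if node.depth < limit:
            frontier.extend(problem.expand(node))
        else:
            result = CUTOFF          # ran out of depth on this branch
    return result                    # None means failure, CUTOFF means the limit was hit

def iterative_deepening(problem):
    depth = 0
    while True:
        result = depth_limited_search(problem, depth)
        if result is not CUTOFF:
            return result            # a goal node, or None on true failure
        depth += 1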
§ Complexity Analysis
As iterative deepening (ID) uses DFS, the frontier shall never exceed roughly $b \cdot d$ nodes, or $O(bd)$.
In the case of equal operator costs, ID is both complete and optimal.
Notice that each iteration of ID regenerates the previous levels. This may seem like a lot of extraneous work; however, we cannot simply store the tree from the previous iteration, as it grows exponentially with the branching factor, i.e. it would require space on the order of $b^d$ --- completely negating the space complexity benefits from earlier. Going from exponential to linear space complexity is worth the additional time cost.
As such, the time complexity of iterative deepening is $O(b^d)$.
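To see why the repeated work is affordable, count the generated nodes: the nodes at depth 1 are regenerated in roughly $d$ iterations, those at depth 2 in roughly $d - 1$, and so on, giving approximately
$$\sum_{i=1}^{d} (d + 1 - i)\, b^i \in O(b^d).$$
For example, with $b = 10$ and $d = 5$ this is $50 + 400 + 3000 + 20000 + 100000 = 123450$ nodes, only about $11\%$ more than the $111110$ nodes BFS would generate to the same depth.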
§ Uniform Cost Algorithm
Suppose we want to find the goal node with the least path cost when operator costs vary. Then, the path with the fewest number of actions is not necessarily the minimum-cost path --- in this case, BFS is no longer optimal.
However, we can reuse the unified search algorithm with a priority queue for the frontier. This way, we keep the unexplored expansion nodes sorted in increasing order of path cost.
Our node object must now contain a data member corresponding to the cost of the path leading up to that node. When generating successors, we compute the cost of a child node by adding the cost of the applied operator to the cost of its parent, as in the expand() method below.
function BestFirstSearch(problem, f: Function -> bool) -> Node? {
    let node := Node(state: problem.initial)
    let frontier := PriorityQueue(ordering: f, contents: [ node ])
    let visited := HashTable([ problem.initial: node ])
    while not frontier.empty() {
        let node := frontier.pop()
        if problem.goal_test(state: node.state)
            return node
        for child in problem.expand(node) {
            let successor := child.state
            if not visited.contains(successor) or child.cost < visited[successor].cost {
                visited[successor] := child
                frontier.push(child)
            }
        }
    }
    return None
}
class Problem {
    expand(self, node: Node) -> [Node] {
        let state := node.state
        let successors : [Node] = []
        for action in self.actions {
            let next := self.perform(action, state)
            let cost := node.cost + self.cost(action, initial: state, final: next)
            successors.push(Node(state: next, cost: cost))
        }
        return successors
    }
}
Note: we assume all operators have positive costs (strictly greater than zero).
Uniform cost search explores nodes in order of increasing total path cost.
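As a sketch, uniform cost search can be written in Python with the standard heapq module (hypothetical Problem interface as before; nodes carry their path cost, and a counter breaks ties so nodes never need to be compared directly):

import heapq
import itertools
from collections import namedtuple

Node = namedtuple("Node", ["state", "parent", "cost"])

def uniform_cost_search(problem):
    node = Node(state=problem.initial, parent=None, cost=0)
    counter = itertools.count()                    # tie-breaker for equal costs
    frontier = [(node.cost, next(counter), node)]  # min-heap ordered by path cost
    best = {problem.initial: 0}                    # cheapest known cost per state
    while frontier:
        _, _, node = heapq.heappop(frontier)
        if problem.goal_test(node.state):
            return node                            # goal test on pop, not on generation
        for child in problem.expand(node):
            if child.state not in best or child.cost < best[child.state]:
                best[child.state] = child.cost
                heapq.heappush(frontier, (child.cost, next(counter), child))
    return None

Note that the goal test happens when a node is popped rather than when it is generated, which is what makes the optimality argument below go through.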
Suppose that we reached a goal node $g$, and that there exists another --- cheaper --- goal node $g'$ with $c(g') < c(g)$.
There is always some node $n'$ along the path to $g'$ that is still in the queue.
Using the assumption on the operator costs (strictly greater than zero), each succeeding node along a path increases in path cost, monotonically; in particular, $c(n') \leq c(g')$.
Note that by definition of the priority queue, the pop operation always returns a node whose path cost is less than or equal to all others in the queue.
Since $n'$ is on the path to $g'$ and $c(n') \leq c(g') < c(g)$, then $n'$ --- and eventually $g'$ --- would have been popped out of the queue before $g$. Contradiction.
Therefore, if UC yields a goal node $g$, there cannot exist a goal node $g'$ with less cost.
§ Comparison to Dijkstra's
Both UC and Dijkstra's algorithm solve the single-source shortest-path problem, but the key distinction is that Dijkstra's depends on dynamic programming: it stores distance information for every node in the graph.
Again, as space is the primary limiting factor in AI, dynamic programming is often unsuited for AI search problems.
§ UC Analysis
First we must introduce $\epsilon$. We've already made the assumption that every operator cost is strictly greater than zero; this implies that there exists some constant $\epsilon$ such that $c(o) \geq \epsilon$ and $\epsilon > 0$ for every operator $o$.
Then, both the time and space complexity of UC are given by $O(b^{1 + \lfloor C^* / \epsilon \rfloor})$, where
- $O(\cdot)$ is the Landau (big-O) notation,
- $C^*$ is the total path cost of the cheapest path.
We use $\lfloor C^* / \epsilon \rfloor$ in the complexity because each step costs at least $\epsilon$, so the goal must occur at a depth of at most $\lfloor C^* / \epsilon \rfloor$ in the worst case.
Like ID, UC is both complete and optimal.
§ Summary of Uninformed Search
Algorithm | Time | Space | Complete | Optimal |
---|---|---|---|---|
BFS | $O(b^d)$ | $O(b^{d+1})$ | yes [1] | yes [2] |
DFS | $O(b^m)$ | $O(bm)$ | yes [1:1] [3] | no |
DLS | $O(b^\ell)$ | $O(b\ell)$ | no | no |
ID | $O(b^d)$ | $O(bd)$ | yes [1:2] | yes [2:1] |
UC | $O(b^{1 + \lfloor C^* / \epsilon \rfloor})$ | $O(b^{1 + \lfloor C^* / \epsilon \rfloor})$ | yes [1:3] [4] | yes |

Here $d$ is the depth of the shallowest goal node, $m$ the maximum depth of the tree, $\ell$ the depth limit used by DLS, $C^*$ the cost of the cheapest path, and $\epsilon$ a lower bound on operator costs.