State Space Search in Artificial Intelligence

State space search is a problem-solving technique used in Artificial Intelligence (AI) to find the solution path from the initial state to the goal state by exploring the various states. The state space search approach searches through all possible states of a problem to find a solution. It is an essential part of Artificial Intelligence and is used in various applications, from game-playing algorithms to natural language processing.

Introduction

A state space is a way to mathematically represent a problem by defining all the possible states in which the problem can be. This is used in search algorithms to represent the initial state, goal state, and current state of the problem. Each state in the state space is represented using a set of variables.

The efficiency of the search algorithm greatly depends on the size of the state space, and it is important to choose an appropriate representation and search strategy to search the state space efficiently.

One of the most well-known state space search algorithms is the A* algorithm. Other commonly used state space search algorithms include breadth-first search (BFS), depth-first search (DFS), hill climbing, simulated annealing, and genetic algorithms.

Features of State Space Search

State space search has several features that make it an effective problem-solving technique in Artificial Intelligence. These features include:

Exhaustiveness: State space search explores all possible states of a problem to find a solution.

Completeness: If a solution exists, state space search will find it.

Optimality: With a suitable search strategy (and cost model), searching through a state space can return an optimal, i.e. least-cost, solution.

Uninformed and Informed Search: State space search in artificial intelligence is classified as uninformed if it uses no additional information about the problem beyond its definition.

In contrast, informed search uses additional information, such as heuristics, to guide the search process.

Steps in State Space Search

The steps involved in state space search are as follows:

[Figure: flow chart of the steps in state space search]

  • To begin the search process, we set the current state to the initial state.
  • We then check if the current state is the goal state. If it is, we terminate the algorithm and return the result.
  • If the current state is not the goal state, we generate the set of possible successor states that can be reached from the current state.
  • For each successor state, we check if it has already been visited. If it has, we skip it, else we add it to the queue of states to be visited.
  • Next, we set the next state in the queue as the current state and check if it's the goal state. If it is, we return the result. If not, we repeat the previous step until we find the goal state or explore all the states.
  • If all possible states have been explored and the goal state has still not been found, we return with no solution.
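The loop just described can be sketched in a few lines of Python. This is a minimal sketch, not a production implementation; the helper functions is_goal and successors are hypothetical placeholders for whatever the problem definition supplies.

    from collections import deque

    def state_space_search(initial_state, is_goal, successors):
        """Generic version of the loop above. States must be hashable (e.g. tuples);
        is_goal(state) and successors(state) are supplied by the problem definition."""
        queue = deque([initial_state])     # states waiting to be visited
        visited = {initial_state}          # states already generated
        while queue:
            current = queue.popleft()      # take the next state as the current state
            if is_goal(current):           # goal test: terminate and return the result
                return current
            for nxt in successors(current):      # generate the successor states
                if nxt not in visited:           # skip states we have already seen
                    visited.add(nxt)
                    queue.append(nxt)
        return None                        # every reachable state explored, no solution

Because the fringe here is a FIFO queue, this particular sketch behaves like breadth-first search; replacing the queue with a stack or a priority queue gives the other strategies discussed later in this article.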

State Space Representation

State space representation involves defining an INITIAL STATE and a GOAL STATE and then determining a sequence of actions, and the intermediate states they produce, that leads from one to the other.

  • State: A state can be the Initial State, the Goal State, or any other possible state that can be generated by applying the rules that transform one state into another.
  • Space: In an AI problem, the space is the exhaustive collection of all conceivable states.
  • Search: This technique moves from the initial state toward the desired state by applying valid rules while traversing the space of all possible states.
  • Search Tree: To visualize the search problem, a search tree is used: a tree-like structure whose root node represents the initial state, the starting point of the search.
  • Transition Model: This describes what each action does, while the Path Cost assigns a cost value to each path, i.e., each sequence of actions that connects the start node to the goal node. The optimal solution has the lowest path cost among all alternatives.

Example of State Space Search

The 8-puzzle problem is a commonly used example of a state space search. It is a sliding puzzle game consisting of 8 numbered tiles arranged in a 3x3 grid and one blank space. The game aims to rearrange the tiles from their initial state to a final goal state by sliding them into the blank space.

To represent the state space in this problem, we use the nine tiles in the puzzle and their respective positions in the grid. Each state in the state space is represented by a 3x3 array with values ranging from 1 to 8, and the blank space is represented as an empty tile.

The initial state of the puzzle represents the starting configuration of the tiles, while the goal state represents the desired configuration. Search algorithms utilize the state space to find a sequence of moves that will transform the initial state into the goal state.

[Figure: initial and goal states of the 8-puzzle]

This exhaustive approach guarantees a solution if one exists, but it can become very slow for larger state spaces. Alternatively, other algorithms, such as A* search, use heuristics to guide the search more efficiently.

Our objective is to move from the current state to the target state by sliding the numbered tiles through the blank space. Let's look closer at reaching the target state from the current state.

[Figure: sequence of moves from the current state to the target state]

To summarize, our approach involved exhaustively exploring all reachable states from the current state and checking if any of these states matched the target state.
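To make the representation concrete, here is a small sketch of an 8-puzzle state and its successor generation in Python. It assumes the blank is encoded as 0 and a state is stored as a flat 9-element tuple in row-major order; these encoding choices are illustrative, not the only possibility.

    # An 8-puzzle state as a flat 9-element tuple in row-major order; 0 is the blank.
    GOAL = (1, 2, 3, 4, 5, 6, 7, 8, 0)     # assumed goal configuration

    def successors(state):
        """All states reachable by sliding one tile into the blank."""
        result = []
        blank = state.index(0)
        row, col = divmod(blank, 3)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):   # up, down, left, right
            r, c = row + dr, col + dc
            if 0 <= r < 3 and 0 <= c < 3:
                neighbour = r * 3 + c
                board = list(state)
                # swap the blank with the neighbouring tile
                board[blank], board[neighbour] = board[neighbour], board[blank]
                result.append(tuple(board))
        return result

    print(len(successors((1, 2, 3, 4, 5, 6, 7, 8, 0))))     # 2 moves when the blank is in a corner

Plugging this successor function and a goal test against GOAL into the generic loop shown earlier reproduces the exhaustive exploration described above.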

Applications of State Space Search

  • State space search algorithms are used in various fields, such as robotics, game playing, computer networks, operations research, bioinformatics, cryptography, and supply chain management. In artificial intelligence, state space search algorithms can solve problems like pathfinding , planning , and scheduling .
  • They are also useful in planning robot motion and finding the best sequence of actions to achieve a goal. In games, state space search algorithms can help determine the best move for a player given a particular game state.
  • State space search algorithms can optimize routing and resource allocation in computer networks and operations research.
  • In Bioinformatics , state space search algorithms can help find patterns in biological data and predict protein structures.
  • In Cryptography , state space search algorithms are used to break codes and find cryptographic keys.
Conclusion

  • State Space Search is a problem-solving technique used in AI to find a solution to a problem by exploring all possible states of the problem.
  • It is an exhaustive, complete, and optimal search process that can be classified as uninformed or informed.
  • State Space Representation is a crucial step in the state space search process as it determines the efficiency of the search algorithm.
  • State space search in artificial intelligence has several applications, including game-playing algorithms, natural language processing, robotics, planning, scheduling, and computer vision.

2 State Space Search I

Bhushan Trivedi

Introduction

Many AI problems are hard to solve because they are difficult to characterize; not only the problem itself but also the path to its solution is hard to characterize. In this module we will learn the process of solving AI problems using state space search. We will also see the difficulties a designer might face.

Let us take chess as an example. How can we represent chess as a problem that a program can solve? In other words, if we want our program to play a game of chess, how do we go about it?

First of all, let us begin with an example that shows how the chess problem can be represented in a computer-understandable form. One can think of an 8x8 matrix with different values indicating the piece that occupies each position. If we design our problem that way, figure 2.1 describes an opening position. The values in the leftmost column and the bottom row are coordinates that we will use to identify the position of each piece; they are not part of the actual matrix and are not shown in subsequent figures.

One can also define a move as follows using that matrix representation.

The move shown in figure 2.2 has two states: the state on the LHS of the -> operator indicates the position of the matrix before the move is taken, and the RHS describes the position after it. This figure indicates the movement of a black pawn at the beginning of the game. A move is described by one typical position (given as a matrix) on the LHS and the resulting position on the RHS. We need eight such moves for the pawns on one side for such a case.

In fact, we can go on and describe many chess situations (moves) in the form of one matrix -> another matrix. How many such moves do you think we need to describe all possible instances? If your answer is infinity, you are not far off.

State Space

A state space is a space that describes all possible valid states of the chess board. The initial board position in figure 2.1 and the position that results from the move described in figure 2.2 are two examples. You can easily see that describing all chess moves this way is a humongous task; if we went on writing every chess move in this fashion, we might not be able to complete the job in our lifetime. We must find an alternative way.

One can also describe such moves in the form of rules, each of which may encompass multiple moves. For example, in figure 2.3 we can write a rule describing many moves (including the move described in figure 2.2). It says that if a pawn is at position (i,2) and both squares in the forward direction (for black pieces, forward actually means a reduction in the value of j) are empty, the pawn can move two squares forward. As this is only allowed in the initial position, where j is 7 (the second row from the player's perspective), this is a general move. In fact, a pawn can move one square forward from any position when that square is empty; that rule is shown in figure 2.4, and it encompasses many individual rules for black pawn movement.

If we write rules that way, we still get a very large but manageable set of rules for chess moves. The first type of rule is known as a specific rule, while the second type is called a general rule. One can write more generic rules using more generic variables, as shown in figure 2.4; this rule defines a move for a black pawn anywhere on the chessboard.
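As a rough illustration of how such a general rule could be coded (the figures themselves are not reproduced here), the sketch below assumes a hypothetical board encoding in which the board is an 8x8 nested list, 0 marks an empty square, and -1 marks a black pawn; the codes and indexing are assumptions made only for this example.

    EMPTY = 0          # assumed code for an empty square
    BLACK_PAWN = -1    # assumed code for a black pawn

    def black_pawn_one_step(board, i, j):
        """General rule of figure 2.4: a black pawn at (i, j) may move one square
        forward (for black, forward means decreasing j) if that square is empty.
        Returns the new board, or None if the rule does not apply."""
        if board[i][j] == BLACK_PAWN and j - 1 >= 0 and board[i][j - 1] == EMPTY:
            new_board = [row[:] for row in board]   # copy the state, do not mutate it
            new_board[i][j] = EMPTY
            new_board[i][j - 1] = BLACK_PAWN
            return new_board
        return None

    board = [[EMPTY] * 8 for _ in range(8)]         # hypothetical, mostly empty board
    board[4][6] = BLACK_PAWN                        # a black pawn at (i, j) = (4, 6)
    assert black_pawn_one_step(board, 4, 6)[4][5] == BLACK_PAWN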

Exploring the chess problem further is quite complicated, so we turn to two more trivial problems.

Solving AI Problems

The first problem is the 3-4 gallon water jug problem, as described in the book by Elaine Rich. We begin with two empty jugs of capacities 3 and 4 gallons respectively. We have an infinite supply of water available, and we want the 4 gallon jug to end up containing exactly 2 gallons of water.

The other problem is the 8-5-3 milk jug problem, which begins with the 8 gallon jug completely full of milk while the other two (5 and 3 gallon) jugs are empty. We want to end up with exactly 4 gallons of milk in the 8 gallon jug.

In either case, there are no other measuring devices available to us. One solution for the first problem is

0-0, 0-4, 3-1, 0-1, 1-0, 1-4, 3-2.

(where n-m indicates the quantity of water in the 3 and 4 gallon jugs respectively). One solution of the second problem is

8-0-0, 3-5-0, 3-2-3, 6-2-0, 6-0-2, 1-5-2, 1-4-3, 4-4-0.

Where n-m-l are respective volumes of milk in 8, 5 and 3 gallon jugs.

What do these solutions indicate? Let us discuss some important points.

  • In both cases we have an initial state, which we denote (0,0) in the first problem and (8,0,0) in the second. We had a similar kind of initial state in chess. The states of both problems are represented in the form of a vector.
  • We also have a final state in both cases, which we indicate as 3-2 in problem 1 and 4-4-0 in problem 2. Note that other final states are also possible; in a game like chess, numerous final states are possible.
  • A solution is a process of moving from one valid state to another. There must be a rule for moving from one valid state to the next. For example, we cannot move from 0-0 to 1-0 or 2-0, as we have no other measuring device; on the other hand, 0-0 to 3-0 is possible, since we can pour water into the 3 gallon jug until it is full. Check the solution sequences above; we have used only valid moves.
  • For every problem there are implicit assumptions which dictate the validity of some possible moves. For example, moving from 1-3 to 1-0 is possible in problem 1, as water can be poured away (it is inexpensive); the milk jug problem cannot be handled that way. Finding such implicit assumptions about a problem and coding them into moves is a challenging job.
  • How is each state change chosen, i.e., how is each move chosen over the other applicable moves? For example, how did we decide to move from 6-2-0 to 6-0-2? In fact we must ask two questions here: what moves are possible at the state 6-2-0, and why did we choose 6-0-2 from that list? However simple both questions may look, we will soon see that answering them for a real-world problem is much harder. Almost all AI problems are harder than one might think at first sight; take any middle-game position in chess to check.
  • In both of the above cases one can enumerate all valid states, like (0,0)…(4,3) or (8,0,0), (5,0,3),… (4,4,0). One can also write a formula; for example, one can write (x,y) where x ∈ {0,…,4} and y ∈ {0,…,3} to represent the collection of states for the first problem. The collection of all possible states is called a state space. A state space can be represented by an enumeration of all states or by a formula describing them.
  • The solution to the problem can now be summarized as: beginning from a start state, move around the state space using the valid rules for movement, and reach a final state.
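Putting these observations together, the 3-4 gallon water jug problem can be written as a small state space search in Python. The move generator below enumerates the usual fill, empty, and pour actions; it is an illustrative formulation under the stated assumptions, not the only possible one.

    from collections import deque

    CAP3, CAP4 = 3, 4                 # jug capacities

    def is_goal(state):
        return state[1] == 2          # exactly 2 gallons in the 4 gallon jug

    def moves(state):
        """Valid successors of (x, y) = (water in the 3 gal jug, water in the 4 gal jug)."""
        x, y = state
        pour34 = min(x, CAP4 - y)     # how much can flow from the 3 gal jug into the 4 gal jug
        pour43 = min(y, CAP3 - x)     # and in the other direction
        return {
            (CAP3, y), (x, CAP4),     # fill a jug from the supply
            (0, y), (x, 0),           # empty a jug onto the ground
            (x - pour34, y + pour34), # pour 3 -> 4
            (x + pour43, y - pour43), # pour 4 -> 3
        }

    def solve(start=(0, 0)):
        queue, parents = deque([start]), {start: None}
        while queue:
            state = queue.popleft()
            if is_goal(state):
                path = []
                while state is not None:          # walk back through the parents
                    path.append(state)
                    state = parents[state]
                return list(reversed(path))
            for nxt in moves(state):
                if nxt not in parents:
                    parents[nxt] = state
                    queue.append(nxt)
        return None

    print(solve())   # a shortest sequence of (3 gal, 4 gal) states ending with 2 gallons in the 4 gal jug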

Examples of production rules

One important outcome of the above discussion is that we need some rules (called production rules) for movement in the state space. We have already seen examples of specific and general rules for chess. Let us see a specific and a general rule for each of the two trivial problems discussed so far.

A 3-4 gallon water jug problem

(0,3) -> (3,0) [specific]

(0,Y), Y > 0, Y <= 3 -> (Y,0) [general]

An 8-5-3 gallon milk jug problem

(5,0,3) -> (5,3,0) [specific]

(X,0,Z), Z > 0 -> (X,Z,0) [general]

In both cases one can write many possible rules, of which only a subset is needed for solving the problem. Many books, including Nilsson's and Rich and Knight's, include complete sets of rules for the water jug and a few other problems.

Let us take one typical set for the milk jug problem.

We have two things to state before proceeding further.

1. The state space is represented by a three-element integer vector (X,Y,Z) where X+Y+Z = 8, X,Y,Z >= 0, X <= 8, Y <= 5 and Z <= 3.

2. The initial state is (8,0,0).

Now let us state the rules.

1. (X,Y,Z), Y+Z < 3 -> (X,0,Y+Z) – pouring all the milk from the 5 gallon jug into the 3 gallon jug when the total milk in the 5 and 3 gallon jugs is less than 3 gallons

2. (X,Y,Z), Y >= 3 -> (X,Y+Z-3,3) – filling the 3 gallon jug from the 5 gallon jug when the 5 gallon jug holds at least 3 gallons

3. (X,Y,Z), Y+Z < 5 -> (X,Y+Z,0) – pouring all the milk from the 3 gallon jug into the 5 gallon jug when the total milk in the 3 and 5 gallon jugs is less than 5 gallons

4. (X,Y,Z), Y+Z > 5 -> (X,5,Y+Z-5) – filling the 5 gallon jug from the 3 gallon jug when the total milk in the 3 and 5 gallon jugs is more than 5 gallons

5. (X,Y,Z), X+Z >= 3 -> (X+Z-3,Y,3) – filling the 3 gallon jug from the 8 gallon jug when the 8 and 3 gallon jugs together hold at least 3 gallons

6. (X,Y,Z), X+Z <= 3 -> (0,Y,X+Z) – pouring all the milk from the 8 gallon jug into the 3 gallon jug when the two together hold at most 3 gallons

7. (X,Y,Z), X+Y < 5 -> (0,X+Y,Z) – pouring all the milk from the 8 gallon jug into the 5 gallon jug when the two together hold less than 5 gallons

8. (X,Y,Z), X+Y > 5 -> (X+Y-5,5,Z) – filling the 5 gallon jug from the 8 gallon jug when the two together hold more than 5 gallons

9. (X,Y,Z) -> (X+Y,0,Z) – pouring all the milk from the 5 gallon jug into the 8 gallon jug

10. (X,Y,Z) -> (X+Z,Y,0) – pouring all the milk from the 3 gallon jug into the 8 gallon jug

Let us try to see how these rules are written. First, X, Y and Z represent the amount of milk contained in the 8, 5 and 3 gallon jugs respectively. Each rule has two parts, a left part and a right part. The left part contains the values of the variables describing a state in which the rule may be applied, together with a condition under which it is applicable. For example, take rule 8: it is applicable only if the total milk in the 8 and 5 gallon jugs exceeds 5 gallons. When the rule is applied, a move is taken and the result is as shown on the right-hand side. For example, if rule 8 is applied, the five gallon jug becomes full and the rest of the combined contents of the 8 and 5 gallon jugs remains in the 8 gallon jug.
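One straightforward way to encode such production rules in a program is as (condition, action) pairs over the state vector (X, Y, Z). The Python sketch below mirrors the ten rules above; it is an illustrative encoding under that assumption, not the author's own implementation.

    # Each rule is a (condition, action) pair over the state (X, Y, Z):
    # the milk held by the 8, 5 and 3 gallon jugs respectively.
    RULES = [
        (lambda X, Y, Z: Y + Z < 3,  lambda X, Y, Z: (X, 0, Y + Z)),      # 1: pour 5 -> 3 (all fits)
        (lambda X, Y, Z: Y >= 3,     lambda X, Y, Z: (X, Y + Z - 3, 3)),  # 2: fill 3 from 5
        (lambda X, Y, Z: Y + Z < 5,  lambda X, Y, Z: (X, Y + Z, 0)),      # 3: pour 3 -> 5 (all fits)
        (lambda X, Y, Z: Y + Z > 5,  lambda X, Y, Z: (X, 5, Y + Z - 5)),  # 4: fill 5 from 3
        (lambda X, Y, Z: X + Z >= 3, lambda X, Y, Z: (X + Z - 3, Y, 3)),  # 5: fill 3 from 8
        (lambda X, Y, Z: X + Z <= 3, lambda X, Y, Z: (0, Y, X + Z)),      # 6: pour 8 -> 3 (all fits)
        (lambda X, Y, Z: X + Y < 5,  lambda X, Y, Z: (0, X + Y, Z)),      # 7: pour 8 -> 5 (all fits)
        (lambda X, Y, Z: X + Y > 5,  lambda X, Y, Z: (X + Y - 5, 5, Z)),  # 8: fill 5 from 8
        (lambda X, Y, Z: True,       lambda X, Y, Z: (X + Y, 0, Z)),      # 9: pour all of 5 into 8
        (lambda X, Y, Z: True,       lambda X, Y, Z: (X + Z, Y, 0)),      # 10: pour all of 3 into 8
    ]

    def applicable_rules(state):
        """Indices (1-based, as in the text) of the rules whose condition holds."""
        return [i + 1 for i, (cond, _) in enumerate(RULES) if cond(*state)]

    def apply_rule(state, i):
        """Apply rule number i (1-based) and return the resulting state."""
        cond, action = RULES[i - 1]
        assert cond(*state), "rule not applicable in this state"
        return action(*state)

    print(applicable_rules((8, 0, 0)))      # [1, 3, 5, 8, 9, 10]
    print(apply_rule((8, 0, 0), 8))         # (3, 5, 0)

Several rules are applicable already in the initial state, which is exactly the rule-selection problem discussed in the remainder of this module.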

Applying rules to solve the problem

Now let us see the sequence of rules applied for the solution. Starting from 8-0-0, applying rules 8, 2, 10, 1, 8, 2 and 10 in turn produces the states 3-5-0, 3-2-3, 6-2-0, 6-0-2, 1-5-2, 1-4-3 and finally 4-4-0.

4-4-0 is a final state.

Let us summarize. We have an initial state, (8,0,0); we apply a rule, record the new state, apply another rule, look at the state again, and continue until we reach a state that qualifies as a final state.

Can you guess the types of components required for solving a problem using a state space? Let us list them.

1. Not all rules are used in solving a given problem. For example, rules 3, 4, 5, 6, 7 and 9 are not applied in the proposed solution sequence; they may still be needed for solving some other problem involving the three jugs. Deciding in advance which rules are needed and which are not is hard, but one needs at least a provisional set to begin with, which can be augmented as the need arises.

2. A clear indication of what constitutes the start and end state(s), and also of what the valid states are; in other words, a clear definition of the state space for the given problem.

3. A set of rules that provide legal movement in the state space.

4. A data structure (in our case a vector of three integer values) for holding the current state. When a rule is applied, this data structure is updated to hold the new state that results from the application of that rule.

5. Logic that changes the state according to the applied rule, i.e., something that transforms the current vector value into the new vector value.

6. When more than one rule is applicable, a mechanism to decide which rule should be applied.

The last three requirements may sound exaggerated, especially for the problem at hand. They are not, if one looks at a real problem. Consider an auto-navigator that helps the driver navigate to a destination. How on earth does the navigator represent the problem itself (the data structure design)? Consider the frames coming from cameras placed at various points around the car, its speed and other measurements coming from the gauges on the dashboard, information coming from apps such as maps, and so on. The data structure is a huge challenge, especially given the real-time navigation requirements. And when a driver decides to take a left turn while the program is still deliberating over going straight or taking a right turn (due to some obstacle or police restriction the auto-driver is not aware of), changing the landscape that fast is an equally difficult challenge.

In fact, requirement 6 is daunting even for our trivial case. Consider our seven-step solution. In many cases, the solution could have gone in a different direction had we applied another rule at some point. For example, at the fourth step we could have applied rules 8, 9 and others, but we chose rule 1. Why? How do we know that rule 1 is more likely to lead to the solution?

In fact, the problem was so trivial that we could solve it manually and decide each next move based on that manual solution. Solving a problem that way is cheating, as the program does not decide the next move; the designer does. Such cheating is not advisable for two reasons. First, the program acts like a dumb follower of the designer's instructions, which we do not want; we want a program that can decide like a human being. Second, such cheating cannot help solve real-world problems like chess. How can we determine every possible move sequence in chess a priori and encode it? Not only can a human not enter all such move sequences, but even if one somehow gathered them, it would be a non-trivial problem to store such a huge volume of data and to search such a humongous database in real time.

You might be getting a feel for the problems AI researchers have been facing over the years. You may have seen or heard of chess-playing programs that can actually play at a reasonably good level. That is possible due to the advances and efforts of researchers in this domain.

You might be wondering why, if we could produce chess-playing programs, we cannot produce programs that can manage other tasks, for example rating an essay, driving a car, or building a house robot that can work like a servant. The reason is that even though a chess-playing program looks pretty complex on the face of it, chess is quite structured. Moves are chosen based on information such as the number of pieces, their respective ratings (for example, the rating of the king is higher than the sum of the ratings of all the others, a queen has a higher rating than a rook, and so on), centre control (pieces at the centre have more control over the game) and many other measures. A good chess-playing program is equipped with heuristics based on those measures and is thus capable of making a reasonably good move in real time.

We will take two more trivial but slightly different examples in the next module to understand conversion to a state space further.


Search Algorithms in AI

Artificial Intelligence is the study of building agents that act rationally. Most of the time, these agents perform some kind of search algorithm in the background in order to achieve their tasks. A search problem consists of:

  • A State Space. Set of all possible states where you can be.
  • A Start State. The state from where the search begins.
  • A Goal State. A function that looks at the current state and returns whether or not it is the goal state.
  • The Solution to a search problem is a sequence of actions, called the plan, that transforms the start state into the goal state.
  • This plan is achieved through search algorithms.

Types of search algorithms: 

There are far too many powerful search algorithms out there to fit in a single article. Instead, this article will discuss six of the fundamental search algorithms, divided into two categories, as shown below. 

[Figure: categories of search algorithms in AI]

Note that there is much more to search algorithms than the chart I have provided above. However, this article will mostly stick to the above chart, exploring the algorithms given there. 

Uninformed Search Algorithms: 

The search algorithms in this section have no additional information on the goal node other than what is provided in the problem definition. The plans to reach the goal state from the start state differ only by the order and/or length of actions. Uninformed search is also called blind search. These algorithms can only generate the successors and differentiate between the goal state and non-goal states. The following uninformed search algorithms are discussed in this section.

  • Depth First Search
  • Breadth First Search
  • Uniform Cost Search

Each of these algorithms will have: 

  • A problem graph, containing the start node S and the goal node G.
  • A strategy, describing the manner in which the graph will be traversed to get to G.
  • A fringe, which is a data structure used to store all the possible states (nodes) that you can reach from the current states.
  • A tree that results while traversing to the goal node.
  • A solution plan, which is the sequence of nodes from S to G.

Depth First Search :

Depth-first search (DFS) is an algorithm for traversing or searching tree or graph data structures. The algorithm starts at the root node (selecting some arbitrary node as the root node in the case of a graph) and explores as far as possible along each branch before backtracking. It uses a last-in, first-out strategy and hence is implemented using a stack. Example:

Question. Which solution would DFS find to move from node S to node G if run on the graph below?   

[Figure: example graph for the DFS question]

Solution. The equivalent search tree for the above graph is as follows. As DFS traverses the tree “deepest node first”, it would always pick the deeper branch until it reaches the solution (or it runs out of nodes, and goes to the next branch). The traversal is shown in blue arrows. 

[Figure: DFS traversal of the equivalent search tree]

Path:   S -> A -> B -> C -> G 

Let d be the maximum depth of the search tree (the number of levels) and b the branching factor (the number of successors per node). Time complexity: equivalent to the number of nodes traversed in DFS, i.e. O(b^d). Space complexity: equivalent to how large the fringe can get, i.e. O(b*d). Completeness: DFS is complete if the search tree is finite, meaning that for a given finite search tree, DFS will come up with a solution if one exists. Optimality: DFS is not optimal; the solution it finds may not have the fewest steps or the lowest cost.
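To make the strategy concrete, here is a minimal iterative DFS sketch in Python. The adjacency list is a hypothetical stand-in for the graph in the figure, chosen only so that it is consistent with the path quoted above.

    def dfs(graph, start, goal):
        """Iterative depth-first search using an explicit stack (LIFO fringe).
        Returns a path from start to goal, or None."""
        stack = [(start, [start])]
        visited = set()
        while stack:
            node, path = stack.pop()            # last in, first out
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            # push neighbours in reverse so they are explored in listed order
            for nxt in reversed(graph.get(node, [])):
                if nxt not in visited:
                    stack.append((nxt, path + [nxt]))
        return None

    # Hypothetical graph for illustration only (the article's figure is not shown).
    graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
    print(dfs(graph, 'S', 'G'))   # ['S', 'A', 'B', 'C', 'G']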

Breadth First Search :

Breadth-first search (BFS) is an algorithm for traversing or searching tree or graph data structures. It starts at the tree root (or some arbitrary node of a graph, sometimes referred to as a ‘search key’), and explores all of the neighbor nodes at the present depth prior to moving on to the nodes at the next depth level. It is implemented using a queue. Example:  Question. Which solution would BFS find to move from node S to node G if run on the graph below? 

[Figure: example graph for the BFS question]

Solution. The equivalent search tree for the above graph is as follows. As BFS traverses the tree “shallowest node first”, it would always pick the shallower branch until it reaches the solution (or it runs out of nodes, and goes to the next branch). The traversal is shown in blue arrows. 

[Figure: BFS traversal of the equivalent search tree]

Path: S -> D -> G 

Let d be the depth of the shallowest solution and b the branching factor. Time complexity: equivalent to the number of nodes traversed in BFS until the shallowest solution, i.e. O(b^d). Space complexity: equivalent to how large the fringe can get, i.e. O(b^d). Completeness: BFS is complete, meaning that for a given search tree, BFS will come up with a solution if one exists. Optimality: BFS is optimal as long as the costs of all edges are equal.
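Swapping the stack for a FIFO queue turns the same skeleton into BFS. Again, the adjacency list below is a hypothetical stand-in for the figure, consistent with the path quoted above.

    from collections import deque

    def bfs(graph, start, goal):
        """Breadth-first search using a FIFO queue; finds a shallowest path."""
        queue = deque([(start, [start])])
        visited = {start}
        while queue:
            node, path = queue.popleft()        # first in, first out
            if node == goal:
                return path
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    visited.add(nxt)
                    queue.append((nxt, path + [nxt]))
        return None

    # Same hypothetical graph as in the DFS sketch above.
    graph = {'S': ['A', 'D'], 'A': ['B'], 'B': ['C'], 'C': ['G'], 'D': ['G']}
    print(bfs(graph, 'S', 'G'))   # ['S', 'D', 'G']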

Uniform Cost Search: 

UCS is different from BFS and DFS because here the costs come into play. In other words, traversing via different edges might not have the same cost. The goal is to find a path where the cumulative sum of costs is the least. The cost of a node is defined as the cumulative cost of the path from the root (start) node to that node.

Example:  Question. Which solution would UCS find to move from node S to node G if run on the graph below? 

[Figure: example weighted graph for the UCS question]

Solution. The equivalent search tree for the above graph is as follows. The cost of each node is the cumulative cost of reaching that node from the root. Based on the UCS strategy, the path with the least cumulative cost is chosen. Note that due to the many options in the fringe, the algorithm explores most of them so long as their cost is low, and discards them when a lower-cost path is found; these discarded traversals are not shown below. The actual traversal is shown in blue. 

[Figure: UCS traversal of the equivalent search tree]

Path: S -> A -> B -> G. Cost: 5. Let C* be the cost of the optimal solution and ε the minimum cost of an arc; the effective depth of the search is then roughly C*/ε. Time complexity: O(b^(C*/ε)). Space complexity: O(b^(C*/ε)).

Advantages: 

  • UCS is complete only if the state space is finite and there is no cycle of zero-weight edges.
  • UCS is optimal only if there are no negative edge costs.

Disadvantages: 

  • Explores options in every “direction”.
  • No information on goal location.
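UCS keeps the same search loop but orders the fringe by cumulative path cost using a priority queue. Below is a minimal sketch; the edge weights are hypothetical, chosen only so that the result matches the path and cost quoted above.

    import heapq

    def ucs(graph, start, goal):
        """Uniform cost search: expand the node with the lowest cumulative cost."""
        frontier = [(0, start, [start])]       # (cost so far, node, path)
        explored = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, cost
            if node in explored:
                continue
            explored.add(node)
            for nxt, step in graph.get(node, []):
                if nxt not in explored:
                    heapq.heappush(frontier, (cost + step, nxt, path + [nxt]))
        return None, float('inf')

    # Hypothetical edge weights chosen only to illustrate the mechanism.
    graph = {'S': [('A', 1), ('D', 4)], 'A': [('B', 2)], 'B': [('G', 2)], 'D': [('G', 5)]}
    print(ucs(graph, 'S', 'G'))    # (['S', 'A', 'B', 'G'], 5)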

Informed Search Algorithms: 

Here, the algorithms have information on the goal state, which helps in more efficient searching. This information is obtained by something called a heuristic.   In this section, we will discuss the following search algorithms. 

  • Greedy Search
  • A* Tree Search
  • A* Graph Search

Search Heuristics: In an informed search, a heuristic is a function that estimates how close a state is to the goal state. Examples include the Manhattan distance and the Euclidean distance. (The smaller the distance, the closer the goal.) Different heuristics are used in the different informed algorithms discussed below.
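Both heuristics mentioned above are one-line functions over grid coordinates. A small generic sketch, not tied to any particular example in this article; positions are assumed to be (row, column) pairs.

    import math

    def manhattan(p, q):
        """Manhattan (city-block) distance between grid positions p and q."""
        return abs(p[0] - q[0]) + abs(p[1] - q[1])

    def euclidean(p, q):
        """Straight-line distance between positions p and q."""
        return math.hypot(p[0] - q[0], p[1] - q[1])

    print(manhattan((0, 0), (2, 3)), euclidean((0, 0), (3, 4)))   # 5  5.0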

Greedy Search:

In greedy search, we expand the node closest to the goal node. The "closeness" is estimated by a heuristic h(x). Heuristic: a heuristic h is defined as h(x) = estimate of the distance of node x from the goal node. The lower the value of h(x), the closer the node is to the goal. Strategy: expand the node closest to the goal state, i.e. the node with the lowest h value. Example:

Question. Find the path from S to G using greedy search. The heuristic value h of each node is given below the name of the node.

[Figure: example graph with heuristic values for greedy search]

Solution. Starting from S, we can traverse to A(h=9) or D(h=5). We choose D, as it has the lower heuristic cost. Now from D, we can move to B(h=4) or E(h=3). We choose E with a lower heuristic cost. Finally, from E, we go to G(h=0). This entire traversal is shown in the search tree below, in blue. 

[Figure: greedy search traversal]

Path:   S -> D -> E -> G 

Advantage: Works well with informed search problems, with fewer steps to reach a goal.  Disadvantage: Can turn into unguided DFS in the worst case.   
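Greedy best-first search is the same priority-queue loop as UCS with the priority changed from the cumulative cost g(x) to the heuristic h(x) alone. The sketch below uses the heuristic values quoted in the example; the edge list and the value of h('S') are assumptions made only for illustration, since the figure is not reproduced here.

    import heapq

    def greedy_search(graph, h, start, goal):
        """Expand the node with the smallest heuristic value h(x)."""
        frontier = [(h[start], start, [start])]
        visited = set()
        while frontier:
            _, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nxt in graph.get(node, []):
                if nxt not in visited:
                    heapq.heappush(frontier, (h[nxt], nxt, path + [nxt]))
        return None

    # h['S'] is assumed; the other heuristic values are quoted in the example.
    h = {'S': 10, 'A': 9, 'D': 5, 'B': 4, 'E': 3, 'G': 0}
    # Edges assumed from the traversal described in the text, not from the figure.
    graph = {'S': ['A', 'D'], 'A': ['B'], 'D': ['B', 'E'], 'B': ['E'], 'E': ['G']}
    print(greedy_search(graph, h, 'S', 'G'))   # ['S', 'D', 'E', 'G']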

A* Tree Search:

A* tree search combines the cumulative (backward) cost used in UCS with the heuristic (forward) cost used in greedy search; each node is evaluated by f(x) = g(x) + h(x).

  • Here, h(x) is called the forward cost and is an estimate of the distance of the current node from the goal node.
  • And, g(x) is called the backward cost and is the cumulative cost of a node from the root node.
  • A* search is optimal only when, for all nodes, the forward cost h(x) never overestimates the actual cost h*(x) to reach the goal. This property of the A* heuristic is called admissibility.

0 \leqslant h(x) \leqslant h^*(x)

Strategy: Choose the node with the lowest f(x) value.  Example:  

Question. Find the path to reach from S to G using A* search. 

[Figure: example graph for the A* search question]

Solution. Starting from S, the algorithm computes g(x) + h(x) for all nodes in the fringe at each step, choosing the node with the lowest sum. The entire work is shown in the table below.  Note that in the fourth set of iterations, we get two paths with equal summed cost f(x), so we expand them both in the next set. The path with a lower cost on further expansion is the chosen path.   

A* Graph Search :

  • A* tree search works well, except that it takes time re-exploring branches it has already explored. In other words, if the same node has been expanded twice in different branches of the search tree, A* search might explore both of those branches, thus wasting time.
  • A* Graph Search, or simply Graph Search, removes this limitation by adding this rule: do not expand the same node more than once. 
  • Heuristic. Graph search is optimal only when the forward cost between two successive nodes A and B, given by h(A) - h(B), is less than or equal to the actual cost g(A -> B) of the edge between those two nodes. This property of the graph search heuristic is called consistency.

h(A) - h(B) \leqslant g(A \to B)

Question. Use graph searches to find paths from S to G in the following graph. 

[Figure: example graph for the A* graph search question]

Solution. We solve this question in much the same way as the last one, but in this case we keep track of the nodes already explored so that we don't re-explore them.

[Figure: A* graph search traversal]

Path:   S -> D -> B -> E -> G  Cost:   7 
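A minimal Python sketch of A* graph search, assuming the graph is given as an adjacency list of (neighbour, edge cost) pairs and h as a dictionary of heuristic values; no specific graph is shown because the example graphs exist only as figures.

    import heapq

    def astar_graph_search(graph, h, start, goal):
        """A* graph search: order the frontier by f = g + h and expand each node once.
        graph maps a node to a list of (neighbour, edge_cost) pairs; h maps a node
        to its heuristic estimate."""
        frontier = [(h[start], 0, start, [start])]    # (f, g, node, path)
        closed = set()                                # nodes already expanded
        while frontier:
            f, g, node, path = heapq.heappop(frontier)
            if node == goal:
                return path, g
            if node in closed:
                continue                              # do not expand the same node twice
            closed.add(node)
            for nxt, cost in graph.get(node, []):
                if nxt not in closed:
                    g2 = g + cost
                    heapq.heappush(frontier, (g2 + h[nxt], g2, nxt, path + [nxt]))
        return None, float('inf')

With a consistent (and hence admissible) heuristic, the first time the goal node is popped from the frontier its path is optimal, which is why the extra closed-set check is safe.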


State-Space Search

Algorithms, Complexity, Extensions, and Applications

Weixiong Zhang, Information Sciences Institute and Department of Computer Science, University of Southern California, Marina del Rey, USA. Springer New York, NY, 1999 (Springer-Verlag New York, Inc.). DOI: https://doi.org/10.1007/978-1-4612-1538-7. Hardcover ISBN 978-0-387-98832-0; softcover ISBN 978-1-4612-7183-3; eBook ISBN 978-1-4612-1538-7. Edition 1, XVI + 201 pages.

Table of contents (9 chapters): State-Space Search for Problem Solving; Algorithms for Combinatorial Optimization; Complexity of State-Space Search for Optimal Solutions; Computational Complexity Transitions; Algorithm Selection; A Study of Branch-and-Bound on the Asymmetric Traveling Salesman Problem; State-Space Transformation for Approximation and Flexible Computation; Forward Pruning for Approximation and Flexible Computation, Part I: Single-Agent Combinatorial Optimization; Forward Pruning for Approximation and Flexible Computation, Part II: Multiagent Game Playing.

Keywords: combinatorial optimization, complexity theory, optimization, problem solving, search algorithm, algorithm analysis and problem complexity. Topics: Logics and Meanings of Programs, Algorithm Analysis and Problem Complexity, Artificial Intelligence.
