The demo runs on a 15 × 15 grid; the legend distinguishes Start, Goal, Visited, Frontier, and Path cells.
Select an algorithm from the dropdown, then use the controls to run the search, pause it, or generate a new grid with random obstacles. While no algorithm is running, you can click or drag on the grid to move the goal or add obstacles.
Dijkstra's Algorithm
Published by Edsger Dijkstra in A Note on Two Problems in Connexion with Graphs (1959). At each step it selects the unvisited node with the smallest accumulated cost from the source, expanding outward in order of increasing distance:

$$n^* = \arg\min_{n \in \text{unvisited}} g(n)$$

where $g(n)$ is the exact cost of the cheapest known path from the start to node $n$. Because the search uses no information about where the goal is, it explores uniformly in all directions. This guarantees an optimal path on any non-negative weighted graph such as this grid, but at the cost of visiting many nodes that lie away from the goal.

With a binary min-heap the time complexity of Dijkstra's algorithm is $O((V + E)\log V)$, where $V$ and $E$ are the numbers of vertices and edges. On this unweighted grid $E = O(V)$, so that simplifies to $O(V \log V)$.
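As a concrete sketch (not the visualiser's own code), the selection rule can be implemented with a binary heap over a 0/1 obstacle matrix with 4-connected, unit-cost moves; the function name, grid encoding, and coordinate convention here are illustrative choices:

```python
import heapq

def dijkstra(grid, start, goal):
    """Return the cost of a cheapest path on a 4-connected unit-cost grid."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}                      # g(n): best known cost from start
    heap = [(0, start)]                    # min-heap ordered by g(n)
    while heap:
        g, node = heapq.heappop(heap)
        if node == goal:
            return g                       # settled in order of increasing g
        if g > dist.get(node, float("inf")):
            continue                       # stale heap entry, skip
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0
                    and g + 1 < dist.get(nb, float("inf"))):
                dist[nb] = g + 1           # relax the edge
                heapq.heappush(heap, (g + 1, nb))
    return None                            # goal unreachable

grid = [[0, 0, 0],
        [1, 1, 0],                         # 1 marks an obstacle
        [0, 0, 0]]
```

On this 3 × 3 example the only route from (0, 0) to (2, 0) detours around the wall, so the returned cost is 6.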
A* (A-star)
A* was introduced by Hart, Nilsson, and Raphael in A Formal Basis for the Heuristic Determination of Minimum Cost Paths (1968). At each step A* selects the open node that minimises:

$$f(n) = g(n) + h(n)$$

where $g(n)$ is the exact cost from the start to node $n$, and $h(n)$ is an admissible heuristic estimate of the cost from $n$ to the goal. On this grid we use Manhattan distance:

$$h(n) = |r_n - r_{\text{goal}}| + |c_n - c_{\text{goal}}|$$

where $r_n$ and $c_n$ are the row and column indices of node $n$. Because $h(n)$ never overestimates the true cost, A* is guaranteed to find the shortest path.

The A* algorithm's worst-case time and space complexity is $O(b^d)$, where $b$ is the branching factor and $d$ is the depth of the optimal solution, but in practice the heuristic prunes the search dramatically.
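The same relaxation loop as Dijkstra's, reordered by $f(n) = g(n) + h(n)$ with the Manhattan heuristic, gives A*. This is an illustrative sketch under the same 0/1-grid assumptions, not the page's actual implementation:

```python
import heapq

def astar(grid, start, goal):
    """A* on a 4-connected unit-cost grid with the Manhattan heuristic."""
    def h(n):                              # admissible: never overestimates
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    g = {start: 0}
    open_heap = [(h(start), start)]        # ordered by f(n) = g(n) + h(n)
    while open_heap:
        f, node = heapq.heappop(open_heap)
        if node == goal:
            return g[node]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0
                    and g[node] + 1 < g.get(nb, float("inf"))):
                g[nb] = g[node] + 1
                heapq.heappush(open_heap, (g[nb] + h(nb), nb))
    return None

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
```

The cost returned matches Dijkstra's on the same grid; only the exploration order changes.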
Bidirectional A*
Hart, Nilsson, and Raphael noted that the same heuristic framework could be applied in both directions simultaneously. Ira Pohl later formalised this into a complete bidirectional algorithm in Bi-directional and Heuristic Search in Path Problems (1969). The key idea is to run two A* searches in parallel, one forward from the start and one backward from the goal, each scored by the Manhattan heuristic toward the opposite endpoint:

$$f_f(n) = g_f(n) + h(n, \text{goal}), \qquad f_b(n) = g_b(n) + h(n, \text{start})$$

An incumbent best cost $\mu$ is maintained. Whenever a node $n$ has been settled by both directions, the algorithm updates:

$$\mu \leftarrow \min\bigl(\mu,\ g_f(n) + g_b(n)\bigr)$$

Finally, the search terminates once the minimum $f$-values of the two open sets satisfy:

$$\max\Bigl(\min_{n \in \mathrm{Open}_f} f_f(n),\ \min_{n \in \mathrm{Open}_b} f_b(n)\Bigr) \ge \mu$$

No unexamined path can then improve on $\mu$, so the meeting node yields an optimal solution. Because each frontier only needs to grow about halfway to the other, the explored area roughly halves, reducing complexity from $O(b^d)$ to $O(b^{d/2})$.
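A compact sketch of the two-frontier scheme, including the $\mu$ update and a termination test of the form above; the names, the 0/1-grid convention, and the tie-breaking are all illustrative choices rather than the visualiser's own code:

```python
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def bidirectional_astar(grid, start, goal):
    """Two A* frontiers; mu holds the best start-goal cost found so far."""
    if start == goal:
        return 0
    rows, cols = len(grid), len(grid[0])
    g = [{start: 0}, {goal: 0}]                    # forward / backward g-costs
    heaps = [[(manhattan(start, goal), start)],    # forward open set
             [(manhattan(goal, start), goal)]]     # backward open set
    targets = [goal, start]
    mu = float("inf")
    while heaps[0] and heaps[1]:
        # Stop once neither open set can still beat the incumbent mu.
        if max(heaps[0][0][0], heaps[1][0][0]) >= mu:
            break
        d = 0 if heaps[0][0][0] <= heaps[1][0][0] else 1  # smaller top f
        _, node = heapq.heappop(heaps[d])
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0
                    and g[d][node] + 1 < g[d].get(nb, float("inf"))):
                g[d][nb] = g[d][node] + 1
                heapq.heappush(heaps[d],
                               (g[d][nb] + manhattan(nb, targets[d]), nb))
                if nb in g[1 - d]:                 # frontiers meet: update mu
                    mu = min(mu, g[d][nb] + g[1 - d][nb])
    return mu if mu < float("inf") else None
```

Because $\mu$ only ever records the cost of an actual start-goal path, the returned value is optimal whenever the termination test fires.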
Breadth-First Search (BFS)
First applied to shortest-path problems by E. F. Moore in The Shortest Path Through a Maze (1959). BFS maintains a FIFO queue and explores nodes layer by layer. At each step it dequeues the front node and enqueues all unvisited neighbours.

Every node at hop-count $k$ is visited before any at hop-count $k + 1$. On an unweighted grid every edge has cost 1, so hop-count equals path cost, making BFS optimal without the priority-queue overhead of Dijkstra's. Both the time and space complexity of BFS are $O(V + E)$, which on this grid (where $E = O(V)$) is simply $O(V)$, with $V$ the number of vertices.
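The layer-by-layer expansion is just a FIFO queue; a minimal sketch under the same illustrative 0/1-grid convention:

```python
from collections import deque

def bfs(grid, start, goal):
    """Return the hop count of a shortest path on an unweighted grid."""
    rows, cols = len(grid), len(grid[0])
    queue = deque([(start, 0)])            # (node, hop count)
    visited = {start}
    while queue:
        node, k = queue.popleft()          # all of layer k before layer k+1
        if node == goal:
            return k
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in visited):
                visited.add(nb)            # mark on enqueue, not dequeue
                queue.append((nb, k + 1))
    return None
```

Marking nodes when they are enqueued (rather than dequeued) keeps each node in the queue at most once, which is what gives the $O(V)$ space bound on the grid.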
Greedy Best-First Search (GBFS)
Described by Doran and Michie in Experiments with the Graph Traverser Program (1966). At each step it selects the open node that minimises only the heuristic estimate, ignoring accumulated cost entirely:

$$f(n) = h(n)$$

On this grid $h(n)$ is the same Manhattan distance used by A*. Because $g(n)$ is dropped, the search dashes straight toward the goal. It is very fast and highly directed, but it no longer guarantees an optimal path: a well-placed obstacle can force the algorithm down a long detour even when the shortest path lies nearby.

The worst-case time and space complexity of GBFS matches A* at $O(b^d)$, but average-case performance is often much better.
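Dropping $g(n)$ leaves the heuristic alone as the priority key. A sketch with parent links so the found path (valid, but not necessarily shortest) can be read back; names and grid encoding are illustrative:

```python
import heapq

def greedy_best_first(grid, start, goal):
    """Expand the open node with the smallest h(n); path cost is ignored."""
    def h(n):
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])
    rows, cols = len(grid), len(grid[0])
    open_heap = [(h(start), start)]        # keyed by h alone: f(n) = h(n)
    parent = {start: None}
    while open_heap:
        _, node = heapq.heappop(open_heap)
        if node == goal:
            path = []
            while node is not None:        # walk parent links back to start
                path.append(node)
                node = parent[node]
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in parent):
                parent[nb] = node
                heapq.heappush(open_heap, (h(nb), nb))
    return None
```

The returned path is whatever route the greedy expansions stitched together; nothing forces it to be the shortest one.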
Depth-First Search (DFS)
Formalised by Robert Tarjan in Depth-First Search and Linear Graph Algorithms (1971). DFS maintains a LIFO stack and at each step pops the top node, pushing all unvisited neighbours.

This strategy drives the search as deep as possible along a branch before backtracking, systematically exploring each path to its fullest extent before considering alternative branches.

The time complexity of DFS is $O(V + E)$ and its space complexity is only $O(d)$, where $d$ is the maximum search depth. This makes it much more memory-efficient than BFS on deep, narrow graphs. However, the path it discovers is generally far from optimal.
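Swapping BFS's queue for a stack yields DFS. A minimal iterative sketch (illustrative names, 0/1 obstacle matrix) that returns whatever path it reaches first; note that carrying the full path on the stack trades some of the $O(d)$ space bound for simplicity:

```python
def dfs(grid, start, goal):
    """Iterative DFS; the returned path is valid but generally not shortest."""
    rows, cols = len(grid), len(grid[0])
    stack = [(start, [start])]             # LIFO: deepest node expands first
    visited = {start}
    while stack:
        node, path = stack.pop()
        if node == goal:
            return path
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nb = (node[0] + dr, node[1] + dc)
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0 and nb not in visited):
                visited.add(nb)            # mark on push to avoid revisits
                stack.append((nb, path + [nb]))
    return None
```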