21.2 Dijkstra's Algorithm
In Section 20.3, we discussed Prim's algorithm for finding the minimum spanning tree (MST) of a weighted undirected graph: We build it one edge at a time, always taking next the shortest edge that connects a vertex on the MST to a vertex not yet on the MST. We can use a nearly identical scheme to compute an SPT. We begin by putting the source on the SPT; then, we build the SPT one edge at a time, always taking next the edge that gives a shortest path from the source to a vertex not on the SPT. In other words, we add vertices to the SPT in order of their distance (through the SPT) to the start vertex. This method is known as Dijkstra's algorithm.
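Before turning to the ADT-based implementations, the vertex-by-vertex growth just described can be sketched as a standalone Java method built on java.util.PriorityQueue. All names here (DijkstraSketch, shortestPaths, the {vertex, weight} edge arrays) are hypothetical illustrations rather than the book's classes, and lazy deletion of stale queue entries stands in for the decrease-key operation of an indexed priority queue:

```java
import java.util.*;

public class DijkstraSketch {
    // adj[v] holds {w, weight} pairs for edges v->w; weights must be nonnegative.
    static double[] shortestPaths(List<double[]>[] adj, int s) {
        int V = adj.length;
        double[] dist = new double[V];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        // Entries are {vertex, distance}; instead of lowering a key in place,
        // we insert a duplicate and skip stale entries when they surface.
        PriorityQueue<double[]> pq =
            new PriorityQueue<>(Comparator.comparingDouble((double[] a) -> a[1]));
        pq.add(new double[] { s, 0.0 });
        while (!pq.isEmpty()) {
            double[] top = pq.poll();
            int v = (int) top[0];
            if (top[1] > dist[v]) continue;          // stale duplicate
            for (double[] e : adj[v]) {              // relax every edge out of v
                int w = (int) e[0];
                if (dist[v] + e[1] < dist[w]) {
                    dist[w] = dist[v] + e[1];
                    pq.add(new double[] { w, dist[w] });
                }
            }
        }
        return dist;
    }
}
```

Each vertex can enter the queue several times, but only the smallest copy to come off it is processed; when that happens the vertex's distance is final, exactly as in the tree-growing description above.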
As usual, we need to make a distinction between the algorithm at the level of abstraction in this informal description and various concrete implementations (such as Program 21.1) that differ primarily in graph representation and priority-queue implementations, even though such a distinction is not always made in the literature. We shall consider other implementations and discuss their relationships with Program 21.1 after establishing that Dijkstra's algorithm correctly performs the single-source shortest-paths computation.
Property 21.1 Dijkstra's algorithm solves the single-source shortest-paths problem in networks that have nonnegative weights.
Proof: Given a source vertex s, we have to establish that the tree path from the root s to each vertex x in the tree computed by Dijkstra's algorithm corresponds to a shortest path in the graph from s to x. This fact follows by induction. Assuming that the subtree so far computed has the property, we need only to prove that adding a new vertex x adds a shortest path to that vertex. But all other paths to x must begin with a tree path followed by an edge to a vertex not on the tree. By construction, all such paths are longer than the one from s to x that is under consideration.
The same argument shows that Dijkstra's algorithm solves the source–sink shortest-paths problem, if we start at the source and stop when the sink comes off the priority queue. ▪
The proof breaks down if the edge weights could be negative, because it assumes that a path's length does not decrease when we add more edges to the path. In a network with negative edge weights, this assumption is not valid because any edge that we encounter might lead to some tree vertex and might have a sufficiently large negative weight to give a path to that vertex shorter than the tree path. We consider this defect in Section 21.7 (see Figure 21.28).
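A four-vertex example makes the defect concrete. The sketch below is a hypothetical standalone adjacency-matrix version (not Program 21.1) that, like Dijkstra's algorithm, never revisits a vertex once it moves to the tree; the negative edge 2->1 is discovered only after vertex 1 has been finalized at distance 2, so the computed distance to vertex 3 is wrong:

```java
import java.util.*;

public class NegativeEdgeDemo {
    // Finalize-once Dijkstra: each vertex leaves the fringe exactly once
    // and its distance is never revised afterward.
    static double[] dijkstraOnce(double[][] wt, int s) {
        int V = wt.length;
        double[] dist = new double[V];
        boolean[] done = new boolean[V];
        Arrays.fill(dist, Double.POSITIVE_INFINITY);
        dist[s] = 0.0;
        for (int i = 0; i < V; i++) {
            int v = -1;                           // unfinalized vertex of minimal dist
            for (int u = 0; u < V; u++)
                if (!done[u] && (v == -1 || dist[u] < dist[v])) v = u;
            if (dist[v] == Double.POSITIVE_INFINITY) break;
            done[v] = true;
            for (int w = 0; w < V; w++)           // relax edges v->w to nontree vertices
                if (!done[w] && !Double.isNaN(wt[v][w])
                             && dist[v] + wt[v][w] < dist[w])
                    dist[w] = dist[v] + wt[v][w];
        }
        return dist;
    }

    public static void main(String[] args) {
        double N = Double.NaN;                    // NaN marks "no edge"
        // Edges: 0->1 (2), 0->2 (3), 2->1 (-2), 1->3 (1).
        // The true shortest path 0->3 is 0->2->1->3, of length 2, but vertex 1
        // is finalized at distance 2 before the edge 2->1 is ever examined.
        double[][] wt = {
            { N, 2, 3, N },
            { N, N, N, 1 },
            { N, -2, N, N },
            { N, N, N, N } };
        double[] dist = dijkstraOnce(wt, 0);
        System.out.println(dist[3]);              // prints 3.0, not the true 2.0
    }
}
```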
Figure 21.10 shows the evolution of an SPT for a sample graph when computed with Dijkstra's algorithm; Figure 21.11 shows an oriented drawing of a larger SPT. Although Dijkstra's algorithm differs from Prim's MST algorithm in only the choice of priority, SPTs are different in character from MSTs. They are rooted at the start vertex and all edges are directed away from the root, whereas MSTs are unrooted and undirected. We sometimes represent MSTs as directed, rooted trees when we use Prim's algorithm, but such structures are still different in character from SPTs (compare the oriented drawing in Figure 20.9 with the drawing in Figure 21.11). Indeed, the nature of the SPT somewhat depends on the choice of start vertex as well, as depicted in Figure 21.12.
Figure 21.10. Dijkstra's algorithm
Figure 21.11. Shortest-paths spanning tree
Figure 21.12. SPT examples
Dijkstra's original implementation, which is suitable for dense graphs, is precisely like Prim's MST algorithm. Specifically, we simply change the assignment of the priority P in Program 20.6 from
P = e.wt()
(the edge weight) to
P = wt[v] + e.wt()
(the distance from the source to the edge's destination). This change gives the classical implementation of Dijkstra's algorithm: We grow an SPT one edge at a time, each time updating the distance to the tree of all vertices adjacent to its destination while at the same time checking all the nontree vertices to find an edge to move to the tree whose destination vertex is a nontree vertex of minimal distance from the source.
Property 21.2 With Dijkstra's algorithm, we can find any SPT in a dense network in linear time.
Proof: As for Prim's MST algorithm, it is immediately clear, from inspection of the code of Program 20.6, that the nested loops mean that the running time is proportional to V^{2}, which is linear for dense graphs. ▪
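The one-line difference between the two classical dense-graph algorithms can be made concrete in a single hypothetical adjacency-matrix PFS sketch (this is not Program 20.6 itself; NaN marks an absent edge, and a flag selects the priority rule):

```java
import java.util.*;

public class MatrixPFS {
    // Dense-graph PFS on an adjacency matrix.  The flag selects the priority:
    // the edge weight alone grows an MST (Prim); wt[v] plus the edge weight
    // grows an SPT (Dijkstra).  Returns parent links of the resulting tree.
    static int[] pfs(double[][] adj, int s, boolean spt) {
        int V = adj.length;
        double[] wt = new double[V];          // best priority seen for each vertex
        int[] parent = new int[V];            // tree edge into each vertex
        boolean[] inTree = new boolean[V];
        Arrays.fill(wt, Double.POSITIVE_INFINITY);
        Arrays.fill(parent, -1);
        wt[s] = 0.0;
        for (int i = 0; i < V; i++) {
            int v = -1;                       // fringe vertex of minimal priority
            for (int u = 0; u < V; u++)
                if (!inTree[u] && (v == -1 || wt[u] < wt[v])) v = u;
            if (wt[v] == Double.POSITIVE_INFINITY) break;   // rest unreachable
            inTree[v] = true;
            for (int w = 0; w < V; w++) {
                if (inTree[w] || Double.isNaN(adj[v][w])) continue;
                double P = spt ? wt[v] + adj[v][w]          // Dijkstra priority
                               : adj[v][w];                 // Prim priority
                if (P < wt[w]) { wt[w] = P; parent[w] = v; }
            }
        }
        return parent;
    }
}
```

The doubly nested structure (a V-step scan to pick the next vertex, inside a loop over all V vertices) is the source of the V^{2} bound in the proof above.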
For sparse graphs, we can do better by using a linked-list representation and a priority queue. Implementing this approach simply amounts to viewing Dijkstra's algorithm as a generalized graph-searching method that differs from depth-first search (DFS), from breadth-first search (BFS), and from Prim's MST algorithm in only the rule used to add edges to the tree. As in Chapter 20, we keep edges that connect tree vertices to nontree vertices on a generalized queue called the fringe, use a priority queue to implement the generalized queue, and provide for updating priorities so as to encompass DFS, BFS, and Prim's algorithm in a single implementation (see Section 20.3). This priority-first search (PFS) scheme also encompasses Dijkstra's algorithm. That is, changing the assignment of P in Program 20.7 to
P = wt[v] + e.wt()
(the distance from the source to the edge's destination) gives an implementation of Dijkstra's algorithm that is suitable for sparse graphs.
Program 21.1 is an alternative PFS implementation for sparse graphs that is slightly simpler than Program 20.7 and that directly matches the informal description of Dijkstra's algorithm given at the beginning of this section. It differs from Program 20.7 in that it initializes the priority queue with all the vertices in the network and maintains the queue with the aid of a sentinel value for those vertices that are neither on the tree nor on the fringe (unseen vertices with sentinel values); in contrast, Program 20.7 keeps on the priority queue only those vertices that are reachable by a single edge from the tree. Keeping all the vertices on the queue simplifies the code but can incur a small performance penalty for some graphs (see Exercise 21.31).
The general results that we considered concerning the performance of priority-first search (PFS) in Chapter 20 give us specific information about the performance of these implementations of Dijkstra's algorithm for sparse graphs (Program 21.1 and Program 20.7, suitably modified). For reference, we restate those results in the present context. Since the proofs do not depend on the priority function, they apply without modification. They are worst-case results that apply to both programs, although Program 20.7 may be more efficient for many classes of graphs because it maintains a smaller fringe.
Property 21.4 For all networks and all priority functions, we can compute a spanning tree with PFS in time proportional to the time required for V insert, V delete the minimum, and E decrease key operations in a priority queue of size at most V.
Proof: This fact is immediate from the priority-queue-based implementations in Program 20.7 or Program 21.1. It represents a conservative upper bound because the size of the priority queue is often much smaller than V, particularly for Program 20.7. ▪
Property 21.5 With a PFS implementation of Dijkstra's algorithm that uses a heap for the priority-queue implementation, we can compute any SPT in time proportional to E lg V.
Program 21.1 Dijkstra's algorithm (priority-first search)
This class implements a single-source shortest-paths ADT with linear-time preprocessing, private data that takes space proportional to V, and constant-time member methods that give the length of the shortest path and the final vertex on the path from the source to any given vertex. The constructor is an implementation of Dijkstra's algorithm that uses a priority queue of vertices (in order of their distance from the source) to compute an SPT. The priority-queue interface is the same one used in Program 20.7 and implemented in Program 20.10.
The constructor is also a generalized graph search that implements other PFS algorithms with other assignments to the priority P (see text). The statement to reassign the weight of tree vertices to 0 is needed for a general PFS implementation but not for Dijkstra's algorithm, since the priorities of the vertices added to the SPT are nondecreasing.
class GraphSPT
{ private double[] wt;
  private Edge[] spt;
  GraphSPT(Graph G, int s)
  { int V = G.V();
    wt = new double[V]; spt = new Edge[V];
    for (int v = 0; v < V; v++) wt[v] = maxWT;
    doublePQi pQ = new doublePQi(V, wt);
    for (int v = 0; v < V; v++) pQ.insert(v);
    wt[s] = 0.0; pQ.lower(s);
    while (!pQ.empty())
    { int v = pQ.getmin();       // wt[v] = 0.0;
      if (v != s && spt[v] == null) return;
      AdjList A = G.getAdjList(v);
      for (Edge e = A.beg(); !A.end(); e = A.nxt())
      { int w = e.other(v);
        double P = wt[v] + e.wt();
        if (P < wt[w])
        { wt[w] = P; pQ.lower(w); spt[w] = e; }
      }
    }
  }
  Edge pathR(int v) { return spt[v]; }
  double dist(int v) { return wt[v]; }
}
Proof: This result is a direct consequence of Property 21.4. ▪
Property 21.6 Given a graph with V vertices and E edges, let d denote the density E/V. If d < 2, then the running time of Dijkstra's algorithm is proportional to V lg V. Otherwise, we can improve the worst-case running time by a factor of lg(E/V), to O(E lg_{d} V) (which is linear if E is at least V^{1+ε}), by using a ⌈E/V⌉-ary heap for the priority queue.
Proof: This result directly mirrors Property 20.12 and the multiway-heap priority-queue implementation discussed directly thereafter. ▪
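The only change the ⌈E/V⌉-ary heap requires is in the heap's index arithmetic. A minimal sketch (hypothetical helper methods, 0-based array storage) shows why every root-to-node path, and hence the cost of each delete-the-minimum operation, shrinks to about lg_{d} V:

```java
public class DHeapIndex {
    // Index arithmetic for a d-ary heap stored in a 0-based array:
    // the children of node i occupy positions d*i+1 .. d*i+d,
    // and the parent of node i (for i > 0) is (i-1)/d.
    static int parent(int i, int d) { return (i - 1) / d; }
    static int child(int i, int d, int k) { return d * i + k + 1; } // k in 0..d-1

    // Depth of node i: the number of parent steps back to the root,
    // about log base d of the heap size.
    static int depth(int i, int d) {
        int steps = 0;
        while (i > 0) { i = parent(i, d); steps++; }
        return steps;
    }
}
```

For example, node 20 sits at depth 4 in a binary heap but only depth 2 in a 4-ary heap. The trade-off is that each sink step must scan d children, which is why the wider heap pays off only when decrease key operations (which move items toward the shallower root) dominate delete-the-minimum operations.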
Table 21.1. Priority-first search algorithms
These four classical graph-processing algorithms all can be implemented with PFS, a generalized priority-queue-based graph search that builds graph spanning trees one edge at a time. Details of search dynamics depend upon graph representation, priority-queue implementation, and PFS implementation; but the search trees generally characterize the various algorithms, as illustrated in the figures referenced in the fourth column.
  algorithm   priority           result           Figure

  DFS         reverse preorder   recursion tree   18.13
  BFS         preorder           SPT (edges)      18.24
  Prim        edge weight        MST              20.8
  Dijkstra    path weight        SPT              21.9
Table 21.1 summarizes pertinent information about the four major PFS algorithms that we have considered. They differ in only the priority function used, but this difference leads to spanning trees that are entirely different from one another in character (as required). For the example in the figures referred to in the table (and for many other graphs), the DFS tree is tall and thin, the BFS tree is short and fat, the SPT is like the BFS tree but neither quite as short nor quite as fat, and the MST is neither short and fat nor tall and thin.
We have also considered four different implementations of PFS. The first is the classical dense-graph implementation that encompasses Dijkstra's algorithm and Prim's MST algorithm (Program 20.6); the other three are sparse-graph implementations that differ in priority-queue contents:

Fringe edges (Program 18.10)

Fringe vertices (Program 20.7)

All vertices (Program 21.1)
Of these, the first is primarily of pedagogical value, the second is the most refined of the three, and the third is perhaps the simplest. This framework already describes 16 different implementations of classical graph-search algorithms; when we factor in different priority-queue implementations, the possibilities multiply further. This proliferation of networks, algorithms, and implementations underscores the utility of the general statements about performance in Properties 21.4 through 21.6, which are also summarized in Table 21.2.
Table 21.2. Cost of implementations of Dijkstra's algorithm
This table summarizes the cost (worstcase running time) of various implementations of Dijkstra's algorithm. With appropriate priorityqueue implementations, the algorithm runs in linear time (time proportional to V^{2} for dense networks, E for sparse networks), except for networks that are extremely sparse. 
  algorithm          worst-case cost   comment

  classical          V^{2}             optimal for dense graphs
  PFS, full heap     E lg V            simplest implementation
  PFS, fringe heap   E lg V            conservative upper bound
  PFS, d-heap        E lg_{d} V        linear unless extremely sparse
As is true of MST algorithms, actual running times of shortestpaths algorithms are likely to be lower than these worstcase time bounds suggest, primarily because most edges do not necessitate decrease key operations. In practice, except for the sparsest of graphs, we regard the running time as being linear.
The name Dijkstra's algorithm is commonly used to refer both to the abstract method of building an SPT by adding vertices in order of their distance from the source and to its implementation as the V^{2} algorithm for the adjacency-matrix representation, because Dijkstra presented both in his 1959 paper (and also showed that the same approach could compute the MST). Performance improvements for sparse graphs are dependent on later improvements in ADT technology and priority-queue implementations that are not specific to the shortest-paths problem. Improved performance of Dijkstra's algorithm is one of the most important applications of that technology (see reference section). As with MSTs, we use terminology such as the "PFS implementation of Dijkstra's algorithm using d-heaps" to identify specific combinations.
We saw in Section 18.8 that, in unweighted undirected graphs, using preorder numbering for priorities causes the priority queue to operate as a FIFO queue and leads to a BFS. Dijkstra's algorithm gives us another realization of BFS: When all edge weights are 1, it visits vertices in order of the number of edges on the shortest path to the start vertex. The priority queue does not operate precisely as a FIFO queue would in this case, because items with equal priority do not necessarily come out in the order in which they went in.
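This correspondence is easy to check directly. The hypothetical sketch below (not one of the book's classes) runs a unit-weight Dijkstra and a FIFO-queue BFS on the same graph; the two distance arrays always match, even though vertices with equal priority may come off the priority queue in a different order than they would leave the FIFO queue:

```java
import java.util.*;

public class UnitWeightDemo {
    // Dijkstra's algorithm with every edge weight fixed at 1:
    // distances count the edges on a shortest path from s, i.e. BFS levels.
    static int[] dijkstraUnit(List<Integer>[] adj, int s) {
        int V = adj.length;
        int[] dist = new int[V];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        PriorityQueue<int[]> pq =
            new PriorityQueue<>(Comparator.comparingInt((int[] a) -> a[1]));
        pq.add(new int[] { s, 0 });
        while (!pq.isEmpty()) {
            int[] t = pq.poll();
            int v = t[0];
            if (t[1] > dist[v]) continue;            // stale entry
            for (int w : adj[v])
                if (dist[v] + 1 < dist[w]) {
                    dist[w] = dist[v] + 1;
                    pq.add(new int[] { w, dist[w] });
                }
        }
        return dist;
    }

    // Standard BFS with a FIFO queue, for comparison.
    static int[] bfs(List<Integer>[] adj, int s) {
        int V = adj.length;
        int[] dist = new int[V];
        Arrays.fill(dist, Integer.MAX_VALUE);
        dist[s] = 0;
        Deque<Integer> q = new ArrayDeque<>();
        q.add(s);
        while (!q.isEmpty()) {
            int v = q.poll();
            for (int w : adj[v])
                if (dist[w] == Integer.MAX_VALUE) { dist[w] = dist[v] + 1; q.add(w); }
        }
        return dist;
    }
}
```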
Each of these implementations puts the edges of an SPT from vertex 0 in the vertex-indexed array spt, with the shortest-path length to each vertex in the SPT in the vertex-indexed array wt, and provides methods that give clients access to this data. As usual, we can build various graph-processing methods and classes around this basic data (see Exercises 21.21 through 21.28).
Exercises

21.19 Show, in the style of Figure 21.10, the result of using Dijkstra's algorithm to compute the SPT of the network defined in Exercise 21.1 with start vertex 0.

21.20 How would you find a second shortest path from s to t in a network?

21.21 Write a client that uses a GraphSPT object to find the most distant vertex from a given vertex s (the vertex whose shortest path from s is the longest).

21.22 Write a client that uses a GraphSPT object to compute the average of the lengths of the shortest paths from a given vertex to each of the vertices reachable from it.

21.23 Develop a class based on Program 21.1 with a path method that returns a Java Vector containing references to the edges on the shortest path connecting s and t in order from s to t.

21.24 Write a client that uses your class from Exercise 21.23 to print the shortest paths from a given vertex to each of the other vertices in a given network.

21.25 Write a client that uses a GraphSPT object to find all vertices within a given distance d of a given vertex in a given network. The running time of your method should be proportional to the size of the subgraph induced by those vertices and the vertices incident on them.

21.26 Develop an algorithm for finding an edge whose removal causes maximal increase in the shortest-path length from one given vertex to another given vertex in a given network.

21.27 Implement a class that uses GraphSPT objects to perform a sensitivity analysis on the network's edges with respect to a given pair of vertices s and t: Compute a V-by-V matrix such that, for every u and v, the entry in row u and column v is 1 if uv is an edge in the network whose weight can be increased without the shortest-path length from s to t being increased and is 0 otherwise.

21.28 Implement a class that uses GraphSPT objects to find a shortest path connecting one given set of vertices with another given set of vertices in a given network.

21.29 Use your solution from Exercise 21.28 to implement a client that finds a shortest path from the left edge to the right edge in a random grid network (see Exercise 20.17).

21.30 Show that an MST of an undirected graph is equivalent to a bottleneck SPT of the graph: For every pair of vertices v and w, it gives the path connecting them whose longest edge is as short as possible.

21.31 Run empirical studies to compare the performance of the two versions of Dijkstra's algorithm for the sparse graphs that are described in this section (Program 21.1 and Program 20.7, with suitable priority definition) for various networks (see Exercises 21.4–8). Use a standard-heap priority-queue implementation.

21.32 Run empirical studies to learn the best value of d when using a dheap priorityqueue implementation (see Program 20.10) for each of the three PFS implementations that we have discussed (Program 18.10, Program 20.7 and Program 21.1) for various networks (see Exercises 21.4–8).

21.33 Run empirical studies to determine the effect of using an index-heap-tournament priority-queue implementation (see Exercise 9.53) in Program 21.1 for various networks (see Exercises 21.4–8).

21.34 Run empirical studies to analyze height and average path length in SPTs for various networks (see Exercises 21.4–8).

21.35 Develop a class for the source–sink shortest-paths problem that is based on code like Program 21.1 but that initializes the priority queue with both the source and the sink. Doing so leads to the growth of an SPT from each vertex; your main task is to decide precisely what to do when the two SPTs collide.

21.36 Describe a family of graphs with V vertices and E edges for which the worstcase running time of Dijkstra's algorithm is achieved.

21.37 Develop a reasonable generator for random graphs with V vertices and E edges for which the running time of the heapbased PFS implementation of Dijkstra's algorithm is superlinear.

21.38 Write a client program that does dynamic graphical animations of Dijkstra's algorithm. Your program should produce images like Figure 21.11 (see Exercises 17.56 through 17.60). Test your program on random Euclidean networks (see Exercise 21.9).