An Interview with Robert Sedgewick on the Analysis of Algorithms
Robert Sedgewick is probably best known among programmers for his books on algorithms. Within computer science though, his reputation rests on his work in the mathematical analysis of algorithms. He is currently a professor of computer science at Princeton University. Andrew Binstock interviews him on the occasion of the publication of the second edition of his seminal work with his late co-author Philippe Flajolet, An Introduction to the Analysis of Algorithms.
Andrew Binstock: What is new and different in this edition? Why should someone with the first edition buy this volume?
Robert Sedgewick: The main difference is a new chapter in the middle that provides an introduction to analytic combinatorics. The first half of the book is an introduction to the math that serves as the basis for analytic combinatorics; the second half of the book covers applications that show off the power of the approach.
Andrew: Has the role of analysis of algorithms in computing changed significantly in the time span between the two editions? If so, how?
Robert: Mathematical models that can serve as the basis for the scientific study of the performance of computer programs have played a critical role in the development of our computational infrastructure. They are no less important now, as that infrastructure continues to evolve. One surprise (to me) in the two decades Philippe and I worked on analytic combinatorics is that the models and methods we address are no less important in many other scientific disciplines. That is, the potential impact of our work is much broader than I imagined when we embarked on this work more than 25 years ago.
The methods of teaching the analysis of algorithms have changed significantly as well. For example, I'll shortly be teaching an online course from this book. I’m also doing a course on analytic combinatorics in November.
Andrew: You wrote in Sec 1.1 of the new book, "…analysis of algorithms has been used to describe…general approaches to putting the study of the performance of computer programs on a scientific basis." And the book takes this approach, strictly. So, am I right that analysis of algorithms does not involve proving the success of the algorithm? For example, if I write an algorithm to generate random numbers, analysis of the quality of my results is not, strictly speaking, part of analysis of algorithms. Just the performance. Is that right?
Robert: No, "performance," as I use the term, is intended to mean "any property of interest." Most often, it's some cost measure like time or space, but others are certainly within scope.
Andrew: In the context of pure performance, the only reference to parallel processing or concurrency in the book is this statement near the end (p. 465), which comes in a discussion of strings. You observe "This [work in parallel] can lead to substantial speedups for some applications." In fact, parallelizing algorithms, such as sorts and searches, can profoundly change the performance profile. Why would parallel processing and concurrency not be major topics in a treatment of analysis of algorithms?
Robert: We just cover the basic foundations, which are broadly applicable. Any demonstration of the effectiveness of parallelism and concurrency will have to use the scientific method: build appropriate mathematical models, develop hypotheses about performance, and validate the hypotheses through experimentation. This situation is quite familiar in other fields of science. For example, the same basic foundations can predict where a stone thrown by a child will land and where a mortar shell will land, but no calculus text describes the properties of all sorts of projectiles in detail.
Andrew: In section 9.3, you briefly discuss the effects of caches. But nowhere do you mention cache-aware and cache-oblivious algorithms. Is this material for a 3rd edition or are they irrelevant to analysis of algorithms? If the latter, why would something as closely tied to algorithm performance as caches not be relevant?
Robert: Yes, I'd like to write about cache-oblivious algorithms. That's a good example where people have demonstrated that these methods are effective via the scientific method.
Andrew: Your book involves considerable mathematics. However, if I turn to an article in Dr. Dobb’s written by you and Jon Bentley on radix sorts, you state with regard to quicksort: "On most systems, [Quicksort] is indeed an efficient and powerful general-purpose tool. But the power of the general compare function comes at a price in performance. Experiments we've conducted show that our specialized string sort can be faster than qsort by a factor of four." (emphasis mine) What then is the balance between empirical determination of performance optimization capabilities and the mathematical approach embraced by analysis of algorithms? Extending the question: How should programmers use analysis of algorithms--should it be an adjunct to hands-on testing? How would this work out in a real-world situation where a new algorithm is being implemented?
Robert: That's a fine example of the scientific method in action. We run experiments, develop mathematical models that help us develop hypotheses about performance and choose values of parameters, then run more experiments to validate and refine the models. Several research papers followed that Dr. Dobb's article, developing detailed mathematical models that have helped us better understand string sorting. No one should "implement a new algorithm" without having some way to predict performance and validate the implementation.
Andrew: A common complaint about Knuth's The Art of Computer Programming is that the material relies very heavily on mathematics, and so it deters working programmers (as opposed to computer scientists). Your work embraces math. Is there no way for working programmers to approach analysis of algorithms save through the portal of calculus?
Robert: In many situations, it is not difficult at all. We give a simple model in Algorithms, 4th edition that is often effective: Ask yourself the following question: what happens to the running time if the input size is doubled? If it doubles, you have a linear-time algorithm; if it goes up by a factor of 4, you have a quadratic algorithm, which does not scale to huge input sizes. All too often, products with this simple but glaring defect are shipped to customers.
Sure, the mathematics can get complicated, and no one is advocating that every programmer fully analyze every algorithm, but every programmer should know to avoid poor performance for applications that can have huge inputs.
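The doubling test Sedgewick describes is easy to automate. Here is a minimal sketch in Python (the function and parameter names are illustrative, not from the interview): time a function on inputs of size n, 2n, 4n, and so on, and look at the ratio of successive running times. A ratio near 2 suggests linear growth; a ratio near 4 suggests quadratic growth.

```python
import time

def doubling_ratios(f, start_n=300, trials=3):
    """Run f on inputs of size n, 2n, 4n, ... and return the
    ratios of successive running times. Ratios near 2 suggest
    linear growth; ratios near 4 suggest quadratic growth."""
    prev = None
    n = start_n
    ratios = []
    for _ in range(trials):
        t0 = time.perf_counter()
        f(n)
        elapsed = time.perf_counter() - t0
        if prev is not None:
            ratios.append(elapsed / prev)
        prev = elapsed
        n *= 2
    return ratios

def quadratic(n):
    # A deliberately quadratic-time loop: n * n iterations.
    total = 0
    for i in range(n):
        for j in range(n):
            total += 1
    return total
```

Running `doubling_ratios(quadratic)` should report ratios clustering around 4, flagging the kind of quadratic defect Sedgewick warns against before it ships.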
Not everything that one studies need have direct application to the job at hand. Lots of programmers are intrigued by the sorts of problems addressed by Knuth. One goal of our book is to give people the basic tools they need to learn more.
I'm finding that many programmers nowadays have much more math and science background than in the not-so-distant past, precisely because mathematics and the scientific approach are essential in a broad swath of modern development.
Andrew: Can data structures be analyzed as rigorously as algorithms? For example, the performance of one kind of tree vs. another or open vs. closed hash tables—and so on.
Robert: Yes. Algorithms and data structures go hand-in-hand, so that analyzing properties of a data structure can tell us what we need to know to analyze an algorithm.
Andrew: Algorithms today are primarily implemented as libraries. Like data structures, developers rarely write them from scratch anymore. While this might signal the triumph of reusable code, it seems to me that developers are losing the ability to write or even to understand algorithms. Do you see an erosion of skills with regard to algorithm design and implementation? If so, what needs to change?
Robert: One thing that we have learned is that people need to focus on narrow interfaces because wide interfaces contain performance traps. Programmers tend to assume that library implementations will be efficient, but it is often the case that various combinations of methods in a wide interface will lead to poor performance, unbeknownst to the programmer. Again, every programmer needs to validate any assumptions about performance through analysis and experimentation.
Andrew: In a 1999 interview, you were asked to list the top 10 books every developer should have. Here is the list you gave then. What would that list be today?
- Structure and Interpretation of Computer Programs, by Abelson and Sussman. MIT Press, 1996.
- PostScript Language Reference Manual, by Adobe Systems. Addison-Wesley.
- The AWK Programming Language, by Aho, Weinberger, and Kernighan. Addison-Wesley, 1988.
- The Java Programming Language, Second Edition, by Arnold and Gosling. Addison-Wesley.
- Maple V Reference Manual, by Char, Geddes, Gonnet, Leong, Monagan, and Watt. Springer Verlag, 1991.
- The C Programming Language, by Kernighan and Ritchie. Prentice Hall, 1988.
- The Art of Computer Programming, by Knuth. Addison-Wesley, 1997, 1998.
- The TeXbook, by Knuth. Addison-Wesley, 1986.
- The C++ Programming Language, by Stroustrup. Addison-Wesley.
- The Visual Display of Quantitative Information, by Tufte. Cheshire Press, 1992.
Robert: Let me respond with a list of good books on algorithms. Including my own, I’d point to these volumes:
- The Art of Computer Programming, Volumes 1-4A, by Knuth. Addison-Wesley, 1997, 1998, 2011
- Introduction to Algorithms, by Cormen, Leiserson, Rivest, and Stein, 3rd edition, 2009
- Algorithm Design, by J. Kleinberg and E. Tardos, 2005
- Randomized Algorithms, by R. Motwani and P. Raghavan, 1995
- Algorithms, by Dasgupta, Papadimitriou, and Vazirani, 2008
- The Design of Approximation Algorithms, by Williamson and Shmoys, 2011
- Network Flows, by Ahuja, Magnanti, and Orlin, 1993
Andrew: Terrific. Thank you!
Andrew Binstock is the editor in chief of Dr. Dobb’s. He previously interviewed Donald Knuth for InformIT.