Managing Software Debt: An Interview with Chris Sterling
He has been a consultant, Agile coach, and Certified Scrum Trainer, and is currently the vice president of engineering at AgileEVM.com and a partner at Sterling Barton, a high-tech and management consultancy. Yet I am most interested in Chris Sterling as the author of Managing Software Debt.
Part programmer, part Dave Ramsey, and part motivational speaker, Chris uses the book as an opportunity to talk about how software deals with change over time, and the long-term consequences of short-term "short cuts." From allowing bugs to failing to build test automation to short-sighted design—not to mention just ugly code—Chris's book includes a litany of poor choices combined with some alternatives.
I took the opportunity to ask the man himself a few questions: Just what is this debt metaphor, how can we introduce it into an organization, and how can we use it to communicate? Most importantly, how can we use the "Software Debt" metaphor to create lasting change?
Matt Heusser: I suspect all our readers are familiar with investments and debt, and most of us are involved in creating software. Can you tell us how these ideas combine to make "software debt"? What exactly do you mean?
Chris Sterling: Sure. As software development team members, it is quite probable that we have all been guilty of writing code that is not perfect, not automating that test so we can just get it "done," or checking in code before it was tested well. Technical debt, a term coined on the c2.com wiki by Ward Cunningham, was used to describe these shortcuts that we take to seemingly finish faster but that will have downstream consequences. The relationship between these shortcuts and financial debt stems from the idea that we are extending ourselves and our software now but will have to pay it back, plus interest, later. When we choose to take shortcuts, it usually costs us more to "fix" them later than it would have cost to build them without the shortcuts in the first place.
For me, technical debt, as described by Ward and others, only told some of the story around the types of shortcuts that can bite software development teams and organizations over time. So, I coined the term "software debt" to include all of the following:
- Technical debt
- Quality debt
- Configuration management debt
- Design debt
- Platform experience debt
My reason for making these other types of software debt explicit is that it allows teams to be more focused in their efforts to identify and clean up each of them. Also, I have found that each of these types of software debt calls for different ways to monitor and manage it from tools, process, and people perspectives.
Matt: Now that we understand the metaphor, can you tell me how to take advantage of it—what should I do differently, or how can I use it to make a difference on my projects?
Chris: When working with teams and technical leadership in companies, I have found that there are some basic principles that they can work towards to start managing and reining in their software debt. First and foremost, maintain one list of work. In software organizations it is common to find at least three lists of work that teams and team members are pulling work from. Usually there is a list of features, a bug database, and a list of technology improvement items from someone in technical leadership. Team members now have to figure out which list to pull from and are making a decision on behalf of the business about what is most important to work on. This reduces focus and leads to all three lists being somewhat neglected: features are delivered late, severe defects go unfixed, or necessary infrastructure and development enhancements never get made, which leads to more issues down the line. By getting all of these work streams into one list of work, we are able to make holistic priority decisions that best benefit the business rather than micro-decisions at too granular a level in the organization.
On top of maintaining one list of work, there are some technical things that we can do to help keep software debt manageable. Emphasize quality from the start. Make assertions about what will be delivered from a quality point of view, covering all aspects of the software development life cycle. This will enhance the accountability of team members to their quality objectives. Next, evolving your tools and infrastructure frequently keeps teams up-to-date with new tools and capabilities and reduces the effects of decay in the infrastructure. Make software design a part of every day's activities rather than something we do at the start of a project. Keeping whiteboards handy and maintaining close collaboration amongst team members and across teams helps remove some of the obstacles to getting on the same page around software designs.
This all leads to an aspect of software development that should not be forgotten: tending to the people involved in developing software. Software development is a knowledge worker-inhabited environment. We must ensure that knowledge is moved around our organizations organically and intentionally. The ways that our team members and teams are aligned in our organization, from product and project focus to where they sit, can have an impact on the software delivered. Architectures and duplication can be profoundly affected by the organizational structure, and therefore we should pay attention to this and find ways to support intra- and inter-team communication patterns.
Matt: Let's say we have one list of work. That still leaves the poor "product owner"—probably a non-technical person who has ownership of the team (or worse, a committee!) to decide what to fix—bugs, features, or infrastructure. How do they decide? How do we help them decide?
Chris: Product Owners in Scrum, and the equivalent roles in other methods supporting agile values and principles, cannot make all of the priority decisions in a vacuum. In fact, the development team themselves, technical management, and project stakeholders can all help to manage the single list of work. This, of course, is not easy to do and can lead to unhealthy conflict, working around the decision-makers, and ultimately dissent. If the technical and project stakeholders are able to resolve conflict in a healthy way to create the single list of work, that single voice gives the team the focus to go beyond deciding what to do next to how they can do it well. Building integrity into software as we go is difficult enough. Having to do this while juggling project priorities with multiple stakeholders makes building integrity in nearly impossible.
Beyond this explanation of why we should make the effort to maintain one list of work, there are some tools that I discuss in the book that can help. First, teams can define their quality criteria by creating a Definition of Done. This is the team's subjective definition of quality for all deliverables, some of which can be turned into more objective measurements and trending data to identify when the team is not meeting their asserted quality levels. Also, teams can turn some of their technical requirements into narrative so that folks who are not technical will understand their value. One way to do this, described in the book, is to create "abuse stories"—a technique I learned from a session that Mike Cohn did on user stories at Agile 2006—where we describe the value of technical requirements based on the cost of not addressing them. For instance, instead of a requirement described as, "Add security to user credit card transactions," we can write, "As a Malicious Hacker, I want to gain access to credit card information so that I can make fraudulent charges." The details of this abuse story will describe how we will stop the malicious hacker from accessing credit card information in our application. Abuse stories make it easier for less-technical folks to understand the value of software quality attributes that aren't easily identified through a user interface. One more way that teams can help is to define which software quality attributes should be a focus from the technical perspective of a particular application under development, and also to understand which attributes the project stakeholders are most interested in. From this, both perspectives can be discussed and a small set of software quality attributes, rather than every potential type of quality, can be integrated and focused on during development.
Matt: Are you familiar with the "feature beast"—the idea that the technical team needs to crank out features in order to make sales? Thus, even if bugs and improvements are on the list, they get ignored. Over time, at least in my experience, this tends to lead to a large number of open bugs and slowly decreasing system response time/performance. Have you seen teams "break the back" of this feature beast? If yes, how did they do it?
Chris: I have not heard the term "feature beast" used to describe the focus on features to the point of disregarding bugs and improvements, but I have seen this occur on a frequent basis when working with teams and organizations. In fact, I have been on a team that had to overcome such a "feature beast" mentality in code that we inherited during a professional services engagement. The codebase we inherited was 1 million lines of code, 16 SQL Server databases, 15 programming languages, and absolutely no tests, not even manual test cases. In order to emerge from the software debt that had accrued in this codebase before we even started touching it, we used many of the practices and concepts discussed in Michael Feathers' book, Working Effectively with Legacy Code. During that particular project we even created an open source tool called StoryTestIQ (http://storytestiq.sourceforge.net) that allowed us to create a safety net of regression tests through the user interface and web services API, which eventually enabled us to get the "beast" back under control. We ended up delivering an amazing number of features on top of this seemingly untamable beast of code within a few months, using the techniques we found in the book and through our own practice of good application stewardship.
My book, Managing Software Debt, does not go into detail about how to start your way back out of the software debt hole of a "feature beast" mentality. It does, however, discuss the tools, processes, and practices that can be used to get an understanding of the software debt, create a strategy for paying it back in a practical way, and put in place infrastructure that will provide early indicators of the reemergence of software debt in application deliverables. On that particular project, we used automated acceptance testing, continuous integration, push-button release, and many more of the topics discussed in the book to get the application platform back into a state where we could deliver successfully on top of it.
Matt: I can see how talking in these terms can be helpful, but what's the next step—how do we measure this software debt thing?
Chris: Teams and organizations that I work with all start at different points. Some find that adopting an Agile software development approach starts to work on the collaboration and integration aspects of software debt. Others might find that continuous integration will help build integrity into software earlier and also enhance communication. In an environment with extreme legacy code, techniques such as those found in Michael Feathers' book, Working Effectively with Legacy Code, could be a first step. And there are many more potential starting points depending upon what your issues are.
The first step that I usually take with a team is running an exercise to generate a list of software debt that includes all five types. Then we take a step back and look for groupings of items that are seemingly related. Then we decide which areas are most important to work on and make a prioritized list of work to begin working to reduce the software debt while still maintaining current delivery milestones.
For teams that are looking to identify how much software debt they currently have, and to find out when they are adding software debt accidentally, I have been advising the use of static code analysis tools such as Sonar (http://www.sonarsource.org/) to identify areas of exposure. It is great to be able to drill down into the duplication numbers, find the actual lines of code identified in specific files, and then figure out a strategy to pull this code out into a single location. When going to work on a new feature, it is great to do some quick research about the area of code you are going into and see if there are any opportunities to refactor duplication and complexity. If anybody reading this has not checked out Sonar, go right now...oh...I mean after you read the rest of this interview.
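Sonar does this kind of analysis out of the box, but the core idea behind a duplication report can be sketched in a few lines: normalize the source, hash fixed-size windows of lines, and flag any window that appears in more than one place. The snippet below is a minimal illustration of that technique, not Sonar's actual algorithm, and the two sample files are hypothetical.

```python
import hashlib
from collections import defaultdict

WINDOW = 3  # flag any run of 3 normalized lines that appears more than once

def duplicate_blocks(files):
    """Map each repeated WINDOW-line block to the places it occurs.

    `files` is a dict of {filename: source_text}. Lines are stripped of
    surrounding whitespace so indentation changes don't hide duplication.
    """
    seen = defaultdict(list)
    for name, text in files.items():
        lines = [ln.strip() for ln in text.splitlines()]
        for i in range(len(lines) - WINDOW + 1):
            block = "\n".join(lines[i:i + WINDOW])
            digest = hashlib.sha1(block.encode()).hexdigest()
            seen[digest].append((name, i + 1, block))  # 1-based line numbers
    # keep only blocks that occur in more than one place
    return {d: occ for d, occ in seen.items() if len(occ) > 1}

# Hypothetical sources sharing a copy-pasted summation loop
files = {
    "billing.py": "total = 0\nfor item in cart:\n    total += item.price\nprint(total)\n",
    "report.py": "count = len(cart)\ntotal = 0\nfor item in cart:\n    total += item.price\n",
}
for occurrences in duplicate_blocks(files).values():
    print([(fname, line) for fname, line, _ in occurrences])
```

Real duplication detectors work on token streams rather than raw lines, which lets them catch clones where only identifiers differ, but the reporting shape—"this block also appears over there"—is the same.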
Matt: Tell us a little bit about your work on AgileEVM, and how you think portfolio management can fit into the mix to help create "software debt relief" (for lack of a better term).
Chris: One of my passions is the idea that software development will be thought about less as a cost of doing business and more as an investment that businesses use to add value to their bottom lines. While working with enterprises on their adoption of agile approaches to delivering software, Brent Barton, my partner in crime at AgileEVM.com, continued to run into an issue. Teams that were adopting agile methods were reporting their progress based on abstract ideas such as points and velocity. Although these were quite helpful to teams in tracking their own progress, it was difficult for the business to compare them for strategic planning efforts.
It just so happens that Brent had written an IEEE paper with Tamara Sulaiman and Thomas Blackburn, published in 2006, about the "Agile EVM" method, which showed that the traditional Earned Value methods the business was used to using for project reporting and assessment could be used with Scrum. Brent, being a mathematics geek, had written the mathematical proof that supported this claim in the paper. At first we thought this was just a nice contribution to those organizations that needed to produce Earned Value reporting while adopting an agile approach to their software delivery. But over time we started to notice that this was an essential piece for solving the gap between the adaptive planning techniques of teams adopting agile software development approaches and the needs of the business to discuss progress and strategic plans using dollars and dates.
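The basic mapping is straightforward to sketch: standard Earned Value formulas driven by Scrum data, with planned sprints supplying planned value and completed story points supplying earned value. The sketch below is a simplified illustration under those assumptions; the published Agile EVM method also handles scope changes between sprints, which this does not, and the numbers are invented for the example.

```python
def agile_evm(bac, total_sprints, sprints_done, total_points, points_done, actual_cost):
    """Simplified Agile EVM snapshot from Scrum data.

    bac: budget at completion (total planned spend)
    Fractions of sprints and story points stand in for EVM's
    planned and earned percent complete.
    """
    pv = bac * sprints_done / total_sprints  # planned value: budget we expected to earn by now
    ev = bac * points_done / total_points    # earned value: budget share actually earned
    cpi = ev / actual_cost                   # cost performance index (< 1 means over budget)
    spi = ev / pv                            # schedule performance index (< 1 means behind)
    eac = bac / cpi                          # estimate at completion if trends hold
    return {"PV": pv, "EV": ev, "CPI": cpi, "SPI": spi, "EAC": eac}

# Hypothetical release: $120k budget, 3 of 6 sprints done,
# 50 of 120 story points complete, $55k actually spent so far
snapshot = agile_evm(120_000, 6, 3, 120, 50, 55_000)
print(snapshot)
```

Run against those numbers, the team has earned $50k of value against a $60k plan and a $55k spend, so both indices come in under 1 and the projected total cost rises above the original budget—exactly the dollars-and-dates conversation the business side understands.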
http://www.agileEVM.com now gives teams using agile methods the ability to manage up to their business counterparts with dollars and dates using the Agile EVM method, along with additional release, project, product, program, and project portfolio rollup calculation engine features. The tool has gone beyond what Agile EVM, as a method, does for tracking release progress to provide a project portfolio dashboard that incorporates impediment management, assertion of quality criteria for each release (aka Definition of Done, as described earlier), what-if analysis, printable reports, and enterprise integration capabilities. We are excited and passionate about helping the business take advantage of what agility has to offer beyond a single project.
Matt: I know you spent awhile teaching at the University of Washington in their certificate program. Can you tell us a bit about that? It seemed more rigorous than some other technical certificates.
Chris: I had a great time helping shape the original curriculum for the University of Washington's Agile Developer certificate program. I worked with many other folks in the local Agile community putting together the certificate, which takes about a year to earn through three courses. Those courses start with Scrum, go into XP technical practices, and then (the portion that I taught) Advanced Topics in Agile Software Development. The Scrum and XP portions focus specifically on taking these methods from theory into practice. The advanced topics portion allowed me to venture wherever the class wanted to go. From user stories to Kanban to working with legacy code to innovation games and more, we were able to decide what would help the participants round out their capabilities with Agile methods. What I liked about the courses was that we could get real feedback from participants who were also working in the real world about how they were able to adopt specific capabilities at work. We would provide a learning environment to introduce and practice capabilities outside of their work, and then they would take those capabilities into the workplace and return with specific questions to share with the class.
Just so you know, I am a Certified Scrum Trainer with the Scrum Alliance. I find that the introductory courses, Certified ScrumMaster and Certified Scrum Product Owner, can be helpful to those getting started. We have found that people either come away with real ways to implement Scrum and have success, as I see when I meet with them later, or decide not to take it all on yet because of their environment. Both are potentially good outcomes after a two-day course. I find it helpful to tell the participants that they are not going to be masters of Scrum once they sit through a two-day course; rather, they will know what is not Scrum once the class is completed. Also, I hope that even if they do not implement Scrum, they take away aspects of it and new capabilities to help make their work life a bit better, including a primer on managing software debt, with techniques and tools that can help Scrum teams make the transition to iterative and incremental development.
Matt: Thank you for participating. Where can we go for more?
Chris: Please check out our blog at gettingagile.com; it has over 100 articles on topics from software debt to agile software development to architecture and more. We have had this blog since 2005 and continue to share our thoughts on a frequent basis. You can contact me via email at email@example.com to discuss software debt or agile software development, to hire me, or to get some time with me to help you get started with Agile EVM.