
'But I Don't Want the Minimum!' Understanding the Concept of the Minimum Viable Product (MVP)

Does the word 'minimum' immediately raise your hackles? Aaron Erickson, author of The Nomadic Developer, emphasizes that learning to prioritize, getting to production early, and subsequently delivering in small increments are key disciplines in the practices of Agile and continuous delivery.

So you've decided to go Agile. You've hired the best Agile practitioners you can find, and you're proceeding on your first Agile project. The first few sessions have gone well, and you have a large and bountiful tree of stories, each of which has tangible business value. You're happy—excited, even—as you can start to imagine these stories getting done and your product being realized!

But wait—some of these consultants have started to talk about "just doing the minimum"! Why are they asking me to do this? Are they lazy? I want all the features, not just some of them! Why am I being asked to help define a "minimum viable product"? Did I just get hoodwinked?

No. In fact, "asking for the minimum" is one of the most important practices in the canon of Agile software development. First of all, this exercise establishes that the product owner(s) can prioritize effectively. Second, it establishes an early mark at which you can start collecting feedback from early customers. Finally, it minimizes the work that you do prior to moving into a mode where you're making continuous deliveries to real customers.

Early Practice in Prioritization

Put simply, if you can't prioritize, you will fail. I explained in my article "Want to be Agile? Learn to Fail!" that failing early is one of the best ways to make sure that you don't end up "throwing good money after bad." No proposed system has a set of potential features that are all equally valuable. Even if you rated all features equally, you can't realistically ascertain how valuable they are until you have real users touching a system in production. At this stage, the best we can do is take educated guesses about which features are likely to be more valuable than others.

This educated guessing can be accomplished in a number of ways. One typical way involves collecting product owners, potential users/customers, user-experience experts, and the software development team (representing all roles, not just developers). The team creates a "bucket" of priority categories—P0 for minimum, P1 for second-highest, P2 for third, and so on—and then each constituency independently assigns a priority to each story. Debate ensues around any conflicts of priority; for example, the product owner assigns story X to P0, while the customer assigns X to P2. The end result of such a process is a raw, prioritized list of stories. While this list may not be final (other factors could adjust priorities up or down), it's often a very good draft.
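The mechanics of that process are simple enough to sketch. The snippet below is a minimal illustration (the story names, constituencies, and votes are all hypothetical): each constituency assigns a bucket number to each story, unanimous stories go straight into the draft list, and disagreements are flagged for debate.

```python
from collections import defaultdict

# Hypothetical priority votes: one dict per constituency, mapping
# story name -> priority bucket (0 = P0/minimum, 1 = P1, 2 = P2, ...).
votes = {
    "product_owner": {"checkout": 0, "wishlist": 2, "search": 0},
    "customer":      {"checkout": 0, "wishlist": 0, "search": 1},
    "delivery_team": {"checkout": 0, "wishlist": 1, "search": 1},
}

def draft_priorities(votes):
    """Collect every constituency's vote per story; separate consensus
    stories from those that need debate before the list is final."""
    by_story = defaultdict(list)
    for assignments in votes.values():
        for story, priority in assignments.items():
            by_story[story].append(priority)

    consensus, conflicts = {}, []
    for story, priorities in by_story.items():
        if len(set(priorities)) == 1:
            consensus[story] = priorities[0]   # everyone agrees
        else:
            conflicts.append(story)            # debate ensues here

    return consensus, sorted(conflicts)

consensus, conflicts = draft_priorities(votes)
# "checkout" is unanimously P0; "search" and "wishlist" need discussion.
```

In practice the debate step is the valuable part; the tally merely makes the disagreements visible early.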

Think for a second about what this process accomplishes, beyond giving you the initial draft of story priorities. The process integrates the delivery team into the decision process, based around priorities. It forces socialization among groups—product owner, customer, delivery team—that usually, in "throw it over the wall"-style methodologies, don't talk to each other until late-stage delivery problems occur. It instills a sense of ownership into the delivery team, who can no longer complain that priorities were imposed on them without input. This early collaboration lays the foundation for future collaboration that is needed on any software project—Agile or not!

Early Customer Feedback

As good as early collaboration is, the success of a product depends on having customers see it, use it, and tell the team what's right and wrong about it. This exercise isn't a mere convenience for the customer. In companies where customers are paying members of the general public, the financial performance of the product utterly depends on making sure that early customer feedback is incorporated. Without such feedback, conversion—that is, getting customers to pay for things—suffers.

This practice isn't important only for the Amazon.coms, Googles, and Expedias of the world. It's even more crucial that internal business applications get this right. Stories abound of internal IT shops investing in applications that barely get used. A time-tracking application might get used, even with a horrible user experience, because you don't get paid if you fail to put in your timesheet. However, even in such mandatory-use scenarios, a bad user experience causes delays: People don't turn in their timesheets, creating administrative overhead as managers must chase otherwise well-performing employees into using the system. Understanding conversion—that is, guiding users toward goal behaviors (in this case, turning in the timesheet)—matters for all sorts of apps.

In the past, and in many places here and now, the idea of early user feedback was considered "beta testing." However, in practice, beta testing often became "free QA from other departments to find functionality bugs." Beta testing frequently fails to uncover user-experience bugs in time for anyone to be able to do anything about them. By the time you get to beta testing under traditional methodologies, the kinds of changes you can make to the core user experience are extremely limited. By this stage, you may be able to change colors and font sizes, but not substantive things like bad workflow or poor information architecture.

The minimum viable product (MVP) concept is crucial for getting early feedback. Once the minimum is done, you get it out to a group of users as soon as possible. You'll start getting feedback by tracking usage patterns right away. However, the more important result is that you can start delivering small increments to these same users and A/B testing the results. Every release going forward can be a micro-release to be measured for how well it converts into user behaviors you want the effort to achieve. Getting this done early—and being able to react—is far more effective than guessing throughout the product development cycle and just hoping you're right!

Clearing the Delivery Path

Most technology practitioners have experienced the gap between being "code complete" and actually going to production. This problem, known as the "last mile" problem, is best summed up by the writings of my ThoughtWorks colleague, Jez Humble, in his excellent book Continuous Delivery: Reliable Software Releases Through Build, Test, and Deployment Automation. One important point he makes in the book is that the further apart your releases are, the more risk you introduce that the changes are wrong.

The minimum viable product concept is key to making release increments as small as possible. It also helps to clear the delivery path, in the sense that it makes you confront going to production as soon as possible. In many startups, going to production may mean merely uploading some files to a server. However, in large and more complex organizations, going to production can be as difficult as getting a bill through the United States Congress. If your organization is closer to making national law than to uploading some files, embracing the MVP concept is even more important, as it reduces the risk of an area that will surely be a problem down the line. Bringing the pain forward, working through problems sooner rather than later, is essential to achieving frequent and continuing feedback that helps to make the product better.

The practice of continuous delivery can cause real issues in certain kinds of organizations. Prior to going to production, some places insist that you specify the required hardware—not just for now, but for three years into the future. Others flat-out require a detailed database, class, and interaction diagram—not just of what is, but of what is to be for a "final" release—before you can pass a production gate. While such organizations often cite the Sarbanes-Oxley Act (SOX) or other regulations requiring this kind of documentation, clearly some companies are SOX-compliant without doing these things. You're really just experiencing organizational red tape that you need to cut through, simplify, or work around. Many creative solutions to this kind of problem can make all sides happy—especially if everyone can understand the importance of the broader business goals of a given project.

Of course, it may be that your organization is going to stand firm—under no conditions will it modify its gating process to allow for anything like continuous delivery. If so, attempting to clear the path at least tells you early that the project is likely to fail, and that you may need to cancel it, or at least reconsider its economics, given the decreased value delivered when user feedback is less frequent.

Bring on the MVP!

The term minimum viable product has been around, in various forms, for a long time. I've also heard minimally marketable feature set (but who wants something that's "minimally marketable"?), minimal feature set, minimum feature set, and others. I, for one, am glad to settle on minimum viable product: it emphasizes viability over marketability, it reinforces that the result is a product and needs to appeal to users as such, and it has a nice abbreviation (who wouldn't want an "MVP"?).

That said, the name isn't nearly as important as what it does. Having the MVP is critical for getting early feedback. It's crucial for testing the hypothesis that the software you're delivering is really valuable. And it's essential to any practice that depends on failing early, so you don't end up with software projects that are "too big to fail." Adult conversations about software projects involve real prioritization, real collaboration, and real decisions. The MVP concept is all about promoting all three.
