12.4 MVP Planning
When a product is a new-market innovation, you can’t prioritize features reliably upfront because customers themselves often won’t know what they want until they see it. The lean startup approach,2 introduced earlier in this book, addresses this problem by running experiments on customers—short-circuiting “the ramp by killing things that don’t make sense fast and doubling down on the ones that do.”3
12.4.1 What Is an MVP?
A minimum viable product (MVP) is a low-cost, experimental version of the product or feature, used to test hypotheses and determine whether the product is worth a full investment. According to Eric Ries, the inventor of lean startup, an MVP is “that version of the product that enables a full turn of the Build-Measure-Learn loop with a minimum of effort and the least amount of development.”4 MVP is not (as often thought) the first version of the product released to the market. It’s a version meant for learning—a means to test hypotheses and to determine the minimum set of features to include in a market-ready product. The minimal releasable version of the product is referred to as the minimum marketable product (MMP).
12.4.2 MVP Case Study: Trint
You only really understand why MVPs are so crucial to the success of innovative product development when you see a real example of the process. That was the case as I followed the story of Trint, a company founded by Emmy-winning reporter, foreign and war correspondent (and good friend) Jeffrey Kofman. Like many late-stage entrepreneurs, Kofman set out to solve a problem he understood intimately because it had bothered him throughout much of his previous professional life: every time Kofman had to transcribe an interview by hitting PLAY, STOP, TRANSCRIBE, and REWIND, he couldn’t understand why he was still using a process that had remained virtually unchanged since the 1960s and 1970s. Why wasn’t artificial intelligence (AI) being used to automate the speech-to-text transcription? He knew the reason: journalists can’t risk inaccuracies. Since AI makes mistakes, journalists wouldn’t use an AI-based product unless there was a way to verify the content. The real problem, then, was how to leverage automated speech-to-text in order to get to 100 percent accuracy.
Kofman knew that if he could solve that problem, he would have a winning product. Furthermore, he knew that if his team could solve it for journalists—whom he knew to be unforgiving—they could solve it for anybody. He concluded, therefore, that the most important leap of faith hypothesis for the product was that the company could find a way for users to correct errors in place in order to deliver transcripts that could be verified and trusted. As Kofman saw it, his team needed to create a layer on top of AI (the automated speech-to-text component) so that the AI part would do the heavy lifting of transcription, allowing the user to focus on quicker tasks: search, verify, and correct. He believed that by using this approach, he could reduce the time to perform a task that would normally take hours to complete down to minutes or even seconds. From earlier chapters of this book, you’ll recognize Kofman’s steps as the beginning of the MVP process: the articulation of the problem, vision, and leap of faith hypotheses for the product.
To create the MVP, Kofman gathered a team of developers with experience in audio-to-text alignment using manually entered text. He challenged them to hack together an MVP version that would automatically transcribe speech to text and allow a user to edit it.
The company’s first MVP was built in just three months. Kofman decided to use some of his limited seed funding to invest in user lab testing. He brought in a group of journalists for the testing day. Interestingly (as is often the case), the first MVP was “wrong.” While the journalists liked the concept, they struggled to use the product, finding it annoying to switch back and forth between editing and playback modes. (The original design used the space bar as a toggle between modes and as the text space character during editing, confusing users.) As Kofman told me, “Good innovative products should solve workflow problems; this was creating new ones.” And so, using feedback from the MVP, he asked the developers to build a new user experience with a better workflow.
MVP isn’t just about one test; it’s a process. Fifteen months into the project, in early 2016, the company developed a more refined version of the MVP. Kofman was ready to prove his hypothesis that there was a strong market for the product. At this point, the product provided much of the core functionality needed by users, such as the ability to search for text to locate key portions of an interview. However, it still lacked key components required to make it fully ready for the market. For example, there were no mechanisms for payments or pricing.
Through his extensive network of journalistic colleagues, Kofman let it be known that they would be opening up the product for free usage during one week of beta testing. When the testing began, things proceeded normally until an influential journalist at National Public Radio sent out a highly enthusiastic tweet, causing usage to soar. At ten thousand users, the system crashed. It took the company two days to get back online, but the test proved beyond a doubt that there was a market for the product.
Today, Kofman views that one day of MVP lab testing as perhaps the most important action taken by the company in its early days because it caused developers to change direction before spending a lot of time and money on a failed solution. The lesson, as Kofman tells it, is this: “You have to test your ideas out on real people”—the people who will actually use your product.
In previous chapters, we examined how to identify the leap of faith hypotheses that must be tested and validated for the product to be viable. Now, we focus on the next step: planning the MVPs that will test those hypotheses.
12.4.3 Venues for MVP Experiments
Since an MVP is only a test version, one of the first things to consider is where to run the test and who the MVP’s testers will be. Let’s explore some options.
12.4.3.1 Testing in a Lab
A user testing lab may be internal or independently operated by a third party. Testing labs provide the safest venue for testing, making them appropriate for testing in highly regulated mainstream business sectors, such as banking or insurance, where there is minimal tolerance for errors. Because the lab setting provides an opportunity to gain deep insight into users’ experience of the product, it’s also an ideal venue for MVP testing at the beginning of innovative product development when it’s critical to understand customer motivations and the ways they use the product.
The testers should be real users. However, in cases where the requirements are stable, proxies may be used (e.g., product managers with a strong familiarity with the market). Include testers familiar with regulations governing the product, such as legal and compliance professionals, to identify potential regulatory issues.
12.4.3.2 Testing MVPs Directly in the Market
The most reliable feedback comes from MVP-testing in the marketplace to a targeted group of real customers. Consider this option for new-market disruptions, where first adopters are often willing to overlook missing features for novelty. This option is also advised for low-end disruptions, where customers are willing to accept reduced quality in return for a lower price or greater convenience.
12.4.3.3 Dark Launch
Another way to limit negative impacts during MVP feature testing is to dark-launch it—to stealthily make it available to a small group of selected users before broadening its release. If the feature is not well received initially, it can be pulled back before it impacts the product’s reputation; if customers like it, it is developed fully, incorporated in the product, and supported.
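As a rough sketch of how a dark launch gate might work (the function name, user list, and rollout percentage here are illustrative, not from the source), a deterministic check can expose the feature to a few hand-picked users plus a small slice of everyone else:

```python
import hashlib

DARK_LAUNCH_USERS = {"alice@example.com", "bob@example.com"}  # hand-picked early users
ROLLOUT_PERCENT = 5  # also expose the feature to 5% of all other users

def in_dark_launch(user_id: str) -> bool:
    """Return True if this user should see the dark-launched feature."""
    if user_id in DARK_LAUNCH_USERS:
        return True
    # Hashing makes the answer deterministic: a given user always gets
    # the same bucket, so the feature doesn't flicker on and off.
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return bucket < ROLLOUT_PERCENT
```

Because the gate is a single function, pulling the feature back is as simple as emptying the user list and setting the rollout percentage to zero.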
12.4.3.4 Beta Testing
A beta version is an “almost-ready-for-prime-time” version—one that is mostly complete but may still be missing features planned for the market-ready version. Beta testing is real-world testing of a beta version by a wide range of customers performing real tasks. Its purpose is to uncover bugs and issues, such as usability, scalability, and performance issues, before wide release.
Feedback and analytics from beta testing are used as inputs to fix remaining glitches and address user complaints before releasing the product or change to the market. Split testing may also be performed at this time—whereby one cohort of users is exposed to the beta version while a control group is not.
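The cohort split described above can be sketched as a deterministic assignment function (the experiment name and 50/50 split are assumptions for illustration); hashing the user ID means each user stays in the same cohort across sessions:

```python
import hashlib

def assign_cohort(user_id: str, experiment: str = "beta-v2") -> str:
    """Deterministically split users 50/50 into 'beta' and 'control'."""
    # Including the experiment name in the hash re-shuffles users
    # between experiments, so cohorts don't overlap systematically.
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "beta" if int(digest, 16) % 2 == 0 else "control"
```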
Beta testing is not just for MVPs; it should be a final testing step after internal alpha testing for all new features and major changes before they are widely released.
12.4.4 MVP Types
When planning an MVP, the objective is to hack together a version of the product or feature that delivers the desired learning goals as quickly and inexpensively as possible. The following are strategies for achieving that. One MVP might incorporate any number of these strategies.
- Differentiator MVP
- Smoke-and-Mirrors MVP (or Swivel Chair)
- Walking Skeleton
- Value Stream Skeleton
- Concierge MVP
- Operational MVP
- Preorders MVP
These MVPs are described in the following sections.
12.4.4.1 Differentiator MVP
At the start of new product development, the most common strategy is to develop a low-cost version that focuses on the product’s differentiators. This was the approach we saw taken earlier by Trint. Using existing components, the company was able to piece together an MVP demonstrating the differentiating features of its product (speech-to-text auto-transcription plus editing) and validating its value in just three months.
12.4.4.2 Smoke-and-Mirrors MVP (or Swivel Chair)
A Smoke-and-Mirrors MVP approach provides the user with an experience that is a close facsimile of the real thing but is, in fact, an illusion—like the one created by the magician pulling strings behind the curtain in the movie The Wizard of Oz.
One of my clients, a cable company, used this approach to provide an MVP frontend for customers to configure their own plans. The site operated in a sandbox, disconnected from operational systems. Behind the scenes, an internal support agent viewed the inputs and swivel-chaired to an existing internal system to process the request. The customer was unaware of the subterfuge. The MVP allowed the company to test the hypothesis that customers would want to customize their own plans before investing in developing the capability.
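A minimal sketch of the cable company's arrangement (the class, queue, and message text are hypothetical) shows how little "system" a Smoke-and-Mirrors MVP needs: the customer-facing call just parks each request where a human agent can pick it up:

```python
from dataclasses import dataclass
from queue import Queue

@dataclass
class PlanRequest:
    customer_id: str
    channels: list
    status: str = "pending"

# Watched by the internal support agent, who re-keys each request
# into the existing internal system (the "swivel chair").
agent_queue: Queue = Queue()

def submit_plan(customer_id: str, channels: list) -> str:
    """Customer-facing call: looks automated, but only records the request."""
    agent_queue.put(PlanRequest(customer_id, channels))
    return "Your new plan is being set up!"  # same message a real system would show
```

The customer sees a confirmation identical to what a fully built system would produce, which is exactly what makes the illusion a valid test of demand.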
12.4.4.3 Walking Skeleton
A Walking Skeleton, or spanning application, validates technical (architectural) hypotheses by implementing a low-cost end-to-end scenario—a thin vertical slice that cuts through the architectural layers of the proposed solution. If the Walking Skeleton is successful, the business will invest in building the real product according to the proposed solution. If it is unsuccessful, the technical team goes back to the drawing board and pivots to a new technical hypothesis.
For example, in the Customer Engagement One (CEO) case study, the organization plans an end-to-end scenario for ingesting text messages from a social-network application, saving the messages using the proposed database solution, retrieving them, and viewing them as a list. Another example is Trint, whose first MVP incorporated the end-to-end scenario from speech to text to editing in order to validate the architectural design for the product.
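The CEO scenario above can be sketched as a thin vertical slice (using in-memory SQLite purely as a stand-in for whatever database solution is actually being proposed; function names are illustrative). The point is that every layer—ingest, persist, retrieve, display—is exercised end to end, however thinly:

```python
import sqlite3

# Stand-in for the proposed database solution; in-memory SQLite keeps the sketch runnable.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, author TEXT, body TEXT)")

def ingest(author: str, body: str) -> None:
    """Step 1: ingest a text message from the social-network feed and save it."""
    conn.execute("INSERT INTO messages (author, body) VALUES (?, ?)", (author, body))
    conn.commit()

def list_messages() -> list:
    """Steps 2-3: retrieve the saved messages and present them as a simple list."""
    rows = conn.execute("SELECT author, body FROM messages ORDER BY id").fetchall()
    return [f"{author}: {body}" for author, body in rows]
```

If this slice works, the architecture has earned further investment; if it doesn't, the team pivots to a new technical hypothesis before any real feature work is built on top of it.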
12.4.4.4 Value Stream Skeleton
A Value Stream Skeleton implements a thin scenario that spans an operational value stream—an end-to-end workflow that ends with value delivery. It’s similar to a technical Walking Skeleton except that it validates market instead of technical hypotheses. It covers an end-to-end business flow but does not necessarily use the proposed architectural solution.
The intuitive sequence for delivering features is according to the order in which they’re used. For example, you might begin by delivering a feature to add new products to the product line for an online store and follow with features to receive inventory, place an order, and fulfill an order. Not only does this sequence minimize dependency issues, but it also enables users to perform valuable work while waiting for the rest of the system to be delivered. I usually took this approach in my early programming days. The problem with it, though, is that it results in a long lag until an end customer receives value (e.g., a fulfilled order). In a business environment where there is a strong advantage in being fast to market, that kind of lag is unacceptable. Another problem is that it can delay the time until a company can begin receiving revenue from customers.
A Value Stream Skeleton avoids these problems by delivering quick wins that implement thin versions of the end-to-end value stream, often with reduced functionality.
The first version of a Value Stream Skeleton focuses on the value stream’s endpoints—the entry point where the customer makes a request and the endpoint where the customer receives value. Workarounds are often used for the missing steps. For example, the first MVP for an online store allows a customer to purchase a few select products. The product descriptions and prices are hardcoded into the interface instead of being pulled from a database. This lowers development costs. The products are offered only in a single geographic region—simplifying the business rules and delivery mechanisms that the MVP implements. Despite the thinness of the MVP, it provides learning value to the business and real value to an end customer, who can already order and receive the products with this early version. As the business grows, the MVP evolves to handle more products and a broader geographical region.
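The online-store example above might be sketched as follows (products, prices, and the region code are invented for illustration): the catalog is hardcoded, and a single supported region keeps the business rules trivial, yet a real customer can place a real order:

```python
# Hardcoded catalog: no database yet, one region only.
CATALOG = {"mug": 12.00, "tote": 18.50}
SUPPORTED_REGION = "ON"  # a single region keeps rules and delivery simple

orders = []  # stand-in for a real order pipeline

def place_order(sku: str, region: str) -> dict:
    """End-to-end flow in miniature: request comes in, value goes out."""
    if region != SUPPORTED_REGION:
        raise ValueError("MVP ships to one region only")
    if sku not in CATALOG:
        raise KeyError("MVP offers a few select products only")
    order = {"sku": sku, "price": CATALOG[sku], "status": "placed"}
    orders.append(order)
    return order
```

Growing the MVP later means replacing the hardcoded dictionary with a product database and widening the region check—without changing the shape of the customer-facing flow.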
12.4.4.5 Concierge MVP
The Concierge MVP7 is based on the idea that it’s better to build for the few than the many. Early versions are aimed at a small submarket that is very enthusiastic about the product, and the learning gained from the experience is used to scale the product. One example of a Concierge MVP is Food on the Table,8 an Austin, Texas, company that began with a customer base of one parent. The company met with the parent once a week in a café to learn the parent’s needs and take orders. The orders were filled manually. The process was repeated for a few other customers until the company learned enough to build the product.
As the example illustrates, you begin the Concierge MVP approach by selecting a single, real customer. The first customer can be found through market research, using analytics to determine the desired customer profile and inviting a customer who fits the profile to act as an MVP tester. Alternatively, you can select the first customer from among individuals who have previously indicated an interest in the product. This customer is given the “concierge treatment”—served by a high-ranking executive (e.g., vice president of product development) who works very closely with the customer, adding and adjusting features as more is learned.
At this stage, internal processes are often mostly manual. A company might spend a few weeks working with the first customer in this way, learning what that person does and does not want, and then select the next customer. The process is repeated until the necessary learning has been obtained and manual operations are no longer viable—at which point the product is built and deployed.
12.4.4.6 Operational MVP
An MVP isn’t always created to validate software hypotheses and features; it can also be used to test operational hypotheses and changes. In a real-life example (which I’ll keep anonymous to protect the company), a company created an MVP to test the impact of a price hike on sales. The MVP displayed the higher price to a select group of customers, but behind the scenes, the customers were still being charged the regular, lower price. Once the learning objective was achieved, customers received an email notifying them that they had been part of a test group and that no extra charges were actually applied.
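The price-test mechanics can be sketched in a few lines (customer IDs and prices here are made up): the displayed price and the charged price are computed separately, so the test group sees the hike while everyone is billed the regular amount:

```python
TEST_GROUP = {"cust-17", "cust-42"}  # customers selected for the price test
REGULAR_PRICE = 20.00
TEST_PRICE = 25.00  # the higher price under test

def displayed_price(customer_id: str) -> float:
    """What the customer sees on screen."""
    return TEST_PRICE if customer_id in TEST_GROUP else REGULAR_PRICE

def charged_price(customer_id: str) -> float:
    """What the customer is actually billed: always the regular price."""
    return REGULAR_PRICE
```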
12.4.4.7 Preorders MVP
The most reliable and cost-effective way to test a value hypothesis that customers will pay for an innovative product is to offer a means to order it before it’s actually ready. The MVP can be something as simple as a promotional video or demonstration prototype. It may employ a stripped-down ordering process, such as order by email attachment, order by phone, or an online ordering site with hardcoded options. An MVP of this type might not require any stories—or it might need a few small stories (e.g., to set up a simple frontend for placing orders).
My own company, Noble Inc., used this approach when we were considering developing a product to provide a 360-degree evaluation of the business analysis practice in an organization. For the MVP, we developed a facsimile of the product and demonstrated it to our clients in an attempt to generate presales. What we learned was that there wasn’t enough interest to justify building the real thing. Despite the failure of the test, I consider it money well spent. Imagine if we had learned it only after a large investment!
Dropbox’s version of this MVP strategy played out much better. Dropbox posted a video of its product,9 illustrating its main features. The video received enthusiastic and voluminous feedback from potential customers—making the case for the product and generating important suggestions about features and potential issues that were incorporated into the first marketed version.
12.4.5 MVP’s Iterative Process
You don’t just create an MVP and test it once. The MVP process is iterative. Its steps are as follows:
1. Establish an MVP to test hypotheses. Specify an MVP to test one or more leap of faith hypotheses (e.g., using any of the MVP types discussed in the prior section).
2. Tune the engine. Make incremental adjustments to fine-tune the product on the basis of feedback from customers as they use the product.
3. Decision point: persevere or pivot. After tuning for a while, decide whether to persevere with the business model or pivot to a different hypothesis.
12.4.6 The Pivot
A pivot is a switch to a different hypothesis based on a failure of the original premise. A company may decide to pivot near the start of a product’s development due to the MVP process described previously. Alternatively, the pivot may occur at any time in a product’s life if it becomes apparent there is no market for the product, and the product should be reoriented toward a new market or usage.10 An example of a pivot to an established product is Ryanair, once Europe’s largest airline (based on passenger numbers).11 Back in 1987, when the company realized it was failing financially, it pivoted to a low-end, disruptive revenue model based on the hypothesis that customers would be willing to pay for meals and other perks in return for cheap fares. The hypothesis was borne out when customers flocked to the airline.12 More recently, in response to Brexit, the company has again pivoted—this time away from the United Kingdom to a business model based on growth outside of it.13
12.4.6.1 Constructive Failures
A pivot represents a failed premise, but, as the Ryanair example shows, the failure can often be constructive. In fact, many of today’s successful companies are a result of such failures. For example, Flickr resulted from the failure of a previous offering—Game Neverending.14 When the original product failed, the company pivoted by turning it into a successful photo-sharing app, leveraging the lessons it had learned about the value of community and the social features it had developed for the game (such as tagging and sharing). Groupon is another example. Conceived initially as an idealistic platform for social change, it then pivoted to become a platform for those seeking a bargain.
12.4.7 Incrementally Scaling the MVP
An effective way to develop a product is to start with a manual MVP and automate and scale it incrementally as the product grows. This approach was used by Zappos, an online shoe store.
Here’s how the process played out, as described by the company’s founder: “My Dad told me … I think the one you should focus on is the shoe thing. … So, I said okay, … went to a couple of stores, took some pictures of the shoes, made a website, put them up and told the shoe store, if I sell anything, I’ll come here and pay full price. They said okay, knock yourself out. So, I did that, made a couple of sales.”15 In 1999, the company signed on a dozen brands—all men’s brown comfort shoes. As they added more respected brands, such as Doc Martens, the company and market grew and, in tandem, Zappos automated and scaled its business systems and processes.
12.4.8 Using MVPs to Establish the MMP
Using the MVP process, a company can quickly and inexpensively validate through experimentation which features will make the most difference. These features are referred to as the minimal marketable features (MMFs). An MMF is the smallest version of a feature (the least functionality) that would be viewed as valuable by customers if released to the market. MMFs may deliver value in various ways, such as through competitive differentiation, revenue generation, or cost savings. Collectively, the MMFs define the minimum marketable product (MMP)—the “product with the smallest feature set that still addresses the user needs and creates the right user experience.”16