- 5.1 Framing the Problem
- 5.2 Activity-oriented Teams
- 5.3 Shared Services
- 5.4 Cross-functional Teams
- 5.5 Cross-functionality in Other Domains
- 5.6 Migrating to Cross-functional Teams
- 5.7 Communities of Practice
- 5.8 Maintenance Teams
- 5.9 Outsourcing
- 5.10 The Matrix: Solve It or Dissolve It
- 5.11 Summary of Insights
- 5.12 Summary of Actions
5.2 Activity-oriented Teams
Sales, marketing, product development, support, recruitment, and finance are all examples of specialized competencies. It is quite conventional to have a separate team per competency of this sort. Although such teams are often called specialist teams, we call them activity-oriented teams to convey that they are formed around activities rather than outcomes (Section 4.1). Activity-oriented teams are a form of functional organization. In terms of traditional staff and line terminology,1 all staff and line functions are activity-oriented teams when they are organized separately by function.
For example, it is common to organize by specialization for a given line of products and assign a manager (full- or part-time) per item in the list below:
- Inside sales
- Field sales
- Sales engineers (pre-sales)
- Marketing—content
- Marketing—advertising, social media
- Marketing—SEO, product web site
- Marketing—strategy
- Product management
- Product development
- Architecture
- UX
- Analysis
- Development
- QA
- Release management
- IT operations
- Product support
- Product solutions (custom installations, add-ons)
- Product training and certification
This effectively results in a dozen or more activity-oriented teams per product. Organizing teams like this isn’t the best way to serve the business outcome—that is, a successful product. It results in multiple, high-latency handoffs across teams to get anything done, whether it be developing a new feature, launching a marketing campaign for a product release, fixing a bug identified by a customer, or closing a new deal. Yet this is what happens when IT-B is organized as a matrix.
5.2.1 Hamstrung by High-Latency Handoffs
As defined in Section 2.4.3, a value stream is a series of activities required to deliver an outcome. N activities require N – 1 handoffs for a work item (or batch) to pass through the value stream. Handoffs are simply a result of activity specialization. However, when a value stream is serviced by a series of activity-oriented teams (functional organization), each handoff becomes a handoff between teams, which makes it slower and more expensive.
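To make the arithmetic concrete, here is a minimal sketch in Python using hypothetical figures; the activity names and the 0.5-day versus 3-day waits are illustrative assumptions, not measurements from any real value stream:

```python
# Minimal sketch with hypothetical delays: a value stream of N activities
# needs N - 1 handoffs, and each cross-team handoff adds queue time.
activities = ["analysis", "development", "QA", "release", "operations"]
handoffs = len(activities) - 1   # N - 1 handoffs per pass through the stream

same_team_delay_days = 0.5   # assumed wait when the next activity sits in the same team
cross_team_delay_days = 3.0  # assumed wait when the work queues for another team

print(f"{handoffs} handoffs per pass through the value stream")
print(f"within one team:   ~{handoffs * same_team_delay_days:.1f} days of waiting")
print(f"across five teams: ~{handoffs * cross_team_delay_days:.1f} days of waiting")
```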
Consider the case where the work item is a software build. If the testing team is separate from the development team, it will not accept builds on a continuous basis; it will have its own schedule for taking in new builds. This means that each new build accepted by QA carries far more changes (a large batch size) than in the case where new builds from development are automatically deployed into a QA environment on an ongoing basis.
Expensive handoffs encourage large batch sizes to reduce the total number of handoffs. A separate database team will not entertain piecemeal requests for query optimization. They’d rather own the data model and enforce indexing conventions across the board. They won’t review or help with unit-level database migration scripts. They’d rather review the whole set of migrations when the application is ready for UAT or some other similar state of maturity. On the other hand, a database specialist embedded in a development team will be much more responsive to piecemeal requests.
Large batch sizes lengthen cycle times. Items in the batch have to wait their turn for processing and, after processing, have to wait until all other items are processed before the batch can be handed over to the next stage. Even when all items are taken up for processing at once, the cycle time of the batch is at least equal to the cycle time of its slowest item. Long cycle times won’t do. There is mounting pressure to bring new capabilities to the market faster than ever.
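A rough sketch of this effect, again with made-up numbers: when items move through a stage as a batch, every item effectively takes as long as the whole batch, because nothing is handed over until the last item is done.

```python
# Rough sketch with made-up numbers: per-item cycle time at a single stage that
# processes items sequentially, when items travel in batches of various sizes.
processing_days_per_item = 1.0

for batch_size in (1, 5, 20):
    # The batch is handed over only when its last item is done, so every item
    # in the batch effectively waits for the entire batch to be processed.
    batch_cycle_time = batch_size * processing_days_per_item
    print(f"batch size {batch_size:>2}: each item spends {batch_cycle_time:>4.0f} day(s) at this stage")
```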
- In any system of work, the theoretical ideal is single-piece flow, which maximizes throughput and minimizes variance. You get there by continually reducing batch sizes.
- —The Phoenix Project2
Short cycles require small batch sizes. Reinertsen3 argues that reducing batch size helps reduce cycle time, prevent scope creep, reduce risk, and increase team motivation. Reducing batch size is impractical when handoffs are expensive. Recall that a value stream with N activities requires N – 1 handoffs per batch. Halving the batch size doubles the number of batches and therefore the total number of handoffs. This is only feasible when handoffs are inexpensive; that is, when we move away from using multiple activity-oriented teams to service a value stream. Figure 5-1 summarizes the discussion thus far in this section.

Figure 5-1 Team design influences batch size.
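A back-of-the-envelope illustration of why cheap handoffs are a precondition for small batches (the figures are arbitrary): for a fixed amount of work, halving the batch size doubles the number of batches, and with it the total number of handoffs.

```python
# Back-of-the-envelope sketch: a fixed amount of work flowing through a value
# stream of N activities; smaller batches mean more batches and more handoffs.
work_items = 120
activities = 5                          # N activities => N - 1 handoffs per batch

for batch_size in (40, 20, 10, 5):
    batches = work_items // batch_size
    total_handoffs = batches * (activities - 1)
    print(f"batch size {batch_size:>2}: {batches:>2} batches, {total_handoffs:>3} handoffs in total")
```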
5.2.2 The Traditional Lure of Functional Organization
Why has functional organization persisted over the years despite the drawbacks described above? The traditional motivation for specialized teams can be traced to a legitimate desire for:
- Efficient utilization of specialist resources across a line of products: Rather than dedicate, say, two specialists to each of four products at an average specialist utilization of, say, 60%, it is more efficient to create a shared activity-oriented team of five people (since 2 × 4 × 0.6 = 4.8) available on demand to any of the four products; the arithmetic is sketched after this list. This is also an attractive option when the supply of the specialty in question is scarce in the market.
- Standardization: As members of a single specialty team, say, a marketing content team, it is easier to standardize templates and formats, achieve consistent messaging across product lines, and coordinate product releases.
- Nurturing the competency by localizing it: When people of a common specialization sit together, it is easier to share knowledge and help each other with troubleshooting, think through a solution, review each other’s work, etc. It is also easier for the team manager to ask for a training budget and other resources.
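As an aside, the utilization arithmetic behind the first point above can be sketched as follows, using the same illustrative figures of two specialists per product, four products, and 60% utilization:

```python
# Utilization arithmetic from the first bullet above: eight dedicated
# specialists at 60% utilization do only 4.8 people's worth of work,
# so a shared pool of five can serve all four products.
import math

products = 4
dedicated_per_product = 2
utilization = 0.6

dedicated_headcount = products * dedicated_per_product   # 8 specialists
actual_demand = dedicated_headcount * utilization        # 4.8 people's worth of work
shared_pool = math.ceil(actual_demand)                   # a shared team of 5

print(f"dedicated: {dedicated_headcount} specialists, {actual_demand:.1f} people's worth of work")
print(f"shared pool needed: {shared_pool} specialists")
```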
The traditional model has come under question because of ever-shorter time to market and time in market.4 Software products have a very short window in which to monetize new features or capabilities. We can no longer take an entrenched customer base for granted; their patience is likely to wear out unless they see a steady delivery of valuable capability. Even in the case of enterprise IT, being responsive to the business is more important than minimizing cost per function (or story) point. The traditional model of activity-oriented teams may be good for cost-efficiency, but it is bad for end-to-end cycle time. It is therefore worthwhile to trade off some efficiency for the sake of responsiveness. As we will see in Section 5.4, a cross-functional team is a good way to achieve this tradeoff.
Just enough standardization and consistency can still be achieved without being part of the same team. It is harder but possible, as we will see later from the Spotify example. Specialist teams, on the other hand, have a tendency to enforce mindless uniformity across all sorts of unnecessary things in the name of consistency across the product line.
As for nurturing competencies, it is important, but not at the expense of the business outcome. Organization design ought to put first things first. There are other ways of nurturing competencies, such as cultivating communities of practice; more on this in Section 5.7.
5.2.3 When Is It OK to Have Activity-oriented Teams?
What about departments like HR, admin, legal, and finance? Are they organized around outcomes or activities? If we go by how we distinguish between outcomes and activities in Section 4.1, it is clear that these support functions don’t own independently valuable business outcomes. Therefore, they are activity-oriented teams. Does it then mean they automatically become silos and therefore candidates for being disbanded?
Some activities are closer to the outcome than others. For example, UX is closer than admin to the outcome of product success. Ask whether the realization of the outcome is dependent on repeated successful iterations through some core value stream. If yes, then the activities belonging to this value stream should not be conducted in separate activity-oriented teams. Activities that aren’t an integral part of a business outcome’s core value stream may be spun off into separate teams without much risk.
Even where they are not part of a core value stream, activity-oriented teams tend to standardize their operations over time. Their appetite for offering custom solutions begins to diminish. Complaints begin to surface—“They threw the rule book at us,” “What bureaucracy!” and so on. However, as long as they don’t directly affect business outcomes, they can be allowed to exist.
For example, it is an anti-pattern to maintain a long-lived knowledge management (KM) team. It is an activity-oriented team for what is meant to be a collective activity. Disband it after the initial rollout of the KM system. KM is everyone’s responsibility. Knowledge is documented via recorded conversations, videos, blog posts, proposals, and reports. Let the relevant community of practice (Section 5.7) curate its content on the KM system. The content is generally so specialized that it doesn’t help to hire a generalist technical writer or content curator.
5.2.4 Independent Testing, Verification, and Validation
Independent testing is the notion that the team that tests should be separate from the team that develops, in order to achieve greater rigor in testing. Many IT services vendors offer independent testing services. Doesn’t this justify a separate activity-oriented team for testing? In my experience, there is no loss of rigor or conflict of interest in including developers and testers on the same team. Any deficiency in testing is bound to show up in UAT or production and reflect poorly on the team or the vendor. Given the cost of acquiring new clients, IT suppliers are generally extremely keen to land and expand, that is, to cultivate long-term relationships and grow accounts.
On the contrary, independent testing wrecks the flow of work through the development value stream. It discourages collaboration between developers and testers and leads to all sorts of suboptimization by both teams to protect their reputations. The chapter on metrics (Chapter 12) describes a number of scenarios of suboptimization resulting from independent testing.
Hiving off testing for lack of in-house skills is a different matter altogether. For example, it is common to engage a third party for security testing—vulnerability assessments, penetration testing, etc. However, this doesn’t get in the way of the development value stream as much, because it is somewhat removed from the functionality being built.
Then there are those who argue that verification and validation activities should be conducted at arm’s length from each other. But the traditional distinction between software verification and validation5 is old school. One distinction is that validation is akin to field tests while verification is closer to lab tests. In the case of pure software, A/B tests6 and beta customer programs come close to field tests, whereas tests of functionality and simulated performance tests are closer to lab tests. Although the distinction makes sense, it is no reason to separate the people who perform field and lab tests from each other and from the rest of the development team. A second oft-quoted distinction also makes sense in this light but is rarely applied correctly. It is said that verification checks whether we have built the thing right, and validation checks whether we have built the right thing. In practice, however, we frequently find no provision for field tests; so-called validation teams are responsible only for end-to-end lab tests, while verification teams are limited to component-level lab tests.