Do You Really Want Your Teams to Experiment? Are You Sure? If So, Try This

Grant Gadomski
Published in Bootcamp
7 min read · Apr 8, 2023


Of the many buzzwords pervading the software development landscape right now, one that's stood the test of time is "experimentation".

You may know it as "fail fast", "test and learn", or its most grown-up expression: "data-driven iteration". Whatever form you prefer, it boils down to teams taking an educated guess straight to customers, gathering feedback on how those customers react, then adjusting product strategy and tactics based on the results.

The Power of Experimentation

Pictured below is the Cynefin Framework, which breaks down the four types of decision-making situations that teams may find themselves in.

Credit: https://hbr.org/2007/11/a-leaders-framework-for-decision-making

Traditional engineering practices tend to assume a complicated domain, where the problem may not be easy, but with enough analysis and thought one can "crack the code" on how best to accomplish one's goals via a product. This assumes (relatively) stable parameters throughout the product's lifecycle. Between designing, building, and eventually tearing down a bridge, one probably doesn't expect the requirements to vary beyond providing passage over a river, and the dimensions of the river probably aren't going to change much either.

The early days of software made this same assumption through a standard development methodology called Waterfall, which prescribed sequential, clearly separated steps from design through development and testing, and a single "big bang" release to users at the end. Waterfall implicitly assumes that one can understand the need and parameters clearly from day one, and that said needs and parameters won't change enough to matter over the development lifecycle.

Unfortunately most situations where software's created live in the complex to chaotic domains, where parameters are constantly shifting throughout the life of the product. Business strategy shifts, market opportunities appear and disappear, and customer sentiment changes sometimes on a near-weekly basis. If you're trying to build a product that'll thrive for 10+ years, there's no way you can know on day one everything it will need to be.

Unlike in simple and complicated domains, the only way to succeed in a complex or chaotic domain is to do, then sense and respond appropriately. A.k.a. release a best guess, see how it goes, and adjust your product as quickly as possible based on what you learned. A.k.a. experiment.
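To make the "see how it goes" step concrete, here's a minimal sketch of the kind of feedback check a team might run after releasing a best guess: comparing a variant's conversion rate against the control's with a simple two-proportion z-test. All numbers and names here are illustrative, and real teams would likely lean on an experimentation platform rather than hand-rolled statistics.

```python
import math

def two_proportion_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test: is variant B's conversion rate
    meaningfully different from control A's?"""
    p_a = conv_a / n_a
    p_b = conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under the null hypothesis
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: control converted 120/2400 users, variant 168/2400.
z = two_proportion_z(120, 2400, 168, 2400)
print(f"z = {z:.2f}")  # |z| > 1.96 suggests significance at the 5% level
```

The specific test matters less than the habit: every release produces a number, and that number feeds the next guess.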

Why Don’t My Teams Experiment?

The good news is that most IT executives understand the need for experimentation when building software products, and communicate its importance to their teams. The bad news is that most teams don’t follow the experimentation formula (release a best guess ASAP, gather feedback, and release a better guess based on said feedback) as closely as those executives would like.

Instead most teams get caught in what Melissa Perri calls the Build Trap, where they continuously ship their “best guess” without gathering and leveraging feedback well enough to consistently steer the ship in the right direction.

Escaping the Build Trap by Melissa Perri

To understand why, I think it’s worth mapping the problem of creating product goals onto the Cynefin Framework. Often we think of outcome-oriented goal definition as being a simple to complicated problem. “Create a product that increases revenue by x% in our financial advice business”, or “Leverage automation to reduce call center overheads by y%”. Simple, right?

But because these can take years to accomplish (or even see progress on), we often create shorter-term goals to track progress, assuming that if we hit all of these smaller goals, we’ll consequently hit our big goal. Examples are “Increase product adoption amongst 18–34 year-olds by z% in Q1”, or “Reduce call drop rate by a% in Q2”.

Now “success” is easier to track, but the team’s focus is narrowed towards shorter-term business goals. Suddenly we’re back in a step-by-step problem solving mindset, where we think we can define exactly what the team should try to solve each quarter to meet our larger goals.

I’d argue that in most modern product environments, even this shorter-term goal definition can live in a complex to chaotic state. What if markets shift and the 35–50 year-old demographic suddenly becomes the more lucrative one? What if the best way to reduce call center overheads is to eliminate calls entirely, and there’s a more feasible way to do that than we initially thought?

When teams are focused on these shorter-term goals, they’ll naturally optimize their problem solving and feedback loops for said goals. Smaller goals require less attention to feedback loops, incentivizing teams to focus more on pure delivery and less on experimentation.

This disincentivizes experiments that show smaller short-term returns but could unlock more significant long-term value (discovering new business avenues, creating value through unexpected synergy with other products, etc.).

So if you really want teams to experiment, and have a reasonable shot at discovering outsized value, I’d recommend placing longer-term, process-oriented experimentation goals alongside shorter-term, outcome-oriented business goals. These goals can’t be as business outcome-oriented since by definition you can’t know what the business outcome may be at the start of the experiment. What starts as a simple tech demo may unexpectedly become your company’s new 10-year cash cow.

Some example goals could be “Review user data from all pages on the site on a weekly basis”, or “Deliver at least three A/B feature experiments to production in Q3”.
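As an illustration of what that second goal might look like in practice, here's one common way to bucket users for an A/B feature experiment: deterministic hashing, so a given user always sees the same variant across sessions. The function and experiment names below are hypothetical, not prescribed by any particular tool.

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants: tuple = ("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant by hashing
    (experiment, user_id), so assignment is stable across sessions."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket for a given experiment.
print(assign_variant("user-42", "new-onboarding-flow"))
```

Hashing on the (experiment, user_id) pair rather than the user ID alone keeps bucket assignments independent across experiments, so one test doesn't silently bias another.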

Adding experimentation goals does mean you’ll have to create space by dialing back your team’s short-term, outcome-oriented business goals. In most complex to chaotic situations this is probably worth it, since in the long-term the experimentation “hits” via better product direction are likely to outweigh the less direct delivery path taken to achieve them. Consider this a tradeoff where the right balance between product experimentation and delivery depends on the context in which the product’s being developed.

Now you may be thinking: “Grant, if I push back our delivery timelines solely so the team can faff about on moonshots, my business partners are going to rip my head off!”. And you may be right, for one of two reasons:

  1. They don’t quite understand the cost of the experimentation they want, and think their cake can be both had and eaten (an expression I’ve never understood).
  2. The product’s being developed in less of a complex to chaotic environment and more of a simple to complicated environment.

While #1 can (usually) be overcome through education and frank conversations, #2 should make teams question what amount of experimentation (if any) is the right amount, and the right answer usually isn’t “as much as possible”.

Are You Sure You Want This Team to Experiment?

The software industry’s notorious for having more golden hammers than a construction-working Midas, meaning that every company wants to leverage the hottest new tech tool (cloud computing, AI/ML, etc.) as much as possible, sometimes without enough consideration for its effectiveness in each specific context. The recent trend of cloud repatriation is a great example of some companies committing to a trend (cloud computing), only to realize that it’s more of a tool than a guarantee, and its effectiveness depends on the context.

I see experimentation as another golden hammer. Although most modern development situations benefit from it (their environments are complex to chaotic), some don’t, and gain more from other types of experimentation (like process or technology decisions) than from product-direction experiments.

Not every project needs to create room for consistent experimentation. Not every team needs to spend significant resources on refining their feedback mechanisms. For example, the possibility of “revolutionizing the way employees send documents to office printers” probably isn’t worth the amount of experimentation (which brings cost & risk) necessary to make that happen, if those saved resources could be aligned to other, more strategic products. If the goals are straightforward, the domain is stable, and the best path forward is easily identifiable, a more delivery-focused approach may be the better choice.

Conclusion

Iterative delivery methods have been one of the best things to happen in software-making. With them teams no longer have to solve the unsolvable in complex to chaotic domains (where most software is built), and instead can tune product design & direction as the surrounding context simultaneously changes and is better understood.

But this sort of product experimentation is a tool, and though it’s highly useful in most domains, like any other tool teams and leadership need to be discerning about how it’s leveraged, and to what extent. That’s because it doesn’t come for free. Time taken to try outside-the-box ideas and build strong feedback mechanisms detracts from raw delivery speed against simple, straightforward goals.

In our current state, I think more teams should focus on deploying earlier, running more experiments, and building stronger feedback mechanisms. Most software’s built in domains where this is hands-down the best way to deliver outsized value, and yet most teams aren’t given enough leeway to do so due to short-term, delivery-focused business objectives. But we should be careful of the golden hammer fallacy, and discerning in finding the best context-dependent balance between experimentation and delivery.
