In the latest (May-Jun) issue of the Harvard Business Review, City University of London Professor Paolo Aversa talks about why, “Sometimes, Less Innovation Is Better”. In an extensive study spanning 30+ years, he and his colleagues cross-checked over 300 Formula 1 race cars against the F1 race results. Surprise, surprise: “In certain situations, more innovation led to poorer performance”. When they plotted this relationship, they got an ‘inverted U’: an increase in innovation initially helps performance, but after a certain point, it begins to hurt it.
Why does this happen? Prof Aversa and his colleagues attribute this to the environment: the chances of an innovation failing in a dynamic, uncertain environment are higher. To understand the when and the how, they have come up with what I will call the “Turbulence Framework”. This framework evaluates the environment along three factors:
1. Magnitude of Change: Asks questions about radical shifts in the competitive space, industry, regulation, demand and prices.
2. Frequency of Change: Asks questions about the rate of change in the industry and competitive space.
3. Predictability of Change: Asks questions about the ability to predict industry forces and competition.
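To make the framework concrete, here is a minimal sketch of how the three factors might be combined into a single turbulence score. The factor names come from the article; the 1-5 rating scale, the equal weighting, and the decision threshold are my own illustrative assumptions, not part of Prof Aversa's method.

```python
# Illustrative sketch only: the 1-5 scale, equal weighting, and the
# 3.5 threshold are assumptions, not taken from the HBR article.

def turbulence_score(magnitude: int, frequency: int, predictability: int) -> float:
    """Average the three factor ratings (each on a 1-5 scale).

    Higher magnitude and frequency of change raise turbulence;
    higher predictability lowers it, so that rating is inverted.
    """
    for rating in (magnitude, frequency, predictability):
        if not 1 <= rating <= 5:
            raise ValueError("each rating must be between 1 and 5")
    return (magnitude + frequency + (6 - predictability)) / 3

def should_slow_down(magnitude: int, frequency: int,
                     predictability: int, threshold: float = 3.5) -> bool:
    """Suggest holding back on further innovation when turbulence is high."""
    return turbulence_score(magnitude, frequency, predictability) >= threshold

# A fast-changing, hard-to-predict market scores as highly turbulent:
print(turbulence_score(5, 5, 1))   # -> 5.0
print(should_slow_down(5, 5, 1))   # -> True
# A stable, predictable market does not:
print(should_slow_down(1, 1, 5))   # -> False
```

The point of the sketch is the structure of the decision, not the numbers: two of the factors push turbulence up, predictability pulls it down, and the enterprise acts on the combined reading.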
Based on this framework, an enterprise must decide whether to innovate further or to hold out and ride the turbulence. If technology is changing too much, too fast, it may actually be wiser to slow down. In this video (about five minutes long), Prof Aversa explains this work:
But then, aren’t environments generally uncertain? Can the framework actually measure and provide a reliable prediction? How do you find out if you have hit the peak of the ‘inverted U’ relationship between innovation and performance? Is the curve steep? Are there multiple curves, depending on how you define performance and innovation? How does it apply to different types of innovation, like sustaining and disruptive? Lots of questions to dive into.
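On the question of finding the peak: if you model the inverted U as a simple quadratic, the peak has a closed form. The coefficients below are invented purely for illustration; a real analysis would estimate them from performance data, as the study did from race results.

```python
# Toy inverted-U model: performance = a*x^2 + b*x + c, with a < 0.
# The coefficients are made up for illustration.

def performance(innovation: float, a: float = -2.0,
                b: float = 8.0, c: float = 10.0) -> float:
    """Quadratic (inverted-U) model of performance vs. innovation."""
    return a * innovation**2 + b * innovation + c

def peak_innovation(a: float = -2.0, b: float = 8.0) -> float:
    """Vertex of the parabola: the innovation level beyond which more hurts."""
    return -b / (2 * a)

x_star = peak_innovation()          # -> 2.0
print(performance(x_star))          # peak performance: 18.0
print(performance(3.0))             # past the peak, performance falls: 16.0
```

Even this toy version makes one of the questions above concrete: whether the curve is steep is just the magnitude of the quadratic term, and that has to be measured, not assumed.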
The idea of over-innovation has been around for a while, and folks have written about it.
In their Harvard Business Review article “Innovation Versus Complexity: What Is Too Much of a Good Thing?”, Mark Gottfredson and Keith Aspinall introduce the concept of the ‘innovation fulcrum’. This is how they explain it:
“Innovation fulcrum is the point at which the number of products strikes the right balance between customer satisfaction and operating complexity.
The fact is, companies have strong incentives to be overly innovative in new-product development. Introducing distinctive offerings is often the easiest way to compete for shelf space, protect market share, or repel a rival’s attack. Moreover, the press abounds with dramatic stories of bold innovators that revive brands or product categories. Those tales grab managerial and investor attention, encouraging companies to focus even more insistently on product development. But the pursuit of innovation can be taken too far. As a company increases the pace of innovation, its profitability often begins to stagnate or even erode. The reason can be summed up in one word: complexity. The continual launch of new products and line extensions adds complexity throughout a company’s operations, and, as the costs of managing that complexity multiply, margins shrink.
Traditional financial systems are simply unable to take into account the link between product proliferation and complexity costs because the costs end up embedded in the very way companies do business. Systems introduced to help manufacturing and other functions cope with the added complexity are usually categorized as fixed costs and thus don’t show up on variable margin analyses. That’s why so many companies try to solve what really are product problems by tweaking their operations—and end up baffled by the lack of results.”
The issue of innovating too much via over-engineering lies with startups too. CBInsights analyzed 101 startup failure post-mortems and found that 42% of the time, the reason cited for a startup’s failure was “No Market Need”. In a tearing hurry to push something out of the door and see if it sticks, feature bloat and rejections are bound to happen. Which puts the concept of the Minimum Viable Product (MVP) under a rather stern lens. What should a startup do? Part of the answer may be in the nature of the beast: in the spirit of experimentation, one has to try something and see if it works. Play with the boundaries. Else, move on. Part of the answer also comes from ‘innovation accounting’, courtesy of Eric Ries, which focuses on actionable metrics rather than vanity metrics. Another part of the answer comes from this blog post from Yevgeniy (Jim) Brikman, the founder of Atomic Squirrel. He shares a useful illustration of the idea that the MVP is a process:
As an example, the Tiko printer, a crowdfunded 3D-printer startup, failed, and many attribute its failure to over-innovation.
In the trade-off between execution and innovation, incumbents flounder because of an over-emphasis on execution, as their performance metrics are based on financials rather than innovation. But it is possible to err on the other side and over-innovate. There are plenty of examples and stories; let me pick just two to illustrate the point.
Lexus over-engineered its RX luxury crossover in 2016. Tom Mutchler, an auto engineer with Consumer Reports, begins his review with the words, “Messing with success is a dangerous, dangerous thing. Especially when it comes to Lexus RX”. Joe Lorio, at Car and Driver, summed it up: “Don’t be too eager to ditch what you are really good at.”
Some time back, Lego, the toy maker, went through financial turmoil as it lost control of innovation and tried to do too many things too fast. Click here for that story.
This is just the start of this fascinating story. There’s more to come. We need to return to our questions. Hang in there …