Innovation. It’s as American as apple pie. From the US President on down, everybody is talking about innovation. From university presidents and corporate leaders to Silicon Valley tycoons, all agree that we need more of it. Airport bookstores have walls of books on innovation: a quick search on Amazon resulted in 70,140 titles containing the word “innovation”, 711 of which were published in the last 90 days alone. Many of them are little more than generic business advice books with the word “innovation” shoehorned into the title, including gems such as Creating Innovation Leaders (earning bonus points for including the buzzwords “leadership” and “creativity”). So it was with some trepidation that I recently picked up Scott Berkun’s The Myths of Innovation – first published in 2007 – and found it had a refreshing and unpretentious take on the subject. Since “innovation” has become such an overused buzzword, Berkun argues that a better way to think about it is to consider what it isn’t.
The core of the book consists of ten chapters, each tackling an innovation “myth”, such as “the myth of epiphany” (no Eureka moments in the bath for you), or the myth that “there is a method for innovation” (sorry, authors of The Innovator’s Method). Two of the chapters particularly stood out to me: “the lone inventor” and “innovation is always good”. The lone inventor is a staple of the Hollywood version of innovation: from the crazy Emmett “Doc” Brown, of Back to the Future fame, to the fictionalized Mark Zuckerberg in The Social Network. This version of the inventor is a social misfit loner who creates something out of whole cloth with very little input from others. My own experience in research and development is far from these stereotypes: intense discussion and social interaction are very much part of the fabric of this world. A recent large-scale study of citation networks backs up this intuition: scientists who have long-term collaborations are more widely cited. Unfortunately, the myth of the lone inventor continues to be reinforced both culturally and by the legal system. As Berkun notes:
Today, years away from the Renaissance, we’re still attached to the myth of lone inventors. We do recognize collaboration and partnerships, but we often fall back on tales of lone innovators as heroic figures for reasons of convenience. We insist on isolated credit and dismissing the importance of others. Patent law, by design, credits one or a handful of individuals, assuming not only that ideas are unique and separable, which is dubious, but that individual names can be given legal ownership of ideas. Patents, as currently applied in the U.S., do solve problems, but they create just as many. They distort popular understanding of how inventions happen, as well as which innovations are most valuable to the world.
Berkun goes into some detail describing how most innovations depend on a web of other products, people, and ideas, and owe as much to the existing social and technological environment as to their creators.
The second chapter of note in The Myths of Innovation challenges the idea that innovation is always good. The edge cases seem easy: clean, renewable fusion power that could replace all fossil fuel plants in a few years? Unquestionably innovative and good. An updated version of Angry Birds? Maybe not so much. (The HBO series Silicon Valley satirizes the shallowness of much of this kind of “innovation”.) But what about the cases in the middle, which constitute most of what goes under the umbrella of “innovation”? It can be difficult to tell. Innovation in one sector of the economy, say in improving the efficiency of solar cell technology, may challenge innovation in another sector, say the development of so-called “clean coal” (leaving aside for the moment whether “clean coal” could ever really exist). Deciding which innovation is “better” thus involves value judgements informed by economics and politics, and requires us to think carefully about what kind of world we want to live in. Innovation does not exist in a vacuum. For example, the late 1990s and early 2000s were arguably a period of intense innovation in the financial sector, but as Berkun points out:
the increased use of two innovations in finance – derivatives and CDOs – was a major contributor to the subprime crisis of 2007. As these ideas gained favor in the financial world, and banks of all sizes put an increasingly dangerous percentage of their assets in them, the stage was set for the greatest economic crisis since the Great Depression.
Although Berkun doesn’t delve too deeply into these issues, he does caution us to examine more carefully who benefits from the overinflation of the virtues of innovation. (Based on the title, I had partly hoped for a more trenchant analysis of the rhetoric of innovation and technological utopianism, but you may need to look elsewhere for that.) As is customary for the genre, Berkun has his own recommendations for developing more innovative thinking – it is still essentially a business book – added as an epilogue to the 2010 paperback edition. But The Myths of Innovation is most effective when it reinforces the notion that breakthroughs in any creative field – whether in science, technology or the arts – ultimately require both dedicated work on tough problems and a high degree of luck, and no amount of sprinkling of magic “innovation” pixie dust can change that.