On Merits Alone

When we are confronted with something new – whether it is something as concrete as a new haircut or something as abstract as a new idea – most of us believe that we can provide an objective response.  We believe that we can evaluate something new and different based on its merits alone to determine if it is good or bad, worth pursuing or rejecting.  However, both nature and nurture suggest otherwise.

Industrial designers know how to use the way human beings are wired to experience their surroundings to elicit reactions that are not based on the actual physical characteristics of what is encountered but on subtle environmental cues. Cabin design for business class air travel is a perfect example of how designers manipulate our all too human senses so that those who shell out the big bucks for luxury air travel feel they are getting what they pay for.

Alternating upholstery tones on the seats in a business class cabin creates “a pattern that causes the brain to register less than the entire expanse.”(1) The checkerboard effect prevents people from being able to perceive the whole. As a result, business class passengers entering the cabin do not become overwhelmed by a cascade of seats. Contrast that with the phenomenon of entering the economy or coach class cabin – the deflating vista of tightly packed row upon row of identical seats and the immediate claustrophobia it induces.

In business class cabins where seats open up into fully flat beds, designers use another trick to distort reality. While most passengers in these cabins sit facing forward, they sleep on a diagonal, “an innovation that makes it possible to create what looks like a first-class experience in a significantly smaller space.”(1) These passengers feel that they have more room than they do because every centimeter of space is designed to “de-crowd” their experience. How much space they actually have at their disposal is irrelevant. In fact, most of them would be shocked to learn how little it really is, even in luxury class.

If we were asked to describe the business class cabin compared to coach, we would most likely call it “roomy” and “private.” In reality it is not particularly roomy or private. But we wouldn’t really be able to tell. Even if we were industrial designers ourselves and could appreciate how we were being manipulated, we would still be manipulated. It’s just the way we are made. It’s our nature.

Over the past decade or so, it’s become increasingly accepted in the business world that human nature affects our decisions and actions at work. We are wired to respond to risks in certain ways that are divorced from reality. We are likely to take action if there is a 90% chance of success but will avoid a 10% chance of failure like the plague. If asked to place an economic value on something that is completely outside of our expertise, we come up with numbers that are anchored to whatever numbers are floating around in our heads from our most recent experiences. We shut down disconfirming points of view under all sorts of pretexts – the person expressing them is not a team player or is simply obnoxious.  Even if these observations are true, they serve to keep us from having to absorb unsettling information. Behavioral economics has emerged as a field of inquiry because traditional economic theory with its assumption of rational decision-making that is aimed at maximizing utility fails to explain the way in which people really seem to go about making economic decisions.
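The risk asymmetry described above can be made concrete with a toy calculation. The figures and the `expected_value` function below are my own invention, purely for illustration: the two framings describe the very same gamble and have identical expected value, yet we rush toward one and flee the other.

```python
# Toy illustration of the framing effect (invented figures, not from any cited study).

def expected_value(p_success: float, win: float, lose: float) -> float:
    """Expected payoff of a gamble that pays `win` with probability
    p_success and `lose` otherwise."""
    return p_success * win + (1 - p_success) * lose

# Framed positively: "a 90% chance of success" (gain $1,000, else lose $500)
positive_frame = expected_value(0.90, 1000, -500)

# Framed negatively: "a 10% chance of failure" (same gamble, same payoffs)
negative_frame = expected_value(1 - 0.10, 1000, -500)

# Mathematically, the two framings are the same bet.
assert positive_frame == negative_frame
```

Traditional economic theory says the two descriptions should prompt the same decision; behavioral economics exists because they don’t.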

It’s also becoming increasingly clear that nurture – the lives we lead – profoundly shapes how we evaluate new things, even in the world of work. I remember when I was first entering the work world, one of the big messages about how to conduct yourself was “leave your personal life at home.” “It’s just business” is still a catchphrase that is often used to explain away decisions that deeply impact others on not just a professional, but also a personal level. When I was younger, I struggled with an inability to completely inhabit this impersonal, highly rational business self. When I was upset, I would find myself crying in the women’s bathroom; quietly, of course, but crying nonetheless. This was NOT something that you were supposed to do in business. And perhaps because the way that men typically channel their frustrations – bluster and bravado – was considered businesslike, it wasn’t clear to me that no one was really leaving their true self or their personal life at home.

Recently reported research by a team of business school professors from Wharton and Temple University examined how the marital status of 1,500 CEOs affected the riskiness of their decisions and actions. The researchers looked at CEO decisions such as capital expenditures, innovation, R&D and acquisitions and used each company’s stock return volatility as a market-based measure of enterprise risk.

“…we find that there is a still sizable difference — about 10% greater investment [in risky activities] by firms led by single CEOs compared to firms run by CEOs who are married. And differences in stock return volatility are also quite substantial… Managerial decisions are affected by what is happening in those individual’s personal lives in ways that most of our views of business decision making do not account for.”(2)

Apparently nobody leaves his or her personal life at home – not even the CEO.  How we live our lives affects the way we perceive and respond to our options.  This happens without any conscious awareness on our part. However, we act as if this were not true. We act as if we are dispassionate decision-makers who respond to the new and different without bias. The evidence is mounting that this is a lie we tell ourselves to shut down the discomfort that we experience when confronted with something new and different. Instead of sitting in that discomfort with an understanding that both nature and nurture are doing their best to maintain the status quo, we react as quickly as possible to keep the new and different at bay. Perhaps we should take a page from the comedian Louis C.K., whose approach to developing material is all about unease.

“You’ve got to embrace discomfort. It’s the only way you can put yourself in situations where you can learn, and the only way you can keep your senses fresh once you’re there.”(3)

Nature and nurture could be our best friends when it comes to the new and different. But only if we can learn to resist the urge to get back to what feels safe and hang in there with discomfort long enough to have a shot at evaluating whatever it is that we haven’t experienced before on its merits.


(1) “Game of Thrones,” David Owen, The New Yorker, April 21, 2014

(2) “Risk and the Unmarried CEO,” sourced on 5/21/14 at: https://knowledge.wharton.upenn.edu/article/risk-single-ceo/, based on “Marriage and Managers’ Attitudes to Risk,” Nikolai Roussanov and Pavel G. Savor.

(3) Quote sourced on 5/25/14 at http://www.huffingtonpost.com/2014/04/22/louis-ck-gq-cover-story-embracing-discomfort-photo_n_5191708.html

The Drudgery of Discovery

Science:  Knowledge or a system of knowledge covering general truths or the operation of general laws especially as obtained and tested through the scientific method (principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses) (1).

What is more important in advancing discovery – the science of discovery or the discoveries themselves?

Of course, both are important. Discoveries are newsworthy and sometimes earth-shattering. They are sexy stuff. Science creates a repeatable path towards future discoveries. Science, especially its reliance on the scientific method, is decidedly unsexy. Its end point (a discovery) might be sexy, but the process, for the most part, is not.

I recently finished reading a book about decrypting an ancient language known as Linear B. While “The Riddle of the Labyrinth” is an excellent account of cracking the code, the author, Margalit Fox, also seeks to restore credit for this achievement to a woman whose arduous and lengthy efforts created the science of discovery that made it possible to decipher this written and spoken Mycenaean language (an early variant of Greek from the Bronze Age).

Linear B was discovered on tablets at the turn of the 20th century by the English archaeologist Arthur Evans. The tablets represent the earliest known European writing, from around 1450 BC – 700 years before the Greek alphabet, which, before the discovery of Linear B, was believed to be the first European writing.  If history comes into being with the written record, the Linear B tablets transformed a period that had been considered pre-history into history.


Writing systems are less common than most of us think. Spoken language can exist without them (the author notes that of the estimated 6,000 languages that are spoken today only 15% are believed to have written forms). In ancient times writing systems appear to have been even rarer than they are today.  Linear B is a syllabic writing system in which the symbols stand for syllables (such as Japanese kana). There are two other types of writing systems. Logographic languages are those in which the symbol stands for a concept (such as Chinese).  Alphabetic languages are those in which symbols stand for specific sounds (such as English).

Just how difficult was it to decrypt Linear B?

When attempting to read a script, a reader can find herself in one of four possible situations:

                    Known script       Unknown script
Known language      readable text      Rongorongo
Unknown language    Etruscan           Linear B

A known language in a known script, such as the text you are reading right now, is immediately intelligible – no deciphering is needed. However, when one unknown is introduced into the picture, everything changes, making decipherment extremely challenging. The author cites two cases which to date have not been resolved.  Rongorongo, a script believed to have recorded a Polynesian language that is still spoken on Easter Island, fell into disuse. So even though the language is known, it is not possible to associate sounds with the symbols. Etruscan, a non-Indo-European language of ancient Italy, has a script that survives and can be read (it is based on the Greek alphabet). However, without an understanding of word breaks and grammar, the string of sounds cannot be parsed into meaning.

If just one unknown can render some decipherment impossible, two seems like a locked box of impossibility. However, by creating a science of graphics – painstakingly inventing a framework to uncover the hidden rules of Linear B’s grammar, syntax, and structure – Alice Kober made it possible to unlock the language. By rejecting ALL assumptions, she avoided the trap of circular logic that had stymied previous decoding attempts. Others made starting assumptions that led them to what turned out to be false conclusions, dead ends. Had she not died from what many assume was cancer at the age of 43, Kober might have been able to complete her life’s work.

There are many themes that lace the story Fox tells, but three stand out:

Sexism:  In the 1930s and 40s, when Alice Kober was conducting her research, the prevailing culture of sexism made it all too easy to diminish the accomplishments and contributions of a rather plain-looking and self-effacing middle-aged woman who had no time to be bothered with social niceties. The Alice Kober described in this book does not seem all that likable (or interested in being likable). She comes across as a brilliant obsessive who was denied a seat at the table precisely because she was a woman. At the time she was being considered for an associate professorship at the University of Pennsylvania (in the 1940s), men did not deem women viable candidates for such positions. What makes for painful reading, though, is to be reminded that at that time not even women thought that women should hold such positions.

Hero Worship: The competitive nature of discovery, even in a field most of us wouldn’t give a second thought to – early history. When only one person will ultimately get credit for a discovery even though many others have made the “ah-ha” moment possible, knowledge hoarding is a reasonable position to take even if pooling knowledge would advance the discovery. A corollary to this theme is the cultural obsession (which seems to span many cultures) with a “hero” – the person (usually a man) who ultimately solves the problem that many others have been working on for a long, long time. The credit for cracking the code of Linear B was entirely ascribed to Michael Ventris, whose solution, the author makes plain, relied on at least three ground-breaking insights that came from Alice Kober but were never credited to her.

The Ends versus the Means: The relegation of methodology or science to a lower importance status compared with discovery itself. Alice Kober spent the better part of 15 years building a framework that did not presuppose anything about the language she was attempting to understand. Even though the tablets on which the language was inscribed were found on the island of Crete in the purported remains of the palace of Minos at Knossos, Kober did not assume that the language was Minoan. She did not assume that it was a remnant of Etruscan, the “lost” language of a civilization that preceded the Roman civilization. She did not assume that it was logographic (like Chinese or Japanese) even though many of the symbols made it tempting to do so. She painstakingly constructed a methodology for discovery – a science of graphics – which integrated rules and basic theories of how languages work to allow the origins of the language – its grammar, syntax, and sound – to emerge.

Kober died before she could crack the code, and it isn’t certain even if she had lived that she would have been the one to decipher Linear B. However, the fact that a drab and discounted woman pursued the drab and discounted side of discovery – the science side – has consigned her to the ranks of unsung heroes. I am in awe of the tremendous intellectual and emotional conviction required to let the process work, resisting the impulse to make assumptions and trusting that the truth will out. While it is easier today to source and mine data than it was for Kober, who had to do it all by hand using an intricate paper-based system, it is no easier to tease meaning out of data. Someone still has to construct a framework that makes meaning out of masses of information. And someone has to be fearless enough to look at the results without blinders and grasp their implications. Someone has to be willing to pursue the unsexy, but necessary, task of inventing a science.


(1) Source:  http://www.merriam-webster.com/dictionary/science?show=0&t=1396526463

The Tempo of History

…[Auerbach] viewed life on earth as a purposeful unfolding in which the tempo of history is continually roiled by events.  So, even as the world changes in front of us, it should be viewed in retrospect, since only then can such changes become part of the tempo.

Often when I’m in the midst of writing a post, I come across something that expresses the exact point that I have been mulling over. I read the above quote in a review of the work of Erich Auerbach, a philologist whose seminal books on the history of Western literature laid the foundation for the field of comparative literature. Auerbach’s perspective on change brought the seemingly disparate stories I had been reading over the past few weeks into sharp focus – they all called into question how we understand the tempo of history.

The older we get, the more acutely we are aware of not only how things (including ourselves) have changed, but also how things are changing all the time. Is it possible to identify the precise moment which initiates a significant change? Can we ever hope to see this moment clearly if change is always unfolding ahead of us, altering our understanding of the significance of past events and our sense of how they lead to what transpires in the future? Or, as the reviewer wrote about Auerbach, can we only look backwards, deep into the past, to gain any perspective on how an event has changed what follows in a significant way? And then only with the humility of knowing that what we believe with certainty in the present has the potential to be undone in the future as the long arc of perspective lengthens, exposing previously hidden information and connections?

Nobody ever goes to bed middle-aged and wakes up and says, “oh no I’m old.”

This from Richard Dawkins, an evolutionary biologist and Oxford University’s former Professor for the Public Understanding of Science, explaining how hard it is to pinpoint the moment at which it is clear that there has been a change in evolutionary terms. You could argue this point and respond: it depends on how long you sleep (witness Rip Van Winkle). But I suspect that is EXACTLY Dr. Dawkins’ point – it is incredibly hard to know when a new era has dawned because it is only when the new becomes the norm that you can look back and say that things have changed. It may be possible to identify the tipping or inflection point, but it is virtually impossible to find the first point.

Scientists recently discovered a human fossil that has yielded the oldest human DNA found to date. But instead of pointing back to the Neanderthals, the DNA points to another group, the Denisovans, a Paleolithic human group that scientists had believed was genetically and anatomically distinct from Neanderthals. However, the anatomical evidence suggests that the Denisovans shared physiological characteristics with the Neanderthals. This new connection has muddied the waters, opening up the possibility that there were many more proto-human populations than scientists had originally thought and that they were not as different from one another as had been assumed. Human beings appear to have had many more diverse ancestors than previously believed. The origin of Homo sapiens, the first point, is proving to be incredibly elusive.

Not only might it be impossible to isolate a first point, but the nature of change itself is clearly lumpy.  At the extremes are two kinds of change. One might be defined as “slow change” – the kind of incremental adaptation that we associate with Darwin’s theories of evolution – and the other as “fast change” – the kind that a contemporary of Darwin’s, Georges Cuvier, identified, which is caused by catastrophic conditions that annihilate the existing order. You don’t stand much of a chance when confronted with fast change – you either luck out or you don’t. But, under some conditions, slow change offers options. It’s possible to see signs of slow change along the way, if you can find the clues and interpret them. Many times, of course, you can’t. For example, in the search for human ancestors described above, only recently have new clues emerged, and it hasn’t been easy to interpret them.

Georges Cuvier is credited with founding the discipline of paleontology and conceiving of and proving that mass extinctions which occurred before recorded history wiped entire species off the face of the earth. But his theory was incompatible with that of Darwin who viewed extinction as “a routine side effect of evolution.” For about a century, Cuvier’s theory languished until it was possible for the scientific community to understand that one type of change did not obviate the other. If Darwin correctly surmised the process of slow change in the natural world, Cuvier correctly surmised the process of fast change. So, now we are faced with a second change challenge. In addition to the difficulty of identifying the initial point of change (its origin), we cannot know beforehand if we are in for a fast or a slow ride.

The thread of change underlies much of what we call “the news.”  Two of the news stories that received a great deal of attention in 2013 and will continue to play out in the new year share a common characteristic – the impetus for change has been building for a long time, but its origin is murky and its endpoint uncertain.  In these two instances, will we witness fast or slow change?  Are we seeing a first point, or will future events force us to look further back into the past?

Story #1:  The failure of the first Massive Open Online Courses, or MOOCs, captured headlines in December 2013.  MOOCs have been hailed as the harbinger of a new era in higher education, a destabilizing force that is poised to topple the traditional university by providing access to great education for those who cannot afford the dazzlingly expensive experience of a four-year education at private or even many public institutions. MOOCs are seen by many as the logical next step in the evolution of higher education, part of slow change. But will they prove to be a dead end? Because it appears that the first and second incarnations of the MOOC have failed.

In a recently released study of one million MOOC students, only half of those who registered for a course viewed even one lecture, and about 4% on average completed the course. In the case of San Jose State University, which partnered with Udacity, one of the first MOOC providers, to offer a MOOC with online mentoring support, the online students fared worse than in-classroom students taking the same course, with only 25% of the MOOC students receiving a passing grade.  So MOOC v1.0 and MOOC v2.0 have failed. But no one is counting them out.  If MOOCs cannot declare themselves “winners” yet, those who stand behind them believe that evolution or catastrophe is on their side.  As one observer remarked, “It’s like, ‘The MOOC is dead, long live the MOOC.’ ”
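For a rough sense of the scale those percentages imply, the arithmetic can be spelled out. This is illustrative only – the study reports averages across courses, not exact head counts:

```python
# Back-of-the-envelope funnel implied by the figures cited above
# (averages treated as exact proportions for illustration).
registered = 1_000_000                      # students in the study
viewed_a_lecture = int(registered * 0.50)   # about half viewed even one lecture
completed = int(registered * 0.04)          # ~4% on average completed the course

print(viewed_a_lecture)  # 500000
print(completed)         # 40000
```

In other words, of a million registrants, on the order of half a million ever watched anything and only tens of thousands finished – which is why "failure" became the headline.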

Story #2: Marijuana can now be legally grown and sold in the states of Washington and Colorado.  One significant challenge facing these states is how to design a legitimate market that makes the black market unattractive.  On the face of it, this would appear to be easy. Who wouldn’t prefer to grow, sell, purchase and use pot legally rather than run the risk of committing a crime? Well, it depends. It depends on the price that the state sets for pot. It depends on the state’s desire to profit from the market (as it does with other sanctioned “sins” such as alcohol, tobacco, and gambling). It depends on the limits to consumption that the state imposes. (Do you have to be 18?  If you give your legally purchased pot to an underage consumer, is that legal? How much pot can one person purchase in any given period?) It depends on the limits to production that the state imposes. (Who can grow it?  How much can they grow?  Where can they grow it?) These are only a handful of the most obvious factors that will affect the transition from illegal to legal market.

Among the biggest concerns facing the state, the uncertainty about the extent to which a legal pot market will encourage greater consumption and ultimately lead to substance abuse hangs over the enterprise like a dark cloud.  It’s clear that the best customers for any product are committed customers and in “sin” markets (alcohol, tobacco and gambling) those customers are also called addicts.  The designers of legal markets for marijuana look to the way markets for these other “sin” products have been designed. But because the transition has to occur quickly, the fact that these established markets are functional has encouraged state planners to borrow freely without too much adaptation or adjustment to the way these systems currently work. This has occurred despite the fact that legal markets for “sin” have also imposed huge human health costs on the state at the same time they have delivered additional revenues.  How is this story going to unfold?  It is moving quickly and is certain to have many unintended consequences.

In the course of our daily affairs, we act as if we can in fact see the first point – that we can initiate change and control the way in which subsequent events will unfold. While we know that the world we live in is complex and uncertain, unfolding in highly irregular and unpredictable ways, we act quite differently. If we were more humble and more realistic, perhaps the most we could say is what one pot dealer named Ben Jammin had to say about the changes in Washington state:

We’re not sure what’s coming – but it’s coming.


  • “Intellectuals on a Mission: ‘The Unbelievers’ Chronicles Road-Tripping Scientists Promoting Reason,” Dennis Overbye, 12/9/13, The New York Times, sourced on 12/12/13 at: http://www.nytimes.com/2013/12/10/science/space/the-unbelievers-chronicles-road-tripping-scientists-promoting-reason.html (source of second quote)
  • “Baffling 400,000 Year Old Clue to Human Origins,” Carl Zimmer, 12/4/13, The New York Times, sourced on 12/12/13 at: http://www.nytimes.com/2013/12/05/science/at-400000-years-oldest-human-dna-yet-found-raises-new-mysteries.html
  • “After setbacks online courses are rethought,” Tamar Lewin, 12/10/13, The New York Times, sourced on 12/13/13 at: http://www.nytimes.com/2013/12/11/us/after-setbacks-online-courses-are-rethought.html
  • “Penn GSE Study Shows MOOCs Have Relatively Few Active Users, With Only a Few Persisting to Course End,” sourced on 12/13/13 at: http://www.gse.upenn.edu/pressroom/press-releases/2013/12/penn-gse-study-shows-moocs-have-relatively-few-active-users-only-few-persisti
  • “The Lost World,” Elizabeth Kolbert, The New Yorker, December 16, 2013
  • “The Book of Books,” Arthur Krystal, The New Yorker, December 9, 2013 (source of first quote)
  • “Buzzkill,” Patrick Radden Keefe, The New Yorker, November 18, 2013 (source of third quote)


Just in time for the Halloween holiday comes scary innovation news from Singapore and the U.S. National Funeral Directors Association – an open innovation competition called Design for Death.

Jae Rhim Lee wearing the Mushroom Death Suit

Even in industries with processes that would appear to be at total odds with change of any sort, there is a push to move outside the comfort zone and imagine possibilities for the future (including the afterlife).  Burial practices are for the most part dictated by religious ritual. They would surely win the competition for the process least likely to change and least likely to attract those outside the profession to participate in an innovation competition, however open.  And it’s not as if there is a concern about the market for services drying up. As Ben Franklin’s old saying goes, “there are only two certainties in life – death and taxes.”  But the industry is in its mature phase – the critical inflection point at which transformation occurs from within, from without, or both. Looked at this way, an open innovation competition like Design for Death might even be entirely predictable.

First and foremost in shifting the framework for thinking about death is a language change. While it may seem exasperating (even I sighed when I read the new term of art), the funeral industry is rebranding itself as the deathcare industry. (Microsoft Word highlights this term with a red underscore because it is not standard English…yet.) Deathcare shifts the focus from a narrow one of how we dispose of bodies (funerals and burials) to a more expansive one of how we acknowledge death and incorporate its presence in life.

Design for Death is the first in a series of challenges that are co-sponsored by the Lien Foundation, a Singapore-based philanthropy whose mission is to stimulate and spark high-impact idea exchange, high-intensity collaboration, and high-end value creation by leveraging the people, private and public sectors around three issues: eldercare, water and sanitation, and early education; and ACM, a philanthropy established by the founder of a Singapore-based casket company to uplift the deathcare profession. The next competitions focus on two of death’s many prequels – hospice care and community arts engagement in hospitals.

What is particularly interesting about the winning submissions to the competition is that:

  • All of the idea submitters are young – the oldest is 37 and the youngest is 24.
  • They do not work in the deathcare industry.
  • Some of them are not even designers.
  • None of them come even close to what you would call “experts.”

As the National Funeral Directors Association’s executive director, Christine Pepper, notes:

The many entries we received from designers around the world show that innovation in deathcare doesn’t have to come from funeral directors. Ideas for how families honor and remember their loved ones can come from anyone and anywhere. The ideas and innovations presented by the designers who participated in this contest bring fresh perspectives to our profession and challenge funeral directors to think about the services and products they offer to families in new ways.

The winning ideas (and even some that did not win) are arresting, poetic, thoughtful and novel. I encourage you to visit the website to see all the entries, but note two that I really liked here:

  1. “I wish to be rain” which transfers cremated ashes to the troposphere via a balloon that seeds clouds which ultimately release precipitation.
  2. “Mushroom death suit” which uses the properties of mushrooms to decompose the body and partially remediate toxins that are released during decomposition.

And, lest you think that there is no way to measure the impact of such innovation on how society handles death, the Lien Foundation partnered with the Economist Intelligence Unit to conduct research and create a Quality of Death index that ranks 40 countries on the provision of end-of-life care. The UK ranks #1, and the US is #9, tied with Canada. You can visit this site to begin your death-venture (I kid you not). For most of us, this is the ultimate haunted house.

Happy Halloween!

Difficulty and Doubt

“It’s just too hard to get things done around here.”

This is a classic expression of frustration heard in almost any organization, but especially in large ones. Laboring under the assumption that important activities should not be difficult to accomplish, managers react by making things easier. When new priorities must be merged with existing ones, it seems even more important to design processes that outmaneuver the glitches that cause tempers to flare. We strive to create friction-free processes that are repeatable, reliable, and consistent. However, removing friction might be exactly the wrong thing to do if it’s important to learn and innovate.

Much of the impetus for the wave of interest in managing innovation comes from a belief that we have so little of it because the system makes it too hard. It’s true that in most organizations, processes and resources are focused almost exclusively on sustaining existing activities. With organizations running so lean, there is very little bandwidth available to take on something new.  It makes sense to assume that if we can remove snags and roadblocks from an innovation process, then we can free up innovators and increase innovation. But what if that’s not so? What if innovation requires friction?

Friction is the proverbial “when the going gets tough” situation.  When circumstances are daunting rather than conducive, friction forces acknowledgement of:

  • Doubt, the limits of knowledge, even the ability to know
  • Possibility of failure and consideration of how to persevere should it be encountered
  • Probability that progress will be incremental and fitful, requiring many adaptations

Friction provides the opportunity to practice making difficult decisions and dealing with the consequences – this is the essence of learning and is a prerequisite for innovation.  When we talk about managing innovation in organizations, we are really talking about establishing habits of being curious and open to learning.  However, it seems to me that we misdiagnose the difficulty involved with innovation as a problem that should be solved by making things easier.  In fact, without difficulty, we reduce the potential for doubt, failure, and the need to revisit assumptions over and over again.  We undermine the potential for innovation.

At the same time I was reading Malcolm Gladwell’s review of Albert O. Hirschman’s theories, on which the preceding musings are based (1), I came across a story posted on fastcompany.com that gave life to Hirschman’s belief about the importance of difficulty and doubt for progress to occur in any system (2). The fastcompany.com story was about 17-year-old Easton LaChappelle, who faced difficulties (lack of knowledge, skills, funding, and institutional support) that should have prevented him from accomplishing what he set out to do (reinvent conventional prostheses).

But, as Hirschman might have predicted, LaChappelle’s difficulties proved essential to his innovation. LaChappelle had to teach himself “electronics, coding, how to use a 3-D printer,” and most importantly how to do all of it on the cheap from his bedroom.  He figured out what he needed to know and how to learn it, what materials he absolutely needed on hand and how to get them, how to confirm market demand (via a Kickstarter campaign), and how to use every available opportunity to tell people his story about making prosthetics affordable.

LaChappelle’s goal was to rethink the $80,000 prosthetic arm (a price tag that doesn’t include the costs of the surgical procedures needed to use it) and find a way to make an affordable one. LaChappelle ultimately created a $400 robotic arm that can be used by amputees without surgical preparation.  Think of it as an iPhone equivalent – if you need to upgrade in a few years, it’s not out of the question.  Contrast that with the $80K arm and the technology that you are stuck with for a long time.

Now, of course, LaChappelle in many ways is not your average teenage kid. But he COULD be. He has a curious mind whose dictates he follows and he is flexible in how and where he learns. These behaviors are not unique to LaChappelle. Most little kids exhibit them all the time. These are the habits of innovation and they are rarely inhibited by the kinds of difficulties that we routinely decry as the obstacles to innovation in our organizations. But, in the same way that we squash these habits in kids by running them through the education system, we squash them in organizations by running innovation through the exercise of the business case analysis.

I am not opposed to the basic concept of the business case analysis.  A systematic thinking through of goals, constraints, resources, and potential scenarios is time well spent.  However, the hallmark of business case analysis is the degree to which we ask people with new ideas to express an extremely high conviction that they have been able to squeeze all doubt out of the picture. We do this by identifying potential roadblocks or other surprises and describing how they will be managed. To pass the business case test, we pretend that the surprises are risks rather than uncertainties (for the difference between the two, see (3) below under Sources) and that we are skillful enough to map out strategies for avoiding or minimizing them.

However, doubt, like difficulty, is critical for innovation. Doubt is the handmaiden of curiosity. If you believe that you know it all and you are certain of what you know, there isn’t much to be curious about and there isn’t much new to be learned. Doubt seems ever more important when situations are complex and uncertain, which is increasingly the case in all domains of contemporary life. Hirschman’s ideas snapped me out of my blathering on about “making innovation simple and easy.” Because, really, how can that be? How can innovation be anything other than replete with difficulty and doubt – the two sources of friction that create the conditions for something new to suggest itself?


(1) “The Gift of Doubt: Albert O. Hirschman and the power of failure,” Malcolm Gladwell, The New Yorker, June 24, 2013

(2) “Meet the 17-Year-Old Who Created a Brain-Powered Prosthetic Arm,” Liz Presson, FastCompany.com, sourced on 8/26/13.

(3)  Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.  Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.  Attributed to Michael Mauboussin, sourced on 8/16/13 at The Big Picture.
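Mauboussin’s distinction lends itself to a small illustration. The sketch below is hypothetical (it comes from none of the sources above): it contrasts a game of risk, where the distribution is known and an expectation can be computed before playing, with uncertainty, where no such computation is possible.

```python
import random

def expected_value_known() -> float:
    """Risk: the distribution (a fair six-sided die) is known up front,
    so the expected value can be computed before a single roll."""
    return sum(range(1, 7)) / 6  # 3.5

def play_risk(n_rounds: int, seed: int = 0) -> float:
    """Simulate the risky game: individual outcomes are unknown, but they
    are drawn from the known distribution, so long-run averages converge
    toward the precomputed expectation."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rounds)) / n_rounds

# Uncertainty has no analogue of expected_value_known(): the payoffs,
# their probabilities, and even the set of possible outcomes are unknown,
# so no expectation can be computed in advance -- only discovered by acting.
```

Under risk, planning can lean on the computed expectation; under uncertainty, the only way forward is the kind of trial, adaptation, and doubt that Hirschman describes.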

Easton LaChappelle’s TedxMileHigh Talk


The Curious Observer

When I’m asked to enumerate best practices for idea management or group decision-making or team-building or any activity that involves people trying to do something as a group, somewhere on my list is the practice of making sure that somebody (or somebodies) is an observer. In idea management, which typically occurs on-line using some kind of collaboration software, that somebody is called the moderator. For in-person groups (or mostly in-person – sometimes people are connected via video conference, but they see one another and interact in real-time), the term of art is facilitator. While everyone acknowledges that this role is important, in idea management it often goes to a more junior person or an administrative type; i.e., it’s important, but not that important. For in-person groups, the facilitator is often a more experienced (i.e., “older”) person or someone who has had group facilitation training. However, facilitators are rarely required to understand the content of the discussions or decisions that the group makes. They are expected to guide a process. The same holds true for on-line moderators.

However, when observers, whether facilitators or moderators, lack familiarity with the substance of the group’s discussions and decisions – not expertise, but just enough understanding to be dangerous – I believe that the group markedly diminishes its potential for innovation, the truly different way of figuring out how to move forward or solve a very persistent, complex problem. I believe that groups need curious observers. Curious observers play an essential role in discovery – the pivotal moment in all innovation that is perhaps the true “Eureka” moment. Because creating something and recognizing that it might be important in some way rarely occur at the same time. In our idealization of innovation, we tell stories that merge creators and discoverers into one person who has one blinding flash of insight. But more often than not, there are many insights along the way, some of which are discovered by curious observers.

Case in point, from a recent New Yorker story about fungus (1). Mushroom fungus – or polypore mycelium, to be specific. (Stay with me on this one!)

Two seniors at Rensselaer Polytechnic Institute (RPI) were beavering away at a class project for an Inventors Studio class, which is exactly what it sounds like – a class devoted to guiding students in the process of invention with the long shot hope that their ideas might form the basis of a company that will bring innovative solutions into the market. These two seniors, Gavin McIntyre and Eben Bayer, were casting about for an idea that their very exacting professor, Burt Swersey, would approve for their project. They had pitched a few ideas to Swersey to no avail. Then, Bayer recalled an experiment that he had performed in another class at RPI responding to the challenge of making insulation out of perlite. Most of us know perlite as the little white plastic-like pellets that are mixed in with bagged potting soil. We also know how annoying those little pellets can be – they are lightweight and float around, settling in puffy clumps, making a mess. In his RPI class, Bayer had used mushroom spores to bind the perlite.

As a kid growing up on a farm where his dad made maple syrup and sold it commercially, Bayer had had a lot of chores to do outdoors.  One of his chores was to shovel wood chips from a pile to a burner that boiled the sap.  He had often noticed that the wood chip pile sprouted mushrooms whose mycelium bound the chips so tightly together that he found it difficult sometimes to shovel them.  He had remembered that binding property during his class project to create perlite insulation. He brought the results of that project – a glass jar of solid perlite and mycelium – to Swersey’s class.

Here’s what happened according to Swersey:

“He takes this thing out of his pocket…and it’s white, this amazing piece of insulation that had been grown, without hydrocarbons, with almost no energy used.  The stuff could be made with almost any waste materials – rice husks, cotton wastes, stuff farmers throw away, stuff they have no market for – and it wouldn’t take away from anybody’s food supply, and it could be made anywhere from local materials, so you could cut down on transportation costs.  And it would be completely biodegradable!  What more could you want?”

The rest of the story about Ecovative Design, McIntyre and Bayer’s company that produces packaging material out of mushroom fungus, is quite an amazing read and I recommend it. But what stood out for me in the story is that without Swersey, it is unlikely that the company and its subsequent success would have happened. McIntyre and Bayer both had jobs lined up after RPI – good jobs. Swersey urged them to forgo these jobs and continue developing their invention. They thought they might be able to work on their invention on an after-work-hours basis, but Swersey emphatically told them this would not be enough. He offered to take money from his retirement savings to invest in their company. He helped them get a grant from the National Collegiate Inventors and Innovators Alliance and got them situated in RPI’s incubator space for start-ups.

Swersey, a curious observer, was an essential part of the discovery process. Neither McIntyre nor Bayer on their own had the perspective to recognize the potential of what Bayer had initially created and what they both further developed in Swersey’s class. Bayer’s flash of insight was based on an idle observation made years earlier in passing. From his point of view at the time, using mycelium to bind perlite was a one-off to complete a class requirement. Bayer threw a “Hail Mary” pass when he brought the idea to Swersey’s class to see if it would pass muster there.

Swersey, while not an expert in mycology or insulating materials engineering, did however operate with a framework that enabled him to see the potential in Bayer and McIntyre’s invention. His “Eureka” moment was every bit as necessary as Bayer’s in this story of invention and innovation. Inventors Studio is the search for ruthlessly affordable solutions (2) to existing problems that can make a discernible difference in the lives of the vast majority of people on the planet who live on less than $1 a day. This framework is incredibly clear – expansive and targeted at the same time. Without it, Bayer’s little white disk of perlite and mycelium would still be an interesting curiosity rather than a biodegradable packaging material used by companies like Dell, Crate and Barrel, and Steelcase, and who knows what else in the future.

Without a curious observer to hold this kind of framework in place for groups as they work to solve problems, the connection between creativity and discovery often fails to take place. This is especially true for groups of experts who have even more to overcome than naïve amateurs like the students in Swersey’s class. Because Swersey was their professor, his students expected his observations and input to matter, whether or not he was an expert in their project’s specific materials or engineering. Experts, on the other hand, view their facilitator or moderator as someone who is supposed to keep them on time and on task but has little else to contribute to problem solving. And, most facilitators and moderators buy in to this definition of their role. However, when facilitators and moderators are also curious observers, they can help the experts overcome the limitations of expertise. They can call attention to the contrary point of view that groups are quick to dismiss and encourage its exploration. Curious observers can ask questions and offer potential solutions that might be foolish or wrong, essentially acting as a naïve amateur, to challenge a group’s assumptions that often masquerade as facts. The curious observer can catalyze the moment of discovery which grasps the potential in an invention, whether a thing or an idea, and become an integral participant in the process of innovation.

(1)    “Form and Fungus,” Ian Frazier, The New Yorker, May 20, 2013

(2)   Designing for ruthless affordability is a concept from the work of Paul Polak.

Polak Advocates the ‘Ruthless Pursuit of… by FORAtv

People Who Help People

More and more of us are living relationship-intensive lives – always on and always connected. In a gross simplification of the history of human relationships across millennia, we have transitioned:

  • From a tribal past when we were born into a small set of relatives who collectively represented the total sum of human relationships we would experience during our lifetimes;
  • Through a long stretch of history during which technologies (e.g., transportation, communications) and clustering into urban areas expanded our relationship set to include non-relatives and even strangers with whom we would form temporary or episodic relationships;
  • Arriving at the current time, when technology has yet again reinvented the nature of relationships, stretching them from the physical to the virtual realm, and exploding the number that we can form and sustain.

As Steven Johnson asserts in Future Perfect, his book about the networked age in which we live, information technology has enabled us to form dense, diverse, and distributed networks through which we mediate a much greater array of relationships than ever before (1).  Johnson also describes how information technology has lowered the barriers that have historically blocked disruptive ideas from moving swiftly into the mainstream.  Transmitting a non-conforming message from the edges or boundaries of systems where social outliers and the disenfranchised congregate to the center is easier than ever before.  Technology also makes it easier for those at the edges to find one another and form coalitions, increasing the stock of relationships.

Relationships have always been the currency of human systems. But as they multiply and form more complex webs, the way in which we negotiate them appears to be changing. And these changes seem to favor the relationship skills that women have transferred into the workforce from their traditional role in the home (2). Sally Helgesen and Julie Johnson describe these predominantly female approaches to relationships, which women bring to people management, in their book, The Female Vision. Paraphrasing Helgesen and Johnson, I note the following four:

  • Leading from the center rather than the top
  • Reaching across boundaries to establish connections
  • Negotiating with a long term view
  • Seeking the collective good

This final attribute – seeking the collective good by crafting the proverbial “win-win” in situations requiring team effort – has recently received New York Times Magazine treatment (3). An article that ran in early April 2013 discussed academic research that has begun to re-shape the way in which jobs are designed, specifically the factors that motivate people to do their work and do it well. Adam Grant, the Wharton professor of organizational psychology whose research is profiled in the article, has demonstrated that “the greatest untapped source of motivation…is a sense of service to others; focusing on the contribution of our work to other people’s lives has the potential to make us more productive than thinking about helping ourselves.” Grant’s research finds that it is not the intrinsic value of the work itself or even how it helps us get ahead, but how it helps others that motivates us the most. Relationships rule.

Grant has constructed various experiments to test his theories of pro-social behavior. For example, in one experiment, he put signs above hand-washing stations in a hospital.  Some signs reminded healthcare workers that hand hygiene prevents them from catching diseases and others focused on how it helps patients.  Patient-focused signage spurred 45% more use of hand-washing liquids than healthcare worker-focused signage.  Just knowing that your work will help someone else – even if you don’t get immediate or direct feedback – improves performance.  People who need people are not just the luckiest people in the world, apparently, they’re also extremely high performers.

Doesn’t all this emphasis on helping others sound “girly”? Don’t worry. It turns out that to avoid being a relationship-centric doormat you need to add a bit of old-fashioned male “what’s in it for me”-ness to the mix. Grant’s typology of human social behaviors lumps humankind into three big buckets:

  1. Givers – The altruists.  People who don’t hesitate to do a favor or share credit or perform any other selfless act.
  2. Matchers – The vast majority of us.  We hedge our social investments – it’s all quid pro quo for most of us.  We need to know “what’s in it for me?”
  3. Takers – The “winners.”  The folks who need to come out ahead every time.  Life is one endless stream of transactions in which they maneuver to come out on the plus side.

Among the Givers, those who achieve success for themselves as well as others sit on the border between Givers and Matchers.  They have a healthy respect for their own ambitions, but are also more inclined to recognize and further the ambitions of those who seek their help without overt regard for how it furthers their own ends.  Operating under the Golden Rule (do unto others as you would have others do unto you), they prosper.  They are perceived as playing nice.

Nowhere in the Grant research that is presented in the article does it state that women are more likely to be Givers than men, but the qualities of Givers – put others first, think long term, seek collective good – are those that have been traditionally identified with the operating style women bring to family systems. And, in today’s intensively networked workplace, these qualities have gained ascendance because, as Grant’s research demonstrates, they promote getting things done and getting them done well.

What the article doesn’t say, but which Grant’s forthcoming book (Give and Take) might, is that they are also approaches which promote system sustainability. In zero-sum games, one player’s gain is another’s loss – eventually someone is left with nothing and the game ends. That is the nature of winner-take-all scenarios. In the Taker worldview, a series of short-term wins equals long-term success. But this equation is increasingly viewed as incorrect. It is even being challenged in the world of high finance, where new thinking about effective risk management for long-term investments is upending the traditional view that a series of short-duration bets is an effective strategy for managing long-term risk. Sustainability is becoming the new watchword even in a workplace dominated by Takers.

Grant’s research suggests that the future will belong to the Givers who straddle the Giver/Matcher divide and the Matchers who can add more doing without expectation of immediate payback to their playbooks, straddling the divide from the other side. These are the relationship management modalities that seem better suited to an environment which aims to create sustainable enterprises. They also seem to rely on the traits that are more commonly associated with the way women relate to others than the way men do.  If these forms of relationships rule, not just because it’s nicer, but because it’s more effective to use them – might we be at an inflection point where we witness the glass ceiling finally shatter?

For fun, Barbra Streisand singing “People Who Need People”


(1) Future Perfect: The Case for Progress in a Networked Age, Steven Johnson, Penguin Group, 2012

(2) The Female Vision: Women’s Real Power at Work, Sally Helgesen and Julie Johnson, Berrett-Koehler Publishers Inc., 2010

(3) “The Saintly Way to Succeed,” Susan Dominus, The New York Times Magazine, April 7, 2013

Just Generally Better All Around

In planning for disasters, the best preparation comes from making sure that things are better under normal circumstances.  Scientists and engineers who are looking at ways to help governments better prepare their citizenry for climate-related disasters (heat waves, storm surges, hurricanes, etc.) have made two significant discoveries:

  1. The most successful physical adaptations not only protect people and infrastructure when things are bad, they also improve everyday life.
  2. In addition to investing in the physical adaptations necessary to withstand severe climate conditions, investing in social adaptations is equally important.

In disasters like the ones caused most recently in the northeastern United States by Hurricane Sandy and other environmental calamities, it turns out that places where people are more neighborly fare better than those where people are isolated and have little connection to their neighbors. Communities in which neighbors look out for one another, those that exhibit a high degree of social connectedness, are more resilient than those whose inhabitants lack this quality of civic-mindedness. Social network resilience helps neighborhoods bounce back from severe damage to the physical infrastructure. Of course, this sounds reasonable and perhaps obvious (like many insights once they are explicitly stated).  But, even if it seems reasonable and obvious, we do not act as if it is.  That is, we do not think of investments that improve the quality of everyday life in our communities as a prophylactic against bad times.

Instead, when planning to manage environmental risks to our communities, we over-focus on physical infrastructure to protect us from disaster.  For water-related catastrophes, we build dams and levees and other types of storm surge barriers. Most of these measures do little or nothing to improve the quality of life under normal conditions and in some cases, seriously degrade it by consuming scarce resources.  Yet, a philosophical shift is taking place in the realm of physical adaptations. Structures are now being conceived that not only offer protection against environmental calamities, but also make everyday life nicer.

In Rotterdam, The Netherlands, where much of the densely populated country sits below sea level, the story of civilization has been a war with water. Dikes and other systems that pump water out once it encroaches on the land have long been, and continue to be, a staple of the country’s response to water-related calamities. However, in the past few years, engineers, architects and city planners have deployed a new water approach. In addition to disaster prevention and response, an approach which proactively explores what it means to co-exist with water has emerged. Today, in the middle of the city’s harbor, three transparent domes or Floating Pavilions sit on the water. These buildings are not quite boats and not quite houses but a new blended form of habitat which the city hopes will help it formulate new ways of living with water. Singapore, another country that lies close to sea level, has always faced the one-two punch of monsoon-season flooding and, perversely, insufficient potable water. The Marina Barrage and Reservoir, located in the city center with a catchment covering one-sixth of Singapore’s entire land mass, is a three-pronged initiative which seeks simultaneously to “improve drainage infrastructure, reduce the size of flood-prone areas, and enhance the quality of city life.”(1)

Today more than ever, many organizations have elevated risk management and mitigation to a primary position in their resource allocation decisions.  However, I believe that our current views of risk management have more in common with the now abandoned approaches that used to inform environmental disaster planning than those of more recent vintage. We don’t think about risk management and mitigation as making things generally work better most of the time, but rather as protecting us from disaster. We over-focus on structural adaptations in our processes to root out or prevent errors and typically ignore social adaptations altogether. As a result, we often make it harder to get everyday work done and virtually impossible to undertake highly risky activities such as innovation. This may be at least one of the reasons why innovation is so challenging for most organizations.

What if, to think about ways of creating more conducive conditions for innovation, we turned our current view of risk management on its head, borrowing from what is now understood about surviving physical disasters?  What if we focused on physical and social adaptations that not only manage risk but also improve the quality of everyday work life?

Many people who lead innovation initiatives focus on creating a supportive culture, processes and infrastructure that are designed specifically for innovation.  Bespoke.  But what if to achieve breakthrough innovation, you have to have a culture, processes and structures that improve the everyday activities of the organization?  What if designing exclusively for innovation is the wrong way to go about it?

In another potentially perverse turn of the screw, the processes and structures that are set up to encourage innovation often try to weed out the small, incremental ideas that make all aspects of work life better. In the rush to promote game changing ideas, small improvements get shoved aside. This might be a BIG mistake.  It could be that the small improvements are what make it possible for the game changers to be proposed, accepted, and implemented. It’s the slow and steady stream of little things that make life better, creating a solid foundation which strengthens the organization, building the capacity to withstand and support significant change.  Rather than pressing for BIG ideas, innovation might do better promoting a disproportionate number of small ideas.  It might need to partner more closely with those responsible for HR practices and policies so that ideas which improve the day-to-day conditions of most people in the organization are considered to be as important as those that have the potential to transform the business.

The notion that “things just being generally better all around for most of the time” is a precondition for being able to withstand seismic change requires an equally seismic shift in the way we think about effectively managing risk and organizing for innovation.  But, for innovation to succeed, it may be a non-negotiable mindset.


(1)   “Adaptation,” Eric Klinenberg, The New Yorker, January 7, 2013

Doesn’t Play Well with Others

“They can’t even comply with the rules of the conference.”  This indifference to the rules was apparently the most irksome aspect about the behavior of executives at Uber, an upstart car-hiring service, to the president of the International Association of Transportation Regulators.  “Uber [is a] ‘rogue’ app…the company [behaves] in an unauthorized and destructive way.”

Uber and other similar start-ups (SideCar, Lyft by Zimride) use mobile technology to match people with different kinds of transportation services (taxis, limousines, ordinary people driving their cars) in real-time.  The technology disintermediates the infrastructure that in the past has made those connections (dispatchers) and has regulated them (the transportation authorities).  To rein these new companies in, some municipalities have attempted to pass rules that would make the services they provide illegal.  “…[But] when Washington tried to pass rules that would make Uber illegal, customers bombarded City Council members with thousands of emails in protest.”  The companies claim that since they aren’t actually providing the rides, the regulations don’t pertain to them.

It’s another one of those situations where the writing is on the wall. While regulators can slow the tide of change in the transportation industry, they are not likely to stop it. But what seems to really get everyone’s goat is that the new kids on the block are completely uninterested in playing the game, let alone following the rules. After I read the article, I kept thinking about how we emphasize the importance of collaboration in creating a culture that fosters innovation. Much of what is written about collaboration has a nicey-nice spin to it – like the classic Coca-Cola commercials from the 1970s (back when there were three network channels and commercials ran for a leisurely one entire minute). The idea that harmonious collectivism could bring about big change was in the air. But what if collaborating to innovate looks less like the Coke commercial and more like a nasty fight among toddlers in a sandbox?

There is good reason to suspect that the spirit of getting along, of aiming for group harmony, is at odds with the kind of against-the-grain decision-making and action-taking that is required for innovation. Innovation requires a calculated approach to risk-taking. Innovators size up a situation, drawing a line between the survivable worst and the fatal worst that could happen, and ensure that they stay on the side that lets them live to fight another day.

An in-depth profile that appeared in The New York Times this past December about a group of 16 expert skiers and snowboarders involved in an avalanche reads like a primer on how a group of experts striving for harmony can make horrible decisions. This group of highly experienced skiers and snowboarders, all of whom had been tested in extreme situations, collectively made a horrible, life-threatening and, for some, life-ending decision to pursue a run down a challenging slope in iffy weather conditions. It was a decision that any one of them was unlikely to have made alone, had they been less invested in being seen by the other members of the group as good sports.

The disaster which led to the deaths of three members of the party occurred on Cowboy Mountain, which is part of the Cascade Range in the state of Washington. The skiers (there were snowboarders in the party, but for the sake of brevity, I will refer to all of them as skiers) were drawn to an area just outside the official ski zone known as Tunnel Creek. It’s a place where experts frequently go to enjoy snow and slope conditions that are not available within the sanctioned ski areas but are still relatively easy to access. The lure of fresh powder and 3,000 vertical feet angled at about 40 degrees is hard for great skiers to pass up. But, when combined with weather conditions that create a thin, fragile layer of frost sandwiched between hard-packed snow below and soft fluffy powder above, Tunnel Creek becomes an avalanche waiting to happen. The kind of avalanche that is triggered by the skiers themselves as they ski down the slope, creating stress on the layers of snow.

Individuals within the group had misgivings. They all knew that the official avalanche forecast fell into a gray zone that should have made experts like themselves sit up and take notice.  But, the fact that each of them knew the reputation of the others led them to be overconfident that the group simply could not make a bad decision.

As one skier recalled afterwards:  “This was a crew that seemed like it was assembled by some higher force,….I was thinking, wow, what a bunch of heavies….”

Another thought:  “There’s no way this entire group can make a decision that isn’t smart,” he said to himself. “Of course it’s fine, if we’re all going. It’s got to be fine.”

And yet, some remember having misgivings beforehand, but feeling conflicted about expressing them:

“I can tell circumstances, and I just felt like something besides myself was in charge. They’re all so professional and intelligent and driven and powerful and riding with athletic prowess, yet everything in my mind was going off, wanting to tell them to stop.”

But overriding everything else was a strong need to go along and get along:

“…[T]here were sort of the social dynamics of that — where I didn’t want to be the one to say, you know, ‘Hey, this is too big a group and we shouldn’t be doing this.’ I was invited by someone else, so I didn’t want to stand up and cause a fuss. And not to play the gender card, but there were 2 girls and 10 guys, and I didn’t want to be the whiny female figure, you know? So I just followed along.”

[But she shouldn’t have worried, because the guys felt just the same.]  “I thought: Oh yeah, that’s a bad place to be. That’s a bad place to be with that many people. But I didn’t say anything. I didn’t want to be the jerk.”

Keep in mind that this was a group of experts, the same kind of domain experts we assemble in our organizations when we need to make complex decisions about taking risks.  And they represented a wide range of ages, from 29 to 53, so you can’t lay the blame at the feet of youthful exuberance.  Yet when you read their reflections on how they viewed the situation, you feel as if you are listening in on a group of teenagers for whom being part of the group, and having the group operate smoothly, is more important than anything else.

We place a high value on harmonious group behavior.  Remember the transportation regulators’ biggest complaint about Uber: they didn’t play by the rules; they were not being good sports.  Yet playing by the rules – whether the rules are literal regulations, the way things have been done in the past, or the even more forceful social norms that govern group dynamics – does not always yield the best outcome.  We might want to rethink what playing well with others looks like in the context of innovation.  Maybe a few squabbles and some sand-throwing are essential to taking the kind of risks that, even if you don’t succeed, ensure that you are around the next day to try again.

Perfect Harmony – Coca Cola Commercial


  • “Car-Hiring Apps in a Snarl,” Brian X. Chen, The New York Times, December 3, 2012
  • “Snow Fall: The Avalanche at Tunnel Creek,” John Branch, The New York Times, December 26, 2012


Fatal Allergies: Part 2

If we can’t avoid failure and mistakes, how can we use the fact that we will make mistakes and fail (and maybe even that we should make mistakes and fail) to our advantage?

Let’s start with some dictionary definitions:

Mistake – An error or a fault resulting from defective judgment, deficient knowledge, or carelessness.

Failure – The condition or fact of not achieving the desired end or ends.

Based on these definitions, it’s clear that a mistake is not necessarily a failure, although it is frequently a precursor to one.  However, mistakes and failures share an important characteristic: they can only be known after the fact (post hoc).  So when a mistake or failure is judged to have occurred, and who makes that judgment, is key.

Yet Schoemaker (whose framework for decision-making was described in Part 1 of this blog post) wants us to design mistakes and purposefully make them, so we can’t wait until the outcome to say that what we did was a mistake.  And he doesn’t want us to make just any kind of mistake.  He wants us to make brilliant ones.  Schoemaker asserts that there are four basic types of mistake, and one of them is the type we should take advantage of more often than we do.(1)

There are trivial mistakes, e.g., not leaving enough time to catch a plane or not putting enough money in the parking meter.  These are annoying, but not worth worrying about.  There are tragic mistakes, for which the cost is extremely high and there is little to no benefit, e.g., texting while driving and losing control of your car, or indulging in the pleasure of addictive drugs.  These are always to be avoided when possible.  Serious mistakes are not those you seek out, but if you have to go through them, you can in many instances turn lemons into lemonade.  Examples include losing your job and getting divorced; some might say getting married.  Brilliant mistakes are a close cousin to serious mistakes, but they are a different breed.  They are the mistakes that Schoemaker wants us to design for.
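The four types can be summarized as a compact cost/benefit table.  Here is a minimal sketch in Python; the labels are my paraphrase of the taxonomy above, not a table from Schoemaker’s book:

```python
# Rough cost/benefit profiles for the four mistake types discussed above.
# Labels are illustrative paraphrases, not Schoemaker's own terminology.
MISTAKE_TYPES = {
    # type:      (cost,        potential benefit)
    "trivial":   ("low",       "negligible"),      # missed the parking meter
    "tragic":    ("very high", "little to none"),  # texting while driving
    "serious":   ("high",      "salvageable"),     # lost job; lemons to lemonade
    "brilliant": ("bounded",   "outsized"),        # the kind to design for

}

for kind, (cost, benefit) in MISTAKE_TYPES.items():
    print(f"{kind:>9}: cost={cost}, benefit={benefit}")
```

The asymmetry in the last row – a bounded cost set against an outsized potential benefit – is what the rest of this post tries to engineer deliberately.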

A brilliant mistake has these characteristics:

  • It is an action whose expected utility or value is less than the expected utility or value of not taking action.  It’s an action that you believe at the outset is unlikely to pay off.  All of your previous knowledge and experience would encourage you to bet against a net positive gain from undertaking the action.
  • Something goes wrong, or has the potential to go wrong, far beyond the range of prior expectations.  The outcome of a brilliant mistake has to surprise us in some way.  It has to be so far from what we anticipated, or so difficult to fit within our current operating theory, that we literally sit up and take notice.  As a result, brilliant mistakes offer the possibility that insights will emerge whose benefits far exceed the cost of the original mistake.  Brilliant mistakes offer the potential for expanding the field of knowledge and accelerating learning.  They can cross the chasm, making a giant leap forward that results in a breakthrough.  This is why brilliant mistakes are so closely associated with fundamental innovation, what we in the business world call business model innovation.  [Of course, many mistakes could be called quasi-trivial or quasi-brilliant; the edges between types of mistakes are not clearly defined.]
  • Finally, a brilliant mistake occurs in a system with some slack so that even if most people are focused on supporting the status quo, a handful or more can slip free and do something different.  In most of the professional service organizations where I have worked, the little fits and starts of new ideas are often bemoaned as “hobbies.” An innovation process is supposed to cure the organization of its tendency to “indulge in hobbies.”  However, if organizations are too efficient and too effective, brilliant mistakes are virtually impossible to make.

If we want to design a brilliant mistake, where do we start?

There are two wells from which we can source brilliant mistakes:

  1. Defy conventional wisdom.
  2. Act at cross-purposes to our own views.

Start with the assumptions that guide how you approach business growth.  As I mentioned above, many professional service organizations want to root out and stop instances of people spending time on anything other than client-related work because it is viewed as a waste of time and resources.  A course that defies conventional wisdom might give everyone time to pursue a work-related “hobby,” treating these activities as a potential source of innovation.  We know that some organizations actively pursue this approach – most famously (today) Google, but not so long ago it was 3M.  Both set up a similar mistake-making pipeline, but came at it from different wells.  3M, in my mind, is the bellwether of defying conventional wisdom, while Google embodies acting at cross-purposes to its own views.

Defy conventional wisdom:  3M launched its 15 percent program in 1948. If it seems radical now, think of how it played as post-war America was suiting up and going to the office, with rigid hierarchies and crisply defined work and home roles. But it was also a logical next step for the company. All of its early years in the red taught 3M a key lesson: Innovate or die.  This is an ethos that the company has carried dutifully into the 21st century.(2)

Act at cross-purposes to your own ideas:  Google has taken measures to encourage outside interests, enacting the 70/20/10 rule: 70% of employees’ time goes to core work, 20% to “innovation time off” pursuing their own ideas that relate to Google, and 10% to things completely unrelated to Google.  This could be reading a book, drawing in Photoshop, or going to a museum.  In so doing, Google gains loyal employees who can purposely enrich their lives without Big Brother looking over their shoulder.  At the same time, the company stimulates innovative thinking.  Think about it: how many times have your best ideas about solving work-related problems come to you while you were doing something completely unrelated to work? (3)

Once we have an assumption we’d like to test, what else might we consider to help us determine if we might be poised to make a brilliant mistake?  Schoemaker offers several additional criteria to guide us:

  • Potential benefit relative to cost is high.  This point may seem obvious, but it’s important to state it explicitly because it overcomes the major argument against making deliberate mistakes:  why would you undertake an activity that you believe at the outset is most likely to fail?  Because over the long run and across a portfolio of mistakes, the potential benefit will be greater than the cost.
  • Decision is made frequently.  This is an interesting criterion relating to how much activity within the system flows from a particular assumption.  For example, most companies assume that they have accurate insight into the markets they serve and base their investments in new product development on this assumption. Many decisions flow from this assumption.  Decision frequency is the lever that creates the potential for a large ROI – the benefits can be very large relative to the cost.
  • The environment is in flux.  During periods of rapid change, mistakes are common currency – but most of them occur inadvertently.  Instability provides an opening for trying something different because it lowers the barriers to entry.
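The first two criteria can be made concrete with a back-of-the-envelope calculation.  The numbers below are illustrative assumptions of mine, not figures from Schoemaker’s book: a deliberate mistake carries a certain up-front cost and usually fails, but when it does yield an insight, that insight improves every one of the many routine decisions flowing from the tested assumption.

```python
# Back-of-the-envelope expected value of one deliberate mistake.
# All numbers are illustrative assumptions, not Schoemaker's data.
cost = 1.0          # normalized up-front cost of making the deliberate mistake
p_insight = 0.1     # assumed chance the surprise yields a usable insight
n_decisions = 500   # how often the tested assumption drives a routine decision
gain_each = 0.05    # assumed per-decision improvement if the insight lands

# The mistake looks like a loser in isolation (the cost is certain), but
# decision frequency is the lever: the insight's benefit is multiplied
# across every decision that flows from the corrected assumption.
expected_value = -cost + p_insight * n_decisions * gain_each
print(round(expected_value, 2))  # 1.5: positive despite the certain cost
```

Drop `n_decisions` to 50 and the expected value flips negative (-1 + 0.1 × 50 × 0.05 = -0.75), which is why frequently made decisions are singled out as the richest ground for deliberate mistakes.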

To illustrate these points and the next two, we turn to The New Yorker’s 2012 Fashion Week edition, which presented a story about how an enterprising individual used a rapidly changing marketplace to build a new kind of company.(4)

Back in 2000, a young Italian MBA graduate was captivated by the notion of exploiting a rapidly evolving marketplace – the Internet – and mashing it together with haute couture.  By integrating these two diametrically opposed experiences – the democratic, highly individualized experience of online shopping and the elitist, small, herd-like experience of high fashion – he hoped to unleash a new market for high-end fashion among people who did not physically show up at Fashion Week but were passionate about fashion.

This was the idea behind Yoox.com, which has spawned several new business models in the market for high-end fashion.  Yoox has made it possible to purchase haute couture as it debuts on the runway, at full price, through the design-house websites that Yoox operates.  Those who can’t afford full-price designer clothing can purchase “vintage” haute couture at deep discounts through the Yoox site itself (which ensures that sales of remaindered clothing don’t cannibalize the design houses’ in-season sales).

  • Experience base is limited.  The less you know about a new opportunity, theoretically, the more open you should be to different approaches.

The Yoox story also provides a window into how this attribute can contribute to opportunities for making brilliant mistakes.  Many of the design houses initially pooh-poohed the Yoox approach.  They firmly believed that haute couture had no place on the Internet, based on an assumption that people shopped online for bargains, looking for deeply discounted items on sites like eBay.  (Remember, this was back in 2003.)  However, at least one design house (Marni) was willing to experiment, acknowledging that it knew very little about the Internet.  Yoox provided the technology, the logistics, the ability to handle customs and currency conversion, and, perhaps even more importantly, knowledge about what product was selling where, which it plugged into its algorithm to help companies predict trends.  Now Marni, Armani, and Zegna are all powered by Yoox.

  • Problem is complex.  Finally, and not surprisingly, the more complex the problem, the more possibilities exist for solutions – and the more opportunities to actively make mistakes.  Yet in many fields, dogma takes hold surprisingly quickly (remember the great wrinkle that time poses for the problem of determining whether a decision outcome is good or bad) and with great tenacity, narrowing the boundaries for new ideas.  Certainly, this has been a challenge for the field of cancer research and treatment.

We wrap up our foray into failure and mistakes by teasing out one strand of Siddhartha Mukherjee’s comprehensive and compelling biography of cancer. (5)

Cancer is a big and growing health problem.  BIG:  In the US, 1 in 3 women and 1 in 2 men will develop cancer during their lifetimes. Of the 156 million women and 151 million men in the US, 128 million Americans now living will develop cancer.  Of the 2.4 million people in the US who die each year, 25% of them die of cancer.  GROWING:  As we live longer, the odds of genetic mutations that result in cancer increase.  It’s a trade-off – as life span increases, so does the incidence of cancer in the population.
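The 128 million figure follows from simple arithmetic on the population and lifetime-risk numbers just cited; a quick check:

```python
# Reconciling the cited statistics: a 1-in-3 lifetime risk for women and
# 1-in-2 for men, applied to the US population figures given in the text.
women, men = 156e6, 151e6  # US women and men, from the text

expected_cases = women * (1 / 3) + men * (1 / 2)  # 52M + 75.5M
print(round(expected_cases / 1e6, 1))  # 127.5, i.e. the "128 million" cited
```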

Cell division is the source of life – the process by which human beings “grow, adapt, recover, and repair.”  It is also the cause of cancer, because cancer cells do all of these things better than normal cells; they have achieved the chimera of eternal youth – “they are more perfect versions of ourselves.”  But the perfection of cancer is twinned with the destruction of its human host.  Ultimately, from our human point of view, failure is encoded in the biology of growth – inseparable from it.

The awareness of cancer extends far back in human history, almost 4,000 years, to when evidence of a disease that was most likely what we understand as cancer was first documented.  An Egyptian document from 2500 BC appears to describe a tumor of the breast.  The history of trying to understand the mechanisms by which cancer occurs, however, begins almost 1,000 years later, when the Greeks undertook to explain bodily functions in terms of fluids, building on and extrapolating from their knowledge of hydraulics (fluid mechanics).  And yet, since cancer is an age-related disease, until life expectancy began to increase, other diseases (smallpox, tuberculosis, the plague, cholera, etc.) blanketed the historical record, and mention of cancer is harder to find.

It is impossible to recount the long history of discoveries and theories that populate the cancer research and treatment roadmap, but the story of radical surgery embodies the hallmarks of the mistakes that have propelled the field forward and held it back at the same time – the brilliant and the serious mistakes of cancer research and treatment.

Removing cancerous tumors by cutting them out was practiced millennia ago.  But the obstacles that plagued surgery had to be overcome before the benefits of extirpating tumors could be seriously explored.  It wasn’t until 1846 that pain was separated from surgical procedures via ether-induced anesthesia (William T.G. Morton is credited with this innovation).  In 1867, Joseph Lister introduced the use of carbolic acid, an antibacterial chemical, to promote antiseptic surgery, reducing the post-surgical complication of infection.  These two advances were the primary drivers that freed surgeons to conceive the notion of not just removing cancerous tumors, but curing cancer through surgery.  And no one epitomized this approach more than William Halsted, who around the turn of the twentieth century pioneered and became associated with the radical mastectomy as a cure for breast cancer.

Radical mastectomy involves removing the breast, the chest muscles, and all of the lymph nodes under the arm.  Halsted believed that this approach, while disfiguring, would eradicate cancer from the body – an assumption that ultimately proved to be wrong in most cases.  His approach was based on a theory that cancer spread through the body via a kind of centrifugal force that spun metastases outward along a spiral path from the original site, so it made sense to continually widen the surgical scope in seeking a cure.  It took almost 100 years for another approach to replace the radical mastectomy as the dominant surgical treatment of breast cancer.

In the 1930s a physician named Geoffrey Keynes combined radiation and limited surgical excision to treat breast cancer.  While his results were as good as those achieved by practitioners of radical mastectomy, his approach was derided and sneeringly labeled “lumpectomy” – a put-down in surgical terms, implying that the approach was crude, taking out a mere “lump” of tissue.  [Similarly, when the term “junk” was applied to the non-coding DNA mentioned in Part 1 of this post, it had the effect of pushing research in this area to the far edges of scientific inquiry.]  It wasn’t until the 1950s that another surgeon, George Crile Jr., reconsidered the lumpectomy based on a different theory of cancer metastasis.  Crile proposed that for many breast cancers, the metastases spread not in an orderly spiral path but in a chaotic, unpredictable fashion to far-flung parts of the body, rendering a surgical procedure that extirpated tissue near the original site ineffective.  But it took another 30 years, during which physicians were persuaded to enroll patients in trials that would provide enough data to apply statistical tests of validity, before yet another surgeon, Bernard Fisher, was able to demonstrate that “…[t]he rates of breast cancer recurrence, relapse, death and distant cancer metastasis were statistically identical among…[three treatment options – radical mastectomy, simple mastectomy, simple mastectomy followed by radiation].”  It took nearly 100 years to render an accurate judgment on radical mastectomy as the correct approach for curing breast cancer: mistake.

You might think that the cancer field – oncologists, surgeons, and radiologists – would take this lesson to heart and seek to avoid the path of devastation and delay that accompanied the passion for radical surgery as a cure for cancer.  But just as radical surgery was winding down, radical chemotherapy replaced it as the favored approach for curing cancer (sometimes combined with radiation).  Only recently has a new belief set emerged that seeks not to “cure” cancer but to manage it as a chronic disease.  Today, many are looking to a changing array of targeted pharmaceuticals to manage cancer by inducing the body to isolate the malfunctioning cells and destroy them, the way the immune system takes care of other foreign and dangerous invaders.

This approach represents a radical break from two centuries of thinking about cancer.  Cancer, a genetic “mistake” that leads to system failure, is now understood as inseparable from the biology of growth.  Trying to eradicate it is increasingly viewed as fruitless, but effectively managing it is seen as possible.

This is the same sort of radical perspective that I believe Schoemaker wishes for us to adopt with respect to mistakes.  He wishes for us to understand that mistakes and failure are inseparable from individual and organizational growth.  Rather than seeking to eradicate them, Schoemaker urges us to learn how to push past the unpleasant confrontation with human limitation and fallibility that making mistakes brings about and find ways to manage mistakes so that, on balance, we gain more than we lose from our inevitable lot as human beings, the makers of mistakes.

“If a few mistakes can be good, wouldn’t a few more be even better?” – Paul Schoemaker



(1)    Paul Schoemaker, Brilliant Mistakes: Finding Success on the Far Side of Failure (Philadelphia:  The Wharton Digital Press, 2011)

(2)    Sourced on 9/12/12 at http://www.fastcodesign.com/1663137/how-3m-gave-everyone-days-off-and-created-an-innovation-dynamo

(3)    Sourced on 9/12/12 at http://99u.com/tips/5766/Encourage-Daylighting

(4)    John Seabrook, “The Geek of Chic,” The New Yorker, September 10, 2012

(5)    Siddhartha Mukherjee, The Emperor of All Maladies: A Biography of Cancer (New York: Scribner, 2010)

These blog posts were originally delivered as a presentation to The Learning Forum’s Knowledge Council in September 2012.