Gusty Conditions

In my previous post, I made two claims:

  1. All companies are going to have to do more than create high-utility products and services; they will also have to deliver positive social impact through their core business.
  2. Corporate employees can be “taught” something important about social impact by social entrepreneurs.

In this post, I’d like to discuss why I believe these claims are valid. The basis for this belief rests on shifts in 1) the expectations that individuals and society have about the purpose of corporations (i.e., their license to operate) and 2) perhaps as a direct result of these changing expectations, the way that some corporations are beginning to view their relationship to society and their role in the economy.

In early July 2014, David Brooks, a New York Times Op-Ed columnist, wrote a piece on the sharing economy and the evolution of social trust. Brooks suggested that “…[t]here is a new trust calculus, powered by both social and economic forces.  Socially, we have large numbers of people living loose unstructured lives…” By “loose, unstructured lives,” Brooks means that many more people are operating outside the confines of large institutional structures for longer periods of time.

As any outsider can attest, the view from the outside is different from the one on the inside. Add to the fact that more people are on the outside looking in such disturbances as growing economic polarization, generalized global political unrest, and the inability of governments to mount effective responses to these and other large, systemic problems. It should come as no surprise that there is growing distrust of large institutions that seem to exist solely to perpetuate their own existence and enrich a lucky few.

Much of this sourness is directed at corporations. To many people, despite all the talk about the importance of their employees, companies appear profoundly unconcerned about them. Employees are treated as just another means of production that needs to be “right-sized” from time to time. Some companies even appear to be unconcerned about customers (as any telecommunications customer can attest).

In this climate, people have increasingly come to value the ability to negotiate directly with other people, an ability that technology has facilitated at low cost. The impersonal culture of big institutions now sits alongside what Brooks calls a “personalistic” culture of peer-to-peer relationships.

Airbnb and Uber might be the two most visible and contentious companies of the sharing economy. They are causing headaches not only for the hospitality and car service industries, but also for city governments and other regulators, because the operating model for these companies is a decentralized network that doesn’t easily conform to the structures and processes of centralized agencies and regulatory regimes. But these sharing-economy pioneers seem not only to care about people – both customers and employees – they also “…seem inclined to compromise and play nice with city governments.  They’re trying to establish reputations as good citizens…” They are changing the rules of the game and maybe even the game itself.

The sharing economy’s blend of peer-to-peer relationships – combining technology-enabled virtual experiences with face-to-face ones (ultimately, you show up at the Airbnb and meet the host and other guests, and you get into an Uber car and meet your driver) – and a corporate ethos of good citizenship is bleeding over into the traditional economy. Increasingly, even the most hidebound institutions are feeling the pinch of the “personalistic” culture.

A New Yorker article published in July 2014 tells the story of the enormous difficulties facing parents whose children are presumed to be the ONLY ones with a particular medical condition. But there is a story behind this story, one about the “personalistic” culture that Brooks describes and its demand that institutions deliver social impact. “One of a Kind” tells how Bertrand Might’s parents, Matt and Cristina, tried to discover what was wrong with their infant son. The Mights traveled from medical institution to medical institution, only to have one diagnosis after another proven false and to watch the medical practitioners lose interest when there was nothing more they could do.

When Bertrand turned seven, a DNA sequencing study that had started two years earlier concluded that his severe disorder was caused by an inherited genetic mutation. However, “…without additional cases, there was virtually no possibility of getting a pharmaceutical company to investigate the disorder, no chance of drug trials, no way to even persuade the F.D.A. to allow Bertrand to try off-label drugs that might be beneficial.” Unless, somehow, more patients emerged. While the medical researchers theoretically could have shared their data with other medical researchers to find these patients, there were many institutional and professional disincentives that made it unattractive for them to do so. Primary among them – not getting exclusive credit for any forthcoming discovery.

So, the Mights decided that they would take to social networks and find others with the genetic defect themselves. Matt, a professor of computer science, had developed an online following among both computer programmers and the more general online community, based on a very popular post that he had written a few years earlier. He decided to harness this visibility on behalf of his son. His post, “Hunting Down My Son’s Killer,” became the top story on Reddit less than 24 hours after it was posted. It went viral, and its search engine ranking climbed, making it easier to find.

Within 13 months of making the post, the Mights had identified nine more people with the same genetic mutation and medical problems as their son. But they have done even more. Working with another family whose child has the genetic disorder, they have pushed the clinicians and researchers focused on this disease to pool their knowledge and collaborate on a clear clinical report on the disorder. The paper has been published. One of the researchers commented on the process: “It’s a kind of shift in the scientific world that we have to recognize – that, in this day of social media, dedicated, educated, and well-informed families have the ability to make a huge impact….Gone are the days when we could just say, ‘We’re a cloistered community of researchers, and we alone know how to do this.’” Gone are the days when institutions are free to serve themselves without sufficient consideration of how their core processes impact real people.

As if on cue, McKinsey recently released the results of a global survey on “Sustainability’s strategic worth.” The results note a marked shift from two years ago, when the major reasons executives gave for engaging in sustainability (which covers environmental as well as social impact) were managing reputation risk and increasing operational efficiency through lower costs. In 2014, executives were far more likely to say that they want to align their sustainability initiatives with their business goals, mission, or values. Rather surprisingly, nearly 50% of the CEOs said that sustainability is one of their top three strategic priorities, and 13% said it was their most important priority.  At the same time, capturing the business value from sustainability initiatives is difficult “…because the more that companies prioritize sustainability, the more it needs to be integrated into (and even change) the core business.”  Nothing quite like having a McKinsey report to support your beliefs.

In my next post, I will document how these forces – the “personalistic” culture and its demand that corporations demonstrate, in the way they run their businesses, that they are worthy of social trust – are playing out in media coverage of corporate actions, coverage that once would have focused narrowly on the economic returns that companies deliver to their shareholders. The winds of change are blowing, becoming gusty at times.

Sources:

  • “The Evolution of Trust,” David Brooks, The New York Times, Op-Ed column, July 2, 2014
  • “One of a Kind,” Seth Mnookin, The New Yorker, July 21, 2014
  • “Sustainability’s strategic worth,” McKinsey Global Survey, Sheila Bonini and Anne-Titia Bové, McKinsey & Company, 2014

On Merits Alone

When we are confronted with something new – whether it is something as concrete as a new haircut or something as abstract as a new idea – most of us believe that we can provide an objective response.  We believe that we can evaluate something new and different based on its merits alone to determine if it is good or bad, worth pursuing or rejecting.  However, both nature and nurture suggest otherwise.

Industrial designers know how to exploit the way human beings are wired to experience their surroundings, eliciting reactions that are based not on the actual physical characteristics of what is encountered but on subtle environmental cues. Cabin design for business class air travel is a perfect example of how designers manipulate our all too human senses so that those who shell out the big bucks for luxury air travel feel they are getting what they pay for.

Alternating upholstery tones on the seats in a business class cabin creates “a pattern that causes the brain to register less than the entire expanse.”(1) The checkerboard effect prevents people from being able to perceive the whole. As a result, business class passengers entering the cabin do not become overwhelmed by a cascade of seats. Contrast that with the phenomenon of entering the economy or coach class cabin – the deflating vista of tightly packed row upon row of identical seats and the immediate claustrophobia it induces.

Designers use another trick to distort reality in business class cabins whose seats open up to become fully flat beds. While most passengers in these cabins sit facing forward, they sleep on a diagonal, “an innovation that makes it possible to create what looks like a first-class experience in a significantly smaller space.”(1) These passengers feel that they have more room than they do because every centimeter of space is designed to “de-crowd” their experience. How much space they actually have at their disposal is irrelevant. In fact, most of them would be shocked to learn how little it really is, even in luxury class.

If we were asked to describe the business class cabin compared to coach, we would most likely call it “roomy” and “private.” In reality it is not particularly roomy or private. But we wouldn’t really be able to tell. Even if we were industrial designers ourselves and could appreciate how we were being manipulated, we would still be manipulated. It’s just the way we are made. It’s our nature.

Over the past decade or so, it’s become increasingly accepted in the business world that human nature affects our decisions and actions at work. We are wired to respond to risks in ways that are divorced from reality. We are likely to take action when told there is a 90% chance of success but will avoid the same action framed as a 10% chance of failure like the plague. If asked to place an economic value on something that is completely outside our expertise, we come up with numbers that are anchored to whatever numbers are floating around in our heads from our most recent experiences. We shut down disconfirming points of view under all sorts of pretexts – the person expressing them is not a team player or is simply obnoxious.  Even if these observations are true, they serve to keep us from having to absorb unsettling information. Behavioral economics has emerged as a field of inquiry because traditional economic theory, with its assumption of rational decision-making aimed at maximizing utility, fails to explain the way people really seem to go about making economic decisions.
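The strange part is that the two frames describe arithmetically identical gambles. A minimal sketch of the arithmetic (the payoff numbers are invented for illustration):

```python
# The two frames describe the same gamble; only the wording differs.
# All numbers are invented for illustration.
p_success = 0.90
p_failure = 1 - p_success          # the "10% chance of failure" frame

gain_if_success = 1_000
loss_if_failure = -1_000

ev_success_frame = p_success * gain_if_success + p_failure * loss_if_failure
ev_failure_frame = (1 - p_failure) * gain_if_success + p_failure * loss_if_failure

# Arithmetically identical, yet the failure frame reliably scares us off.
assert ev_success_frame == ev_failure_frame
print(f"expected value under either frame: {ev_success_frame:.0f}")
```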

It’s also becoming increasingly clear that nurture, the lives we lead, profoundly shapes how we evaluate new things, even in the world of work. I remember that when I was first entering the work world, one of the big messages about how to conduct yourself was “leave your personal life at home.” “It’s just business” is still a catch phrase that is often used to explain away decisions that deeply impact others on not just a professional, but also a personal level. When I was younger, I struggled with an inability to completely inhabit this impersonal, highly rational business self. When I was upset, I would find myself crying in the women’s bathroom; quietly, of course, but crying nonetheless. This was NOT something that you were supposed to do in business. And perhaps because the way that men typically channel their frustrations – bluster and bravado – was considered businesslike, it wasn’t clear to me then that no one was really leaving their true self or their personal life at home.

Recently reported research from a team of business school professors at Wharton and Temple University examined how the marital status of 1,500 CEOs affected the riskiness of their decisions and actions. The researchers looked at CEO decisions such as capital expenditures, innovation, R&D, and acquisitions, and used their companies’ stock return volatility as a market-based measure of enterprise risk.

“…we find that there is a still sizable difference — about 10% greater investment [in risky activities] by firms led by single CEOs compared to firms run by CEOs who are married. And differences in stock return volatility are also quite substantial… Managerial decisions are affected by what is happening in those individual’s personal lives in ways that most of our views of business decision making do not account for.”(2)

Apparently nobody leaves his or her personal life at home – not even the CEO.  How we live our lives affects the way we perceive and respond to our options.  This happens without any conscious awareness on our part. However, we act as if this were not true. We act as if we were dispassionate decision-makers who respond to the new and different without bias. The evidence is mounting that this is a lie we tell ourselves to shut down the discomfort we experience when confronted with something new and different. Instead of sitting in that discomfort with an understanding that both nature and nurture are doing their best to maintain the status quo, we react as quickly as possible to keep the new and different at bay. Perhaps we should take a page from the comedian Louis C.K., whose approach to developing material is all about unease.

“You’ve got to embrace discomfort. It’s the only way you can put yourself in situations where you can learn, and the only way you can keep your senses fresh once you’re there.”(3)

Nature and nurture could be our best friends when it comes to the new and different. But only if we can learn to resist the urge to get back to what feels safe and hang in there with discomfort long enough to have a shot at evaluating whatever it is that we haven’t experienced before on its merits.

Sources:

(1) “Game of Thrones,” David Owen, The New Yorker, April 21, 2014

(2) “Risk and the Unmarried CEO,” sourced on 5/21/14 at https://knowledge.wharton.upenn.edu/article/risk-single-ceo/, based on “Marriage and Managers’ Attitudes to Risk,” Nikolai Roussanov and Pavel G. Savor.

(3) Quote sourced on 5/25/14 at http://www.huffingtonpost.com/2014/04/22/louis-ck-gq-cover-story-embracing-discomfort-photo_n_5191708.html

The Drudgery of Discovery

Science:  Knowledge or a system of knowledge covering general truths or the operation of general laws especially as obtained and tested through the scientific method (principles and procedures for the systematic pursuit of knowledge involving the recognition and formulation of a problem, the collection of data through observation and experiment, and the formulation and testing of hypotheses) (1).

What is more important in advancing discovery – the science of discovery or the discoveries themselves?

Of course, both are important. Discoveries are newsworthy and sometimes earth-shattering. They are sexy stuff. Science creates a repeatable path towards future discoveries. Science, especially its reliance on the scientific method, is decidedly unsexy. Its end point (a discovery) might be sexy, but the process, for the most part, is not.

I recently finished reading a book about decrypting an ancient script known as Linear B. While “The Riddle of the Labyrinth” is an excellent account of cracking the code that unlocked Linear B, the author, Margalit Fox, also seeks to restore credit for this achievement to a woman whose arduous and lengthy efforts created the science of discovery that made it possible to decipher this script and the spoken Mycenaean language it recorded (an early variant of Greek from the Bronze Age).

Linear B was discovered on tablets at the turn of the 20th century by the English archaeologist Arthur Evans. The tablets, from around 1450 BC, represent the earliest known European writing, predating the Greek alphabet (which, before Linear B, was believed to be the first European writing) by about 700 years.  If history comes into being with the written record, the Linear B tablets transformed a period that had been considered pre-history into history.

Writing systems are less common than most of us think. Spoken language can exist without them (the author notes that of the estimated 6,000 languages spoken today, only 15% are believed to have written forms). In ancient times, writing systems appear to have been even rarer than they are today.  Linear B is a syllabic writing system, one in which symbols stand for syllables (like Japanese kana). There are two other types of writing system: logographic, in which a symbol stands for a concept (like Chinese), and alphabetic, in which symbols stand for specific sounds (like English).

Just how difficult was it to decrypt Linear B?

When attempting to read a script, a reader can find herself in one of four possible situations:

  • Known language, known script: immediately readable
  • Known language, unknown script: sounds cannot be attached to the symbols (Rongorongo)
  • Unknown language, known script: sounds can be read but not parsed into meaning (Etruscan)
  • Unknown language, unknown script: neither sounds nor meaning are accessible (Linear B)

A known language in a known script, such as the text you are reading right now, is immediately intelligible – no deciphering is needed. However, when one unknown is introduced into the picture, everything changes, making decipherment extremely challenging. The author cites two cases that to date have not been resolved.  Rongorongo, a script believed to have recorded a Polynesian language that is still spoken on Easter Island, fell into disuse; even though the language is known, it is not possible to associate sounds with the symbols. Etruscan, a non-Indo-European language of ancient Italy, has a script that survives and can be read (it is based on the Greek alphabet). However, without an understanding of word breaks and grammar, the string of sounds cannot be parsed into meaning.

If just one unknown can render some decipherments impossible, two seems like a locked box of impossibility. However, by creating a science of graphics – painstakingly inventing a framework to uncover the hidden rules of Linear B’s grammar, syntax, and structure – Alice Kober made it possible to unlock the language. By rejecting ALL assumptions, she avoided the trap of circular logic that had stymied previous decoding attempts. Others made starting assumptions that led them to what turned out to be false conclusions, dead ends. Had she not died, from what many assume was cancer, at the age of 43, Kober might have been able to complete her life’s work.
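Kober’s framework was, in essence, an algorithm executed by hand on index cards: catalog every word, group words that share a beginning, and treat recurring variations in the endings as evidence of inflection. A minimal sketch of that idea in Python, using invented symbol sequences rather than real Linear B signs:

```python
from collections import defaultdict

# Toy corpus: each "word" is a tuple of numeric IDs standing in for
# undeciphered signs. These sequences are invented for illustration.
words = [
    (1, 2, 3), (1, 2, 4), (1, 2, 5),   # one candidate paradigm
    (6, 7, 3), (6, 7, 4), (6, 7, 5),   # another with the same endings
    (8, 9, 10),                        # a word with no apparent relatives
]

# Group words by their "stem" (everything but the final sign) and
# collect the final signs observed after each stem.
endings_by_stem = defaultdict(set)
for word in words:
    stem, ending = word[:-1], word[-1]
    endings_by_stem[stem].add(ending)

# A stem that recurs with several different endings suggests inflection:
# the same underlying word changing its grammatical form.
for stem, endings in sorted(endings_by_stem.items()):
    if len(endings) > 1:
        print(f"stem {stem} appears with endings {sorted(endings)}")

# Different stems sharing the same set of endings (as (1, 2) and (6, 7)
# do here) are evidence that the script marks grammar -- the kind of
# regularity Kober tabulated entirely by hand.
```

Her famous “triplets” were groupings of exactly this kind, found without assuming anything about which language the signs recorded.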

There are many themes that lace the story Fox tells, but three stand out:

Sexism:  In the 1930s and ’40s, when Alice Kober was conducting her research, the prevailing culture of sexism made it all too easy to diminish the accomplishments and contributions of a rather plain-looking and self-effacing middle-aged woman who had no time to be bothered with social niceties. The Alice Kober described in this book does not seem all that likable (or interested in being likable). She comes across as a brilliant obsessive who was denied a seat at the table precisely because she was a woman. When she was being considered for an associate professorship at the University of Pennsylvania in the 1940s, women were not deemed viable candidates for such positions by men. What makes for painful reading, though, is to be reminded that at that time not even women thought that women should hold such positions.

Hero Worship: The competitive nature of discovery operates even in a field most of us wouldn’t give a second thought – early history. When only one person will ultimately get credit for a discovery even though many others have made the “ah-ha” moment possible, knowledge hoarding is a reasonable position to take, even if pooling knowledge would advance the discovery. A corollary to this theme is the cultural obsession (which seems to span many cultures) with a “hero” – the person (usually a man) who ultimately solves the problem that many others have been working on for a long, long time. The credit for cracking the code of Linear B was entirely ascribed to Michael Ventris, whose solution, the author makes plain, relied on at least three ground-breaking insights that came from Alice Kober but were never credited to her.

The Ends versus the Means: The relegation of methodology, or science, to a lower status than discovery itself. Alice Kober spent the better part of 15 years building a framework that did not presuppose anything about the language she was attempting to understand. Even though the tablets on which the language was inscribed were found on the island of Crete in the purported remains of the palace of Minos at Knossos, Kober did not assume that the language was Minoan. She did not assume that it was a remnant of Etruscan, the “lost” language of a civilization that preceded the Roman civilization. She did not assume that it was logographic (like Chinese) even though many of the symbols made it tempting to do so. She painstakingly constructed a methodology for discovery – a science of graphics – which integrated rules and basic theories of how languages work to allow the contours of the language – its grammar, syntax, and sound – to emerge.

Kober died before she could crack the code, and it isn’t certain, even had she lived, that she would have been the one to decipher Linear B. However, the fact that a drab and discounted woman pursued the drab and discounted side of discovery – the science side – has consigned her to the ranks of unsung heroes. I am in awe of the tremendous intellectual and emotional conviction required to let the process work, resisting the impulse to make assumptions and trusting that the truth will out. While it is easier today to source and mine data than it was for Kober, who had to do it all by hand using an intricate paper-based system, it is no easier to tease meaning out of data. Someone still has to construct a framework that makes meaning out of masses of information. And someone has to be fearless enough to look at the results without blinders and grasp their implications. Someone has to be willing to pursue the unsexy, but necessary, task of inventing a science.

 

(1) Source:  http://www.merriam-webster.com/dictionary/science?show=0&t=1396526463

Boo!

Just in time for Halloween comes scary innovation news from Singapore and the U.S. National Funeral Directors Association – an open innovation competition called Design for Death.

Jae Rhim Lee wearing the Mushroom Death Suit

Even in industries with processes that would appear to be totally at odds with change of any sort, there is a push to move outside the comfort zone and imagine possibilities for the future (including the afterlife).  Burial practices are for the most part dictated by religious ritual. They would likely win the competition for process least likely to change and least likely to attract those outside the profession to participate in an innovation competition, however open.  And it’s not as if there is concern about the market for services drying up. As the old saying, attributed to Ben Franklin, goes: “there are only two certainties in life – death and taxes.”  But the industry is in its mature phase – the critical inflection point at which transformation occurs from within or without or both. Looked at this way, an open innovation competition like Design for Death might even be entirely predictable.

First and foremost in shifting the framework for thinking about death is a language change. While it may seem exasperating (even I sighed when I read the new term of art), the funeral industry is rebranding itself as the deathcare industry. (Microsoft Word highlights this term with a red underscore because it is not standard English…yet.) Deathcare shifts the focus from a narrow one of how we dispose of bodies (funerals and burials) to a more expansive one of how we acknowledge death and incorporate its presence in life.

Design for Death is the first in a series of challenges co-sponsored by the Lien Foundation, a Singapore-based philanthropy whose mission is to stimulate and spark high-impact idea exchange, high-intensity collaboration, and high-end value creation by leveraging the people, private, and public sectors around three issues (eldercare, water and sanitation, and early education), and ACM, a philanthropy established by the founder of a Singapore-based casket company to uplift the deathcare profession. The next competitions focus on two of death’s many prequels – hospice care and community arts engagement in hospitals.

What is particularly interesting about the winning submissions to the competition is that:

  • All of the idea submitters are young – the oldest is 37 and the youngest is 24.
  • They do not work in the deathcare industry.
  • Some of them are not even designers.
  • None of them come even close to what you would call “experts.”

As Christine Pepper, the National Funeral Directors Association’s Executive Director, notes:

The many entries we received from designers around the world show that innovation in deathcare doesn’t have to come from funeral directors. Ideas for how families honor and remember their loved ones can come from anyone and anywhere. The ideas and innovations presented by the designers who participated in this contest bring fresh perspectives to our profession and challenge funeral directors to think about the services and products they offer to families in new ways.

The winning ideas (and even some of the entries that did not win) are arresting, poetic, thoughtful, and novel. I encourage you to visit the website to see all the entries, but I note two that I really liked here:

  1. “I wish to be rain” which transfers cremated ashes to the troposphere via a balloon that seeds clouds which ultimately release precipitation.
  2. “Mushroom death suit” which uses the properties of mushrooms to decompose the body and partially remediate toxins that are released during decomposition.

And, lest you think that there is no way to measure the impact of such innovation on how society handles death, the Lien Foundation partnered with the Economist Intelligence Unit to conduct research and create a Quality of Death index that ranks 40 countries on their provision of end-of-life care. The UK ranks #1, and the US is tied with Canada at #9. You can visit this site to begin your death-venture (I kid you not). For most of us, this is the ultimate haunted house.

Happy Halloween!

Difficulty and Doubt

“It’s just too hard to get things done around here.”

A classic expression of frustration heard in almost any organization, but especially in large ones. Because we labor under the assumption that important activities should not be difficult to accomplish, the standard management reaction is to make things easier. When new priorities must be merged with existing ones, it seems even more important to design processes that outmaneuver the glitches that cause tempers to flare. We strive to create friction-free processes that are repeatable, reliable, and consistent. However, removing friction might be exactly the wrong thing to do if it’s important to learn and innovate.

Much of the impetus for the wave of interest in managing innovation comes from a belief that we have so little of it because the system makes it too hard. It’s true that in most organizations, processes and resources are focused almost exclusively on sustaining existing activities. With organizations running so lean, there is very little bandwidth available to take on something new.  It makes sense to assume that if we can remove snags and roadblocks from an innovation process, then we can free up innovators and increase innovation. But what if that’s not so? What if innovation requires friction?

Friction is the proverbial “when the going gets tough” situation.  When circumstances are daunting rather than accommodating, friction forces acknowledgement of:

  • Doubt, the limits of knowledge, even the limits of our ability to know
  • The possibility of failure, and consideration of how to persevere should it be encountered
  • The probability that progress will be incremental and fitful, requiring many adaptations

Friction provides the opportunity to practice making difficult decisions and dealing with the consequences – this is the essence of learning and is a prerequisite for innovation.  When we talk about managing innovation in organizations, we are really talking about establishing habits of being curious and open to learning.  However, it seems to me that we misdiagnose the difficulty involved with innovation as a problem that should be solved by making things easier.  In fact, without difficulty, we reduce the potential for doubt, failure, and the need to revisit assumptions over and over again.  We undermine the potential for innovation.

At the same time I was reading Malcolm Gladwell’s review of Albert O. Hirschman’s theories, on which the preceding musings are based (1), I came across a story posted on fastcompany.com that gave life to Hirschman’s belief about the importance of difficulty and doubt for progress to occur in any system (2). The fastcompany.com story was about 17-year-old Easton LaChappelle, who faced difficulties (lack of knowledge, skills, funding, and institutional support) that should have prevented him from accomplishing what he set out to do (reinvent conventional prostheses).

But, as Hirschman might have predicted, LaChappelle’s difficulties proved essential to his innovation. LaChappelle had to teach himself “electronics, coding, how to use a 3-D printer,” and most importantly how to do all of it on the cheap from his bedroom.  He figured out what he needed to know and how to learn it, what materials he absolutely needed on hand and how to get them, how to confirm market demand (via a Kickstarter campaign), and how to use every available opportunity to tell people his story about making prosthetics affordable.

LaChappelle’s goal was to rethink the $80,000 prosthetic arm (a price tag that doesn’t include the costs of the surgical procedures needed to use it) and find a way to make an affordable one. He ultimately created a $400 robotic arm that can be used by amputees without surgical preparation.  Think of it as an iPhone equivalent – if you need to upgrade in a few years, it’s not out of the question.  Contrast that with the $80K arm and the technology that you are stuck with for a long time.

Now, of course, LaChappelle in many ways is not your average teenage kid.  But he COULD be.  He has a curious mind whose dictates he follows, and he is flexible in how and where he learns. These behaviors are not unique to LaChappelle.  Most little kids exhibit them all the time.  These are the habits of innovation, and they are rarely inhibited by the kinds of difficulties that we routinely decry as the obstacles to innovation in our organizations.  But, in the same way that we squash these habits in kids by running them through the education system, we squash them in organizations by running innovation through the exercise of the business case analysis.

I am not opposed to the basic concept of the business case analysis.  A systematic thinking through of goals, constraints, resources, and potential scenarios is time well spent.  However, the hallmark of business case analysis is the degree to which we ask people with new ideas to express an extremely high conviction that they have been able to squeeze all doubt out of the picture. We do this by identifying potential roadblocks or other surprises and describing how they will be managed. To pass the business case test, we pretend that the surprises are risks rather than uncertainties (for the difference between the two, see (3) below under Sources) and that we are skillful enough to map out strategies for avoiding or minimizing them.
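To make the risk/uncertainty distinction concrete, here is a toy sketch of my own (it is not from any of the sources below): under risk the odds can be computed before acting, while under uncertainty the distribution itself is hidden and reveals itself only through action.

```python
import random

random.seed(7)

# Risk: the distribution is known in advance, so the expected value
# of acting can be computed before any action is taken.
p_success, payoff, cost = 0.9, 100, 50
expected_value = p_success * payoff - cost
print(f"risk: expected value computable up front = {expected_value}")

# Uncertainty: the true success rate is hidden. All we can do is act,
# observe, and update; no distribution is available to plug in beforehand.
hidden_rate = random.random()  # unknown to the decision-maker
trials = [random.random() < hidden_rate for _ in range(20)]
estimate = sum(trials) / len(trials)
print(f"uncertainty: estimated success rate after 20 trials = {estimate:.2f}")
```

A business case that treats every surprise as a known-odds risk is, in effect, pretending the second situation is the first.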

However, doubt, like difficulty, is critical for innovation.  Doubt is the handmaiden of curiosity.  If you believe that you know it all and you are certain of what you know, there isn’t much to be curious about and there isn’t much new to be learned.  Doubt seems ever more important when situations are complex and uncertain, which is increasingly the case in all domains of contemporary life.  Hirschman’s ideas snapped me out of my blathering on about “making innovation simple and easy.”  Because, really, how can that be?  How can innovation be anything other than replete with difficulty and doubt – the two sources of friction that create the conditions for something new to suggest itself?

Sources:

(1) “The Gift of Doubt: Albert O. Hirschman and the power of failure,” Malcolm Gladwell, The New Yorker, June 24, 2013

(2) “Meet the 17-year-old who Created a Brain-Powered Prosthetic Arm,” Liz Presson, FastCompany.com, sourced on 8/26/13.

(3)  Risk: We don’t know what is going to happen next, but we do know what the distribution looks like.  Uncertainty: We don’t know what is going to happen next, and we do not know what the possible distribution looks like.  Attributed to Michael Mauboussin, sourced on 8/16/13 at The Big Picture.

Easton LaChappelle’s TEDxMileHigh Talk

 

The Curious Observer

When I’m asked to enumerate best practices for idea management or group decision-making or team-building or any activity that involves people trying to do something as a group, somewhere on my list is the practice of making sure that somebody (or somebodies) is an observer. In idea management, which typically occurs online using some kind of collaboration software, that somebody is called the moderator. For in-person groups (or mostly in-person – sometimes people are connected via video conference, but they see one another and interact in real time), the term of art is facilitator. While everyone acknowledges that this role is important, in idea management it often goes to a more junior person or an administrative type; i.e., it’s important, but not that important.  For in-person groups, the facilitator is often a more experienced (i.e., “older”) person or someone who has had group facilitation training. However, facilitators are rarely required to understand the content of the discussions or decisions that the group makes. They are expected to guide a process. The same holds true for online moderators.

However, when observers, whether facilitators or moderators, lack familiarity with the substance of the group’s discussions and decisions – not expertise, but just enough understanding to be dangerous – I believe that the group markedly diminishes its potential for innovation, the truly different way of figuring out how to move forward or solve a very persistent, complex problem. I believe that groups need curious observers. Curious observers play an essential role in discovery – the pivotal moment in all innovation, perhaps the true “Eureka” moment – because creating something and recognizing that it might be important in some way rarely occur at the same time. In our idealization of innovation, we tell stories that merge creators and discoverers into one person who has one blinding flash of insight. But more often than not, there are many insights along the way, some of which are discovered by curious observers.

Case in point, from a recent New Yorker story about fungus (1): mushroom fungus, or polypore mycelium to be specific. (Stay with me on this one!)

Two seniors at Rensselaer Polytechnic Institute (RPI) were beavering away at a class project for an Inventors Studio class, which is exactly what it sounds like – a class devoted to guiding students through the process of invention with the long-shot hope that their ideas might form the basis of a company that will bring innovative solutions to market. These two seniors, Gavin McIntyre and Eben Bayer, were casting about for an idea that their very exacting professor, Burt Swersey, would approve for their project. They had pitched a few ideas to Swersey to no avail. Then Bayer recalled an experiment that he had performed in another class at RPI responding to the challenge of making insulation out of perlite. Most of us know perlite as the little white plastic-like pellets that are mixed in with bagged potting soil. We also know how annoying those little pellets can be – they are lightweight and float around, settling in puffy clumps, making a mess. In his RPI class, Bayer had used mushroom spores to bind the perlite.

As a kid growing up on a farm where his dad made maple syrup and sold it commercially, Bayer had had a lot of chores to do outdoors.  One of his chores was to shovel wood chips from a pile to a burner that boiled the sap.  He had often noticed that the wood chip pile sprouted mushrooms whose mycelium bound the chips so tightly together that he found it difficult sometimes to shovel them.  He had remembered that binding property during his class project to create perlite insulation. He brought the results of that project – a glass jar of solid perlite and mycelium – to Swersey’s class.

Here’s what happened according to Swersey:

“He takes this thing out of his pocket…and it’s white, this amazing piece of insulation that had been grown, without hydrocarbons, with almost no energy used.  The stuff could be made with almost any waste materials – rice husks, cotton wastes, stuff farmers throw away, stuff they have no market for – and it wouldn’t take away from anybody’s food supply, and it could be made anywhere from local materials, so you could cut down on transportation costs.  And it would be completely biodegradable!  What more could you want?”

The rest of the story about Ecovative Design, McIntyre and Bayer’s company that produces packaging material out of mushroom fungus, is quite an amazing read, and I recommend it. But what stood out for me in the story is that without Swersey it is unlikely that the company and its subsequent success would have happened. McIntyre and Bayer both had jobs lined up after RPI – good jobs. Swersey urged them to forgo these jobs and continue developing their invention. They thought they might be able to work on their invention after work hours, but Swersey emphatically told them this would not be enough. He offered to take money from his retirement savings to invest in their company. He helped them get a grant from the National Collegiate Inventors and Innovators Alliance and got them situated in RPI’s incubator space for start-ups.

Swersey, a curious observer, was an essential part of the discovery process. Neither McIntyre nor Bayer on their own had the perspective to recognize the potential of what Bayer had initially created and what they both further developed in Swersey’s class. Bayer’s flash of insight was based on an idle observation made years earlier in passing. From his point of view at the time, using mycelium to bind perlite was a one-off to complete a class requirement. Bayer threw a “Hail Mary” pass when he brought the idea to Swersey’s class to see if it would pass muster there.

Swersey, while not an expert in mycology or insulating materials engineering, operated with a framework that enabled him to see the potential in Bayer and McIntyre’s invention. His “Eureka” moment was every bit as necessary as Bayer’s in this story of invention and innovation. Inventors Studio is the search for ruthlessly affordable solutions(2) to existing problems that can make a discernible difference in the lives of the vast majority of people on the planet who live on less than $1 a day. This framework is incredibly clear – expansive and targeted at the same time. Without it, Bayer’s little white disk of perlite and mycelium would still be an interesting curiosity rather than a biodegradable packaging material used by companies like Dell, Crate and Barrel, and Steelcase, and who knows what else in the future.

Without a curious observer to hold this kind of framework in place for groups as they work to solve problems, the connection between creativity and discovery often fails to take place. This is especially true for groups of experts, who have even more to overcome than naïve amateurs like the students in Swersey’s class. Because he was their professor, Swersey’s students expected his observations and input to matter, whether or not he was an expert in their project’s specific materials or engineering. Experts, on the other hand, view their facilitator or moderator as someone who is supposed to keep them on time and on task but has little else to contribute to problem solving. And most facilitators and moderators buy into this definition of their role. However, when facilitators and moderators are also curious observers, they can help the experts overcome the limitations of expertise. They can call attention to the contrary point of view that groups are quick to dismiss and encourage its exploration. Curious observers can ask questions and offer potential solutions that might be foolish or wrong, essentially acting as naïve amateurs, to challenge a group’s assumptions that often masquerade as facts. The curious observer can catalyze the moment of discovery that grasps the potential in an invention, whether a thing or an idea, and become an integral participant in the process of innovation.

(1)    “Form and Fungus,” Ian Frazier, The New Yorker, May 20, 2013

(2)   Designing for ruthless affordability is a concept from the work of Paul Polak.


Polak Advocates the ‘Ruthless Pursuit of… by FORAtv

Just Generally Better All Around

In planning for disasters, the best preparation comes from making sure that things are better under normal circumstances.  Scientists and engineers who are looking at ways to help governments better prepare their citizenry for climate-related disasters (heat waves, storm surges, hurricanes, etc.) have made two significant discoveries:

  1. The most successful physical adaptations not only protect people and infrastructure when things are bad, they also improve everyday life.
  2. In addition to investing in the physical adaptations necessary to withstand severe climate conditions, investing in social adaptations is equally important.

In disasters like the ones caused most recently in the northeastern United States by Hurricane Sandy and other environmental calamities, it turns out that places where people are more neighborly fare better than those where people are isolated and have little connection to their neighbors. Communities in which neighbors look out for one another, those that exhibit a high degree of social connectedness, are more resilient than those whose inhabitants lack this quality of civic-mindedness. Social network resilience helps neighborhoods bounce back from severe damage to the physical infrastructure. Of course, this sounds reasonable and perhaps obvious (like many insights once they are explicitly stated).  But, even if it seems reasonable and obvious, we do not act as if it is.  That is, we do not think of investments that improve the quality of everyday life in our communities as a prophylactic against bad times.

Instead, when planning to manage environmental risks to our communities, we over-focus on physical infrastructure to protect us from disaster.  For water-related catastrophes, we build dams and levees and other types of storm surge barriers. Most of these measures do little or nothing to improve the quality of life under normal conditions and in some cases, seriously degrade it by consuming scarce resources.  Yet, a philosophical shift is taking place in the realm of physical adaptations. Structures are now being conceived that not only offer protection against environmental calamities, but also make everyday life nicer.

In Rotterdam, in the Netherlands, where much of the densely populated country sits below sea level, the story of civilization has been a war with water.  Dikes and other systems that pump water out once it encroaches on the land have long been, and continue to be, a staple of the country’s response to water-related calamities.  However, in the past few years, engineers, architects, and city planners have deployed a new approach to water. In addition to disaster prevention and response, an approach has emerged that proactively explores what it means to co-exist with water. Today, in the middle of the city’s harbor, three transparent domes or Floating Pavilions sit on the water. These buildings are not quite boats and not quite houses but a new blended form of habitat that the city hopes will help it formulate new ways of living with water.   Singapore, another country that lies close to sea level, has always faced the one-two punch of monsoon-season flooding and, perversely, insufficient potable water. The Marina Barrage and Reservoir, located in the city center with a catchment covering one-sixth of Singapore’s entire land mass, is a three-pronged initiative that seeks to simultaneously “improve drainage infrastructure, reduce the size of flood-prone areas, and enhance the quality of city life.”(1)

Today more than ever, many organizations have elevated risk management and mitigation to a primary position in their resource allocation decisions.  However, I believe that our current views of risk management have more in common with the now abandoned approaches that used to inform environmental disaster planning than those of more recent vintage. We don’t think about risk management and mitigation as making things generally work better most of the time, but rather as protecting us from disaster. We over-focus on structural adaptations in our processes to root out or prevent errors and typically ignore social adaptations altogether. As a result, we often make it harder to get everyday work done and virtually impossible to undertake highly risky activities such as innovation. This may be at least one of the reasons why innovation is so challenging for most organizations.

What if, to think about ways of creating more conducive conditions for innovation, we turned our current view of risk management on its head, borrowing from what is now understood about surviving physical disasters?  What if we focused on physical and social adaptations that not only manage risk but also improve the quality of everyday work life?

Many people who lead innovation initiatives focus on creating a supportive culture, processes and infrastructure that are designed specifically for innovation.  Bespoke.  But what if to achieve breakthrough innovation, you have to have a culture, processes and structures that improve the everyday activities of the organization?  What if designing exclusively for innovation is the wrong way to go about it?

In another potentially perverse turn of the screw, the processes and structures that are set up to encourage innovation often try to weed out the small, incremental ideas that make all aspects of work life better. In the rush to promote game changing ideas, small improvements get shoved aside. This might be a BIG mistake.  It could be that the small improvements are what make it possible for the game changers to be proposed, accepted, and implemented. It’s the slow and steady stream of little things that make life better, creating a solid foundation which strengthens the organization, building the capacity to withstand and support significant change.  Rather than pressing for BIG ideas, innovation might do better promoting a disproportionate number of small ideas.  It might need to partner more closely with those responsible for HR practices and policies so that ideas which improve the day-to-day conditions of most people in the organization are considered to be as important as those that have the potential to transform the business.

The notion that “things just being generally better all around for most of the time” is a precondition for being able to withstand seismic change requires an equally seismic shift in the way we think about effectively managing risk and organizing for innovation.  But, for innovation to succeed, it may be a non-negotiable mindset.

Source:

(1)   “Adaptation,” Eric Klinenberg, The New Yorker, January 7, 2013

Doesn’t Play Well with Others

“They can’t even comply with the rules of the conference.”  This indifference to the rules was apparently what most irked the president of the International Association of Transportation Regulators about the behavior of executives at Uber, an upstart car-hiring service.  “Uber [is a] ‘rogue’ app…the company [behaves] in an unauthorized and destructive way.”

Uber and other similar start-ups (SideCar, Lyft by Zimride) use mobile technology to match people with different kinds of transportation services (taxis, limousines, ordinary people driving their cars) in real-time.  The technology disintermediates the infrastructure that in the past has made those connections (dispatchers) and has regulated them (the transportation authorities).  To rein these new companies in, some municipalities have attempted to pass rules that would make the services they provide illegal.  “…[But] when Washington tried to pass rules that would make Uber illegal, customers bombarded City Council members with thousands of emails in protest.”  The companies claim that since they aren’t actually providing the rides, the regulations don’t pertain to them.

It’s another one of those situations where the writing is on the wall.  While regulators can slow the tide of change in the transportation industry, they are not likely to stop it.  But what seems to really get everyone’s goat is that the new kids on the block are completely uninterested in playing the game, let alone following the rules.  After I read the article, I kept thinking about how we emphasize the importance of collaboration in creating a culture that fosters innovation.  Much of what is written about collaboration has a nicey-nice spin to it – like the classic Coca-Cola commercials from the 1970s (back when there were three network channels and a commercial ran for a leisurely entire minute).  The idea that harmonious collectivism could bring about big change was in the air.  But what if collaborating to innovate looks less like the Coke commercial and more like a nasty fight among toddlers in a sandbox?

There is good reason to suspect that the spirit of getting along, of aiming for group harmony, is at odds with the kind of against-the-grain decision-making and action-taking that innovation requires.  Innovation requires a calculated approach to risk-taking.  Innovators size up a situation, drawing a line between the survivable worst and the fatal worst that could happen, and ensure that they stay on the side that lets them live to fight another day.

An in-depth profile that appeared in The New York Times this past December, about a group of 16 expert skiers and snowboarders involved in an avalanche, reads like a primer on how a group of experts striving to be in harmony makes terrible decisions.  These highly experienced skiers and snowboarders, all of whom had been tested in extreme situations, collectively made a horrible, life-threatening and, for some, life-ending decision to pursue a run down a challenging slope in iffy weather conditions.  It was a decision that any one of them would have been unlikely to make alone; they made it together because each was invested in being, and being seen by the other members of the group as, a good sport.

The disaster, which led to the deaths of three members of the party, occurred on Cowboy Mountain, part of the Cascade Range in the state of Washington.  The skiers (there were snowboarders in the party, but for the sake of brevity, I will refer to all of them as skiers) were drawn to an area just outside the official ski zone known as Tunnel Creek.  It’s a place where experts frequently go to enjoy snow and slope conditions that are not available within the sanctioned ski areas but are still relatively easy to access.  The lure of fresh powder and 3,000 vertical feet angled at about 40 degrees is hard for great skiers to pass up.  But when combined with weather conditions that create a thin, fragile layer of frost sandwiched between hard-packed snow below and soft, fluffy powder above, Tunnel Creek becomes an avalanche waiting to happen – the kind of avalanche that is triggered by the skiers themselves as they descend, stressing the layers of snow.

Individuals within the group had misgivings. They all knew that the official avalanche forecast fell into a gray zone that should have made experts like themselves sit up and take notice.  But, the fact that each of them knew the reputation of the others led them to be overconfident that the group simply could not make a bad decision.

As one skier recalled afterwards:  “This was a crew that seemed like it was assembled by some higher force,….I was thinking, wow, what a bunch of heavies….”

Another thought:  “There’s no way this entire group can make a decision that isn’t smart,” he said to himself. “Of course it’s fine, if we’re all going. It’s got to be fine.”

And yet, some remember having misgivings beforehand, but feeling conflicted about expressing them:

“I can tell circumstances, and I just felt like something besides myself was in charge. They’re all so professional and intelligent and driven and powerful and riding with athletic prowess, yet everything in my mind was going off, wanting to tell them to stop.”

But over-riding everything else was a strong need to go along and get along:

“…[T]here were sort of the social dynamics of that — where I didn’t want to be the one to say, you know, ‘Hey, this is too big a group and we shouldn’t be doing this.’ I was invited by someone else, so I didn’t want to stand up and cause a fuss. And not to play the gender card, but there were 2 girls and 10 guys, and I didn’t want to be the whiny female figure, you know? So I just followed along.”

[But she shouldn’t have worried, because the guys felt just the same.]  “I thought: Oh yeah, that’s a bad place to be. That’s a bad place to be with that many people. But I didn’t say anything. I didn’t want to be the jerk.”

Keep in mind that this was a group of experts, the same kind of domain experts we assemble in our organizations when we need to make complex decisions about taking risks.  And they represented a wide range of ages, from 29 to 53, so you can’t lay the blame at the feet of youthful exuberance.  Yet, when you read their reflections on how they viewed the situation, you feel as if you are listening in on a group of teenagers for whom being part of the group and having the group operate smoothly is more important than anything else.

We place a high value on harmonious group behavior.  Remember the transportation regulator’s biggest complaint about Uber – they didn’t play by the rules; they were not being good sports.  Yet playing by the rules, whether the rules are literal regulations, the way things have been done in the past, or the even more forceful social norms that prescribe group dynamics, does not always yield the best outcome.  We might want to rethink what playing well with others looks like in the context of innovation – maybe a few squabbles and some sand-throwing are essential to taking the kind of risks that, even if you don’t succeed, ensure that you are around the next day to try again.

Perfect Harmony – Coca-Cola Commercial


Sources:

  • “Car-Hiring Apps in a Snarl,” Brian X. Chen, The New York Times, December 3, 2012
  • “Snow Fall: The Avalanche at Tunnel Creek,” John Branch, The New York Times, December 26, 2012

 

Fatal Allergies: Part 2

If we can’t avoid failure and mistakes, how can we use the fact that we will make mistakes and fail (and maybe even that we should make mistakes and fail) to our advantage?

Let’s start with some dictionary definitions:

Mistake – An error or a fault resulting from defective judgment, deficient knowledge, or carelessness.

Failure – The condition or fact of not achieving the desired end or ends.

Based on these definitions, it’s clear that a mistake is not necessarily a failure, although it is frequently a precursor to failure.  However, both mistakes and failures share the characteristic that they can only be known after the fact (post hoc).  So when a mistake or failure is declared, and who decides, is key.

Yet Schoemaker (whose framework for decision-making was described in Part 1 of this blog post) wants us to design mistakes and make them purposefully, so we can’t wait until the outcome to say that what we did was a mistake.  And he doesn’t want us to make just any kind of mistake.  He wants us to make brilliant ones.  Schoemaker asserts that there are four basic types of mistake, and one of them is the type we should take advantage of more often than we do.(1)

There are trivial mistakes, e.g., not leaving enough time to catch a plane or not putting enough money in the parking meter.  These are annoying, but not worth worrying about.  There are tragic mistakes for which the cost is extremely high and for which there is little to no benefit, e.g., texting while driving and losing control of your car, or indulging in the pleasure of addictive drugs.  These are always to be avoided when possible.  Serious mistakes are not ones you seek out, but if you have to go through them, you can in many instances turn lemons into lemonade.  Examples include losing your job and getting divorced; some might say getting married.  Brilliant mistakes are a close cousin to serious mistakes, but they are a different breed.  They are the mistakes that Schoemaker wants us to design for.

A brilliant mistake has these characteristics:

  • It is an action whose expected utility or value is less than the expected utility or value of not taking action.  It’s an action that you believe at the outset is unlikely to pay off.  All of your previous knowledge and experience would encourage you to bet against a net positive gain from undertaking the action.
  • Something goes wrong or has the potential to go wrong far beyond the range of prior expectations.  The outcome of a brilliant mistake has to surprise us in some way.  It has to be so far from what we anticipated or so difficult to fit within our current operating theory that we sit up and take notice.  As a result, brilliant mistakes offer the possibility that insights will emerge whose benefits far exceed the cost of the original mistake.  Brilliant mistakes offer the potential for expanding the field of knowledge and accelerating learning.  They can cross the chasm, making a giant leap forward that results in a breakthrough.  This is why brilliant mistakes are so closely associated with fundamental innovation, what we in the business world call business model innovation.  [Of course, there are many mistakes which could be called quasi-trivial or quasi-brilliant; the edges between types of mistakes are not clearly defined.]
  • Finally, a brilliant mistake occurs in a system with some slack so that even if most people are focused on supporting the status quo, a handful or more can slip free and do something different.  In most of the professional service organizations where I have worked, the little fits and starts of new ideas are often bemoaned as “hobbies.” An innovation process is supposed to cure the organization of its tendency to “indulge in hobbies.”  However, if organizations are too efficient and too effective, brilliant mistakes are virtually impossible to make.

If we want to design a brilliant mistake, where do we start?

There are two wells from which we can source brilliant mistakes:

  1. Defy conventional wisdom.
  2. Act at cross-purposes to our own views.

Start with the assumptions that guide how you approach business growth – as I mentioned above, many professional service organizations want to root out and stop instances of people spending time on anything other than client-related work because it is viewed as a waste of time and resources.  A course that defies conventional wisdom might give everyone time to pursue a work-related “hobby,” treating these activities as a potential source of innovation.  We know that some organizations actively pursue this approach – most famously (today) Google, but not so long ago it was 3M.  Both set up a similar mistake-making pipeline, but came at it from different wells.  3M, in my mind, is the bellwether of defying conventional wisdom, and Google embodies acting at cross-purposes to its own views.

Defy conventional wisdom:  3M launched its 15 percent program in 1948. If it seems radical now, think of how it played as post-war America was suiting up and going to the office, with rigid hierarchies and crisply defined work and home roles. But it was also a logical next step for the company. All of its early years in the red taught 3M a key lesson: Innovate or die.  This is an ethos that the company has carried dutifully into the 21st century.(2)

Act at cross-purposes to your own ideas:  Google has taken measures to encourage outside interests, enacting the 70/20/10 rule, which allows employees to spend 20% of their time on “innovation time off” pursuing their own ideas that relate to Google and then 10% of their time on stuff completely unrelated to Google.  This could be reading a book, drawing in Photoshop, or going to a museum.  In so doing, Google gains loyal employees who can purposely enrich their lives without Big Brother looking over their shoulder.  At the same time, the company stimulates  innovative thinking.  Think about it: how many times have your best ideas about solving work-related problems come to you while you were doing something completely unrelated to work? (3)

Once we have an assumption we’d like to test, what else might we consider to help us determine if we might be poised to make a brilliant mistake?  Schoemaker offers several additional criteria to guide us:

  • Potential benefit relative to cost is high.  This point may seem obvious, but it’s important to state it explicitly because it overcomes the major argument against making deliberate mistakes:  why would you undertake an activity that you believe at the outset is most likely to fail?  Because over the long run and across a portfolio of mistakes, the potential benefit will be greater than the cost (see the sketch after this list).
  • Decision is made frequently.  This is an interesting criterion relating to how much activity within the system flows from a particular assumption.  For example, most companies assume that they have accurate insight into the markets they serve and base their investments in new product development on this assumption. Many decisions flow from this assumption.  Decision frequency is the lever that creates the potential for a large ROI – the benefits can be very large relative to the cost.
  • The environment is in flux.  During periods of rapid change, mistakes are common currency – but most of them occur inadvertently.  Instability provides an opening for trying something different because it lowers the barriers to entry.

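Before moving on, it may help to make the expected-value logic concrete – both the first characteristic of a brilliant mistake (by our own beliefs, acting has lower expected value than not acting) and the portfolio and frequency arguments above.  Here is a minimal sketch in Python; all of the numbers (cost, payoff, probabilities) are hypothetical, chosen only for illustration, and do not come from Schoemaker’s book.

    import random

    random.seed(42)

    # Hypothetical numbers, chosen only to make the logic concrete;
    # they do not come from Schoemaker's book.
    COST = 10          # what one deliberate mistake costs
    PAYOFF = 200       # the benefit if the "mistake" surprises us and pays off
    P_BELIEVED = 0.02  # our prior: success is very unlikely
    P_ACTUAL = 0.08    # suppose conventional wisdom underestimates the odds

    # Judged one decision at a time, our own beliefs say "don't act":
    # the expected value of acting is negative, which is exactly what
    # makes the action a designed mistake.
    ev_believed = P_BELIEVED * PAYOFF - COST   # 0.02 * 200 - 10 = -6

    # Across a portfolio of frequently repeated decisions, the occasional
    # outsized payoff can more than cover the accumulated costs whenever
    # the prior turns out to be wrong.
    def portfolio_value(n_attempts: int) -> int:
        return sum(
            (PAYOFF if random.random() < P_ACTUAL else 0) - COST
            for _ in range(n_attempts)
        )

    print(f"expected value per attempt, by our own beliefs: {ev_believed}")
    print(f"simulated net value of 100 deliberate mistakes: {portfolio_value(100)}")

The point of the sketch is Schoemaker’s:  judged one at a time, each deliberate mistake looks like a losing bet; judged as a frequently repeated portfolio, the occasional surprise pays for all the misses and then some.
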
To illustrate these points and the next two, we turn to The New Yorker’s 2012 fashion week edition, which presented a story about how an enterprising individual used a rapidly changing marketplace to build a new kind of company (4).

Back in 2000, a young Italian MBA graduate was captivated by the notion of exploiting a rapidly evolving marketplace – the Internet – and mashing it together with haute couture.  By integrating these two diametrically opposed experiences – the democratic, highly individualized experience of on-line shopping and the elitist, small herd-like experience of high fashion – he hoped to unleash a new market for high end fashion among people who did not physically show up at Fashion Week, but were passionate about fashion.

This was the idea behind Yoox.com, which has spawned several new business models in the market for high-end fashion.  Yoox has made it possible to purchase haute couture as it debuts on the runway, if you are willing to pay full price, through design house websites which Yoox operates.  Those who can’t afford full-price designer clothing can purchase “vintage” haute couture at deep discounts through the Yoox site (which ensures that sales of remaindered clothing don’t cannibalize design house in-season sales).

  • Experience base is limited.  The less you know about a new opportunity, theoretically, the more open you should be to different approaches.

The Yoox story also provides a window into how this attribute can contribute to opportunities for making brilliant mistakes.  Many of the design houses initially pooh-poohed the Yoox approach.  They firmly believed that haute couture had no place on the Internet, based on an assumption that people shopped the Internet for bargains, looking for deeply discounted items on sites like eBay.  (Remember, this was back in 2003.)  However, at least one design house (Marni) was willing to experiment, acknowledging that it knew very little about the Internet.  Yoox provided the technology, the logistics, the ability to handle customs and currency conversion and, perhaps even more importantly, knowledge about what product was selling where, which it plugged into its algorithm to help companies predict trends.  Now Marni, Armani, and Zegna are powered by Yoox.

  • Problem is complex.  Finally, and not surprisingly, the more complex the problem, the more possibilities exist for solutions, and the more opportunities there are to actively make mistakes.  Yet, in many fields, dogma takes hold surprisingly quickly (remember the great wrinkle that time poses for the problem of determining whether a decision outcome is good or bad) and with great tenacity, narrowing the boundaries for new ideas.  Certainly, this has been a challenge for the field of cancer research and treatment.

We wrap up our foray into failure and mistakes by teasing out one strand of Siddhartha Mukherjee’s comprehensive and compelling biography of cancer. (5)

Cancer is a big and growing health problem.  BIG:  In the US, 1 in 3 women and 1 in 2 men will develop cancer during their lifetimes.  Of the 156 million women and 151 million men in the US, that works out to roughly 128 million Americans now living who will develop cancer (one-third of 156 million is about 52 million; one-half of 151 million is about 76 million).  Of the 2.4 million people in the US who die each year, 25% die of cancer.  GROWING:  As we live longer, the odds of genetic mutations that result in cancer increase.  It’s a trade-off – as life span increases, so does the incidence of cancer in the population.

Cell division is the source of life – the process by which human beings “grow, adapt, recover, and repair.”  It is also the cause of cancer because cancer cells do all of these things better than normal cells, they have achieved the chimera of eternal youth  – “they are more perfect versions of ourselves.”  But the perfection of cancer is twinned with the destruction of its human host.  Ultimately, from our human point of view, failure is encoded in the biology of growth – inseparable from it.

The awareness of cancer extends far backward in human history, almost 4,000 years, to when evidence of a disease that was most likely what we understand as cancer was first documented.  An Egyptian document from 2500 BC appears to describe a tumor of the breast.  The history of trying to understand the mechanisms by which cancer occurs, however, begins much later, when the Greeks undertook to explain bodily functions in terms of fluids, building on and extrapolating from their knowledge of hydraulics (fluid mechanics).  And yet, since cancer is an age-related disease, until life expectancy began to increase, other diseases (smallpox, tuberculosis, the plague, cholera, etc.) blanketed the historical record, and mention of cancer is harder to find.

It is impossible to recount the long history of discoveries and theories that populate the cancer research and treatment roadmap, but the story of radical surgery embodies the hallmarks of the mistakes that have propelled the field  forward and held it back at the same time – the brilliant and the serious mistakes of cancer research and treatment.

Removing cancerous tumors by cutting them out was practiced millennia ago.  But the obstacles that plagued surgery had to be overcome before the benefits of extirpating tumors could be seriously explored.  It wasn’t until 1846 that pain was separated from surgical procedures via ether-induced anesthesia (William T.G. Morton is credited with this innovation).  In 1867 Joseph Lister introduced the use of carbolic acid, an antibacterial chemical, to promote antiseptic surgery, reducing the post-surgical complication of infection.  These two advances were the primary drivers that freed surgeons to conceive the notion of not just removing cancerous tumors, but curing cancer through surgery.  And no one epitomized this approach more than William Halsted, who around the turn of the twentieth century pioneered and became associated with the practice of radical mastectomy as a cure for breast cancer.

Radical mastectomy involves removing the breast, the chest muscles, and all of the lymph nodes under the arm.  Halsted believed that this approach, while disfiguring, would eradicate cancer from the body – an assumption that ultimately proved to be wrong in most cases.  His approach was based on a theory that cancer spread throughout the body through a kind of centrifugal force that spun metastases outward along a spiral path from the original site.  So it made sense to continually widen the surgical scope in seeking a cure.  It took almost 100 years for another approach to replace the radical mastectomy as the dominant surgical approach to breast cancer.

In the 1930s a physician named Geoffrey Keynes combined radiation and limited surgical excision to treat breast cancer.  While his results were as successful as those achieved by practitioners of radical mastectomy, his approach was derided and sneeringly labeled “lumpectomy” – a put-down in surgical terms, implying that the approach was crude, taking out a mere “lump” of tissue.  [Similarly, when the term “junk” was applied to the non-coding DNA mentioned in Part 1 of this post, it had the effect of pushing research in this area to the far edges of scientific inquiry.]  It wasn’t until the 1950s that another surgeon, George Crile, reconsidered the lumpectomy based on a different theory of cancer metastasis.  Crile proposed that for many breast cancers, the metastases spread not in an orderly spiral path, but in a chaotic, unpredictable fashion to far-flung parts of the body, rendering a surgical procedure that extirpated tissue near the original site ineffective.  But it wasn’t for another 30 years, during which physicians were persuaded to enroll patients in trials that would provide enough data to apply statistical tests of validity, that yet another surgeon, Bernard Fisher, was able to demonstrate that  “…[t]he rates of breast cancer recurrence, relapse, death and distant cancer metastasis were statistically identical among…[three treatment options – radical mastectomy, simple mastectomy, simple mastectomy followed by radiation].”  It took nearly 100 years to render an accurate judgment on radical mastectomy as the approach for curing breast cancer: mistake.

You might think that the field – oncologists, surgeons, and radiologists – would take this lesson to heart and seek to avoid the path of devastation and delay that accompanied the passion for radical surgery as a cure for cancer.  But just as radical surgery was winding down, radical chemotherapy replaced it as the favored approach for curing cancer (sometimes combined with radiation).  Only recently has a new belief set emerged which does not seek to “cure” cancer, but rather to manage it as a chronic disease.  Today, many are looking to a changing array of targeted pharmaceuticals to manage cancer by inducing the body to isolate cells functioning in this damaging way and destroy them, the way the immune system takes care of other foreign and dangerous invaders.

This approach represents a radical break from two centuries of thinking about cancer.  Cancer, a genetic “mistake” that leads to system failure, is now understood as inseparable from the biology of growth.   Trying to eradicate it is increasingly viewed as fruitless, but effectively managing it is seen as possible.

This is the same sort of radical perspective that I believe Schoemaker wishes for us to adopt with respect to mistakes.  He wishes for us to understand that mistakes and failure are inseparable from individual and organizational growth.  Rather than seeking to eradicate them, Schoemaker urges us to learn how to push past the unpleasant confrontation with human limitation and fallibility that making mistakes brings about and find ways to manage mistakes so that, on balance, we gain more than we lose from our inevitable lot as human beings, the makers of mistakes.

“If a few mistakes can be good, wouldn’t a few more be even better?” – Paul Schoemaker


Endnotes:

(1)    Paul Schoemaker, Brilliant Mistakes: Finding Success on the Far Side of Failure (Philadelphia:  The Wharton Digital Press, 2011)

(2)    Sourced on 9/12/12 at http://www.fastcodesign.com/1663137/how-3m-gave-everyone-days-off-and-created-an-innovation-dynamo

(3)    Sourced on 9/12/12 at http://99u.com/tips/5766/Encourage-Daylighting

(4)    John Seabrook, “The Geek of Chic,” The New Yorker, September 10, 2012

(5)    Siddhartha Mukherjee, The Emperor of All Maladies: A Biography of Cancer (New York: Scribner, 2010)

These blog posts were originally delivered as a presentation to The Learning Forum’s Knowledge Council in September 2012.

Fatal Allergies: Part 1

Since making a career transition about a decade ago from knowledge management to innovation, I find that I have spent more and more of my time thinking about failure and mistakes. It seems to me that knowledge management operates under the assumption that what is or can be known is valid and correct.  Innovation in many ways starts with the opposite assumption – that what is or can be known and the frameworks for acquiring knowledge might not be valid or correct any longer.  So, innovation rests on a foundation of failures and mistakes and most of its outcomes add to the store of things that have turned out to be wrong.  However, while organizations desperately want innovation, they are not too excited about embracing failure and mistakes.

At least that’s what I assumed at the outset of exploring this topic.  In an earlier draft, I started this post with the sentences:  “Organizations seem to be allergic to mistakes and failures.  No one wants to say that they made a mistake.”  After having confidently made this assertion, it occurred to me that I really didn’t know if it was accurate.  So, I decided to see if I was right.  When I put the search string “CEOs admit mistakes” into Google, I received about 6 million results in less than one second.  It turns out that I was wrong: CEOs do admit to mistakes – I was mistaken.

The top two results were illustrative of most of the rest (italics in these quotes are mine):

“The past year and a half have had situations where we might [have done] some things differently if we had known [things were] changing so rapidly, even faster than anyone could have predicted, ….Each time the future is difficult to predict, the situation is difficult.”(1)  Stephen Elop, CEO, Nokia

 “…the Goldman chief compared the financial crisis to one historic hurricane season during which four major storms struck the east coast.  “How would you look at the risk of a hurricane?” he asked, noting that the following year, no large storms struck the area.  “Mr. Blankfein, I want to say this,” Angelides responded. “Having sat on the board of California’s earthquake authority, acts of god were exempt. These were acts of men and women. These were controllable.“  Lloyd Blankfein, CEO, Goldman Sachs and Phil Angelides, the chairman of the Financial Crisis Inquiry Commission

From Mr. Angelides’ retort to Mr. Blankfein, it seems our mistakes will not be excused if our only explanation is that we don’t believe we could have foreseen the outcomes.  Even though Nokia’s CEO seems to be on to something (i.e., the environment in which decisions were being made was in rapid flux), as does Mr. Angelides (who suspects that people might be able to use the possibility of negative outcomes more productively to temper their decisions and subsequent actions), neither Mr. Elop nor Mr. Blankfein is reported as drawing any conclusions about how they might have approached their situations differently.  Instead they are content to put forward the argument that the outcome was outside of their control and that, therefore, they were not responsible for it.

We do evaluate our decisions based on outcomes, which as Mr. Elop and Mr. Blankfein rightly suggest are hard to predict, let alone control (despite Mr. Angelides’ suggestion to the contrary).  We say that a decision was a mistake post hoc (after the fact) but we make our decisions a priori (before the fact).   And despite the major executive mea culpas that are easily found on the Internet in great abundance, we all know that it’s rare inside of organizations for people to associate themselves with a mistake or a failure.  We don’t even use the word “problem” too much anymore.  We prefer “challenge” which implies that we can somehow manage to control outcomes and achieve a positive result.

Should we be concerned?

I believe that we should because mistakes and failure are inseparable from creative and generative processes.  Lacking a useful conceptual model for engaging productively with mistakes and failures, we default to making mistakes inadvertently. We court failure and miss opportunities to learn from it.  As a result we might unwittingly be choking off potential sources of collective and even individual growth.  Are there ways of looking at mistakes and failures that might help us avoid or prevent the ones which cause or lead to organizational decline (or even demise) while at the same time embracing our destiny as human beings who cannot avoid making mistakes and who will undoubtedly experience failure?

Drawing on two books and my own musings, let’s explore the idea that we can figure out how to make “good” mistakes and have “productive” failure. The conceptual framework that I will use comes from Brilliant Mistakes by Paul Schoemaker (2), a decision sciences professor at the Wharton School, and the stimulus that made me see mistakes and failures everywhere was The Emperor of All Maladies: A Biography of Cancer by Siddhartha Mukherjee (3) about the history of cancer research and treatment.

First, let’s look at decisions and determine if we lean towards Mr. Blankfein’s or Mr. Angelides’ point of view.  Can we reasonably predict the outcome of our decisions?  Can we know if we are making a mistake?  Can we know if we will fail?  We start with Mr. Schoemaker’s framework for considering what factors contribute to a decision outcome.

Schoemaker Decision-Making Framework

The quality of thinking and judgment prior to a decision, and how well one executes according to plan and adjusts when circumstances necessitate, clearly affect the outcome of a decision.  We can all think of major decisions that were reached in haste and those which were carefully considered but poorly executed.  But other factors which are outside the decision-maker’s direct control also play a very large role in decision outcomes.  In situations where complexity and risk are high, the element of chance and the influence and actions of other people loom large.  Even the element of time colors the determination of whether a decision is good or bad.  This is where I see mistakes and failures everywhere (much like the little boy in the film The Sixth Sense who sees dead people everywhere).
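To see how large a role the uncontrollable factors can play, consider a toy simulation (my illustration, not Schoemaker’s; every parameter is made up).  A decision outcome is modeled as the sum of decision quality, execution, and luck:

    import random

    random.seed(7)

    # A toy model of the framework (an illustration, not Schoemaker's):
    # an outcome mixes what the decision-maker controls (quality of
    # thinking, execution) with what she does not (chance, other actors,
    # timing).  All parameters are made up.
    def outcome(decision_quality: float) -> float:
        execution = random.gauss(0, 0.5)  # slippage in carrying out the plan
        luck = random.gauss(0, 2.0)       # chance, other people, timing
        return decision_quality + execution + luck

    # Compare a clearly better decision process (quality 1.0) against a
    # poor one (quality 0.0) on outcomes alone.
    trials = 10_000
    wins = sum(outcome(1.0) > outcome(0.0) for _ in range(trials))
    print(f"the better process wins on outcome about {wins / trials:.0%} of the time")

With these made-up numbers, the better process wins on outcome only about two-thirds of the time; the rest of the time the worse process looks smarter, which is the danger of judging decisions purely post hoc.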

For example, when I started to work on this post in early September 2012, The New York Times reported the results of a large federal project that involved 440 scientists in 32 labs around the world.  The study concluded that large chunks of DNA which had previously been dismissed as “junk” are now understood as playing “critical roles in controlling how cells, organs and other tissues behave.”(4)  Why did this misunderstanding, which was enshrined in the 1970s, persist for so long?  I would argue that this is human nature.  Typically, those who are responsible for a decision (like those who decided that these parts of DNA were junk with no biochemical function) tend to dismiss disconfirming evidence and respond to signals that reinforce their pre-existing beliefs.  We get caught up in self-fulfilling prophecy (those who decided it was junk looked for, or chose to understand, evidence in a way that confirmed their beliefs).  What typically changes thinking is that new technologies, new tools and, as Thomas Kuhn asserts in The Structure of Scientific Revolutions, new people come along who are unburdened by existing knowledge or experience and, most importantly, who were not part of the original decision-making process.

Many decisions have very long tails – their consequences reverberate for a long time.  It can be a long time before we know that they were mistakes, sometimes very serious mistakes with very serious consequences.  Scientific research is replete with long-tail decisions.  The second part of this post will relate a story from the history of cancer research and treatment that is a sobering tale of a long-tail decision.

I’d like to make two other points about decision outcomes before we leave this topic.  One is related to the limits of our knowledge and the other to the fidelity of history.

  1. Limits of knowledge.  We may judge a decision to have been a good one, but we are often unable to compare it with the choice or choices not taken.  What if you could know the outcome of the choice you did not make because someone else did make it?  Even if you had done well with your choice, how would you feel if you learned that the person who made the choice you passed up ended up doing much better?  How would that affect the way in which you viewed your decision outcome?  Would you then think it was a mistake?
  2. Fidelity of history.  History is written by the winners (or at least the survivors).  This skews our understanding of decision-makers in favor of a belief that the decisions reached by the winners/survivors were better than those reached by the losers.   We want to believe that some people are better able to see a clear way ahead than others even in very complex, high risk situations.  That leaders are leaders because they make better decisions than the rest of us.

Let’s wrap up part 1 of this post by considering two business stories that involve leaders making decisions – read each story, and write down your guess about which leader’s decisions resulted in success.  Then look at the “reveal” and see if you could tell.  Don’t peek.

Story #1:

A $6.5 billion media company spends $20 million for an online start-up that two recent college grads had founded a year earlier for $12,000.  They ask the founders to stay on for the next three years as part of the deal and then essentially leave them alone.  They don’t insist that the new acquisition immediately get absorbed into the larger company, and when even being associated with the larger company makes it difficult for the fledgling operation to compete for talent, the company is spun out as an independent subsidiary.  The subsidiary hires a new chief executive who doesn’t have much executive experience, although he has worked at Facebook and PayPal in management positions.  They bring back one of the founders to serve on the board.

Story #2:

A $3.4 billion commerce giant appears to be in the throes of a death-spiral – its core business has lost a great deal of its uniqueness and relevance in what has become an increasingly crowded marketplace compared to when the company got its start nearly 17 years earlier.  The company’s leadership decides to make a dramatic course correction.  “’It was clear the world had innovated around [us] and [we] had stayed with the same formula….Saying that was considered heresy.  With any company that’s been this successful, there’s enormous momentum to keep doing what you’ve been doing and hope the world will go back to what it used to be….we had to make changes that were unpopular with subsets of our customers and other people.  You have to have the conviction to do what you know is right….We spent three years fixing the fundamentals and tried not to worry about what everyone else was saying.’”  And then the CEO replaces virtually all of the senior management team.

What are the endings to these stories?  Success or failure?

Story #1 is the story of reddit.  In late September 2012, with a mere 20 employees, the company was serving up three billion page views a month and the President of the United States had just signed up for an “Ask Me Anything” session.

Story #2 is the story of eBay.  Its stock price, which had fallen from a high of $58 in 2004 to a low of $10 in 2009, rebounded to a six-year high of about $45 in July 2012 as eBay retooled itself into a mobile retailer based primarily on its significant investment in PayPal as a means of innovating the purchase experience.

What are we to do?  If we can’t avoid failure and mistakes, how can we use the fact that we will make mistakes and fail (and maybe even that we should) to our advantage?  In Part 2, we will attempt to answer these questions.


These blog posts were originally delivered as a presentation to The Learning Forum’s Knowledge Council in September 2012.  I’d like to acknowledge and thank Brian Hackett, Founder of The Learning Forum, for giving me this opportunity.

(1) Nokia’s phones historically have relied on an operating system called Symbian, which no longer dominates the market (4% market share in 2012, down from 17% in 2011).  Nokia recently switched to Microsoft’s Windows platform, which has a 3.5% market share in 2012.  Sourced from http://www.dailyherald.com/article/20120915/business/709159982/ on 11/21/12.

(2) Paul Schoemaker, Brilliant Mistakes: Finding Success on the Far Side of Failure (Philadelphia:  The Wharton Digital Press, 2011)

(3) Siddhartha Mukherjee, The Emperor of All Maladies: A Biography of Cancer (New York: Scribner, 2010)

(4) Gina Kolata, “Study Discovers Road Map of DNA: The Key to Biology,” The New York Times, September 6, 2012.