University of New Mexico LEGO Case & Failing By Design Article Analysis

Please read the “LEGO” case and the article on “Failing By Design,” which are attached (length: maximum 3 pages, double spaced).
1. Please summarize and analyze the key concepts in the article “Failing By Design.”
2. For the LEGO case, please analyze what has led the firm to the edge of bankruptcy.
3. As Jørgen Knudstorp, what would you do throughout the LEGO Group in order to turn the company around?
April 2011
Reprint R1104E
Failure: Learn from It
Failing by Design
Uncertain environments call for experimentation. Here’s how to set up the trials—and learn from the errors.
by Rita Gunther McGrath
Rita Gunther McGrath (rdm20@columbia.edu), a professor at Columbia Business School, researches strategy and innovation in volatile environments.
It’s hardly news that business leaders work in increasingly
uncertain environments. Nor will it surprise anyone that under uncertain conditions, failures are more common than successes. And
yet, strangely, we don’t design organizations to manage, mitigate,
and learn from failures. When I ask executives how effective their
organizations are at learning from failure, on a scale of one to 10,
I often get a sheepish “Two—or maybe three” in response. As this
suggests, most organizations are profoundly biased against failure
and make no systematic effort to study it. Executives hide mistakes or pretend they were always part of the master plan. Failures
become undiscussable, and people grow so afraid of hurting their
career prospects that they eventually stop taking risks.
I’m not going to argue that failure per se is a good thing. Far
from it: It can waste money, destroy morale, infuriate customers,
damage reputations, harm careers, and sometimes lead to tragedy.
But failure is inevitable in uncertain environments, and, if managed well, it can be a very useful thing. Indeed, organizations can’t
possibly undertake the risks necessary for innovation and growth if they’re not comfortable with the idea of failing.

What Are Your No-Fail Zones?
Leaders need to be clear about where failure will be tolerated—and where it won’t. Mike Eskew, the former CEO of UPS, put the customer experience out of bounds: “We fail in such a way that it never touches the customer,” he said. In practice this meant that UPS didn’t experiment with moving, paying for, or otherwise interacting with a package. In all other respects it permitted—even encouraged—entrepreneurial experiments that stretched the century-old company.
An alternative to ignoring failure is to foster “intelligent failure,” a phrase coined by Duke University’s Sim Sitkin in a terrific 1992 Research in Organizational Behavior article titled “Learning Through
Failure: The Strategy of Small Losses.” If your organization can adopt the concept of intelligent failure,
it will become more agile, better at risk taking, and
more adept at organizational learning.
How Failure Can Be Useful
Some of the failures I’m about to describe were the
results of intentional experiments. Others were completely unplanned and unexpected. But all of them
provide valuable takeaways. A certain amount of
failure can help you:
Keep your options open. As the range of possible outcomes for a course of action expands, the
chances of that action’s succeeding diminish. You’ll
improve your odds if you make more tries. This is
the logic driving businesses that operate in highly
uncertain environments, such as venture capital
firms (whose success rates range from about 10% to
about 20%), pharmaceutical companies (which typically create hundreds of new molecular entities before coming up with one marketable drug), and the
movie business (where, according to one study, 1.3%
of all films earn 80% of the box office).
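A quick back-of-the-envelope calculation makes this logic concrete (the figures here are illustrative, not from the article): if each independent attempt succeeds with probability p, the chance that at least one of n attempts succeeds is

P(at least one success) = 1 - (1 - p)^n

At a venture-style hit rate of 10%, a single bet pays off one time in ten, but a portfolio of ten independent bets produces at least one success roughly 65% of the time, since 1 - 0.9^10 ≈ 0.65.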
Learn what doesn’t work. Many successful
ventures are built on failed projects. Apple’s Macintosh computers emerged in part from the ashes of a
now-forgotten product called Lisa, which introduced
a number of the graphical user interfaces and mouse
operations in today’s computers.
In truly uncertain situations, conventional market research is of little use. If you had asked people
in 1990 what they would be willing to pay for an
internet search, no one would have known what
you were talking about. A massive amount of experimentation was needed before workable search
engines emerged. Early entrants sought to be paid
for doing the searches themselves. Later, companies explored business models based on advertising.
Later still, Google developed a system to maximize
the profitability of the ad-based model. Without all
that trial and error, it’s highly unlikely that Google
could have built the algorithm-based juggernaut so
familiar today.
Create the conditions to attract resources
and attention. Organizations tend to move on to
new projects rather than fix systemic problems with
existing ones. Let something big go wrong, though,
and it’s all hands on deck!
I was personally introduced to how failure can
be used strategically years ago, when I worked for
the City of New York. I ran an IT group charged with
installing an automated procurement system. I was
blissfully unaware of how challenging it would be
to gain political support and financial resources for
the project. Luckily, my boss was a political genius.
One afternoon, while I was running some analytics,
I learned that the data in the old system had become
corrupted. I leaped into action, determined to save
the day. But after I ran my plan past my boss, he quietly said, “Don’t do any of that. Sometimes things
have to fall apart before anybody musters the will to
fix them.” He was absolutely right. The failure of the
old system created a compelling argument for the
new one and was a turning point in gaining support.
Make room for new leaders. Sad but true: Even
today many leadership positions are held by people
very much like those who selected them. Entire industries have suffered the consequences of “lifers”
who don’t challenge unspoken assumptions and
taken-for-granted rules. Only when those assumptions and rules are proven ineffective—often, unfortunately, in the course of great trauma—do boards
recruit fresh leaders. The change can be surprisingly
beneficial. The U.S. auto industry provides a case in
point. Who would have thought that Alan Mulally, a
former senior executive at Boeing, would be an inspirational turnaround CEO for Ford?
Idea in Brief
If you’re launching a new business, creating a new product, or developing a new
technology, the principles of intelligent failure provide both logic and a safety net.
• Decide what you’re trying to do and what success would look like.
• Be explicit about the assumptions you’re making and have a plan for testing them throughout the project.
• Design the initiative in small chunks so that you learn fast, without spending too much money. Don’t try to learn more than one significant thing at a time.
• Create a culture that shares, forgives, and sometimes even celebrates failure.
Develop intuition and skill. Researchers say
that what people think of as intuition is, at its heart,
highly developed pattern recognition. Those who
have never faced a negative outcome have a critical gap in the body of experience that intuition is
based on. Many venture capitalists won’t invest in a
new enterprise if the founder has never undergone
failure.
Microsoft’s successful entrant in the game business, the Xbox 360, was developed by a team that
had worked on 3DO’s failed game console, the unsuccessful WebTV, Apple’s problematic video card business, and Microsoft’s own short-lived UltimateTV.
Having been through so many disappointments, the
team members were able to spot warning signs and
make smart course corrections. For example, the
earlier Xbox had used expensive chips from outside
manufacturers, and it reportedly lost about $4 billion
from 2001 to 2005. The Xbox 360 team chose different manufacturers, worked in close partnership with
them to develop the chips, and retained intellectual
property rights to the chips, allowing the system to
generate profits very early on.
Putting Intelligent Failure to Work
Obviously, not all failures are useful, and even some
that we could learn from should be avoided at all
costs. But if you accept that failures will sometimes
occur in uncertain environments, it makes sense to
plan for, manage, and learn from them—and in many
cases to consider them experiments rather than failures. Here are seven principles that can help your
organization leverage learning from failure.
Principle 1
Decide what success and failure would look
like before you launch an initiative.
It never ceases to amaze me how often people working on the same project have entirely different views
of what would constitute success. In one case I
studied, an organization that made environmental
remediation equipment was hoping to introduce a
new product line. The marketing group thought the
equipment’s selling point was that it met a tough new
regulatory standard. The engineering group thought
the point was cost-effectiveness—and to keep costs
down, it was designing out the very features the marketing group wanted to sell. This gap in understanding could easily have led to a failure of the unintelligent variety. But the company found out about it in
time to get everyone on the same page and prevent
what could have been a marketplace disaster.
Principle 2
Convert assumptions into knowledge.
When you’re tackling a fundamentally uncertain
task, your initial assumptions are almost certain to
be incorrect. Often the only way to arrive at better
ones is to try things out. But you shouldn’t start experimenting until you’ve made your assumptions
explicit. Write them down and share them with your
team. Then make sure that you and your team are
open to revising them as new information comes in.
The risk is that we all have a tendency to gravitate
toward information that confirms what we already
believe—it’s called confirmation bias. A practical
way to address this bias is to empower one of your
team members to seek out information that suggests
your course of action is flawed. You want to find disconfirming information early, before you’ve made
extensive commitments and become resistant to
changing your mind.
Organizations that don’t record their assumptions tend to run into two big problems. First, assumptions become converted into facts in people’s
minds. During a meeting, a manager might venture
a guess that a given market could generate $5 million
in sales—and before the meeting ends, the $5 million is baked into next year’s budget! This sort of leap
causes all kinds of dysfunctional behavior when the
guess, almost inevitably, turns out to be wrong. Second, such organizations don’t learn as much as they
could. They may right their course as they proceed,
learning as they go, but if they’re not rigorous about
comparing results with expectations, the lessons
won’t be explicit and shared, and future projects
won’t benefit from them.
Having spelled out and revised your assumptions,
you should then design the organizational equivalent of an experiment to test them. As with a scientific experiment, the idea is that whether or not the
outcome is what you’d hoped for, at least you will
have learned something.
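For teams that want to make that comparison explicit, even a very lightweight record works. The sketch below is a hypothetical illustration (the field names and example figures are my own, not from the article) of logging each assumption with its expected value and later recording what actually happened, so the lesson is written down rather than lost.

```python
# Hypothetical sketch of a minimal "assumption log": each assumption is
# recorded with what we expect, then updated with what we actually observe.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Assumption:
    description: str                  # the belief we are acting on
    expected: float                   # what we think will happen
    observed: Optional[float] = None  # filled in once real data arrives

    def gap(self) -> Optional[float]:
        """Difference between reality and expectation, once known."""
        if self.observed is None:
            return None
        return self.observed - self.expected

log = [
    Assumption("First-year sales in the new market ($M)", expected=5.0),
    Assumption("Unit cost at pilot volume ($)", expected=40.0),
]

# Later, as results come in, expectations are compared with outcomes
# instead of being silently forgotten.
log[0].observed = 1.8
for a in log:
    print(f"{a.description}: expected {a.expected}, observed {a.observed}, gap {a.gap()}")
```

Whatever the format, the point is the same: the guess that risks being baked into next year’s budget is written down as a guess, and the project is judged against it.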
Principle 3
Be quick about it—fail fast.
Quick, decisive failures have a number of important
benefits. First, they can save you from throwing additional resources at a losing proposition. Second,
it’s much easier to establish cause and effect when
actions and outcomes are close together in time.
Third, the sooner you can rule out a given course of
action, the faster you will move toward your goal.
And finally, an early failure lessens the pressure to
continue with the project regardless, because your
investment in it is not large.
A practical way to help ensure that any failure
happens quickly is to test elements of your project
early on. This is the main reason that “agile software
development” often produces better results than the
more conventional sequential process of systems design. In an agile environment, small chunks of code
are written and shared in a quick, iterative fashion
with other programmers and users before the team
moves on. This is in sharp contrast to the approach
in which analysts spend months documenting user
requirements before submitting those requirements
to programmers, who only then begin coding. By the
time a problem is discovered, a project could have
been heading in the wrong direction for years.
Speed may require changing how you allocate
resources. Instead of going for maximum NPV over
a project’s lifetime, for example, you may want to
break the financial evaluation into smaller chunks
in terms of both money and time. You may also want
to invest in more-flexible assets and people until you
have learned enough to confidently build a significant operation.
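One way to picture this shift is a simple staged-investment comparison. The sketch below is purely illustrative (the project figures, discount rate, and success probability are hypothetical assumptions, not drawn from the article); it contrasts committing an entire budget up front with spending a small amount first and continuing only if that early test succeeds.

```python
# Illustrative sketch: evaluating a project in smaller chunks rather than
# as one all-or-nothing NPV bet. All numbers are hypothetical.

def npv(cashflows, rate=0.10):
    """Net present value of (year, cashflow) pairs at the given discount rate."""
    return sum(cf / (1 + rate) ** year for year, cf in cashflows)

p_success = 0.3                                   # assumed chance the concept works
payoff = [(year, 600) for year in range(2, 6)]    # cash inflows in years 2-5 if it works

# Plan A: commit the full 1,000 build-out in year 0, succeed or fail.
plan_a = (p_success * npv([(0, -1000)] + payoff)
          + (1 - p_success) * npv([(0, -1000)]))

# Plan B: spend 100 on a small test in year 0; fund the 1,000 build-out in
# year 1 only if the test shows the concept works.
plan_b = (p_success * npv([(0, -100), (1, -1000)] + payoff)
          + (1 - p_success) * npv([(0, -100)]))

print(f"Expected NPV, all-in up front:   {plan_a:8.1f}")
print(f"Expected NPV, staged commitment: {plan_b:8.1f}")
```

With these made-up numbers the all-in plan has a negative expected value while the staged plan is positive, because the losing branch now costs only the price of the early test; that is the financial logic behind breaking the evaluation into smaller chunks of money and time.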
We Won’t Punish Failure
Here are one UK-based global retailer’s formalized rules for when failure is acceptable:
• The effort involves genuine uncertainty.
• The outcome will be decisive, because we planned carefully.
• It’s riskier to do nothing—or to conduct further analysis—than to act and fail.
• The cost is small.
• The cost is contained.
• The major underlying assumptions are documented in writing.
• Commitments are scaled according to our increasing understanding.
• There is a plan to test the assumptions.
• We’ve defined what success would look like—and the opportunity is significant.
• The risks of failing are understood and, to the extent possible, mitigated.

And the human benefits of failing fast should not be overlooked. If people feel that a project’s failure will doom them to months of waiting for another
project, or to losing their jobs, then failure is demoralizing. But if lots is going on and the conclusion of
one effort means that they’ll immediately get put
on another (possibly more interesting) project, then
endings can be positive. At the technical consultancy
Sagentia, for example, employees are quick to move
from project to project. The finance director, Neil
Elton, told me, “They’ll proactively send around
e-mails with a mini CV, saying, ‘I was going to be
busy, now I’m not. Can you use my skills?’” This attitude is symptomatic of an organization that knows
how to experiment intelligently.
Principle 4
Contain the downside risk—fail cheaply.
This is an important corollary to failing fast. Initiatives should be designed to make the consequences
of failure modest. Sometimes it’s valuable to test a
small-scale prototype before making a significant
investment. When the Japanese cosmetics firm Kao
was considering going into the manufacture of floppy
disks, a big question was whether or not customers
would buy Kao-branded disks. So the company went
to another manufacturer and bought disks that met
its quality standards, put the Kao label on them, and
offered them to customers. The response was positive, so the plan moved forward. Had the response
been negative, Kao could have stopped the project
without incurring substantial costs.
This approach may require breaking ingrained
habits. The chief innovation officer of a highly
technical company I worked with observed that
the company would typically get “some guy in a
white lab coat” to do a technical feasibility study before deciding whether to enter a new product area.
Such studies are not only expensive—upward of
$200,000—but also relatively unindicative of business feasibility. So the innovation officer started
making mock-ups of potential new products and
showing them to prospective customers. In many
instances the company learned that nontechnical
issues, such as form factor, usability, and fit with
existing systems, would have prevented customers
from adopting a product. The difference in cost between the approaches was an order of magnitude:
A typical mock-up cost around $20,000. The difference in speed was also considerable: a few weeks
rather than nine to 12 months.
3M’s reputation for being failure tolerant took
a beating under former CEO Jim McNerney, a GE-
trained leader who sought to utilize Six Sigma quality practices throughout the company, even in its research labs. Although these worked wonders in 3M’s
factories, the emphasis on generating predictable
results hampered employees’ willingness to take
risks on unproven ideas. When George Buckley took
the reins as CEO, in 2005, part of his challenge was
to restore the culture of risk taking. He discontinued
the use of Six Sigma in the labs and spurred scientists
and researchers to pursue new ideas—provided that
the downside was small. During the recession, 3M’s
historical philosophy of “make a little, sell a little”
when introducing a new product was successfully
coupled with Buckley’s emphasis on bottom-of-the-pyramid innovations—inexpensive items that could
appeal to very broad markets.
Principle 5
Limit the uncertainty.
There isn’t much point to encouraging failure in an
arena your organization is already familiar with. But
experiencing it in an arena completely divorced from
your current capabilities won’t do you much good either: You probably won’t be able to use what you find
out, because you won’t understand the context and
you won’t know how to connect what you’ve learned
to your existing knowledge base.
Google, which is ordinarily very good at experimentation, went too far afield when it tried to launch
a non-internet radio venture. The company wanted
to automate the pricing of radio ads, as it had with internet ads. Radio stations would give Google a portion
(ideally all) of their ad inventory, and Google would
pit advertisers against one another to bid for the
spots. Problems emerged, however, because stations
were reluctant to give over control. Worse, the Google
ads went for less than those sold directly by the
…