Problem Solved. Now What?
A. Blanton Godfrey
We in quality management may have been working on the
wrong problem for years. We’ve focused on creating
and teaching better statistical methods. With advanced
software and popular methodologies such as Six Sigma, we
now teach these methods to millions of people in all kinds
of organizations. But most organizations still deal with
incredible numbers of problems: simple ones remain unsolved,
and the same ones pop up again and again.
Some years ago, a group of my quality management graduate
students at Columbia University worked on the distribution
problem for local delivery of The New York Times. The team
collected sales data on the Sunday Times from newsstands
during the year. Data analysis showed that sales were seasonal
and predictable. By calculating simple distributions from
the time of day the newsstands ran out of papers on both
high- and low-sales days, the students predicted how many
papers could be sold on those days without wasting copies.
The evidence was strong, but as far as I
know, no one ever used the results.
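The extrapolation behind their prediction is simple enough
to sketch. As a minimal illustration, assume buyers arrive
at a roughly steady rate over a 16-hour selling day, so a
stand that sells out after t hours saw demand of about
stock × 16/t; the stock figures, sell-out times and
one-standard-deviation buffer below are hypothetical, not
the students' data.

    # Minimal sketch with hypothetical data: extrapolate Sunday demand
    # from sell-out times, assuming a steady arrival rate of buyers.
    import statistics

    DAY_HOURS = 16  # assumed length of the selling day

    # (papers stocked, hours until the stand sold out), one pair per Sunday
    sellouts = [(300, 10), (300, 12), (350, 14), (350, 11)]

    # A stand that sells out after t hours saw demand of about stock * day/t.
    demand = [stock * DAY_HOURS / hours for stock, hours in sellouts]

    # Stocking near the high end of estimated demand trims both
    # sell-outs and wasted copies.
    order = statistics.mean(demand) + statistics.stdev(demand)
    print(f"Suggested Sunday order: about {order:.0f} copies")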
Another group analyzed checkout times at a local grocery
store. The root cause for long lines became obvious when
the data were analyzed. Clerks new to the cash registers
took far longer than those with more experience. Throughput
could actually improve with fewer registers and fewer clerks,
as long as experienced clerks staffed them. However, the
store's high turnover meant that most registers were staffed
with inexperienced employees. The reason for the high turnover
was also easy to discover: low wages. The math was simple:
Increase wages, lower turnover, reduce staff and increase
profits. It seemed so obvious.
But the store manager was unimpressed by the numbers. His
supervisors were more concerned with keeping salaries low,
so he was, too; improving profitability by raising wages
wasn’t important to them.
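The arithmetic the students put in front of him can be
sketched the same way. All the rates, head counts and wages
here are hypothetical; the point is simply that fewer
registers staffed by experienced clerks can move more
customers for less payroll.

    # Hypothetical figures only: fewer registers with experienced clerks
    # versus more registers staffed mostly by new hires.
    NEW_RATE, EXP_RATE = 12, 25      # customers served per clerk per hour
    NEW_WAGE, EXP_WAGE = 9.0, 12.0   # hourly wages; higher pay cuts turnover

    def staffing(n_new, n_exp):
        customers = n_new * NEW_RATE + n_exp * EXP_RATE
        payroll = n_new * NEW_WAGE + n_exp * EXP_WAGE
        return customers, payroll

    # High turnover: 8 registers, mostly inexperienced clerks.
    print(staffing(6, 2))   # (122 customers/hour, $78/hour payroll)
    # Raise wages, keep clerks: 6 registers, all experienced.
    print(staffing(0, 6))   # (150 customers/hour, $72/hour payroll)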
Time after time in consulting projects, I’ve seen
the same thing happen. Analysis leads to a clear solution,
but the solution is never implemented. Something in the
company culture prevents people from facing, or perhaps
using, the facts. Instead, we continue doing things the
way we always have.
Some years ago, I worked with an organization that had 21
service sites. It desperately needed to improve throughput
because it had far more demand than capacity. The organization
experimented with one pilot site and increased the number
of customers served per day by more than 20 percent. Problems
arose, however, when it tried to replicate these results
across the other 20 sites. Managers presented numerous
reasons why the same changes wouldn’t work at their
sites. They questioned the data and the manner in which
the study was done. Ultimately, the company never achieved
the same results at any of its other sites.
A similar stalemate occurred in a different service company,
this one with 250 sites across North America. One improvement
team reduced a major problem common to all sites by nearly
80 percent, saving more than $300,000 at its site alone.
The CEO, seeing a potential $75 million jump in profits
companywide ($300,000 multiplied across 250 sites), demanded that
the other sites implement the improvement as well. Nothing
much happened. Some sites made some progress, but most
found many reasons why the solution wouldn’t work
for them.
As quality professionals, we’ve been teaching problem
solving for years, but perhaps we should have spent more
time teaching solution implementation. Too often we take
the easy way out and blame management for failing to implement
profitable solutions. It’s easy to cite 10 or 20
reasons these ideas won’t work in our organizations,
but it’s much harder to discover 10 or 20 ways to
make them work.
In both of these organizations, the improvement projects
were carried out almost secretly. Management assumed that
if positive results were achieved at one site, all the
others would want them as well. This wasn’t true,
of course. Site managers didn’t believe the results.
Their measurements were different, and they couldn’t
translate the improvements to their sites. Senior leaders
were supportive but couldn’t address these objections.
They didn't realize that each site manager would also have
to understand the methods, tools and processes by which
the results could be achieved. People from each
site needed to learn from the pilot sites and modify the
approach to fit their sites. They needed the same support
the pilot site managers had provided their teams.
The real key to implementing improvements across an
organization isn't merely leadership but also widespread
active involvement. This requires knowing when to provide
the right support at the right time. In both examples,
the organizations should have created steering teams with
representatives from every site. These teams should have
agreed on the improvement plan, on which measurements to
make and on how both would be adapted to fit each site. The improvement
processes across the sites should have been standardized
to make comparisons easy. Everyone involved would have
to know that differences would crop up from one site to
the next. But each site would have been able to achieve
similar results.
A. Blanton Godfrey is dean and Joseph D. Moore Professor
of North Carolina State University’s College of Textiles.
He’s co-editor of Juran’s Quality Handbook,
fifth edition (McGraw-Hill, 2003), and co-author of Modern
Methods for Quality Control and Improvement, second edition (John
Wiley & Sons, 2001).