Article 26

SURVIVING Innovation

Common testing mistakes can derail a promising new product launch.

By Kevin J. Clancy and Peter C. Krieg

Watching the sports coverage on Monday after Sunday’s loss, any knowledgeable fan can spot the quarterback’s mistakes and see what he might have done to exploit the other team’s weaknesses. In the same way, once a new product has failed, any informed marketing executive usually can spot the company’s slip-ups.

Take Cottonelle Fresh Rollwipes, “America’s first and only disposable, pre-moistened wipe on a roll.” Kimberly-Clark announced this “breakthrough product” in January 2001. According to the corporation’s press release, “It is the most significant category innovation since toilet paper first appeared in roll form in 1890.” Cottonelle Fresh Rollwipes “deliver the cleaning and freshening of pre-moistened wipes with the convenience and disposability of toilet paper.”

The corporation was certain a market existed. Its market research had established that 63% of Americans had experimented with a wet “cleansing method” and that one out of four uses a moist wipe daily. Most of these people are using a baby wipe, but baby wipes usually can’t go into a septic system. Kimberly-Clark had been selling tubs containing sheets of “disposable” moistened toilet paper for years, and the growth of this product convinced the company to invest research money into finding a more convenient delivery mechanism.

It came up with a refillable beige plastic dispenser that clips to the standard toilet paper spindle and holds both a roll of dry toilet paper and the wet Fresh Rollwipes. The dispenser with four starter rolls was to sell for $8.99; a refill pack of four rolls was $3.99. Kimberly-Clark, which says it spent $100 million on research for the project, protected the product and dispenser with 30 patents.

EXECUTIVE BRIEFING

New product and service introductions have a startlingly high rate of failure, largely because they weren’t tested properly before launch. Even a great new product or service can fail if marketers don’t do their homework first. Avoiding some common mistakes in testing concepts and marketing plans can help companies make sure their products enjoy a long and healthy life in the marketplace.

In making the January 2001 announcement, the corporation said that the U.S. toilet paper market was $4.8 billion a year. Kimberly-Clark expected first-year Fresh Rollwipes sales to reach at least $150 million, and $500 million within six years. Even better, those sales would expand the total U.S. toilet paper market, because moistened toilet paper tends to supplement dry toilet paper rather than replace it.

But by October, 10 months later, Kimberly-Clark was blaming the economy for poor sales. According to Information Resources Inc., Cottonelle Fresh Rollwipes sales were about one-third of the forecasted $150 million. In our experience, however, if consumers are interested in a product, a weak economy won’t depress sales so badly. Something had to be wrong with Kimberly-Clark’s marketing, and in April 2002, The Wall Street Journal reported some of the corporation’s missteps.

Start with the premature announcement. Although Kimberly-Clark rolled out the publicity and advertising in January, it was not ready to ship the product to stores until July. This is like heating a home in the summer and expecting it to stay warm in winter. The late arrival of manufacturing equipment may have been responsible for “a good part” of the delay, but by July, most shoppers had forgotten about the hype. “I know I’ve heard something about it. I can’t recall if it was a commercial or a comedian making fun of it,” said Rob Almond, who purchases paper goods as director of housekeeping services for Richfield Nursing Center in Salem, Va., one of the markets where Rollwipes are available.

Then there was the ineffective advertising. Granted, Kimberly-Clark was trying to promote the advantages of a product that few people can talk about without embarrassment, but it never explained to consumers what the product does in its advertising and promotions. The ads, which cost $35 million, carried the slogan “sometimes wetter is better” and featured shots from behind of people splashing in water. A print ad was an extreme close-up of a sumo wrestler’s behind. Analysts criticized the ads for not clearly explaining the product—or helping create demand. Also, Kimberly-Clark ran the ads nationally when (a) the product was not available and (b) it was finally available, but only in certain Southern markets.

The Cost of Failure

While Monday morning quarterbacking can be instructive for marketers, it certainly hasn’t helped improve anyone’s chances of launching a successful new product or service, or prevented the waste of millions—even billions—of dollars on failing efforts. Industry speakers generally say 80% to 90% of new products fail, and a recent Nielsen BASES and Ernst & Young study put the failure rate of new U.S. consumer products at 95% and new European consumer products at 90%. Based on research at Copernicus, we believe that no more than 10% of all new products or services are successful—that is, still on the market and profitable after three years. This is true for consumer packaged goods, financial services, pharmaceuticals, consumer durables, telecommunications services, Hollywood movies, and more.

And failure costs companies a tremendous amount of money. One way to calculate how much is to take the average cost to develop and introduce a new product and multiply it by the number of failures. Edward F. Ogiba, president of Group EFO Limited, a new product development consulting company, has written that the cost of introducing a national brand can easily exceed $20 million. “The going budget for creative development and market research alone now reportedly exceeds $500,000 per project, a 150% increase in two years.”
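The arithmetic described above (average launch cost times the number of failures) can be sketched in a few lines. The $20 million launch cost and the roughly 90% failure rate are the article's figures; the number of annual launches is a purely hypothetical input chosen for illustration.

```python
# Illustrative cost-of-failure arithmetic. The launch count is hypothetical;
# the cost and failure-rate figures are the ones cited in the article.
LAUNCH_COST = 20_000_000   # Ogiba's floor for a national brand introduction
FAILURE_RATE = 0.90        # roughly the failure rate cited in the article
NUM_LAUNCHES = 1_000       # hypothetical number of national launches per year

failures = round(NUM_LAUNCHES * FAILURE_RATE)
wasted = failures * LAUNCH_COST
print(f"{failures} failures burn ${wasted / 1e9:.1f} billion")
# → 900 failures burn $18.0 billion
```

Even under these rough assumptions, the waste lands in the same multi-billion-dollar range the authors cite below.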

No more than 10% of all new products or services are successful— that is, still on the market and profitable after three years.

The editor of New Product News has written, “it probably costs $100 million to introduce a truly new soft drink nationally, and it costs $10,000 to introduce a new flavor of ice cream in Minneapolis. Somewhere in between is a worthless ‘average’ cost to introduce a new product.” He added that, with the help of media experts, the magazine built an “ideal” national introductory advertising/consumer promotion/trade promotion marketing plan for a major new product. The total expenditure was roughly $54 million and did not include product development, package design, sales force contribution, or brand management costs.

If there was ever a product suitable for a sampling campaign, this was it. But Rollwipes weren’t produced in small trial sizes, which meant no free samples. Instead, Kimberly-Clark scheduled a mobile restroom tour of the Southeast, stopping at public places in mid-September 2001 so people could try out the product. It was just bad luck that the road trip got canceled after September 11.

Product design posed yet another problem. Unlike the wipes in boxes that bashful consumers can hide under the bathroom sink, Rollwipes’ beige plastic dispenser is immediately visible. The dispenser clips onto the spindle of the regular toilet tissue but is about the size of two rolls stacked on top of each other. Not every bathroom has space for the dispenser, and not every consumer wants beige plastic. As Tom Vierhile, president of Marketing Intelligence Service, a firm that tracks new product introductions, told The Wall Street Journal, “You do not want to have to ask someone to redecorate their bathroom.”

A year and a half after Kimberly-Clark’s big announcement, Fresh Rollwipes were in one regional market, and executives said sales were so weak they were not financially material.

A marketing research manager at the Best Foods Division of CPC International recently told us that, based on what he believed to be several conservative assumptions, the total marketing dollars spent by manufacturers on failed new food products in a single year ranged from $9 billion to $14 billion, a figure we believe understates reality. Now compare these estimates of development and launch costs with a stunning analysis of SAMI/Burke data done a few years ago, which found that fewer than 200 of the thousands of products introduced in a 10-year period had more than $15 million in sales, and only a handful produced more than $100 million in sales. Very few new products and services are producing a positive return on investment, let alone one large enough to justify the allocation of limited time and resources in the first place.

But it doesn’t have to be this way. The key to improving the odds of launching successful new products and services is to avoid making critical marketing mistakes in the first place. Doing this requires new ways of creating new product ideas, testing new concepts, testing product and service formulations in terms of consumer acceptance, testing advertising and promotions, and finally testing entire marketing plans. In this article, we will focus on two of the possible reasons for failure: concept testing that falls short and breakdowns in marketing plans.

Concept Testing Falls Short

New ideas for products and services come from everywhere. They fly into people’s heads in the middle of the night. Or the company has the opportunity to license or buy the rights to something from another country (e.g., Häagen-Dazs has had enormous success with dulce de leche ice cream, which began life in South America; Clorox, with Brita water purification systems that originated in Europe; and Red Bull, with an energy drink from Thailand). Or the firm has actually generated some new product ideas through one-on-one interviews, focus groups, or customer observations followed by intensive brainstorming sessions.

At some point in the process, the idea turns into a concept, often described in one or two paragraphs, sometimes with a name and price. And if the marketer is smart, this concept is tested among a cross-section of prospective buyers in the category. After all, why waste precious time and dollars to develop a product or service if customers don’t want to buy it?

Concept testing the way it is most frequently done today, however, is plagued with problems. Almost every marketer has done one (if not hundreds) of these tests, yet such tests often raise as many questions as they answer. We hear marketing executives ask questions like, “Is 14% in the top box [‘definitely will buy’] a good score?” Or they say, “We studied three pricing variations. How could they all get 10% in the top box? Is there a fourth variation we should offer?” Or, “If we change the price (or the formulation or the packaging), how much would trial increase? Would it go to 30%?”

Traditional concept tests are fraught with problems, which we list here:

Sample limitations. Marketers contract with research companies that generally employ small (75 to 150), non-projectable groups of men and women wandering through shopping malls and willing to answer questions for the research. Further, they tend to use only about three non-representative malls and markets for a given study.
Data collection. Concept tests by phone and via the Web also have drawbacks. On the phone, an interviewer reads a description of the concept to respondents. Because respondents are listening rather than reading, and because people forget the first sentence by the time they hear the second, researchers must shorten the concept and distill it down to its bare bones, sometimes as short as a single sentence. The distilled concept is then coupled with rating scales that typically have fewer points (i.e., less discrimination) than would be employed in a personal interview or Web-based survey. On the phone, the respondent has to remember a rating scale and give this stranger a number.

Though it allows more visual capabilities, the Web can be just as dangerous as the phone. Many marketers don’t realize there are two broad methods of Internet data collection: databases and panels. According to Greg McMahon, a senior vice president at international market research firm Synovate, making this distinction between the methods is critical: “Only research firms with true Internet panels maintain detailed demographic (and other) information about their panel members and balance their samples so they match U.S. Census statistics. Without this balance, there is a very high risk survey results will not measure what they are supposed to and lack study-to-study consistency.” Also, the average response rate to a Web survey is less than 1%, making it even less likely to get a reliable read on the potential of a new product.

Alternative possibilities. Few marketers are able to efficiently ask “What if?” questions concerning variations in a concept’s features and benefits. For example, an insurance company considering a concept for a new car insurance product might wonder, “What if we provided preferred access to a car repair shop? What if we take care of the whole registration process for a small fee? For free?”

Ignorance of costs. In our experience, marketing managers seldom know the fixed and variable manufacturing and marketing costs of a new product or service, and they certainly never pass this kind of information along to their researchers. But without knowing costs, a manager cannot estimate profitability.

Limited models. Finally, few research companies offer a valid model of the marketing mix into which they can feed concept scores to predict sales and profitability. Researchers present concept scores to marketers as if they were discrete pieces of information in themselves: “This one got a 33% top-two-box score, beating the control concept by almost two to one.” That’s nice, but will it sell? And if it sells, will it be profitable? Blank stares from the researcher.

The Quick Fix

The good news is that companies can overcome most of these problems with traditional concept testing. We suggest some modifications to the process:

Larger samples. Instruct researchers to use a larger, more projectable sample of prospective buyers (300 to 500) in more locations than is traditionally used. These people should be serious respondents, recruited via random-digit dialing and then brought to a central location—not the first warm bodies willing to stand still in a shopping mall.

Balance phone and Web data collection. Mail a concept description and scoring scale to a respondent before the phone call. Do some one-on-one interviews to balance the results from a Web survey. Check to make sure the Internet research firm uses a panel.

Full descriptions. Expose the sample to the big idea—a full description of the concept, complete with the name, positioning, packaging, features, and price (in our experience, we’ve consistently been surprised by how many concept tests ignore price). Present the concept in its competitive context, that is, with competing products sold in the market at their actual prices. The more a test mirrors reality, the more accurate the forecast. Even so, most concept testing ignores the competitive frame.

Measure purchase probability. Have consumers rate the concept in terms of purchase probability using a scale superior to traditional 3-, 5-, or 7-point purchase intention scales for predicting likely market response. We’ve discovered through extensive experimentation that an 11-point scale better predicts real-world behavior, especially for mixed and high-involvement decisions.

Yet even this 11-point scale overstates the actual purchasing that takes place. People don’t do exactly what they say. For one thing, the research environment assumes 100% awareness and 100% distribution—in other words, everyone is aware of the product and able to find it easily—which never happens in the real world. Even taking this into consideration, the overstatement problem remains. We have closely examined the relationship between people’s reports on the 11-point scale and actual buyer behavior (among people who were aware of the product or service and for whom it was available to be purchased) for numerous companies in consumer and B2B categories. As Exhibit 2 indicates, usually no more than 75% of the people who claim they definitely will buy actually do so. This figure declines as self-reported purchase probability declines, but the ratio is not constant. This leads to a set of adjustments for each level of self-report, which converts questionnaire ratings into estimates of likely behavior.

Usually no more than 75% of the people who claim they definitely will buy actually do so.

These adjustments, as an aside, vary by the consumer’s (or industrial buyer’s) level of “involvement” in a category. The higher their involvement, the more faith we can have in what people say and the lower the need for overstatement adjustment. Needless to say, by taking purchase probabilities and involvement into account, it’s possible to produce a reasonably valid estimate of actual sales (i.e., the percentage of consumers who would buy the product at least once).

While the traditional methods of concept testing remain the most common way to investigate new product concepts, they do not ipso facto represent the best approach. The goal of traditional tests is to find the concept that produces the highest level of buyer appeal, but they often fail to address what a company really needs. Computer-aided new product/service design is an alternative to traditional methods. It begins with a modified multiple trade-off analysis using either conjoint measurement or choice modeling methodologies. These approaches offer several advanced features: they predict real-world behavior and sales for a constellation of alternative concepts; use a nonlinear optimization algorithm to identify the most profitable concepts; allow the marketer to play out “what if” scenarios; and offer targeting and positioning guidance.
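The adjustment logic described above (scaling each self-reported probability level by an observed conversion ratio) can be sketched numerically. Only the top-box figure of roughly 75% comes from the article; the response distribution and the remaining conversion weights below are hypothetical illustrations, not the authors' actual calibration.

```python
# Sketch: convert self-reported purchase probabilities on an 11-point scale
# (0 = "definitely will not buy" ... 10 = "definitely will buy") into an
# estimate of actual trial. All numbers except the 0.75 top-box conversion
# are hypothetical.

# Share of respondents at each scale point (hypothetical survey result).
responses = {10: 0.14, 9: 0.10, 8: 0.12, 7: 0.10, 6: 0.08,
             5: 0.10, 4: 0.08, 3: 0.08, 2: 0.06, 1: 0.06, 0: 0.08}

# Fraction at each level who actually buy. Only the top value (0.75)
# reflects the article's observation; the rest are illustrative guesses.
conversion = {10: 0.75, 9: 0.50, 8: 0.35, 7: 0.25, 6: 0.15,
              5: 0.10, 4: 0.05, 3: 0.03, 2: 0.01, 1: 0.0, 0: 0.0}

estimated_trial = sum(responses[k] * conversion[k] for k in responses)
print(f"Estimated trial: {estimated_trial:.1%}")  # → Estimated trial: 25.1%
```

Note how the weighted estimate (about 25%) is far below the 36% of respondents in the top three boxes, which is the overstatement the adjustments are meant to remove. A real calibration would also vary the weights by category involvement, as the authors note.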

The Trouble With Test Markets

Once the concept is selected, perfected, and ready to go, the next step for most marketers is a test market, essentially a small-scale dry run with the new product or service and its marketing campaign. But like traditional concept testing methods, traditional test markets are also fraught with problems. Often the company selects a test market because it’s easy to manage or because a retailer in the market will cooperate. It often fails to select the market that best represents the target the company wants to reach.

Traditional test markets have four major defects. First of all, they’re expensive. They can cost as little as $3 million, but typically run more. Costs include the research, the media, and the effort throughout the organization to control and check the test. Second, they take a long time. Waiting a year, 18 months, or two years for results is simply not feasible in today’s competitive environment.

A third problem with traditional test markets is that competitors often have the opportunity to sabotage results. Even modest efforts by competitors can spoil the company’s ability to read the test market outcome. Competitors have undermined tests by having their salespeople pull the new products off retail shelves, turn them sideways, or move them to other shelves where shoppers will not notice them. Meanwhile, these same competitors scramble to devise a similar new product or service to counter a national introduction.

Finally, traditional test markets usually don’t tell marketers what they need to know. While a product failing in a test market is not as painful as one failing in a national rollout, it’s often difficult to determine why it failed. Was it a problem with the way the company executed the idea, or was the idea simply too small? Was the problem with the marketing program or with the competitive response? What part of the marketing program wasn’t working? Could the company have done something to turn a modest failure into a roaring success? Conversely, if the test was successful, is there anything that could have made the product or service even more profitable?

A simulated test market (STM) is both an alternative and a complement to traditional methods. Today’s better simulated test marketing systems capture every important component in the marketing mix and assess the effect of any plan on product awareness, trial, repeat rates, market share, profitability, and more. These STMs test any plan the marketer wants to consider—even a competitor’s. The marketer enters plan details into a PC program, and the model forecasts what is likely to occur month by month in the real world.

Do Something Different

If all you do is what you’ve done, then all you’ll get is what you’ve got. Most companies haven’t gotten much in terms of new product/service success in the past two decades. Companies need to do something different. A growing number of companies embrace the sentiments of Harvard economists John McArthur and Jeffrey Sachs, who said, “Innovation is no mere vanity plate on the nation’s economic engine. It trumps capital accumulation and allocation of resources as the most important contributor to growth.”

The time is now for companies to take advantage of computer-aided design technologies and simulated test marketing to improve the performance of marketing programs for new products and services. The technologies can help marketers find the best product and service concept and discover the marketing plan, within or without a given budget, that will stimulate demand and grow the bottom line.
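To make the STM idea described above concrete, here is a toy month-by-month awareness-trial-repeat calculation. Every parameter is a hypothetical stand-in: real STM systems model far more (media weight, promotion, distribution build, repeat purchase cycles) and calibrate against actual data.

```python
# Toy month-by-month forecast of the sort an STM produces.
# Every parameter here is a hypothetical illustration, not a real model.
AWARENESS_CAP = 0.60      # peak awareness the assumed ad plan can buy
BUILD_RATE = 0.30         # fraction of the remaining gap closed per month
TRIAL_GIVEN_AWARE = 0.20  # share of aware, reached shoppers who try once
REPEAT_RATE = 0.40        # share of triers who become repeat buyers
DISTRIBUTION = 0.70       # share of stores carrying the product

awareness = 0.0
for month in range(1, 13):
    # Awareness climbs toward its cap, fastest in the early months.
    awareness += (AWARENESS_CAP - awareness) * BUILD_RATE
    trial = awareness * DISTRIBUTION * TRIAL_GIVEN_AWARE  # cumulative trial
    repeaters = trial * REPEAT_RATE
    print(f"Month {month:2d}: awareness {awareness:.1%}, "
          f"trial {trial:.1%}, repeaters {repeaters:.1%}")
```

Even this crude sketch shows why premature national advertising wastes money: awareness built in month one decays into forgotten hype if the product (here, the DISTRIBUTION term) is not on the shelf to convert it into trial.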

About the Authors Kevin J. Clancy is chairman and CEO and Peter C. Krieg is president and COO of Copernicus Marketing Consulting and Research. They are currently working on a new book on technologies for improving new product/service success rates. Clancy may be reached at [email protected] and Krieg may be reached at [email protected].

From Marketing Management, March/April 2003, pp. 14-20. © 2003 by the American Marketing Association. Reprinted by permission.

