Monday, March 21, 2011

The seven deadly sins of Voice of the Customer (VOC) research

Drawing on an excerpt from Faster, Cheaper, Better (Hammer and Hershman, 2010) on the "seven (deadly) sins of corporate measurement," Howard Lax offers examples of how each sin has undercut VOC programs.

Vanity
Picking metrics that are easy to hit and that make managers look good.

Unhappy with the percentage of top scores they receive, companies often treat any non-negative rating as a positive indication of loyalty, or use scales that make anything short of the most egregious service failure look like success. This may make the dashboard results look better, but the illusion of excellence isn't excellence.
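A quick sketch of how much the headline number depends on where you draw the line. The ratings distribution below is entirely hypothetical, just to illustrate the gap between a strict top-box definition and a lenient "anything non-negative" one:

# Hypothetical distribution of 100 ratings on a 1-10 scale (illustrative only)
ratings = [10]*8 + [9]*12 + [8]*20 + [7]*25 + [6]*15 + [5]*10 + [4]*5 + [3]*3 + [2]*1 + [1]*1

top_box = sum(r >= 9 for r in ratings) / len(ratings)        # strict: top scores only
non_negative = sum(r >= 5 for r in ratings) / len(ratings)    # lenient: anything not negative

print(f"Top-box (9-10):    {top_box:.0%}")       # 20%
print(f"Non-negative (5+): {non_negative:.0%}")  # 90%

Same customers, same data; only the cutoff changed, and the dashboard went from sobering to flattering.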

Provincialism
Asking customers questions along organizational lines or using internal jargon that has no meaning to them.

Operational definitions may seem mundane, but they make more sense to respondents than expecting them to appreciate nuanced differences between service management and service delivery, or between tellers and platform personnel.

Narcissism
Measuring from the company's perspective rather than from the customer's.

Companies often insist that customers are wrong about timing measures, when the underlying issue is different starting points. The firm tracks the time to resolve a problem from the point when a service tech contacts the customer or opens a service order. OK, but customers begin marking time when they first call or log the issue (or even from the first moment they experience a problem). Of course, perceived time - even if inaccurate - is ultimately what matters to customers anyway (hence the Disney "magic" of turning wait time into part of the experience).
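To make the starting-point gap concrete, here is a minimal sketch with made-up timestamps (all values are assumptions, not from the excerpt):

from datetime import datetime

# Hypothetical timestamps for a single service incident (illustrative only)
customer_first_call  = datetime(2011, 3, 14, 9, 0)    # when the customer starts the clock
service_order_opened = datetime(2011, 3, 14, 15, 30)  # when the firm starts the clock
problem_resolved     = datetime(2011, 3, 15, 11, 0)

firm_view     = problem_resolved - service_order_opened  # 19:30:00
customer_view = problem_resolved - customer_first_call   # 1 day, 2:00:00

print(f"Resolution time, firm's clock:     {firm_view}")
print(f"Resolution time, customer's clock: {customer_view}")

Both numbers are "right"; they simply answer different questions, and only the customer's clock reflects the experience being judged.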

Laziness
Assuming that the company knows what matters to customers better than customers do.

Example: a major mortgage investment player insisted on evaluating its performance against the criteria it "knew" should matter most to customers. The company postulated a corporate advantage number that did not reflect what was actually most important to customers. Not surprisingly, the advantage number had little to do with customer loyalty or satisfaction with the firm.

Pettiness
Taking too narrow a view of a larger issue.

Asking customers about the geographic footprint of their cell service, for example, scarcely captures their sense of the quality and reliability of the service.

Inanity
Losing sight of the consequences of measurement.

If you measure and highlight the number of rooms housekeeping cleans in an hour or the number of calls the call center turns over in a shift, don't be surprised if the numbers you are tracking improve while the guest or customer experience deteriorates.

Frivolity
Failure to take measurement seriously.

This category is where many concerns arise about mismeasurement and poorly designed VOC research, including social media and text analysis. The who (sample or population), what (content), when (timing), how (mode of data collection) and why (type of analysis) of measurement need to be clearly understood and driven by business objectives. (Note: "We need to do a survey" is not a business objective.) These aren't simply technical issues for the data wonks; these are the critical parameters that determine the application and utility of the results. In other words, these are the issues that guard against the garbage-in part of the GIGO problem.

Who is not an existential question. Rather, it is the practical issue of which people (or households, or companies, etc.) are included in the data and the underlying key question: what larger population is the data representative of or projectable to? Is it representative of all customers? Online users only? Only those who post comments online? Customers who came into the store, made a purchase and paid with a store-branded credit card? Or (gulp), do you have no idea how the who is defined?

The what - content - may seem easy, but how you ask what you ask is anything but. Are respondents answering the questions you intended to ask in a consistent, reliable manner? Do you have the right breadth and depth of inquiry?

The when often is ignored, but time of day, week, month or year can have a significant impact on the customer experience, as well as on the response rate. This becomes particularly important when it comes to trending.

While you might not have many options with regard to the how of data collection, there is a mode effect (i.e., how you collect the data will affect the data). The mode of data collection will also affect how much you can ask, what you can ask and how you should ask. These issues often bump up against practical concerns about survey real estate and the need to limit the length of the questionnaire.

Ultimately, the why is all about application. How do you plan to analyze and use the data? This is where the research meets the business objectives. In a well-conceptualized engagement, the why is specified up front and determines many of the who, what, when and how issues.

Doom the results
Failure to properly attend to these five measurement parameters will doom the results to the domain of the frivolous, in Hammer and Hershman's terms. In other words, lack of attention to these factors will lead to mismeasurement of VOC and mismanagement of your efforts to improve customer loyalty and the customer experience.


