Monthly Archives: April 2015

Observations on Customer Experience Improvement


Just came across a very interesting article courtesy of McKinsey Quarterly’s Classics mailing, discussing the need to optimize the service delivered to customers. While it focuses on consumer businesses, there are a few key learnings for enterprise technology support operations, demonstrated by this quote:

“Finding these savings requires rigor in customer experience analytics: […]. It also requires a willingness to question long-held internal beliefs reinforced through repetition by upper management. The executive in charge of the customer experience needs to have the courage to raise these questions, along with the instinct to look for ways to self-fund customer experience improvements.”

The McKinsey article identifies two main drivers of resistance to implementing effective improvements:

  • Organizational momentum and deeply held, but often misplaced, beliefs. Frequently these beliefs are shared across management levels, causing the company to operate with ineffective goals that remain unchallenged.
  • A lack of both rigorous analysis and the ability to present the results convincingly and effectively.

A future post will discuss analytics techniques. Here I’d like to focus on a few of the causes that prevent organizations from progressing towards effective and efficient improvements, and to bring together a few of the blog’s previous posts for the benefit of new readers. First, we previously had a high-level discussion of several differences between consumer and enterprise support, namely the different roles we interact with and the many more opportunities for friction.

We also wrote about complexity and volume. This is a very useful distinction, touching on the operational differences between high-volume/low-complexity operations and low-volume/high-complexity ones, which is where most enterprise support operations fall.

Having said that, we frequently find concepts and metrics borrowed from consumer businesses that are not beneficial for the unique challenges of supporting enterprise technology. It becomes our job to educate the organization and provide deep, actionable insights that help the company excel efficiently. We’ll cover that in future posts.

No Free Lunches, or How To Reduce Case Life


A few weeks ago I had an interesting discussion with a senior executive at an enterprise technology company. He started his career as a support engineer, and at that time one of his main measurable objectives was case life – the average elapsed time between the opening and closing of the cases he owned. His impression, then and now, was that the only way he could influence that goal was to close cases prematurely. Presumably this was not the intended result of the goal.

That discussion opened the door for two questions: can case life be a valuable metric for managing support organizations, and if so, how can it be used in a meaningful and productive manner?

Case life, when we think about it, is an aggregate metric combining a variety of activities that take place throughout the life of a case, often more than once. Each activity therefore has a double impact on case life: first through its elapsed time, and second through repetition. This, in turn, means that opportunities for improvement exist both in reducing the time each element takes and in eliminating as many repetitions as possible.

When we examine the most common elements of a case’s life and analyze the influences on elapsed time and the drivers of repetition, we find that most of the time spent on a case consists of waiting for one of three activities to take place (a simple model of this decomposition follows the list):

  • Producing problem documentation
  • Investigating the problem
  • Implementing a fix and verifying its effectiveness
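
To make the decomposition concrete, here is a minimal sketch, in Python, of case life modeled as the elapsed time per occurrence multiplied by the number of repetitions, summed across the three activities. The durations and repetition counts are hypothetical illustrations, not data from any real support operation.

```python
# Toy model only: case life as the sum of (elapsed days per occurrence x
# repetitions) for each activity. All numbers below are hypothetical.

activities = {
    # activity: (elapsed days per occurrence, repetitions per case)
    "produce problem documentation": (3.0, 2),
    "investigate the problem":       (4.0, 3),
    "implement and verify a fix":    (5.0, 1),
}

case_life_days = sum(days * repeats for days, repeats in activities.values())
print(f"Modeled case life: {case_life_days:.1f} days")  # 3*2 + 4*3 + 5*1 = 23.0

# The two levers discussed in the post: shorten each occurrence, or remove
# repetitions. Cutting documentation to a single pass saves 3.0 days here.
```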

We also know that for each of these activities we can reduce both the time it takes to perform and the number of iterations required to bring it to completion. For example, producing the documentation required to investigate a problem will be much faster, and require fewer repetitions, when the documentation is generated automatically the first time the problem occurs. Having to wait for the problem to recur and then rely on verbal instructions from the support engineer will invariably introduce errors and require repeated attempts to get right. But high-impact changes require higher investment in tools or in the product.
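
As an illustration of what “generated automatically when the problem occurs” might look like, here is a hypothetical sketch (not taken from the article or from any specific product) that captures a diagnostic bundle on the first unhandled error, so the case can start with the documentation already in hand:

```python
# Hypothetical sketch: capture a diagnostic bundle automatically the first
# time an unhandled error occurs, rather than asking the customer to
# reproduce the problem later from verbal instructions. Fields and the
# output path are illustrative only.
import json
import platform
import sys
import time
import traceback


def write_diagnostic_bundle(exc_type, exc_value, exc_tb):
    """Collect context about the failure while it is still available."""
    bundle = {
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%S"),
        "platform": platform.platform(),
        "python": sys.version,
        "exception": "".join(
            traceback.format_exception(exc_type, exc_value, exc_tb)
        ),
        # A real product would also attach logs, configuration, and versions.
    }
    with open("diagnostic_bundle.json", "w") as f:
        json.dump(bundle, f, indent=2)
    # Hand off to the default handler so the error is still reported normally.
    sys.__excepthook__(exc_type, exc_value, exc_tb)


# Install the hook so the documentation exists after the first occurrence.
sys.excepthook = write_diagnostic_bundle
```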

If we map the options for reducing case life according to their complexity and anticipated impact, we’ll see something similar to the following chart, where the activities that impact case life the most are also those that require the largest investment and involve higher levels of the organization:

[Chart: Case Life Elements – options for reducing case life, mapped by required investment and impact]

Taking all this into account, we can conclude that the support engineer’s ability to influence case life is relatively small and depends on their ability to manage the customer interaction efficiently and effectively. But the bigger impact will be driven by higher investment and greater organizational focus on the various drivers.

Having reviewed this, we can now answer the questions we initially posed:

  1. Is case life a valuable metric for the support organization?
  2. Should it be used to measure the support engineer?

The answer to the first question is a resounding yes. Case life, the way it develops over time, and its composition give us unique insight into the performance of the organization, help us gauge the success of past actions, and help us outline future development plans. On the other hand, measuring support engineers on case life is probably unproductive, and is likely to drive the behaviors discussed in the first paragraph. It is better to measure engineers on the specific elements each needs to improve, and especially those they can directly influence.