Observations on Customer Experience Improvement


Just came across a very interesting article courtesy of McKinsey Quarterly‘s Classics mailing, discussing the need to optimize the service delivered to customers. While it focuses on consumer businesses, there are a few key learnings for enterprise technology support operations, demonstrated by this quote:

“Finding these savings requires rigor in customer experience analytics: […]. It also requires a willingness to question long-held internal beliefs reinforced through repetition by upper management. The executive in charge of the customer experience needs to have the courage to raise these questions, along with the instinct to look for ways to self-fund customer experience improvements.”

The McKinsey article identifies two main drivers of resistance to implementing effective improvements:

  • Organizational momentum and deeply held, but often misplaced, beliefs. Frequently these beliefs are shared across management levels, causing the company to operate with ineffective goals that remain unchallenged
  • Lack of both rigorous analysis and the ability to present its results convincingly and effectively

A future post will discuss analytics techniques. Here I’d like to focus on a few causes preventing organizations from progressing towards effective and efficient improvements, and combine a few of the blog’s previous posts for the benefit of new readers. First, we previously had a high-level discussion of several differences between consumer and enterprise support, namely the different roles we interact with and the many more opportunities for friction.

We also wrote about complexity and volume. This is a very useful distinction, touching on the operational differences between high-volume/low-complexity operations and low-volume/high-complexity ones, which is what most enterprise support operations are.

Having said that, we frequently find concepts and metrics borrowed from consumer businesses that are not beneficial for the unique challenges of supporting enterprise technology. It becomes our job to educate the organization and provide deep, actionable insights to help the company excel efficiently. We’ll cover that in future posts.

No Free Lunches, or How To Reduce Case Life


A few weeks ago I had an interesting discussion with a senior executive in an enterprise technology company. He started his career as a support engineer, and at that time one of his main measurable objectives was case life – the average elapsed time between the opening and closing of the cases he owned. His impression, at the time and even now, was that the only way he could impact that goal was to close cases prematurely. Presumably this was not the intended result of the goal.

That discussion opened the door to two questions: can case life be a valuable metric for managing support organizations, and if so, how can it be used in a meaningful and productive manner?

Case life, when we think of it, is an aggregate metric combining a variety of activities that take place throughout the life of a case, many of them more than once. Each activity therefore has a double impact on case life: first through its elapsed time, and second through repetition. Opportunities for improvement exist, in turn, both in reducing the time each element takes and in eliminating as many repetitions as possible.

When we examine the most common elements of a case life and analyze the influences on elapsed time and the drivers of repetition, we find that most of the time spent on a case consists of waiting for one of three activities to take place:

  • Producing problem documentation
  • Investigating the problem
  • Implementing a fix and verifying its effectiveness

We also know that for each of these activities we can reduce both the time it takes to perform and the number of iterations required to bring it to completion. For example, producing the documentation required to investigate a problem will be much faster and require fewer repetitions when the documentation is generated automatically the first time the problem occurs. Having to wait for the problem to recur and then rely on verbal instructions from the support engineer will invariably create errors and require repeated attempts to get right. But high-impact changes require higher investment in tools or in the product.
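To make the decomposition concrete, here is a minimal sketch with purely illustrative, made-up numbers: case life is roughly the sum, over activities, of the average wait per occurrence multiplied by how often that activity repeats.

# Purely illustrative numbers: average elapsed days per occurrence and
# average repetitions per case for each major activity.
activities = {
    "produce problem documentation": (2.0, 2.5),
    "investigate the problem": (3.0, 1.8),
    "implement and verify a fix": (4.0, 1.2),
}

# Case life is approximately the sum of (duration per occurrence x repetitions).
case_life = sum(days * reps for days, reps in activities.values())
print(f"Estimated case life: {case_life:.1f} days")

# Both levers matter: shortening an activity, or eliminating repetitions
# (e.g. documentation generated automatically on the first occurrence).
for name, (days, reps) in activities.items():
    print(f"{name}: {days * reps:.1f} days ({reps} repetitions x {days} days each)")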

If we map the options for reducing case life according to their complexity and anticipated impact, we’ll see something similar to the following chart, where the activities that impact case life the most are also those that require the largest investment and involve higher levels of the organization:

[Chart: case life elements mapped by complexity and anticipated impact]

Taking all this into account, we can conclude that the ability of the support engineer to influence case life is relatively small and depends on their ability to manage the customer interaction efficiently and effectively. The bigger impact, however, will be driven by higher investment and greater organizational focus on the various drivers.

Having reviewed this we can now answer the questions we initially posed:

  1. Is case life a valuable metric for the support organization?
  2. Should it be used to measure the support engineer?

The answer to the first question is a resounding yes. Case life, its composition, and the way it develops over time give us unique insight into the performance of the organization, help us gauge the success of past actions, and outline future development plans. On the other hand, measuring support engineers on case life is probably unproductive, and is likely to drive the behaviors discussed in the first paragraph. It is better to measure engineers on the specific elements each needs to improve, and especially those they can directly influence.

Autoupdate Anyone?


I recently wrote a post asking Are Your Technology and Policy Aligned? Today I came across this story about a German basketball team and the fallout when the laptop controlling their scoreboard decided to upgrade itself at the most inopportune time, raising the question of vendor responsibilities versus those of the customer.

There are several options vendors and customers can choose from. These range from fully automated to fully manual, with one or two additional options in between, such as alert only, or alert with automatic download and user-controlled install. How should vendors and customers navigate these options, and what criteria should be used for their decisions?

To answer, we need to understand the business and technology risks associated with the decision. For example, most of us would think that updating the anti-virus signature file on our PC is trivial and are happy to let that happen in the background. On the other hand, even this very low-risk update taking place at the same time as an extremely critical process would not be acceptable to some users. Vendors, therefore, have to offer their customers a variety of choices they can adapt to their needs. Even more importantly, vendors have to repeatedly educate their customers on the risks and benefits of their choices.
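As a way to visualize that spectrum, here is a minimal sketch; the policy names and the decision rule are hypothetical illustrations of the criteria discussed above, not any vendor’s actual API.

from enum import Enum

class UpdatePolicy(Enum):
    FULLY_AUTOMATIC = "download and install silently in the background"
    DOWNLOAD_AND_ASK = "download automatically, install only when the user approves"
    ALERT_ONLY = "notify the user; download and install remain manual"
    FULLY_MANUAL = "the user checks for, downloads, and installs updates"

def recommended_policy(update_risk: str, system_criticality: str) -> UpdatePolicy:
    """Illustrative rule of thumb: the riskier the update or the more critical
    the system, the more control should remain with the customer."""
    if system_criticality == "mission_critical":
        return UpdatePolicy.ALERT_ONLY if update_risk == "low" else UpdatePolicy.FULLY_MANUAL
    return UpdatePolicy.FULLY_AUTOMATIC if update_risk == "low" else UpdatePolicy.DOWNLOAD_AND_ASK

# A scoreboard laptop running a mission-critical application should not
# surprise anyone with a mid-game upgrade:
print(recommended_policy("low", "mission_critical").value)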

Customers, on the other hand, should understand their options, the impact of their choices, and the resulting risks they are taking. For example, relying on a single laptop to run a mission-critical application is probably not the smartest thing anyone can do. That same laptop could have failed for a variety of other reasons, from a hard drive crash to a power supply failure. The only reason this event received press coverage is the timing of the update.

Lastly, I mentioned this post to a friend, whose response was a link to this video:

Basic Concepts of Six Sigma


I recently came across a post titled Using Six Sigma to Improve Customer Experience and Service. As it touches on several topics close to my heart, I read it with great anticipation and, sadly, even greater disappointment.

Since I feel the author misses the basic concepts of six sigma and the many improvement opportunities it offers support and services organizations, I decided to attempt to correct some of the misconceptions and offer a different perspective on some of the points discussed.

First, and most obviously missing, is the fact that six sigma is an iterative improvement process. DMAIC, therefore, is circular, as shown in the chart attached to the original post, rather than a one-time linear activity at the end of which lies the six sigma nirvana of 3.4 defects per million.

Second, the statistical concepts behind six sigma are never mentioned, nor is the meaning of 3.4 defects per million opportunities (known as DPMO). Here is a brief explanation. Sigma is a measure of the spread of a normally distributed population, and the sigma level measures the number of standard deviations fitting within a certain range. That range, in turn, is the acceptable performance range, so performance within it is considered good, and performance outside of it is bad. To achieve one sigma, for example, about 68% of the population must fall within the acceptable performance range; for two sigma, about 95.5%, and so on, as shown below:

Sigma Value    Percentage Within Range
1              ~68.2%
2              ~95.5%
3              ~99.73%
4              99.993666%
5              99.9999426697%
6              99.9999998027%
The last item I’d like to touch on is the need to make quality improvement, six sigma or otherwise, an inclusive initiative. It should ensure each and every employee understands the improvement process and expected results, and is able to make contributions.

Obviously, it is not possible to increase sigma levels by going through the phases of DMAIC without transforming service delivery, and it is extremely unlikely that a single journey through DMAIC will take you to six sigma performance levels. However, the benefits of improvement, even from one sigma to two sigma, are enormous. So, the value of six sigma is in the journey and in creating a continuous improvement culture rather than in reaching the elusive ultimate destination.

How To Follow Up On Your Survey Results


Recently I have seen a number of discussions and posts about ways to follow up on NPS surveys. Strangely, the writers seem to focus on transforming NPS, which is a quantitative survey with a rigorous interpretation methodology, into a qualitative interview, where insights are gained from reading customers’ comments or from follow-up interviews that ‘drill down’ for the reasons behind the ratings. This accentuates the challenge of NPS results being non-actionable, and poses several additional problems; we must think about them and offer other methods to supplement the ratings.

First, let’s discuss the reasons for not relying on comments and interviews as the main source of extra insights from surveys:

  1. Sample size is always a concern for surveys. Comments are not always filled in by customers, so the sample size is further reduced
  2. The manual nature of interviews will inevitably make scalability and cost a concern, further reducing sample size
  3. With global operations, language and time zones may eliminate a portion of the customers due to your inability to conduct interviews or correctly interpret comments
  4. Confirmation bias, where interviewers and comment readers only account for responses that confirm existing concepts, may pose a significant threat to the success of the survey program
  5. Discrepancies between comments and actual drivers of dissatisfaction are well documented. Relying on comments alone will prevent you from confirming them against your metrics

Now, should you read survey comments, or interview customers who rate you poorly? Absolutely, but do not confuse that with your main source of insight. Comments and interviews provide illustration for the broader conclusions you derive from researching the details of the survey and correlating them with your operational and demographic information.

So, how should you go about analyzing the responses from your NPS survey in greater detail?

First, by all means, call your customers back to follow up on survey results. Call those who rate you poorly, as well as those who rate you well. But also correlate the results with demographic and operational data. For example, how does the score vary across regions or industries? Do customers using a certain feature or function rate you better or worse than others? Does your score vary based on support case count or case duration? How do events over time impact customers’ scores? Last, and equally important, remember that customers who do not respond have their reasons as well. Can you identify the factors that drive customers’ response rates? Does any of that indicate their propensity to renew, or to churn?

In conclusion, go ahead and experiment with your customer survey results and the drivers behind them. Do not assume that what works for others will necessarily work for you. If you have access to a person with statistics knowledge, seek their help in building a regression model that identifies the impact of each factor on customer satisfaction. If you don’t, there is still much you can do on your own to analyze the results, understand your company’s specific environment, and reach conclusions on what to improve next.
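For those with a bit of Python at hand, here is a minimal sketch of that kind of analysis. The file name and column names (score, region, case_count, avg_case_days, responded) are hypothetical placeholders for whatever your survey and CRM exports actually contain.

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical export: one row per surveyed customer, the survey rating joined
# with operational and demographic attributes.
responses = pd.read_csv("nps_responses.csv")

# How much does each factor move the rating?
score_model = smf.ols("score ~ C(region) + case_count + avg_case_days",
                      data=responses).fit()
print(score_model.summary())

# Non-responders have their reasons too: which factors predict whether a
# customer answered the survey at all?
response_model = smf.logit("responded ~ C(region) + case_count",
                           data=responses).fit()
print(response_model.params)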

Process Mining and Customer Support


Recent months have been busy investigating the Process Mining discipline and its potential applications in enterprise technology support. A relatively recent discipline, Process Mining uses advanced analytics techniques to analyze logs from various systems and identify process flows and various other parameters. The Process Mining Manifesto defines it as follows:

Process mining is a discipline positioned at the crossroads of computational intelligence, data mining, and process modeling and analysis. The idea of process mining is to discover, monitor and improve real processes by extracting knowledge from the event logs produced as part of on-going business activity.

At this point you may ask yourself what problem Process Mining will solve for you. To help answer this question I’ve created a short PowerPoint presentation:

Most process-driven organizations implement their processes, and the IT systems that support them, in a manner that creates a disconnect between the process model as designed and embedded in the organization and its IT systems on one side, and the ‘real life’ process as eventually executed by the organization on the other.

Implemented correctly, Process Mining helps us answer two questions: first, what is my organization actually doing? And second, how different is that from the prescribed process? We can then identify discrepancies and problems, isolate their root causes, and take action to change either the model or the way it is executed.

Customer support departments are very disciplined in recording all activities and interactions in a CRM system, which makes them prime candidates for Process Mining investigation.

How does process mining work? There are several commercial and open-source tools that will process your CRM’s log files and produce a chart of the executed process. Some will also let you filter for the most common paths, the longest process steps, and bottlenecks; a minimal sketch of the underlying idea follows below. Contact us for more details and further information.
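To give a feel for the underlying idea, here is a minimal sketch of the simplest form of process discovery: counting which activity directly follows which. The file and column names (crm_case_events.csv, case_id, activity, timestamp) are hypothetical, and dedicated process mining tools do considerably more than this.

from collections import Counter
import pandas as pd

# Hypothetical CRM export: one row per recorded activity on a support case.
events = pd.read_csv("crm_case_events.csv", parse_dates=["timestamp"])

transitions = Counter()
for _, case in events.sort_values("timestamp").groupby("case_id"):
    steps = case["activity"].tolist()
    transitions.update(zip(steps, steps[1:]))  # directly-follows pairs

# The most frequent transitions sketch the 'real life' process map,
# including loops and detours the designed process never intended.
for (src, dst), count in transitions.most_common(10):
    print(f"{src} -> {dst}: {count}")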

Are Your Technology and Policy Aligned?


An interesting discussion in the ASP Group on LinkedIn asked for ways to address customers who won’t upgrade their systems and keep requesting support. There are several valuable suggestions in that discussion, but I’d like to use this opportunity to address how vendor choices impact customers’ upgrade decisions.

There are two main types of choices vendors make that impact customers’ decisions. First are policy choices: for example, the number of supported releases and the options available to a customer on a non-supported release when encountering a problem. Second are engineering choices: the complexity of the upgrade process, the amount of work the customer needs to invest in the upgrade, and whether any equipment needs to be acquired. These decisions influence customers and their ability, and desire, to upgrade. The interaction between policy and technology choices will tend to drive customer choices and the resulting actions vendors can take.

The following matrix offers a quick visualization of the choices and their impact.

[Matrix: upgrade complexity vs. vendor support policy, quadrants I–IV]

If we think about these four options, we can identify some familiar categories and use cases:

  • Quadrant I is empty due to the fact that simple or automated upgrades make it easy for customers to upgrade, so vendors can get away with strict policies: sunsetting releases rapidly, reducing support levels for backward releases, and sometimes charging high maintenance fees.
  • Quadrant II is for situations where upgrades involve high expense and risk, and the vendors are willing to continue supporting older releases, either for customer retention or for the increased maintenance fees. The most famous example is Microsoft’s extended support for Windows XP.
  • Quadrant III represents cases where the software upgrade is simple and in many cases automated. A typical use case would be anti-virus software with auto-update capabilities and little effort required on the customers’ side.
  • Quadrant IV is where we find extremely complex technology deployments (think large-scale ERP systems), requiring extensive customization and subsequent testing and adaptation when upgrading. These upgrades are extremely expensive, time consuming, and very risk prone. Frequently customers would rather stay on their original release and avoid upgrading. Vendors who refuse to support their older releases, or who impose high costs for doing so, may find their customers defecting to third-party maintenance providers.

Understanding these categories, and the choices you make within them, will help you see how your company’s policy and technology choices impact customers’ upgrade decisions and your maintenance business.

What We Read This Week, 9 January 2015:

Who Provides Tech Support for the Internet of Things? – The writer examines the challenges associated with supporting the massively connected Internet of Things (HBR)

The Baffling Advances of Social Customer Service – The always excellent Esteban Kolsky on social customer service, and whether companies need to be where their customers are.

Customer Loyalty in the B2B World: What Does it Mean, and How to Achieve It – a marketing-oriented discussion on managing and increasing customer loyalty. The main point, IMO, is the need to identify the influencers, a point I made on this blog as well as in several online discussions concerning customer satisfaction (mycustomer.com)

What We Read This Week, New Year 2015:

Happy New Year Everybody!

Are Ideas Killing Our Organizations? – presents the well-known principle that it is better to test an idea than to fully develop it to perfection over a much longer period of time (Forbes)

False positives and false negatives in predicting customer lifetime value – Customer Lifetime Value has become very important as SaaS delivery gains prominence. Arie Goldshlager presents the potential errors and risks in this field. I recommend also visiting the article on which the post is based, here.

6 Tips For Managing Worldwide Offices – pretty self-explanatory title, with very useful tips for those of us managing multiple locations and cultures (InformationWeek)

“Your Product is a Piece of Sh#t” – on the HootSuite blog, by their CEO Ryan Holmes on the company’s response to customer criticism

See the Experience You Are Giving Customers – Service Blueprinting is a very useful way of visualizing service experiences. This post provides an excellent insight into the discipline (Center for Services Leadership).

What We Read This Week, Boxing Day Edition, 2014:

I hope everybody had an excellent Christmas. Below is our Enterprise Technology Support Management reading list for the week:

How to Prevent Experts from Hoarding Knowledge – An interesting, if somewhat obvious, perspective on losing knowledge as people retire (HBR)

Forrester’s Top Trends For Customer Service In 2015 – from the always excellent Kate Leggett at Forrester.

I Actually Enjoy Terrible Customer Service – And frankly speaking, so do I. An interesting discussion from the Kana blog.

Large Company Customer Experience Battles – discusses the differences between large enterprises and smaller companies as far as customer focus goes (Pivot Point Solutions).

Knowledge Management vs. Content Management – from David Kay.

Thank you all for following the blog, we’ll see you in the new year!