Monthly Archives: March 2015

Autoupdate Anyone?


I recently wrote a post asking Are Your Technology and Policy Aligned? Today I came across this story about a German basketball team and the fallout when the laptop controlling their scoreboard decided to upgrade itself at the most inopportune time, raising the question of vendor responsibilities versus those of the customer.

There are several update options vendors and customers can choose from. These range from fully automated to fully manual, with one or two mid-range options such as alert only, or alert with automatic download and user-controlled install. How should vendors and customers navigate these options, and what criteria should drive their decisions?

To answer, we need to understand the business and technology risks associated with the decision. For example, most of us would think that updating the anti-virus signature file on our PC is trivial and are happy to let that happen in the background. On the other hand, even this very low-risk update taking place at the same time as an extremely critical process would not be acceptable to some users. Vendors, therefore, have to offer their customers a variety of choices they can adapt to their needs. Even more importantly, vendors have to repeatedly educate their customers on the risks and benefits of their choices.
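To make the spectrum of choices concrete, here is a minimal sketch of how an updater might model them. The policy names and the maintenance-window check are hypothetical illustrations, not any particular vendor's implementation:

```python
# A hypothetical sketch of update policies and a guard against applying
# updates during a user-defined critical window. Names are illustrative.
from datetime import datetime, time
from enum import Enum


class UpdatePolicy(Enum):
    FULLY_AUTOMATIC = "install updates without asking"
    ALERT_ONLY = "notify the user, take no further action"
    DOWNLOAD_THEN_ASK = "download automatically, user controls the install"
    FULLY_MANUAL = "user checks, downloads and installs"


def may_install_now(policy, blackout_start, blackout_end, now=None):
    """Allow unattended installation only when the policy permits it and
    we are outside the customer's critical (blackout) window."""
    now = now or datetime.now()
    in_blackout = blackout_start <= now.time() <= blackout_end
    return policy is UpdatePolicy.FULLY_AUTOMATIC and not in_blackout


# Example: never auto-install between 18:00 and 23:00, i.e. around game time.
print(may_install_now(UpdatePolicy.FULLY_AUTOMATIC, time(18, 0), time(23, 0)))
```

In this toy model, even a fully automatic policy defers installation during the customer's declared critical window, which is exactly the kind of option vendors should expose and explain.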

Customers, on the other hand, should understand their options, the impact of their choices, and the resulting risks they are taking. For example, relying on a single laptop to run a mission-critical application is probably not the smartest thing anyone can do. That same laptop could have failed for a variety of other reasons, from a hard drive crash to a power supply failure. The only reason this event received press coverage is the timing of the update.

Lastly, when I mentioned this post to a friend, his response was a link to this video:

Basic Concepts of Six Sigma


I recently came across a post titled Using Six Sigma to Improve Customer Experience and Service. As it touches on several topics close to my heart, I read it with great anticipation and, sadly, even greater disappointment.

Since I feel the author misses the basic concepts of Six Sigma and the many improvement opportunities it offers support and services organizations, I decided to attempt to correct some of the misconceptions and offer a different perspective on some of the points discussed.

First, and most obviously missing, is the fact that Six Sigma is an iterative improvement process. DMAIC, therefore, is circular, as shown in the chart attached to the original post, rather than a one-time linear activity at the end of which lies the Six Sigma nirvana of 3.4 defects per million.

Second, the statistical concepts behind Six Sigma are never mentioned, nor is the meaning of 3.4 defects per million opportunities (DPMO). Here is a brief explanation. Sigma is a measure of the spread of a normally distributed population, and the sigma level counts how many standard deviations fit within a certain range. That range, in turn, is the acceptable performance range: performance inside it is considered good, and performance outside it is a defect. To achieve one sigma, for example, about 68% of the population must fall within the acceptable range; for two sigma, about 95.5%, and so on, as shown below:

Sigma Value    Percentage Within Range
1              ~68.2%
2              ~95.5%
3              ~99.73%
4              99.993666%
5              99.9999426697%
6              99.9999998027%
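For the curious, these percentages follow directly from the normal distribution and can be reproduced with a few lines of code; this is simply a worked illustration of the table above:

```python
# A quick way to reproduce the table, using only the standard library.
# The share of a normal population within +/- k standard deviations of the
# mean is erf(k / sqrt(2)).
import math


def pct_within(k):
    """Percentage of a normal population within +/- k sigma of the mean."""
    return 100 * math.erf(k / math.sqrt(2))


for k in range(1, 7):
    print(f"{k} sigma: {pct_within(k):.10f}% within range")

# Note: the often-quoted 3.4 defects per million for Six Sigma assumes the
# conventional 1.5-sigma shift of the process mean; without that shift the
# 6-sigma defect rate is roughly 0.002 per million.
```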

The last item I’d like to touch on is the need to make quality improvement, Six Sigma or otherwise, an inclusive initiative. It should ensure that each and every employee understands the improvement process and the expected results, and is able to contribute.

Obviously, it is not possible to increase sigma levels by going through the phases of DMAIC without transforming service delivery, and it is extremely unlikely that a single pass through DMAIC will take you to Six Sigma performance levels. However, the benefits of improvement, even from one sigma to two, are enormous. So the value of Six Sigma lies in the journey and in creating a continuous improvement culture, rather than in reaching the elusive ultimate destination.

How To Follow Up On Your Survey Results


Recently I have seen a number of discussions and posts about ways to follow up on NPS surveys. Strangely, the writers seem to focus on transforming NPS, a quantitative survey with a rigorous interpretation methodology, into a qualitative exercise, where insights are gained from reading customers’ comments or from follow-up interviews that ‘drill down’ for the reasons behind the ratings. This accentuates the familiar complaint that NPS results are not actionable, and it introduces several additional problems we must think about, which is why we should offer other methods to supplement the ratings.

First, let’s discuss the reasons for not using comments and interviews for extra insights from surveys:

  1. Sample size is always a concern for surveys. Customers do not always fill in comments, which reduces the sample size further.
  2. The manual nature of interviews inevitably makes scalability and cost a concern, further reducing sample size.
  3. With global operations, language and time zones may exclude a portion of your customers because you cannot conduct interviews or correctly interpret comments.
  4. Confirmation bias, where interviewers and comment readers only register responses that confirm their existing notions, may pose a significant threat to the success of the survey program.
  5. Discrepancies between comments and the actual drivers of dissatisfaction are well documented. Relying on comments alone will prevent you from validating them against metrics.

Now, should you read survey comments, or interview customers who rate you poorly? Absolutely, but do not mistake that for your main source of insight. Comments and interviews illustrate the broader conclusions you derive from analyzing the details of the survey and correlating them with your operational and demographic information.

So, how should you go about analyzing the responses from your NPS survey in greater detail?

First, by all means, call your customers back to follow up on survey results. Call those who rate you poorly, as well as those who rate you well. But also correlate the results with demographic and operational data. For example, how does the score vary across regions or industries? Do customers using a certain feature or function rate you better or worse than others? Does your score vary with support case counts or case duration? How do events over time impact customers’ scores? Last, and equally important, remember that customers who do not respond have their reasons as well. Can you identify the factors that drive customers’ response rates? Do any of them indicate a propensity to renew, or to churn?

In conclusion, go ahead and experiment with your customer survey results and the drivers behind them. Do not assume that what works for others will necessarily work for you. If you have access to a person with statistics knowledge, seek their help in building a regression model that identifies the impact of each factor on customer satisfaction. If you don’t, there is still much you can do on your own to analyze the results, understand your company’s specific environment, and reach conclusions on what to improve next.
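As a minimal sketch of what such a regression might look like, assume a hypothetical export where each survey response has already been joined with operational and demographic fields; the file and column names below are placeholders, not a prescription:

```python
# A minimal sketch of a regression relating survey scores to operational and
# demographic factors. File and column names are illustrative only.
import pandas as pd
import statsmodels.formula.api as smf

# One row per survey response, already joined with operational data.
responses = pd.read_csv("survey_with_operational_data.csv")

# Ordinary least squares: estimate how much each factor moves the score.
model = smf.ols(
    "score ~ C(region) + case_count + avg_case_days + uses_feature_x",
    data=responses,
).fit()

print(model.summary())  # coefficient sizes and p-values per factor
```

Coefficients that are both large and statistically significant point to the factors worth improving first; the categorical region term shows how demographic splits enter the same model.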

Process Mining and Customer Support


I have spent recent months investigating the Process Mining discipline and its potential applications in enterprise technology support. A relatively recent discipline, Process Mining uses advanced analytics techniques to analyze logs from various systems and identify process flows and various other parameters. The Process Mining Manifesto defines it as follows:

Process mining is a discipline positioned at the crossroads of computational intelligence, data mining, and process modeling and analysis. The idea of process mining is to discover, monitor and improve real processes by extracting knowledge from the event logs produced as part of on-going business activity.

At this point you may ask yourself what problem Process Mining will solve for you. To help answer this question, I’ve created a short PowerPoint presentation:

Most process-driven organizations implement their processes, and the IT systems that support them, in a manner that creates a disconnect between the process model as designed and embedded in the organization and its IT systems on one side, and the ‘real life’ process as eventually practiced by the organization on the other.

Implemented correctly, Process Mining helps us answer two questions. First, what is my organization actually doing? Second, how different is that from the prescribed process? We can then identify discrepancies and problems, isolate their root causes, and take action to change either the model or the way it is executed.

Customer support departments are very disciplined in recording all activities and interactions in a CRM system, which makes them prime candidates for Process Mining investigation.

How does process mining work? There are several commercial and open-source tools that will process your CRM’s log files and produce a chart of the process as executed. Some will also let you filter for the most common paths, the longest process steps, and bottlenecks. Contact us for more details and further information.
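To make the idea concrete, here is a minimal, tool-agnostic sketch of the core step those tools perform: deriving the directly-follows relations from a case event log. The file and column names are hypothetical; dedicated tools add visualization, filtering, and conformance checking on top of this:

```python
# A minimal, tool-agnostic sketch of the core process-mining step: building
# directly-follows relations from a CRM case event log. File and column names
# (case_id, activity, timestamp) are hypothetical.
from collections import Counter

import pandas as pd

# One row per recorded activity: which case, what happened, and when.
log = pd.read_csv("crm_case_events.csv", parse_dates=["timestamp"])
log = log.sort_values(["case_id", "timestamp"])

# Count how often each activity is directly followed by another within a case.
transitions = Counter()
for _, case in log.groupby("case_id"):
    steps = case["activity"].tolist()
    transitions.update(zip(steps, steps[1:]))

# The most frequent transitions outline the process as it is actually executed.
for (src, dst), count in transitions.most_common(10):
    print(f"{src} -> {dst}: {count}")
```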

Are Your Technology and Policy Aligned?


An interesting discussion in the ASP Group on LinkedIn asked for ways to address customers who won’t upgrade their systems yet keep requesting support. There are several valuable suggestions in that discussion, but I’d like to use this opportunity to address how vendor choices impact customers’ upgrade decisions.

There are two main types of choices vendors make that impact customers’ decisions. First are policy choices: for example, the number of supported releases and the options available to a customer on an unsupported release when encountering a problem. Second are engineering choices: the complexity of the upgrade process, the amount of work the customer needs to invest in the upgrade, and whether any equipment needs to be acquired. These decisions influence customers and their ability, and desire, to upgrade. The interaction between policy and technology choices tends to drive customer choices and the resulting actions vendors can take.

The following matrix offers a quick visualization of the choices and their impact.

[Matrix: vendor policy and technology choices and their impact on customers’ upgrade decisions]

If we think about these four options, we can identify some familiar categories and use cases:

  • Quadrant I is empty because, when simple or automated upgrades make it easy for customers to upgrade, vendors can get away with strict policies, sunsetting releases rapidly, reducing support levels for backward releases, and sometimes charging high maintenance fees.
  • Quadrant II is for situations where upgrades involve high expense and risk, and the vendor is willing to continue supporting old releases, either for customer retention or for the increased maintenance fees. The most famous example is Microsoft’s extended support for Windows XP.
  • Quadrant III represents cases where the software upgrade is simple and in many cases automated. A typical use case would be anti-virus software with auto-update capabilities and little effort required on the customer’s side.
  • Quadrant IV is where we find extremely complex technology deployments (think large-scale ERP systems), requiring extensive customization and subsequent testing and adaptation when upgrading. These upgrades are extremely expensive, time consuming, and risk prone, so customers would frequently rather stay on their original release and avoid upgrading. Vendors who refuse to support these older releases, or who charge heavily for doing so, may find their customers defecting to third-party maintenance providers.
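As a toy illustration only, the sketch below maps the two dimensions to a quadrant label, following my reading of the descriptions above; the field names, thresholds, and labels are hypothetical:

```python
# A toy classifier for the policy/technology matrix described above.
# Inputs and quadrant labels are illustrative, not an industry-standard model.

def classify(upgrade_is_simple: bool, policy_is_strict: bool) -> str:
    """Map a product's upgrade complexity and support policy to a quadrant."""
    if upgrade_is_simple and not policy_is_strict:
        return "Quadrant I: rare - easy upgrades let vendors enforce strict policies"
    if not upgrade_is_simple and not policy_is_strict:
        return "Quadrant II: costly upgrades, extended support (e.g. Windows XP)"
    if upgrade_is_simple and policy_is_strict:
        return "Quadrant III: auto-updating software, customers stay current"
    return "Quadrant IV: complex deployments, strict policy - churn/third-party risk"


print(classify(upgrade_is_simple=False, policy_is_strict=True))
```

The point of the sketch is simply that the combination of upgrade complexity and support policy, not either dimension alone, determines the likely customer behavior.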

Understanding these categories, and where your own product falls, will help you see how your company’s policy and technology choices impact customers’ upgrade decisions and your maintenance business.