Autoupdate Anyone?

radiation warning sign

I recently wrote a post asking Are Your Technology and Policy Aligned? Today I came across this story about a German basketball team and the fallout when the laptop controlling their scoreboard decided to upgrade itself at the most inopportune time, raising the question of vendor responsibilities vs. those of the customer.

There are several options vendors and customers can choose from. These range from fully automated to fully manual, with one or two intermediate options, such as alert only, or alert with automatic download and user-controlled install. How should vendors and customers navigate these options, and what criteria should be used for their decisions?
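To make the spectrum concrete, here is a minimal sketch of how an update agent might expose these choices. The names and policy values are illustrative only, not any vendor's actual API, and the maintenance-window check is an assumption about what a cautious customer would want:

```python
from enum import Enum, auto

class UpdatePolicy(Enum):
    """Illustrative spectrum of update policies, from fully manual to fully automatic."""
    FULLY_MANUAL = auto()       # customer checks for and installs updates themselves
    ALERT_ONLY = auto()         # notify the customer; they download and install
    DOWNLOAD_AND_ASK = auto()   # download automatically; customer approves the install
    FULLY_AUTOMATIC = auto()    # download and install without intervention

def handle_update(policy: UpdatePolicy, in_maintenance_window: bool) -> str:
    """Decide what to do when a new update becomes available."""
    if policy is UpdatePolicy.FULLY_AUTOMATIC:
        # even 'fully automatic' should respect a customer-defined maintenance window
        return "install" if in_maintenance_window else "defer"
    if policy is UpdatePolicy.DOWNLOAD_AND_ASK:
        return "download and prompt"
    if policy is UpdatePolicy.ALERT_ONLY:
        return "notify"
    return "wait for customer"
```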

To answer, we need to understand the business and technology risks associated with the decision. For example, most of us would think that updating the anti-virus signature file on our PC is trivial, and we are happy to let that happen in the background. On the other hand, even this very low-risk update would not be acceptable to some users if it took place at the same time as an extremely critical process. Vendors, therefore, have to offer their customers a variety of choices they can adapt to their needs. Even more importantly, vendors have to repeatedly educate their customers on the risks and benefits of their choices.

Customers, on the other hand, should understand their options, the impact of each choice, and the resulting risks they are taking. For example, relying on a single laptop to run a mission-critical application is probably not the smartest thing anyone can do. That same laptop could have failed for a variety of other reasons, from a hard drive crash to a power supply failure. The only reason this event received press coverage is the timing of the update.

Lastly, I mentioned this post to a friend, whose response was a link to this video:

Basic Concepts of Six Sigma

Six Sigma Blue Stripes Horizontal

I recently came across a post titled Using Six Sigma to Improve Customer Experience and Service. As it touches on several topics close to my heart, I read it with great anticipation and, sadly, even greater disappointment.

Since I feel the author misses the basic concepts of Six Sigma and the many improvement opportunities it offers support and services organizations, I decided to attempt to correct some of the misconceptions and offer a different perspective on some of the points discussed.

First and most obviously missing is the fact that Six Sigma is an iterative improvement process. DMAIC (Define, Measure, Analyze, Improve, Control), therefore, is circular, as shown in the chart attached to the original post, rather than a one-time linear activity at the end of which lies the Six Sigma nirvana of 3.4 defects per million opportunities.

Second, the statistical concepts behind Six Sigma are never mentioned, nor is the meaning of 3.4 defects per million opportunities (DPMO). Here is a brief explanation. Sigma is a measure of the spread of a normally distributed population, and the sigma level counts how many standard deviations fit within a certain range. That range, in turn, is the acceptable performance range: performance within it is considered good, and performance outside of it is bad. To achieve one sigma, for example, about 68% of the population must fall within the acceptable performance range; for two sigma, about 95.5%; and so on, as shown below:

Sigma Value    Percentage Within Range
    1          ~68.2%
    2          ~95.5%
    3          ~99.73%
    4          99.993666%
    5          99.9999426697%
    6          99.9999998027%
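For readers who want to verify these numbers, here is a minimal sketch using only the Python standard library. Note that the widely quoted 3.4 DPMO for six sigma assumes the conventional 1.5-sigma process shift, whereas the table above shows the unshifted, centered values:

```python
import math

def phi(x: float) -> float:
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for k in range(1, 7):
    within = phi(k) - phi(-k)                 # fraction within +/- k standard deviations
    shifted_dpmo = (1 - phi(k - 1.5)) * 1e6   # DPMO with the 1.5-sigma shift (nearer tail only, per convention)
    print(f"{k} sigma: {within * 100:.7f}% within range, ~{shifted_dpmo:,.1f} DPMO (shifted)")
```

Running this reproduces the table (six sigma gives 99.9999998% within range) and shows that the shifted six-sigma defect rate comes out to roughly 3.4 DPMO.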

The last item I’d like to touch on is the need to make quality improvement, Six Sigma or otherwise, an inclusive initiative. It should ensure that each and every employee understands the improvement process and the expected results, and is able to contribute.

Obviously, it is not possible to increase sigma levels by going through the phases of DMAIC without transforming service delivery, and it is extremely unlikely that a single journey through DMAIC will take you to six sigma performance levels. However, the benefits of improvement, even from one sigma to two, are enormous. So the value of Six Sigma is in the journey and in creating a continuous improvement culture, rather than in reaching the elusive ultimate destination.

How To Follow Up On Your Survey Results

Fill in the customer satisfaction survey

Recently I have seen a number of discussions and posts about ways to follow up on NPS surveys. Strangely, the writers seem to focus on transforming NPS, which is a quantitative survey with a rigorous interpretation methodology, into a qualitative interview, where insights are gained from reading customers’ comments or from follow-up interviews that ‘drill down’ into the reasons behind the ratings. This accentuates the challenge of NPS results being non-actionable, it poses several additional problems we must think about, and it requires other methods to supplement the ratings.

First, let’s discuss the reasons not to rely on comments and interviews as the main source of insight from surveys:

  1. Sample size is always a concern for surveys. Comments are not always filled in by customers, so the sample size is further reduced
  2. The manual nature of interviews will inevitably make scalability and cost a concern, further reducing sample size
  3. With global operations, language and time zones may eliminate a portion of the customers due to your inability to conduct interviews or correctly interpret comments
  4. Confirmation bias, where interviewers and comment readers only account for responses that confirm existing concepts, may pose a significant threat to the success of the survey program
  5. Discrepancies between comments and actual drivers of dissatisfaction are well documented. Relying on comments alone will prevent you from validating them against metrics

Now, should you read survey comments, or interview customers who rate you poorly? Absolutely, but do not confuse that with your main source of insight. Comments and interviews illustrate the broader conclusions you derive from researching the details of the survey and correlating them with your operational and demographic information.

So, how should you go about analyzing the responses from your NPS survey in greater detail?

First, by all means, call your customers back to follow up on survey results. Call those who rate you poorly, as well as those who rate you well. But also correlate the results with demographic and operational data. For example, how does the score vary across regions or industries? Do customers using a certain feature or function rate you better or worse than others? Does your score vary with support case count or case duration? How do events over time impact customers’ scores? Last, and equally important, remember that customers who do not respond also have a reason for that. Can you identify the factors that drive customers’ response rate? Does any of that indicate their propensity to renew, or to churn?
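As an illustration, a few lines of pandas can surface these segment-level differences. The file name and column names here are hypothetical; substitute whatever export joins your survey responses with operational and demographic data:

```python
import pandas as pd

# hypothetical export joining survey responses with operational/demographic data
df = pd.read_csv("nps_responses.csv")
# assumed columns: score, region, industry, uses_feature_x, open_cases, avg_case_days

# average score and response count by region and industry
print(df.groupby(["region", "industry"])["score"].agg(["mean", "count"]))

# do heavy support users rate you differently?
print(df[["score", "open_cases", "avg_case_days"]].corr())

# feature adoption vs. score
print(df.groupby("uses_feature_x")["score"].mean())
```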

In conclusion, go ahead and experiment with your customer survey results and the drivers behind them. Do not assume that what works for others will necessarily work for you. If you have access to a person with statistics knowledge, seek their help in building a regression model that identifies the impact of each factor on customer satisfaction. If you don’t, there is still much you can do on your own to analyze the results, understand your company’s specific environment, and reach conclusions on what to improve next.
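For those who do have that statistics help, a minimal regression sketch might look like the following. statsmodels is one common choice, and the factor columns are again hypothetical stand-ins for whatever drivers matter in your business:

```python
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("nps_responses.csv")

# candidate (numeric) drivers of the score; replace with the factors relevant to you
factors = ["open_cases", "avg_case_days", "uses_feature_x", "months_as_customer"]
X = sm.add_constant(df[factors])
y = df["score"]

model = sm.OLS(y, X, missing="drop").fit()
print(model.summary())  # coefficient sizes and p-values suggest which factors matter most
```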

Process Mining and Customer Support

Coal Miner Pump Fist With Pick Ax Retro

I have spent recent months investigating the Process Mining discipline and its potential applications in enterprise technology support. A relatively recent discipline, Process Mining uses advanced analytics techniques to analyze logs from various systems and identify process flows and various other parameters. The Process Mining Manifesto defines it as follows:

Process mining is a discipline positioned at the crossroads of computational intelligence, data mining, and process modeling and analysis. The idea of process mining is to discover, monitor and improve real processes by extracting knowledge from the event logs produced as part of on-going business activity.

At this point you may ask yourself what problem Process Mining will solve for you. To help answer this question I’ve created a short PowerPoint presentation:

Most process-driven organizations implement their processes, and the IT systems that support them, in a manner that creates a disconnect between the process model as designed and implemented in the organization and its IT systems on one side, and the ‘real life’ process as actually executed by the organization on the other.

Implemented correctly, Process Mining helps us answer two questions: first, what is my organization actually doing? And second, how does that differ from the prescribed process? We can then identify discrepancies and problems, isolate their root causes, and take action to change either the model or the way it is executed.

Customer support departments are very disciplined in recording all activities and interactions in a CRM system, which makes them prime candidates for Process Mining investigation.

How does process mining work? There are several commercial and open-source tools that will process your CRM’s log files and produce a chart of the executed process. Some will also allow you to filter for the most common paths, the longest process steps, and bottlenecks. Contact us for further information.
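To give a sense of the underlying idea rather than any particular product, here is a minimal sketch. It counts the "directly follows" transitions in an event log, which already reveals the dominant paths; the CRM export file and its columns are hypothetical:

```python
import pandas as pd

# hypothetical CRM event log: one row per activity on a support case
log = pd.read_csv("case_events.csv")  # assumed columns: case_id, activity, timestamp
log["timestamp"] = pd.to_datetime(log["timestamp"])
log = log.sort_values(["case_id", "timestamp"])

# pair each activity with the one that directly follows it within the same case
log["next_activity"] = log.groupby("case_id")["activity"].shift(-1)
transitions = (log.dropna(subset=["next_activity"])
                  .groupby(["activity", "next_activity"])
                  .size()
                  .sort_values(ascending=False))

print(transitions.head(10))  # the most common paths through the process
```

Dedicated process mining tools build on this same log structure to draw the full process graph and check it against the prescribed model.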

Are Your Technology and Policy Aligned?

time to upgrade

An interesting discussion in the ASP Group on LinkedIn asked for ways to address customers who won’t upgrade their systems and keep requesting support. There are several valuable suggestions in that discussion, but I’d like to use this opportunity to address how vendor choices impact customers’ upgrade decisions.

There are two main sets of choices vendors make that impact customers’ decisions. First are policy choices: for example, the number of supported releases, and the options available to a customer on an unsupported release when encountering a problem. Second are engineering choices: the complexity of the upgrade process, the amount of work the customer needs to invest in the upgrade, and whether any equipment needs to be acquired. These decisions influence customers and their ability, and desire, to upgrade. The interaction between policy and technology choices will tend to drive customer choices and the resulting actions vendors can take.

The following matrix offers a quick visualization of the choices and their impact.

matrix

If we think about these four options, we can identify some familiar categories and use cases:

  • Quadrant I is empty because when simple or automated upgrades make it easy for customers to upgrade, vendors can get away with strict policies: sunsetting releases rapidly, reducing support levels for older releases, and sometimes charging high maintenance fees.
  • Quadrant II is for situations where upgrades involve high expense and risk, and vendors are willing to continue supporting older releases, either for customer retention or for the increased maintenance fees. The most famous example is Microsoft’s extended support for Windows XP.
  • Quadrant III represents cases where the software upgrade is simple and in many cases automated. A typical use case would be anti-virus software with auto-update capabilities, requiring little effort on the customers’ side.
  • Quadrant IV is where we find extremely complex technology deployments (think large-scale ERP systems), requiring extensive customization, and subsequent testing and adaptation when upgrading. These upgrades are extremely expensive, time consuming, and very risk prone. Frequently customers would rather stay on their original release and avoid upgrading. Vendors who refuse to support their older releases, or who impose high costs for doing so, may find their customers defecting to third-party maintenance providers.

Understanding these categories will help you see how your company’s policy and technology choices impact customers’ upgrade decisions and your maintenance business.

What We Read This Week, 9 January 2015:

Who Provides Tech Support for the Internet of Things? – The writer examines the challenges associated with supporting the massively connected Internet of Things (HBR)

The Baffling Advances of Social Customer Service – The always excellent Esteban Kolsky on Social Customer service, and whether companies need to be where their customers are.

Customer Loyalty in the B2B World: What Does it Mean, and How to Achieve It – a marketing-oriented discussion on managing and increasing customer loyalty. The main point, IMO, is the need to identify the influencers, a point I made on this blog as well as in several online discussions concerning customer satisfaction (mycustomer.com)

What We Read This Week, New Year 2015:

Happy New Year Everybody!

Are Ideas Killing Our Organizations? – presents the well known principle that it’s better to test an idea rather than fully develop it to perfection over a much longer period of time (Forbes)

False positives and false negatives in predicting customer lifetime value – Customer Lifetime Value has become very important as SaaS delivery gains prominence. Arie Goldshlager presents the potential errors and risks in this field. I recommend also visiting the article on which the post is based, here.

6 Tips For Managing Worldwide Offices – pretty self-explanatory title, with very useful tips for those of us managing multiple locations and cultures (InformationWeek)

“Your Product is a Piece of Sh#t” – on the HootSuite blog, by their CEO Ryan Holmes on the company’s response to customer criticism

See the Experience You Are Giving Customers – Service Blueprinting is a very useful way of visualizing service experiences. This post provides an excellent insight into the discipline (Center for Services Leadership).

What We Read This Week, Boxing Day Edition, 2014:

I hope everybody had an excellent Christmas. Below is our Enterprise Technology Support Management reading list for the week:

How to Prevent Experts from Hoarding Knowledge – An interesting, if somewhat obvious, perspective on losing knowledge as people retire (HBR)

Forrester’s Top Trends For Customer Service In 2015 – from the always excellent Kate Leggett at Forrester.

I Actually Enjoy Terrible Customer Service – And frankly speaking, so do I. An interesting discussion from the Kana blog.

Large Company Customer Experience Battles – discusses the differences between large enterprises and smaller companies as far as customer focus goes (Pivot Point Solutions).

Knowledge Management vs. Content Management – from David Kay.

Thank you all for following the blog, we’ll see you in the new year!

What We Read This Week, 19 December 2014:

What’s Lost When Experts Retire – Knowledge Management and Retention are key for every support manager and we usually spend significant amounts of time and effort ensuring knowledge is well documented and shared. This post touches on a different perspective of departing employees and the knowledge they take with them (HBR).

Measuring Customer Value in Experience? – Wim Rampen is a very thoughtful blogger on customer service and interaction. In this post he describes the current state of customer value definitions and how it fails; I am waiting eagerly for the next post in this series.

Skype’s newest app will translate your speech in real time – Supporting customers in other countries has always been a challenge for companies with smaller, single-country operations. While still in the future, Skype Translator could transform that part of the support business in a very profound manner (The Verge).

Who Will Make Money in the IoT Gold Rush? – The Internet of Things is one of the hottest discussion topics in recent memory. In the post, the author reviews some of the business challenges surrounding IoT and tries to predict the winners. I believe the jury is still out on the business models and technologies for enterprise-class IoT implementations (sandhill.com)

What We Read This Week, 12 December 2014, Math Edition:

This week I’d like to recommend two longer-form articles touching on slightly more complex mathematical topics. I believe each of these articles is very pertinent to managing customer support, and specifically to measuring and thinking about our workload:

First, I’d like to recommend Log-normal Distributions across the Sciences: Keys and Clues (PDF). It touches on an alternative way of describing populations, different from the very common Gaussian (normal) distribution we are all familiar with. The following image compares the two distributions:

I am sure the image on the right is very familiar to many of us. The first example that comes to mind is the number of cases closed vs. their age.
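If you want to test whether your own case-age data is better described by a log-normal than a normal distribution, here is a minimal sketch. The data file is hypothetical, and the comparison uses a simple log-likelihood check rather than a formal goodness-of-fit test:

```python
import numpy as np
from scipy import stats

# hypothetical array of case ages (days from open to close); values must be positive
case_ages = np.loadtxt("case_ages.txt")

# fit both distributions and compare log-likelihoods
shape, loc, scale = stats.lognorm.fit(case_ages, floc=0)
mu, sigma = stats.norm.fit(case_ages)

ll_lognorm = stats.lognorm.logpdf(case_ages, shape, loc, scale).sum()
ll_norm = stats.norm.logpdf(case_ages, mu, sigma).sum()

print(f"log-normal log-likelihood: {ll_lognorm:.1f}")
print(f"normal log-likelihood:     {ll_norm:.1f}")
# the higher log-likelihood indicates which distribution describes the data better
```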

The second post I’d like to share this week is The Power of Power Laws from John Hagel’s blog, Edge Perspectives. The power law distribution is sometimes known as the Pareto Distribution, or the 80/20 rule.

This post establishes some solid foundations for thinking about power law distributions, which we should all be familiar with.

I’ll return to these two concepts in a future post and discuss how we can use them to greater benefit in managing support operations.

Image source for both images: Wikimedia.