Monthly Archives: December 2013

Product Design and Customer Satisfaction


The usually excellent Barry Ritholtz posted a massive rant against the location of the power switch on his MacBook Air laptop, ending with a promise to reconsider future computer purchases because of this flaw.

In support we come across similar situations regularly, especially with smaller implementations, where customers find that certain product features diminish the value they can derive, to the point of eliminating all value generated by the product. Those customers also make a point of telling us about it. This raises several questions. The first concerns the mechanism for disseminating customer feedback into the decision-making ranks. The second looks for a way to evaluate the scope of the problem: what part of the customer base is affected, and to what degree?

Now, how would you address the challenge from Mr. Ritholtz? Obviously there is a range of options to handle the individual customer, but I am curious about methods to convert feedback into actionable information. Any ideas?

Is It An Airline? Is It A Hotel? No! It Is Enterprise Technology Support!


The blog’s previous post discussed confirmation bias and the way it impacts our actions. Frequently we encounter a different type of bias, usually coming from individuals who are not very familiar with enterprise technology support and who hold their support teams to standards that may not necessarily apply. Support managers find themselves having to explain the difference between supporting enterprise technology and a number of popular consumer businesses, including an airline, an online apparel vendor, a hotel chain and a department store.

There is no doubt that each of these companies has mastered customer service for its customer base, and surely there is a great deal we can learn from them, but in order to do that we need to understand the differences and ensure we adopt those behaviors that can help our customers and us.

In the past I wrote about the differences between high volume / low complexity operations and low volume / high complexity ones. There are additional models that can help us complete the picture.

Stacey’s Matrix attempts to illustrate the complexity of decision making along two dimensions:

[Figure: the Stacey Matrix]

The first, on the horizontal axis, is the degree of certainty with which the problem being addressed is understood, including its cause and effect chains. The vertical axis illustrates the degree of agreement between the various participants on the way to resolve the problem.

If we use an airline as an example, the vast majority of cases handled by its service employees are relatively straightforward and fall within the “rational decision making” zone of the chart – both the customer and the airline are clear on the objective and the way to accomplish it. Even when recovering from a service failure (e.g., lost luggage or a cancelled flight) there usually is clarity concerning the cause and effect, as well as agreement on the best way to resolve the problem.

Now let’s look at the enterprise technology customer support situation. Most companies have done a good job of documenting their knowledge and making it accessible to customers, partners and other users. As a result, the support organization’s workload contains an increasing proportion of cases where either the cause and effect chain is not clear, or there is little agreement on how to achieve the end result, or both. These disagreements take us away from the relative comfort of the rational decision making zone, and increasingly into judgmental or political decision making, or even into the complex decision making zone.
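To make these zones concrete, here is a minimal sketch in Python. The numeric scores, the 0.7 thresholds, and the function itself are all hypothetical illustrations, not part of the Stacey model:

```python
def decision_zone(certainty: float, agreement: float) -> str:
    """Classify a case into a Stacey-style decision zone.

    Both inputs are hypothetical scores in [0, 1]:
      certainty - how well the cause and effect chain is understood
      agreement - how much the participants agree on the resolution
    """
    if certainty >= 0.7 and agreement >= 0.7:
        return "rational"    # clear problem, agreed-upon solution
    if certainty >= 0.7:
        return "political"   # understood problem, contested solution
    if agreement >= 0.7:
        return "judgmental"  # agreed goal, unclear cause and effect
    return "complex"         # neither certainty nor agreement
```

A lost-luggage case would score high on both axes and land in the rational zone, while a novel enterprise outage with disputed remediation would fall into the complex zone.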

Now, in order to resolve the customer’s case, customer support must ultimately reduce the uncertainty and clarify the cause and effect chain for the problem. Once that has been accomplished the case can be resolved and the problem eliminated. Consumer brands rarely have to address the need for reducing complexity and gaining agreement on the problem definition or the cause and effect chain leading to it.

Comments Now Require Signing-In


Even though the blog enjoys a regular following, the number of non-spam comments has been minimal, while spam comments have numbered in the thousands every month. To make administration easier, users wishing to leave a comment will be asked to sign in with their WordPress credentials before doing so.

Evaluating People By Metrics


Harvard Business Review has an excellent post discussing evaluating people through metrics. This is not a new topic for customer support managers. Operating in a heavily instrumented and measured discipline, support management can easily evaluate employees on the measurable portions of their performance while ignoring those that are more qualitative.

Over the years I have seen many different metrics for measuring and evaluating employees, from rating on a customer satisfaction scale to number of cases resolved, through a plethora of other metrics. The biggest, and most frequent, challenge was the attempt to evaluate individuals’ performance along the same scale, regardless of their formal role or how they operate. For example, most of us are familiar with the individuals who can process a large number of cases very efficiently, and with others who excel at methodically analyzing the most complex cases, one at a time. Both these individuals may carry similar titles, but evaluating them along the same performance guidelines is futile and frustrating to the employees and their managers.

A far better way is to evaluate each employee according to the role they play in the team, their strengths, and the improvements their managers would like to see. Yes, this requires greater involvement on the managers’ part than cookie-cutter evaluation criteria, and it definitely makes stack ranking much harder, but I do believe that this system is a much more positive way of managing people and will create a much better functioning organization.
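As a sketch of what role-specific evaluation could look like in practice (the role names and criteria below are invented for illustration, not a recommended set):

```python
# Hypothetical mapping from role to its own evaluation criteria,
# rather than one shared scale for everyone on the team.
ROLE_CRITERIA = {
    "frontline": ["cases resolved per week", "first-response time"],
    "escalation": ["root causes identified", "quality of analysis"],
}

def criteria_for(role: str) -> list[str]:
    # Unknown roles raise KeyError, which forces managers to define
    # criteria for a role before anyone in it can be evaluated.
    return ROLE_CRITERIA[role]
```

The point of the lookup is that the high-throughput case processor and the methodical analyst are scored against different lists, not ranked on one axis.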

About Assumptions


A friend sent me a link to a study claiming that, when asked, very fast typists could not assign the correct letters to keys on a keyboard. Their underlying assumption was that their keyboard would not change, and therefore the location of the keys remained constant and did not require memorizing.

This made me think about how those very fast typists would fare with a different keyboard, and about the underlying assumptions we take with us from one role to another without appreciating how different circumstances may require different solutions from those we are familiar with. How many decisions are made because the decision maker is in a somewhat recognizable situation and chooses to blindly replicate past experiences, and how many are made because they represent the perfect response to the situation at hand?

Complexity and Volume


Apologies for the blog’s long hiatus. A few personal projects took priority over posting.

The differentiation between high volume / low complexity and low volume / high complexity operations is probably familiar to most readers of this blog. We use it to explain the difference between supporting enterprise technology and running a call center for a consumer product. As part of the research for my nascent book (yes, there is a book in the works!), I have been trying to find documented criteria for this organizational differentiation. Strangely enough, I could not find anything documented anywhere, so this post is an attempt to start a discussion on these criteria. If any of the blog’s readers are aware of another source for this information, please let me know via email or the comments section.

To illustrate the challenge, think of a continuum: on one end is an automated telephone service that responds with the current time; on the other, the single most difficult problem imaginable, maybe something on the scale of bringing Apollo 13 back to Earth. Analyzing the differences between these two extremes, we can come up with the following evaluation criteria:

Criterion | Low Volume / High Complexity | High Volume / Low Complexity
Automation | Complex | Simple
Customer Intimacy | Important | Not required
Amount of offline work (Customer and Support) | High | Minimal
Technical Interaction Level | Very High | Very Low
Interactions / Case | High | Low, Single
Value Generated | Content | Volume

Some of these criteria are self-explanatory; however, I feel that “Value Generated” requires a few additional words. With high volume / low complexity transactions, the main learning for the organization is in the demand for the service. The service provider may find correlations between demand and other variables (e.g., time of day, day of the month, or special events). On the other hand, going back to the Apollo 13 example, the value generated by that effort, besides bringing the crew back in one piece, was the content of the analysis performed by the various teams, the decision-making processes, and the lessons learned along the way.
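As a toy illustration of correlating demand with another variable (all of the numbers below are made up), one could compare hourly case volume against a business-hours indicator:

```python
import numpy as np

hour = np.arange(24)
# Hypothetical hourly case counts, peaking during business hours
volume = np.array([2, 1, 1, 1, 2, 3, 5, 9, 14, 18, 20, 21,
                   19, 20, 18, 15, 12, 8, 5, 4, 3, 3, 2, 2])
# 1.0 during 09:00-16:59, 0.0 otherwise
business_hours = ((hour >= 9) & (hour < 17)).astype(float)

# Pearson correlation between demand and the business-hours indicator
r = np.corrcoef(volume, business_hours)[0, 1]
```

A strongly positive r would suggest staffing the queue around business hours; the same approach extends to day-of-month or special-event indicators.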

What is your experience in using the volume / complexity continuum? Have you seen other, more detailed discussions? What was the reaction you received?