Monthly Archives: September 2011

Automatic Case Closing, Good Idea Or Not?

Closing cases automatically is a highly debated topic in almost every support forum. In a previous post I wrote about closing cases before receiving the customer's consent to do so. That post was written from a purely high-complexity, enterprise support perspective.

A recent question on LinkedIn's ASPI group (highly recommended) led me to think about this question in more detail and try to determine when closing cases automatically can be a good idea. After some thought I arrived at a model that attempts to distinguish between the nature of the technology supported and the nature of the customer relationship:

One distinction the model does not make is whether support is waiting on additional information or on confirmation that a solution worked. This distinction can determine the amount of time before closing the case, or the number of reminders issued before closing it.
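As a minimal sketch of how that distinction could drive the close decision, here is one possible policy table. The field names, idle thresholds, and reminder counts are all hypothetical values, not from the original post; tune them for your own organization:

```python
from datetime import date, timedelta

# Hypothetical policy values -- adjust per organization and case type.
POLICY = {
    "waiting_info":         {"days_idle": 14, "max_reminders": 3},
    "waiting_confirmation": {"days_idle": 7,  "max_reminders": 2},
}

def next_action(state, last_activity, reminders_sent, today=None):
    """Decide whether to wait, send a reminder, or auto-close a case."""
    today = today or date.today()
    policy = POLICY[state]
    if (today - last_activity).days < policy["days_idle"]:
        return "wait"
    if reminders_sent < policy["max_reminders"]:
        return "send_reminder"
    return "auto_close"

# A case waiting on confirmation, idle 10 days, two reminders already sent:
print(next_action("waiting_confirmation", date.today() - timedelta(days=10), 2))
# -> auto_close
```

A human override could sit on top of this: the function proposes an action, and an agent confirms it before the case actually closes.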

Do you close cases automatically? How do you determine when to close them? Do you issue reminders automatically? Is there human intervention or other input to the decision? How many of these cases are eventually reopened?

Support Managers: How much do you owe?

Agile practitioners have been discussing technical debt for a while, using this term for the gap between the investment in software development and the business value derived from that same software.

After being introduced to the term and the concept behind it I kept wondering about the best way to implement a similar concept in the customer support environment.

If we accept that customer support is responsible for generating or protecting customer value, then we can conclude that outstanding support debt at any point in time can be defined as "the total amount of customer value that is unrealized due to unresolved cases" at that specific point in time.

When we look at our open cases (aka backlog) we can reasonably assume that a case that’s been open for a longer period has a bigger negative impact on customer value than a case that’s been open for a shorter period. Therefore, we can safely accept the amount of time a case has been open as an approximation to its negative impact on customer value, and the total impact support has on total customer value can then be represented by the total age of all open cases (regular blog readers will note that this number is similar to the pain report, but has no weighting for the severity of the case).

So, now that we have this metric defined, what do we do with it?

First, we have to understand that this metric is a trailing indicator, showing the success we had in accomplishing two goals that are common to most support organizations:

  1. Reducing the number of cases opened
  2. Processing cases faster

And like most other metrics, its main value is in the way it changes over time and how it is associated with changes in business volume. The best option I have found for that is dividing the change in support debt by the change in revenue over a certain period. If the resulting number is lower than 1, the organization is improving its performance; if it is greater than 1, it is not managing its workload well.
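A small worked example may help. Interpreting "change" here as relative (percentage) change, so that the ratio is dimensionless, is my assumption rather than something stated in the post:

```python
def debt_to_revenue_ratio(debt_before, debt_after, rev_before, rev_after):
    """Ratio of the relative change in support debt to the relative
    change in revenue over the same period.

    Assumption: "change" means relative change, which keeps the
    ratio dimensionless and comparable across periods.
    """
    debt_change = (debt_after - debt_before) / debt_before
    rev_change = (rev_after - rev_before) / rev_before
    return debt_change / rev_change

# Debt grew 10% while revenue grew 25%:
# workload is scaling better than the business, so the ratio is below 1.
print(debt_to_revenue_ratio(1000, 1100, 2_000_000, 2_500_000))  # -> 0.4
```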

In a future post I will talk about the relationship between this metric and other commonly used ones. Stay tuned.

An open question remains: in what way can we relate this number to financial value? Any ideas?

Making The Case For Publishing Knowledge

Employees frequently see knowledge management initiatives as a way to siphon off their knowledge in favor of a cheaper / automated / whatever option ("you want us to document everything we know so that you can fire us and send our jobs to …").  It is not an easy discussion for managers to win, and to even stand a chance the initiative must be presented in a positive rather than negative manner.

The series of charts below came to life in support of this goal. They attempt to open the discussion by presenting the personal benefit for employees to participate in a knowledge management scheme and publish what they know.

The premise behind these charts is that employee pay is determined by their top-level skill. However, those same employees spend only a fraction of their time at that performance level and the majority of it at lower levels. This situation is bad for all parties.  It is bad for employees, since they do not get to exercise their most critical skills frequently enough and therefore can't expand their abilities and knowledge. It is bad for the company, since it is paying for skills it does not fully utilize, and employees will likely be too busy doing other things when those skills are needed.

When we ask employees to think about their skills, they usually tend to think about their top skill – "I am a system administrator with 20 years' experience".  This skill determines their pay and organizational status.  When we look at each employee skill we can draw a continuum from zero to maximum skill:

Now, to determine whether the company is receiving the full benefit from what it pays its employees, and whether employees are utilizing their top skills we need to determine what keeps our employees busy and where they spend their time.  We are likely to see a distribution similar to this:

Clearly this is not an ideal situation.  A far more desirable one would be shifting the organization to spend much more of its time at the top end of its performance levels. This is good for the employees as well as the company, since this is where employees generate the most value for the company and for themselves. The chart, therefore, would shift to look like the green section in the figure below:

This chart shows that some of the lower-skilled work (marked in red) is either eliminated through collaboration with engineering, or, more likely, handed off to entities further down the value chain. For example, self service plays well into this scheme, as do reseller training and enablement, further training and empowerment of the call center team, and so on.

What’s your experience in driving cooperation with knowledge management initiatives? What worked? What did not?

The Importance of Trends

In a comment to the previous post, titled "The Pain Report", my long-time friend and colleague David King made an excellent point that is worth repeating. The importance of any metric is not in a single value, but in the way it develops over time. This is how improvements or deteriorations are determined, and this is how support organizations can preempt potential high-visibility situations before they develop.

I have written about metrics and benchmarking in the past; that post, and especially the links it contains, is well worth visiting.

The Pain Report

One of the people I worked with in the past used to produce what he called a "Pain Report". This report attempted to predict which customers (or sales people) would explode next and give support managers a way to proactively address problems.

The report was basically a weighted sum of the amount of time all cases were open. Here’s how it works:

  • Assign a weight to every severity level you have. The more severe the case, the higher the weight. For example, with a four-level severity scale a possible weighting can be:
  • Now multiply the number of days each of the customer's cases has been open by the weight of its severity; the result is the weighted number of days for every case
  • Sum these weighted numbers, and the result is the customer's pain index
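The steps above can be sketched in a few lines. The weight values here are illustrative assumptions (the original weighting table did not survive), shown for a four-level scale where 1 is the most severe:

```python
from datetime import date

# Hypothetical weights for a four-level severity scale (1 = most severe).
SEVERITY_WEIGHT = {1: 8, 2: 4, 3: 2, 4: 1}

def pain_index(open_cases, today=None):
    """Pain index: sum of (days open * severity weight) over open cases.

    open_cases is a list of (open_date, severity) pairs.
    """
    today = today or date.today()
    return sum(
        (today - opened).days * SEVERITY_WEIGHT[severity]
        for opened, severity in open_cases
    )

# A severity-1 case open 5 days and a severity-3 case open 20 days:
cases = [(date(2011, 9, 26), 1), (date(2011, 9, 11), 3)]
print(pain_index(cases, today=date(2011, 10, 1)))  # -> 5*8 + 20*2 = 80
```

Sorting customers by this index, descending, gives the report its predictive ordering.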

The weight numbers in the table above are a way to assign relative urgency to the different severities, play with them and determine the weighting that works for you. Here is a sample of two customers’ pain report that demonstrates the influence of higher severity cases on the pain index:

Both customers have an identical number of cases open, and the total number of days is similar. The only difference between the reports is two higher-severity cases that customer "A" has.

Now, this report does not attempt to predict the next angry phone call our CEO will receive. There are more variables at play here, from a pending deal to a total lack of goodwill on the customer's side. But it does give us an additional perspective on the caseload and the way it breaks down by customer, and helps us defuse potential trouble spots before they materialize. It will probably be far more useful in very high complexity environments, where customers have multiple cases open at any given time and support management needs to find a way to make sense of their workload.

Have you implemented such a report? Do you think it would be useful in your environment?