
Support Basics: Case Lifecycle Measurement


A few days ago I had a very interesting conversation with an executive in the enterprise technology industry. We spoke about identifying the root cause of case delays in particular and ways to analyze case traffic in general. I was very surprised to learn that the concept of a case lifecycle, and the ability to track it both for elapsed time and for failed resolution attempts, was completely new to him. In that light, here’s a case lifecycle primer and some metric guidelines to go with it.

If we think about a typical case, in most companies it transitions between multiple entities handling it for different reasons. Those may be the customer, the support team, an escalation team and possibly the engineering team. The case will spend some time waiting for a person or a technology resource to become available, be worked on and then transition to the next phase. As we well know, a case may spend varying amounts of time in each phase and can transition between phases repeatedly. For example: a case is opened and assigned to a support engineer, who analyzes it and proposes a resolution to the customer. The resolution does not work, the customer returns the case to the support engineer, who researches some more, requests additional information, and so on.

The discussion we had concerned support managers’ ability to analyze information from the case lifecycle and associate its phases with improvement opportunities on one hand and impact on customer satisfaction on the other. To get the level of detail needed, we must know, for every case, what state it is in. That state needs to include the entity that owns the next action (customer, support, engineering, etc.) and details of that action. For example, a case status may be “Customer – Installing Fix”, “Customer – Collecting Information” or “Engineering – Fixing Bug”. Now, for this information to have any value beyond the immediate case status, we need to be able to map the entire case journey along a timeline, and do it for our entire caseload.
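As a rough illustration, the owning entity and next action can be captured as a simple status record, and a case’s journey becomes a list of timestamped status changes. This is only a sketch; the field names, statuses and dates below are hypothetical and not taken from any particular case management system.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class StatusChange:
    """One entry in a case's journey: who owns the next action, and what that action is."""
    owner: str        # e.g. "Customer", "Support", "Engineering"
    action: str       # e.g. "Installing Fix", "Collecting Information", "Fixing Bug"
    changed_at: datetime

# A hypothetical journey for a single case, in chronological order.
case_journey = [
    StatusChange("Support", "Analyzing Issue", datetime(2013, 4, 1, 9, 0)),
    StatusChange("Customer", "Installing Fix", datetime(2013, 4, 2, 14, 30)),
    StatusChange("Support", "Researching Further", datetime(2013, 4, 4, 10, 15)),
    StatusChange("Customer", "Collecting Information", datetime(2013, 4, 5, 16, 45)),
]
```

Once every case is recorded this way, the same structure can be laid out on a timeline for the whole caseload, which is exactly what the audit trail gives us.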

So, how do we do that? Enter the audit trail.

Most case management systems keep an audit trail; some log all changes to a case, others only specific ones. In either case, recording status changes and their timestamps is the first step toward analyzing our caseload. Once we do that, we can begin to understand where our cases spend their time, which in turn lets us investigate the reasons for the delays. More adventurous managers will want to attempt regression analysis of various case parameters, from case life to repeated requests for additional documentation, to determine the operational drivers of customer satisfaction.
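As a minimal sketch of what that analysis might look like, assuming the audit trail can be exported as timestamped status changes per case (the record layout here is invented for illustration), the time spent in each status can be computed by pairing consecutive changes:

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical audit-trail export: (case_id, status, timestamp) rows.
audit_trail = [
    ("C-1001", "Support - Analyzing Issue", datetime(2013, 4, 1, 9, 0)),
    ("C-1001", "Customer - Installing Fix", datetime(2013, 4, 2, 14, 30)),
    ("C-1001", "Closed", datetime(2013, 4, 4, 10, 15)),
    ("C-1002", "Support - Analyzing Issue", datetime(2013, 4, 1, 11, 0)),
    ("C-1002", "Engineering - Fixing Bug", datetime(2013, 4, 3, 8, 0)),
    ("C-1002", "Closed", datetime(2013, 4, 8, 17, 0)),
]

def time_per_status(trail):
    """Sum the time the caseload spends in each status.

    Each interval runs from one status change to the next, so the time
    is attributed to the status that was in effect during that interval.
    """
    by_case = defaultdict(list)
    for case_id, status, ts in trail:
        by_case[case_id].append((status, ts))

    totals = defaultdict(timedelta)
    for changes in by_case.values():
        changes.sort(key=lambda change: change[1])
        for (status, start), (_, end) in zip(changes, changes[1:]):
            totals[status] += end - start
    return dict(totals)

for status, total in time_per_status(audit_trail).items():
    print(f"{status}: {total}")
```

The same per-status totals can then be broken down by owning entity, product line or severity to point at where the delays actually accumulate.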

Have you ever attempted to analyze your caseload this way? Have you been able to come to any meaningful insights and drive improvements?

Customer Support or Customer Success?


I recently read an interesting discussion on the ASPI LinkedIn group about the role of the customer success manager and the differences between this role and others in the customer support world.

A number of people have written about customer success management in the past, for example Mikael Blaisdell, here. Yet the more I read about the customer success manager role, the clearer it becomes that there is neither a broadly accepted definition of the role nor any agreement on how people in this role accomplish their task. Thinking about it in the context of the direction the technology world is taking, however, may hold the key to understanding the progression from Customer Support to Customer Success.

Historically, and until early in the previous decade, vendors’ engagements with their customers were very structured. Customers made large, long-term technology investments upfront. The deployment plan was then developed jointly by the customer’s IT and business teams and the vendor’s sales and professional services groups (or a third-party implementation partner). The role customer support played in this context was mostly reactive and focused on rapid break-fix problem resolution.

The growing adoption of SaaS delivery brought a very different approach to adopting new enterprise applications. There was no need for a project plan developed together with the IT team, no server provisioning, no storage space. All you needed was a credit card and you were in business. With BYOD you did not even have to install anything on your company’s laptop. All that’s required is a credit card, a need to accomplish something and enough curiosity to go look for an app or tool that will address that need.

This change introduced a new and very different challenge that SaaS vendors had to address and which traditional enterprise IT vendors never faced. In the SaaS world, every small purchase (or download of a free application) could eventually develop into a large deployment, yet the conditions those vendors faced made that far from guaranteed:

  • The initial investment made by customers is small and usually carries no minimum term commitment
  • The purchasing individual may not have a complete view of the needs or of the product’s ability to fulfill them
  • Products are installed to experiment with rather than to achieve a clear goal

These conditions lead to a reduced level of commitment on the customers’ part and therefore make it easy to walk away from the product when even the smallest difficulty or challenge is encountered. Obviously these conditions are much more common with small, tool-like services than with major enterprise products. An individual at an organization may, for example, experiment with Dropbox, but a salesforce.com implementation would still be done using the traditional enterprise technology model.

Taking all of this into account, we can begin to see the difference between the engagement model in traditional enterprise deals and the one for SaaS delivery. It is safe to assume that while large enterprise implementations are driven by sheer momentum, visibility and active management, small, ad hoc purchases are often made and then neglected, as discussed earlier. The role of the customer success manager would then be identifying installations that are not progressing according to certain pre-determined success criteria (see this post on the totango blog for a very good discussion) and helping those customers overcome challenges, understand the capabilities of what they have just bought and make the most of it, hopefully expanding the product’s use and stickiness.
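To make that identification step concrete, here is a minimal sketch of how such a check might work. The success criteria, thresholds and account fields below are invented for illustration and are not tied to any particular product or analytics tool.

```python
# Hypothetical usage snapshot per account, e.g. pulled from product analytics.
accounts = [
    {"name": "Acme", "active_users": 12, "logins_last_30d": 85, "features_adopted": 4},
    {"name": "Globex", "active_users": 1, "logins_last_30d": 2, "features_adopted": 1},
]

# Pre-determined success criteria: minimum values an account should reach
# within its first few months. These thresholds are illustrative only.
SUCCESS_CRITERIA = {"active_users": 3, "logins_last_30d": 20, "features_adopted": 2}

def at_risk(account):
    """Return the list of success criteria this account fails to meet."""
    return [key for key, minimum in SUCCESS_CRITERIA.items() if account[key] < minimum]

for account in accounts:
    missed = at_risk(account)
    if missed:
        print(f"{account['name']} needs attention: below threshold on {', '.join(missed)}")
```

Accounts flagged this way become the customer success manager’s outreach list, rather than waiting for those customers to open a support case or quietly churn.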

Obviously, as the terminology around customer success proliferates, we see it increasingly creep into areas traditionally covered by other roles, for example support account managers, technical account managers, engagement managers and others. Still, I believe the clear distinction between customer support and customer success is that the latter is more proactive and focused on implementation and usage expansion, while the former is more reactive and break-fix focused.

What is your opinion around this topic? Have you implemented customer success management in your organization? How have you defined the role and goals?