Category Archives: Support Management

Can Your Product Make Espresso?


Wish lists and their management are a frequent point of discussion in the customer support and success world. A wish, or enhancement request, is created when a customer would like the product to act differently than it currently does. Enhancement requests are added to the product team’s backlog, together with requirements from marketing, bug fixes and other development items, such as platform and technology upgrades. All items in the backlog are usually prioritized by product management; some will eventually be developed while others won’t.

Customer Success and Support teams, focusing on single customers or transactions, are usually concerned with enhancement requests that represent the needs of individual customers. Conversely, Product Management and Engineering focus on the needs of broader segments of the customer base, as well as the overall needs of the market. Consequently, discussions with product management or engineering tend to break down and leave everybody frustrated. The question, therefore, is: how can Customer Support and Success teams succeed in making the case to Product Management for product enhancements?

From my own experience, there are several key actions to take:

  • Screen – Ensure that any enhancement requests you promote are in line with the product’s strategic direction. If they are not, then you’ll be misleading customers and losing credibility both externally and internally
  • Quantify – Estimate the number of customers that need, or are expected to need, this enhancement. [How do you know how many will need it? Sometimes it is obvious: for example, customers using a certain application or technology platform, or customers subject to certain regulations]
  • Assign value – Estimate the risk the company is facing as a result of not implementing this enhancement. Consider what the company stands to lose: this can range from a single disappointed system administrator to lost revenue from multiple large customers across several products
  • Bundle – Are there several enhancement requests that are similar enough to be combined, or close enough to be developed jointly?
  • Recruit Customers – Create a forum for customers to help refine the definition of the enhancement requests and solicit votes and comments. Doing so will help you in assigning value to each enhancement. Françoise Tourniaire has an excellent blog post on managing wish lists in your customer forums
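The screening, quantifying and valuing steps above can even be sketched as a simple scoring exercise. The following Python sketch is purely illustrative – the field names, weights and numbers are my own assumptions, not a prescribed model:

```python
# Hypothetical sketch: score enhancement requests by screening them against
# strategy, quantifying affected customers and assigning a revenue-risk value.
from dataclasses import dataclass

@dataclass
class EnhancementRequest:
    title: str
    fits_strategy: bool        # Screen: in line with product direction?
    affected_customers: int    # Quantify: how many customers need this?
    revenue_at_risk: float     # Assign value: annual revenue at risk
    customer_votes: int        # Recruit customers: votes from the forum

def priority_score(req: EnhancementRequest) -> float:
    """Off-strategy requests score zero; the rest get a simple weighted score."""
    if not req.fits_strategy:
        return 0.0
    return (req.affected_customers * 1.0
            + req.revenue_at_risk / 10_000
            + req.customer_votes * 0.5)

requests = [
    EnhancementRequest("Support new audit regulation", True, 40, 250_000.0, 12),
    EnhancementRequest("Make espresso", True, 1, 0.0, 1),
    EnhancementRequest("Port to a deprecated platform", False, 5, 50_000.0, 3),
]

for req in sorted(requests, key=priority_score, reverse=True):
    print(f"{req.title}: {priority_score(req):.1f}")
```

Even a crude model like this turns the espresso argument into a comparable number, and off-strategy requests are screened out before any weighting is applied.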

Following these steps will help you come to the product planning discussion equipped with high quality information and better, more strategic arguments than the fairly common “VIP Customer x wants the product to make espresso” argument, and hopefully you’ll have better results to show for it.

Do You Know What Your Front-Line is Thinking?


One of the problems in running any large, globally distributed organization is the potential disconnect between executives in HQ and the teams on the front lines. Frequently it is very difficult to know what’s on the minds of the individual engineers and managers, especially in remote locations where the executive in charge flies in every six months at best, holds a team meeting, and flies away again after a day or two.

A partial remedy to this disconnect is periodic reviews with every first-line manager and team lead in the organization. These reviews go by multiple names, from Quarterly Business Review, or QBR, to Deep Dive. Many executives hold them with their direct reports as a checkpoint to track execution, raise problems and float ideas. A much smaller subset run such review meetings with front-line managers. In this post I’d like to share the format that worked for me in the past, and which I helped others implement as well.

These review meetings can take a number of directions: they can be quantitative or qualitative, and they can explore or review. My preference was to focus the meetings mostly on the qualitative and exploratory side, for three reasons:

  1. Support organizations, especially large and mature ones, have a very strong, often excessive, metrics and reporting discipline. Key metrics are reviewed regularly and visible to all stakeholders as well as broader populations. Consequently, the likelihood of new discoveries is low and the opportunity to exchange ideas and unstructured information is lost
  2. Focusing on metrics, where the executive “reviews” the performance, will invariably give the meeting a critical atmosphere and create tension. It will not encourage the free discussion and the building of communication channels which executives should value and team managers appreciate
  3. Numbers and charts are the safe zone for many support managers, and are often used to deflect meaningful communication. Instead, having an open, qualitative discussion forces a different layer of communication without the usual crutches

So, rather than focusing on metrics and dashboards, we can have the discussion cover several basic items:

  • Recent accomplishments – a chance for the manager to demonstrate the successes of the team. Review the workload and the ways it changed over time, as well as several important metrics [I know I just contradicted myself, but please bear with me]. This should be the only part of the meeting where numbers and metrics play a primary role in the discussion, rather than illustrating a specific point
  • What works? While the previous section focuses on the team being reviewed, this can be an opportunity to provide feedback and insight on other parts of the organization, the company and share success stories. Examples could be any experiments the team has conducted, an initiative they set in place, and so on.
  • What doesn’t work? Surprisingly, this proved to be the hardest part of the meeting for many managers. There were attempts to rename it [e.g., “Things We Can Improve”], non-responses [“I really can’t think of anything”] and more. But, for me, it was an extremely useful part of the meeting, for two reasons: First, it gave me a perspective into the difficulties the teams were facing, and second, it allowed me to understand how the managers functioned in a slightly stressful situation
  • People and development plans – a key part of every executive’s responsibilities should be the development of every person within the organization. It is easy to focus on individuals who stand out in one way or another, high-flyers, troublemakers and extroverts usually stand out. However, reviewing every individual in the team, even briefly, can provide additional insight into the team’s chemistry and all the individuals within it. Frequently this is the opportunity for the managers to voice concerns that otherwise do not make their way up the organization.
  • Anything else – ensure there’s sufficient time for the managers to discuss any point they feel they should raise but had no opportunity to earlier in the meeting

Some helpful guidelines:

  1. Provide guidelines for preparation – ask the managers to take sufficient time to prepare, consult with their own team members, peers and direct manager
  2. Allocate sufficient time, 60 to 90 minutes for every meeting
  3. Limit the amount of paper or slides permitted. For example, require that each of the above sections be limited to a single slide, with 4-6 bullet points in each; otherwise you are doomed to death by PowerPoint.
  4. Be prepared with an understanding of the people and the environment the team operates in, and a good number of questions. A good place to start is the notes from the previous meeting with that manager, as well as a briefing with their direct manager
  5. Resist the temptation to solve problems during the meeting
  6. Under no circumstances make this a punitive opportunity. Do that even once and the information will spread through the organization faster than you think
  7. Take notes and action items, but do not assign them outside of the organization’s management hierarchy

A generic version of slides for a support business review can be found on SlideShare.

In summary, use this as an opportunity to learn more about the organization and, mostly, its people, and the opportunities you can use to help them develop individually as well as increase the capabilities of your entire organization.

Observations on Customer Experience Improvement


I just came across a very interesting article courtesy of McKinsey Quarterly’s Classics mailing, discussing the need to optimize the service delivered to customers’ needs. While it focuses on consumer businesses, there are a few key learnings for enterprise technology support operations, demonstrated by this quote:

“Finding these savings requires rigor in customer experience analytics: […]. It also requires a willingness to question long-held internal beliefs reinforced through repetition by upper management. The executive in charge of the customer experience needs to have the courage to raise these questions, along with the instinct to look for ways to self-fund customer experience improvements.”

The McKinsey article identifies two main drivers of resistance to implementing effective improvements:

  • Organizational momentum and deeply held, but often misplaced, beliefs. Frequently these beliefs are shared across management levels, causing the company to operate with ineffective goals that remain unchallenged
  • Lack of both rigorous analysis as well as the ability to present those results convincingly and effectively

A future post will discuss analytics techniques. Here I’d like to focus on a few causes preventing organizations from progressing towards effective and efficient improvements and combine a few of the blog’s previous posts for the benefit of new readers. First, we previously had a high level discussion of several differences between consumer and enterprise support, namely the different roles we interact with and the many more opportunities for friction.

We also wrote about complexity and volume. This is a very useful distinction, touching on the operational differences between high volume/low complexity operations and those with low volume/high complexity, as most enterprise support operations are.

Having said that, we frequently find concepts and metrics from consumer businesses that are not beneficial for the unique challenges of supporting enterprise technology. It becomes our job to educate the organization and provide deep, actionable insights to help the company excel efficiently. We’ll cover that in future posts.

No Free Lunches, or How To Reduce Case Life


A few weeks ago I had an interesting discussion with a senior executive in an enterprise technology company. That person started his career as a support engineer, and at that time one of his main measurable objectives was case life – the average amount of elapsed time between opening and closing of cases in his ownership. His impression, at the time and even now, was that the only way in which he could impact that goal was to close cases prematurely. Presumably this was not the intended result of this goal.

That discussion opened the door for two questions: can case life be a valuable metric for managing support organizations, and if so, how can it be used in a meaningful and productive manner?

Case life, when we think of it, is an aggregate metric combining a variety of activities that take place throughout the life of a case, many of them more than once. Each activity therefore has a double impact on case life: first through its elapsed time, and second through repetition. Opportunities for improvement exist, in turn, both in reducing the time each element takes and in eliminating as many repetitions as possible.

When we examine the most common elements of a case’s life and analyze the influences on elapsed time and the drivers of repetition, we find that most of the time spent on a case consists of waiting for one of three activities to take place:

  • Producing problem documentation
  • Investigating the problem
  • Implementing a fix and verifying its effectiveness

We also know that for each of these activities we can reduce both the time they take to perform and the number of iterations required to bring them to completion. For example, producing the documentation required to investigate a problem will be much faster and require fewer repetitions when the documentation is generated automatically the first time the problem occurs. Having to wait for the problem to recur, and then rely on verbal instructions from the support engineer, will invariably create errors and require repeated attempts to get right. But high-impact changes require higher investment in tools or in the product.
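The double impact of duration and repetition is easy to see in a back-of-the-envelope calculation. The three activities come from the list above; the durations and iteration counts below are invented purely for illustration:

```python
# Hypothetical sketch: case life as the sum over activities of
# (average elapsed days per iteration) x (number of iterations).

def case_life(activities):
    """Total elapsed case life, in days, for a single case."""
    return sum(days_per_iteration * iterations
               for days_per_iteration, iterations in activities.values())

# Baseline: manual documentation, so several capture-and-investigate rounds.
baseline = {
    "produce documentation": (2.0, 3),  # wait for recurrence, redo captures
    "investigate problem":   (1.5, 3),
    "implement and verify":  (3.0, 1),
}

# After automating data capture on first failure: one documentation pass.
improved = dict(baseline)
improved["produce documentation"] = (0.5, 1)

print(case_life(baseline))  # 13.5
print(case_life(improved))  # 8.0
```

Cutting repetitions, and not only per-iteration time, is what moves the number.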

If we map the options for reducing case life according to their complexity and anticipated impact, we’ll see something similar to the following chart, where the activities that impact case life the most are also those that require the largest investment and involve higher levels of the organization:

[Chart: case life elements, mapped by investment required and anticipated impact]

Taking all this into account, we can conclude that the ability of the support engineer to influence case life is relatively small and depends on their ability to manage the customer interaction efficiently and effectively. The bigger impact will be driven by higher investment and greater organizational focus on the various drivers.

Having reviewed this we can now answer the questions we initially posed:

  1. Is case life a valuable metric for the support organization?
  2. Should it be used to measure the support engineer?

The answer to the first question is a resounding yes. Case life, the way it develops over time and its composition give us unique insight into the performance of the organization, help us gauge the success of past actions and outline future development plans. On the other hand, measuring support engineers on case life is probably unproductive, and is likely to drive the behaviors discussed in the first paragraph. It is better to measure engineers on the specific element each needs to improve, and especially those they can directly influence.

Are Your Technology and Policy Aligned?


An interesting discussion in the ASP Group on LinkedIn asked for ways to address customers who won’t upgrade their systems yet keep requesting support. There are several valuable suggestions in that discussion, but I’d like to use this opportunity to address how vendor choices impact customers’ upgrade decisions.

There are two main choices vendors can make that impact customers’ decisions. First are policy choices: for example, the number of supported releases and the options available to a customer on a non-supported release when encountering a problem. Second are engineering choices: the complexity of the upgrade process, the amount of work the customer needs to invest in the upgrade and whether any equipment needs to be acquired. These decisions influence customers and their ability, and desire, to upgrade. The interaction between policy and technology choices will tend to drive customer choices and the resulting actions vendors can take.

The following matrix offers a quick visualization of the choices and their impact.

[Matrix: policy choices vs. upgrade complexity]

If we think about these four options, we can identify some familiar categories and use cases:

  • Quadrant I is largely empty: when simple or automated upgrades make it easy for customers to upgrade, vendors can get away with strict policies, sunsetting releases rapidly, reducing support levels for older releases and sometimes charging high maintenance fees.
  • Quadrant II is for situations where upgrades involve high expense and risk, and the vendors are willing to continue supporting these releases, either for customer retention or for the increased maintenance fees. The most famous example is Microsoft’s extended support for Windows XP.
  • Quadrant III represents cases where the software upgrade is simple and in many cases automated. A typical use case would be anti-virus software with auto-update capabilities and little effort required on the customers’ side.
  • Quadrant IV is where we find extremely complex technology deployments (think large-scale ERP systems), requiring extensive customization, and subsequent testing and adaptation when upgrading. These upgrades are extremely expensive, time consuming and very risk prone. Frequently customers would rather stay on their original release and avoid upgrading. Vendors who refuse to support these older releases, or who impose high costs for doing so, may find their customers defecting to third-party maintenance providers.

Understanding these categories and the choices you make will help you understand how your company’s policy and technology choices impact customers’ upgrade decisions and your maintenance business.

Sometimes The Extra Mile Is Free


Recently I visited a coffee shop while waiting to meet a friend. Walking in, I was impressed – the place was large, well lit and tastefully decorated. The food and pastries in the display cases seemed attractive, with quality ingredients and professional preparation. Clearly, those who designed and built this business aimed high, and their prices reflected that. As a long time observer of service operations I started wondering – could they deliver on the promise of the decor and the food?

Considering I only had tea, I couldn’t tell anything about the food except that other diners seemed to be enjoying it. However, from the service perspective I left with a few nagging points that directly apply to other service organizations:

  • The food is served on nice porcelain dishes which the crew clears once the customer has left. But they do not wipe the tables; consequently, each table had some crumbs. Very few, but noticeable. When clearing the tables they do not use a tray, so we got a chance to see one of the crew walking slowly with a pile of dishes on her arms, trying not to drop them. Solution – get a tray and put a little wet towel on it. Clear the dishes onto the tray, wipe the table and be done.
  • When you order a drink, they take your order at the counter and bring the drink to the table. But the crew has no clue how to walk straight while holding a cup. It was very comical watching one of them holding a saucer with both hands and walking slowly, trying not to spill the coffee. Solution – rehearse, work on your muscle memory.

In both these cases, not only did the operation look unprofessional, but the employees were visibly embarrassed.

  • Last – my tea was delivered to the table in a cup. There was nowhere to dispose of the teabag, nor was there sugar or a stirrer on the table. I had to get up and get them myself, negating the point of table service. Solution? You guessed it. Bring a saucer, and place a few bags of sweetener on each table.

Now, there is a common thread between all these points. Fixing them would cost the business absolutely nothing, but requires an observant manager with a burning desire to keep improving the service. This raises the question: how much improvement could each of us make to our support operations at zero cost, while helping our employees increase their skill and professionalism? How much better can we make them? What if we took the time to observe our organization from the side, and inspect every move and every action as they are perceived by the customer? Sadly, in many situations this seems to be everybody’s last priority.

Broader impacts of organizational maturity


In a previous post we reviewed the various maturity stages for a support organization. Having done that, the next question becomes “so what?”, or in other words, how does support’s increasing maturity impact the value it delivers to the company?

To answer that, let’s look at the relationships with internal constituents, such as corporate management and adjacent organizations, as well as with external entities like partners and customers.

Most of us can imagine how the dialog between support and engineering evolves in line with increasing maturity. Support organizations transition from being a front end where every customer case is escalated, to identifying failures, advising customers on product use and taking responsibility for the customers and for the well-being of the installed base. Similarly, the interaction with sales shifts from adversarial to a partnership.

But, how does all this deliver actual value to the company?

If we think about it, there are four ways customer support can add value to the company:

  • Increased efficiency, so support costs per unit of revenue grow slower than revenue does
  • Creating additional support revenue through value added services
  • Reduced discounting during the sales process, through collaboration with sales and a minimized impact of problems on customers
  • Accelerated and enhanced consumption of the products, driving the customer to add capacity or licenses

Additionally, a mature support organization can provide the company extensive amounts of information about the products and the customers, for example:

  • Ways in which customers use the products
  • Challenges and difficulties customers face while using the products
  • The most commonly used features, and those used rarely or never at all
  • Interaction with third party partners, their capabilities and deficiencies
  • Customer satisfaction and its drivers and inhibitors

However, the reliability of this information, and the ability of the support organization to be a valued partner, depend not only on the maturity of the support organization, but also on the maturity of the entire company.

What’s your experience in taking your organization to higher maturity levels? Where did you encounter the biggest challenges?

Is It An Airline? Is It A Hotel? No! It Is Enterprise Technology Support!


The blog’s previous post discussed confirmation bias and the way it impacts our actions. Frequently we encounter a different type of bias, usually coming from individuals not very familiar with enterprise technology support, who hold their support teams to standards that may not necessarily apply. Support managers find themselves having to explain the difference between supporting enterprise technology and a number of popular consumer businesses, including an airline, an online apparel vendor, a hotel chain and a department store.

There is no doubt that each of these companies has mastered customer service for its customer base, and surely there is a great deal we can learn from them. But in order to do that, we need to understand the differences and ensure we adopt those behaviors that can help our customers and us.

In the past I wrote about the differences between high volume / low complexity operations and low volume / high complexity ones. There are additional models that can help us complete the picture.

Stacey’s Matrix attempts to illustrate the complexity of decision making in relation to two dimensions:

[Stacey’s Matrix]

The first, on the horizontal axis is the degree of certainty in which the problem addressed is understood, including the cause and effect chains. The vertical axis then illustrates the degree of agreement between the various participants on the way to resolve the problem.

If we use an airline as an example, the vast majority of cases handled by its service employees are relatively straightforward and fall within the “rational decision making” zone of the chart – both the customer and the airline are clear on the objective and the way to accomplish it. Even when recovering from a service failure (e.g., lost luggage or a cancelled flight), there usually is clarity concerning the cause and effect, as well as agreement on the best way to resolve the problem.

Now let’s look at the enterprise technology support situation. Most companies have done a good job of documenting their knowledge and making it accessible to customers, partners and other users. Since straightforward issues are increasingly self-served, the support organization’s workload contains an increasing proportion of cases where either the cause and effect chain is not clear, or there is little agreement on how to achieve the end result, or both. These disagreements take us away from the relative comfort of the rational decision making zone, and increasingly into judgmental or political decision making, or even into the complex decision making zone.

Now, in order to resolve the customer’s case, customer support must ultimately reduce the uncertainty and clarify the cause and effect chain for the problem. Once that has been accomplished the case can be resolved and the problem eliminated. Consumer brands rarely have to address the need for reducing complexity and gaining agreement on the problem definition or the cause and effect chain leading to it.
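As a rough illustration, the matrix can be read as a lookup over its two axes. The zone names follow the discussion above; the numeric thresholds are entirely my own assumption – the real matrix has no crisp boundaries:

```python
# Hypothetical sketch of Stacey's Matrix as a lookup. Both axes run from
# 0 (far from certainty / agreement) to 1 (close to certainty / agreement).

def stacey_zone(certainty: float, agreement: float) -> str:
    if certainty > 0.7 and agreement > 0.7:
        return "rational decision making"
    if certainty > 0.7:
        return "political decision making"    # clear cause and effect, little agreement
    if agreement > 0.7:
        return "judgmental decision making"   # agreement, unclear cause and effect
    if certainty < 0.3 and agreement < 0.3:
        return "chaos"
    return "complex decision making"

# An airline's lost-luggage case sits near certainty and agreement;
# a novel enterprise technology problem usually does not.
print(stacey_zone(0.9, 0.9))  # rational decision making
print(stacey_zone(0.4, 0.5))  # complex decision making
```

Support’s job of clarifying the cause and effect chain amounts to moving a case rightward along the certainty axis until it lands back in the rational zone.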

About Assumptions


A friend sent me a link to a study claiming that, when asked, very fast typists could not assign the correct letters to keys on a keyboard. Their underlying assumption was that their keyboard would not change, and therefore the location of the keys remained constant and did not require memorizing.

This made me think about how those very fast typists would fare with a different keyboard, and about the underlying assumptions we take with us from one role to another without appreciating how the different circumstances we operate in may require different solutions from those we are familiar with. How many decisions are made because the decision maker is in a somewhat recognizable situation and chooses to blindly replicate past experiences, and how many are made because they represent the perfect response to the situation at hand?

Complexity and Volume


Apologies for the blog’s long hiatus. A few personal projects took priority over posting.

The differentiation between high volume / low complexity and low volume / high complexity operations is probably familiar to most readers of this blog. We use it as a way to explain the difference between supporting enterprise technology and running a call center for a consumer product. As part of the research for my nascent book (yes, there is a book in the works!), I have been trying to find documented criteria for this organizational differentiation. Strangely enough, I could not find anything documented anywhere, and therefore this post is an attempt to start a discussion on these criteria. If any of the blog’s readers are aware of another source for this information, please let me know via email or the comments section.

To illustrate the challenge, we can think of a continuum: on one end is an automated telephone service responding with the current time; on the other, the single most difficult problem imaginable – maybe something on the scale of bringing Apollo 13 back to Earth. Analyzing the differences between these two extreme situations, we can come up with the following evaluation criteria:

                                               Low Volume /       High Volume /
                                               High Complexity    Low Complexity
 Automation                                    Complex            Simple
 Customer Intimacy                             Important          Not required
 Amount of offline work (Customer and Support) High               Minimal
 Technical Interaction Level                   Very High          Very Low
 Interactions / Case                           High               Low, single
 Value Generated                               Content            Volume

Some of these criteria are self-explanatory; however, I feel that “Value Generated” requires a few additional words. With high volume / low complexity transactions, the main learning for the organization is in the demand for the service. The service provider may find correlations between demand and other variables (e.g., time of day, day of the month, other special events). On the other hand, going back to the Apollo 13 example, the value generated by that effort, besides bringing the crew back in one piece, was the content of the analysis performed by the various teams, the decision-making processes and the learnings associated with that.

What is your experience in using the volume / complexity continuum? Have you seen other, more detailed discussions? What was the reaction you received?