Looking at Surveys? Here Are Some Practical Tips:

Many companies run customer surveys, and sadly many begin the effort without giving sufficient thought to the benefits they intend to derive. When asked, a typical response is “I’d like to know how we are doing against the competition” or “I’d like to know what customers think of us / our engineers / service / products,” and so on. While those are indeed good reasons, there are a few additional points to think about before starting a survey. This post will not get into the philosophical discussion of NPS or not, but will focus on the practical side of creating a survey that functions successfully.

When thinking of surveys, wanting to know how you are doing, especially against the competition, is interesting, but as we have argued repeatedly on this blog and elsewhere, it is a futile exercise. The best use of a survey is to identify improvement opportunities and reinforce strengths based on customers’ feedback, and then to confirm the effectiveness of the actions after they have been put in place.

The two basic operational goals to consider when running a survey are, first, getting a sufficient number of responses, and second, obtaining meaningful and actionable information from those responses. When those are accomplished, organizations can incorporate the insights they derive into their operations to drive sustainable improvement. This post will focus on those two objectives – driving a high response rate and asking meaningful questions. In a subsequent post we will look at what we can do with the insights provided and the organizational implications.

Driving the response rate

There are several factors that influence survey response rates:

  • Effort – the number of questions to answer, their complexity and whether they are in the customer’s own language
  • Benefit – does responding to the survey affect the service?

Let’s look at effort first. We have all heard of survey fatigue, which occurs when customers receive too many survey requests in a short period of time. This is most common with enterprise technology companies, where specific individuals are responsible for the vendor relationship and handle the bulk of the interactions. Many companies therefore use a pacing mechanism that limits survey requests to one per contact within a given period (anywhere between one and four weeks seems to be common practice). Second is the number of questions: the fewer questions there are, the more likely customers are to respond. Many articles and books discuss survey design; I recommend you become familiar with at least a few of them. Last is language – companies that operate globally should invest the effort to translate their surveys into the languages of their main customer constituencies.
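A pacing mechanism like the one described above can be sketched in a few lines. This is only an illustrative sketch, not any vendor’s actual implementation; the function names and the two-week cooldown are assumptions chosen from within the one-to-four-week range mentioned above:

```python
from datetime import datetime, timedelta

# Illustrative cooldown window; common practice ranges from one to four weeks.
COOLDOWN = timedelta(weeks=2)

def should_survey(contact_id, last_surveyed, now=None):
    """Return True if this contact has not been surveyed within the cooldown window."""
    now = now or datetime.utcnow()
    last = last_surveyed.get(contact_id)  # None means never surveyed before
    return last is None or now - last >= COOLDOWN

def record_survey(contact_id, last_surveyed, now=None):
    """Record the send time whenever a survey request actually goes out."""
    last_surveyed[contact_id] = now or datetime.utcnow()
```

In practice the `last_surveyed` lookup would live in the CRM or survey tool rather than an in-memory dictionary, but the rule itself stays this simple: check the window before sending, and record every send.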

Now, let’s discuss benefit. Customers are more inclined to respond if they see tangible improvement in the service based on their feedback. Many companies choose to sit back and hope customers will notice those improvements; however, it is a good idea to help. First, a little self-promotion never hurts – if you publish a newsletter or have a blog, you can include a section on recent improvements, or you can add a small section to your support home page showing the same. Second, use every opportunity to talk to customers and tell them about improvements made in response to surveys. This is especially true when you call in response to poor survey scores and listen to their comments (you do make those calls, right?)

Getting meaningful data

Frequently we hear comments such as “I get the most useful input from the free-text questions.” Well, there is a reason for that. Too often, survey questions focus on the agent providing the service rather than on the customer’s experience. These surveys ask about courtesy, professionalism and so on, rather than about the solution to the customer’s problem (was it timely? accurate? easily usable?) and the time and effort it took to get there. Esteban Kolsky has written a number of good posts on customer surveys; start here and explore his blog for excellent insights. By all means have a free-text field for the customer to provide input, but never confuse the squeaky wheel with the voice of your entire customer base.

Asking the right questions is only the first step in making data meaningful. Once the survey is operational, there are three points we need to focus on:

  • Analyzing the data in order to understand what we need to fix or reinforce
  • Taking the right actions to drive improvements
  • Confirming or correcting the actions through a feedback loop

Edit – A long-time friend reminds me of an excellent way to increase survey responses: when closing a case, support engineers should inform the customer that a survey may be coming their way, and that they would be very grateful if the customer responded.