Now, personally, I do believe I know at least as much about building, delivering and analysing surveys as I do about applying technology. But that is mostly because I know so little about technology. In either case I would always welcome expert advice when I need to get something right.
Even IT listens to the CFO’s people when it comes to costs and accounting, yet many IT organisations have access to significant survey expertise in their marketing people that goes untapped.
This feels important to me simply because of all the bad surveying we still see. I suspect that the availability of free services like SurveyMonkey leads us to build and run surveys without any real planning, and without thinking through how we might analyse and use the results once we have them. It is a good example of reducing the ‘Plan-Do-Check-Act’ cycle down to just ‘Do’ – speedy and economical, but not usually very effective.
As for the confusion and the wrong results taken from unrepresentative samples …
For a simple but telling example, think about how many ‘customer survey’ results you have seen where, in fact, only users have been asked. User satisfaction is important, but it isn’t customer satisfaction; we need to find out both and act accordingly on what we find. For example, if you have 100% perfect user satisfaction, then the odds are your customers will think they are spending too much. You will also frequently see a mix of customers and users asked questions that are not really targeted at all – asked simply because they can be. This is often based on the misplaced belief that the more people you ask, the more accurate the answer, ignoring the whole sample selection process.
Take a classic ITSM example: a support unit routinely sends questionnaires to those who have made use of the service desk. This, of course, gives you a satisfaction result amongst those who have had sufficient problems to make them phone for help. Might you not expect a rather lower score from these people than from the ones who have been working quite happily without any need for support?
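The effect of that sampling bias can be sketched with a small simulation. All the numbers here are hypothetical, chosen only to illustrate the mechanism: unhappy users are assumed more likely to phone the service desk, so a survey of callers alone understates overall satisfaction.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

# Hypothetical population of 1,000 users, each with a 'true' satisfaction
# score from 1 to 5. Assumed call rates: dissatisfied users (score 1-2)
# phone the service desk 80% of the time, happier users only 20%.
population = []
for _ in range(1000):
    satisfaction = random.choice([1, 2, 3, 4, 5])
    called_desk = random.random() < (0.8 if satisfaction <= 2 else 0.2)
    population.append((satisfaction, called_desk))

everyone = [score for score, _ in population]
callers_only = [score for score, called in population if called]

avg_all = sum(everyone) / len(everyone)
avg_callers = sum(callers_only) / len(callers_only)

print(f"Average satisfaction, whole population:  {avg_all:.2f}")
print(f"Average satisfaction, desk callers only: {avg_callers:.2f}")
```

Running this, the callers-only average comes out well below the population average, even though both figures are drawn from exactly the same people. The questionnaire isn’t wrong; the sample is.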
We know we need to care more and more about understanding what our customers – and users and other stakeholders – want and need. We also need to understand that it is not always an easy task to find that out. There is a whole professional specialism out there that delivers this service. As service providers ourselves, proud of our professional expertise, should we not recognise that more – and take some better advice before we ‘knock something up’ to measure satisfaction?
Maybe you do consult with your internal experts if you have them, or maybe you buy in expertise. It would be good to hear if you do.