The casual feedback we get from customers on the impact of our campaigns and the success or relevance of our products often needs to be supplemented with some hard facts. Asking the questions that get at those facts is harder than you might think; here are some ground rules.
The most common way of gathering customer feedback data is still via some sort of questionnaire, whether administered by phone, mail, face-to-face or e-mail. Ensuring that your questionnaire returns valid, unbiased results is easier said than done. We all have a tendency to use leading phrases, inappropriate questions or skewed designs that produce feedback matching the results we have already decided we want.
If we are to get results we can confidently act on, we need to be careful about how we structure the questions we ask and how we measure the answers we receive.
Asking a Good Question
The foundations of any questionnaire are good, clear, unambiguous questions. These are not hard to formulate, provided that you, as the questioner, can answer yes to each of the following:
- Will the respondent be able to understand the questions?
- Having understood the question, will they be willing to answer it?
- Having both understood the question and been willing to answer it, will the respondent be able to answer in a way that accurately reflects their feelings?
Understanding the Question
Crafting an understandable question is not as simple as it first sounds. The proliferation of three-letter acronyms (TLAs), especially in the IT market, has led many market researchers to assume comprehension where none exists.
But it is not just obscure technical terms that confuse respondents; imprecise words, abstract concepts and trying to ask two questions at once are all common pitfalls. Don't be under the illusion that someone who doesn't understand a question will say so, ask for clarification or simply avoid answering it. Nobody wants to appear ignorant, and many questionnaires, by presenting a range of legitimate-looking answers, encourage responses even where the question is meaningless to the respondent.
Do you think that Microsoft's Win8 is a serious threat to iOS and Android dominance of developer's mindshare and the mobile app marketplace?
This sample question uses obscure terms (Win8, iOS, Android), an imprecise word (dominance) and tries to ask two questions at the same time (developer's mindshare AND the mobile app marketplace).
Willingness to Answer the Question Honestly
Does your question cover an embarrassing or personal subject, and if so, are you administering the questionnaire in the right way? It doesn't take a genius to work out that for more potentially embarrassing questions, you're more likely to get an honest answer if the questionnaire is administered remotely (i.e. by direct mail rather than face-to-face).
Additionally, respondents are becoming more sophisticated at spotting leading or biased questions that seem to put answers in the respondent's mouth, and they may refuse to answer them.
The desire or pressure to give 'socially acceptable' answers often plays a part in less than honest responses. In some subject areas, and especially in face-to-face interviews where an answer may be overheard by others, questions on political or moral issues may elicit a response more in keeping with perceived acceptable norms than the respondent's true opinion.
Ability to Answer Accurately
Many complex questions can best be answered by inviting an open-ended statement, accurately recording the exact words used by the respondent. The problem with this sort of answer is the virtual impossibility of analysing large numbers of such responses.
In many cases researchers resort to some kind of scaling or multiple-choice system. Sometimes this is done by presenting respondents with a list of statements which the questionnaire designer feels adequately represents the range of legitimate answers. This, however, risks oversimplifying the issues involved, and many respondents will find it hard to choose an answer that accurately reflects their true opinions.
More often, rating scales are used that allow a full range of opinions to be applied to a statement. For instance, respondents may be asked if they:
- Agree a lot
- Agree a little
- Neither agree nor disagree
- Disagree a little
- Disagree a lot
Whilst this is a perfectly valid scale, questionnaire designers should ensure that they include a 'don't know' option (although this need not initially be shown to the respondent), avoid scales with more than seven points (too many choices cause confusion) and rotate the list amongst respondents (as there's a natural tendency to choose from the top of a list).
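For readers administering questionnaires electronically, the rotation advice above is easy to automate. The sketch below is a minimal illustration (the helper name and approach are our own, not from any survey tool): each respondent sees the scale starting from a different option, so that over many respondents every option spends an equal share of time at the top of the list.

```python
# Minimal sketch of per-respondent list rotation (hypothetical helper,
# assuming nothing beyond the Python standard library).

SCALE = [
    "Agree a lot",
    "Agree a little",
    "Neither agree nor disagree",
    "Disagree a little",
    "Disagree a lot",
]

def presented_order(respondent_index, options=SCALE):
    """Return the scale rotated according to the respondent's index,
    so no single option always benefits from the top-of-list position."""
    shift = respondent_index % len(options)
    return options[shift:] + options[:shift]

# Respondent 0 sees the original order; respondent 1 starts one item down.
assert presented_order(0)[0] == "Agree a lot"
assert presented_order(1)[0] == "Agree a little"
```

Note that rotation keeps the options in their natural sequence (it only changes the starting point), and the recorded answers should of course be mapped back to the canonical scale before analysis.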