Posted by nnlmneo on July 8th, 2016
Posted in: Blog
I recently reviewed a questionnaire for a colleague, who used one of the most common question formats around: the Likert question. Here is an example:

[Example Likert item: "Justin Bieber is the best singer ever," with response options running from Strongly Agree at the left to Strongly Disagree at the right.]
This is not a question from my colleague’s survey. (I thought I should point that out in case you were wondering about my professional network.) However, her response options were laid out similarly, with the most positive ones to the left. So I shared with her the best practice I learned from my own training in survey design: Reverse the order of the response options so the most negative ones are at the left.
This “negative first” response order tends to be accepted as a best practice because it is thought to reduce positive response bias (that is, people overstating how much they agree with a given statement). Because I find myself repeating this advice often, I thought the topic of “response option order” would make a good blog post. After all, who doesn’t like simple little rules to follow? To write a credible blog post, I decided to track down the empirical evidence that underpins this recommended practice.
And I learned something new: that evidence is pretty flimsy.
This article by Jeff Sauro, from Measuring U, provides a nice summary of, and references for, the evidence on our "left-side bias." "Left-side bias" refers to survey-takers' tendency to choose the left-most choices in a horizontal list of response options. No one really knows why we do this. Some speculate it's because we read from left to right, so the left options are the first ones we see. I suppose we're lazy: we stop reading partway through the options and make a choice, picking one of the few we've actually laid eyes on. This speculation comes from findings that the left-side bias is more pronounced when the question is vague or confusing, or when survey-takers flat-out don't care about the question topic.
Here's how left-side bias is studied. Let's say you really care what people think about Justin Bieber. (I know it's a stretch, but work with me here.) You ask 50 people their opinion of the pop star using the sample question from above, prepared in two versions: one with the positive response options at the left, the other with the negative options at the left. You randomly assign the positive-first version to 25 of the respondents (group 1) and the negative-first version to the other 25 (group 2). The research predicts that group 1 will appear to have a more favorable opinion of Justin Bieber, purely because the positive options are on the left.
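To make that design concrete, here is a minimal simulation sketch in Python. Everything in it is my own assumption, not data from any study: the 1-to-5 coding, the 15% bias_rate, and the respond helper are all hypothetical, just enough to show how the two groups' means drift apart.

import random
import statistics

random.seed(42)

def respond(true_opinion, left_is_positive, bias_rate=0.15):
    # With probability bias_rate, the respondent just grabs the
    # left-most option instead of reporting their true opinion.
    if random.random() < bias_rate:
        return 5 if left_is_positive else 1
    return true_opinion

# 50 respondents drawn from the same underlying distribution of opinions,
# coded 1 (Strongly Disagree) to 5 (Strongly Agree).
opinions = [random.randint(1, 5) for _ in range(50)]

# Group 1 sees the positive options at the left; group 2 sees negative first.
group1 = [respond(o, left_is_positive=True) for o in opinions[:25]]
group2 = [respond(o, left_is_positive=False) for o in opinions[25:]]

print("Group 1 (positive first) mean:", statistics.mean(group1))
print("Group 2 (negative first) mean:", statistics.mean(group2))

Run it and group 1's mean tends to come out higher than group 2's, even though both groups share the same underlying opinions; the gap is pure order effect.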
Sauro's references do provide evidence of left-side bias. However, after reviewing them, I drew the same conclusion he did: the effect of response option order is small, to the point of being insignificant. I became more convinced of this when I looked for guidance in the work of Donald Dillman, who has either conducted or synthesized studies on almost every imaginable characteristic of surveys. Yet I could not find any Dillman source that addresses how to order response options for Likert questions. In his examples, Dillman follows the "negative option to the left" principle, but I couldn't find an explicit recommendation for the format. Response option order simply doesn't seem to be on Dillman's radar.
So, how does all this information change my own practice going forward?
For new surveys, I'll still format Likert-type survey questions with the most negative response options at the left. You may be saying to yourself, "But if the negative options are on the left, doesn't that introduce negative bias?" Maybe, but if left-side bias nudges people toward whatever sits at the left, a negative-first layout can only deflate agreement. That means the "negative to the left" format gives me the most conservative estimate of my respondents' level of endorsement on a given topic.
However, if I'm helping to modify an existing survey, particularly one that has been used several times, I won't suggest changing the order of the response options. If people are used to seeing your questions with the positive responses at the left, keep doing that; switching the order introduces a different, and potentially worse, type of error. More importantly, if you plan to compare findings from a survey given at two points in time, keep the order of response options consistent. That way, the error caused by bias will be comparable at time 1 and time 2.
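And if an order flip has already happened between waves, the numeric coding at least has to be mirrored before you compare anything. Here's a minimal sketch (the reverse_code helper and the 5-point coding are my own hypothetical example); note that recoding fixes only the numbers, not the order-induced bias, which is exactly why I'd rather not flip the order in the first place.

def reverse_code(response, scale_max=5):
    # Mirror a 1..scale_max response: 1 <-> 5, 2 <-> 4, 3 stays 3.
    return scale_max + 1 - response

wave2_raw = [5, 4, 2, 1, 3]          # collected after the order was flipped
wave2_recoded = [reverse_code(r) for r in wave2_raw]
print(wave2_recoded)                 # [1, 2, 4, 5, 3]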
Finally, I would pay much closer attention to the survey question itself. Extreme language seems to be a stronger source of bias than the order of the response options. Edit out extreme words like "very," "extremely," "best," and "worst." For example, I would rewrite our sample question as "Justin Bieber is a good singer." Dillman would suggest neutralizing the question further with wording like this: "Do you agree or disagree that Justin Bieber is a good singer?"
Of course, this is one of many nuanced considerations you have to make when writing survey questions. The most comprehensive resource I know for survey design is Internet, Phone, Mail and Mixed-Mode Surveys: The Tailored Design Method (4th ed.), by Dillman, Smyth, and Christian (Hoboken, NJ: John Wiley and Sons, Inc, 2014).