Questioning the survey: How surveys can skew the facts

December 23, 2014 | by carla | information, news

Ah, the power of the question.

When putting my son to bed at night, I ask him: “Which do you want to do first, brush your teeth, or put on your pajamas?”

If this were a properly worded survey question, I’d also throw in an “other” option. But it’s not a survey—it’s me faking out my kid by forcing him to choose between the two options I want him to pick from. That’s okay for bedtimes and five-year-olds, but not okay for survey questions.

Surveys have a powerful grip on public sentiment and a lot of power to shape public thought. The relative brevity of survey questions—and the deceptive simplicity of their results—can confound the very issues that surveys are meant to shed light on, leaving the public vulnerable to biases, flaws, and misinterpretation of the results.

These issues are often nuanced and complex. Reducing them to survey questions can lead to misinterpretation of the facts if the surveys are poorly designed, skewing how the public perceives the facts that shape their opinions and actions.

And that can lead to erroneous conclusions from a public trained to think in sound bites, reporters eager for a good story, and policy makers and advocates seeking to bolster their cause.

Of course, there isn’t anything inherently devious about a survey—data can cause confusion in any form (remember the controversial gun graph from April 2014?). But surveys are vulnerable to the biases, flaws, oversights, and limitations of those asking the questions.

Media, policy makers, and advocates need to report on surveys responsibly, just as the public needs survey literacy skills

To get to the facts, survey nerds, data wonks, and most journalists are used to getting up close and personal with how a survey was designed before using its data to draw wider conclusions.

But that’s a lot to ask of the average reader. Media outlets, along with anyone else with the power to shape public opinion, also bear responsibility for scrutinizing survey results and for being transparent about the specific questions asked, the methodology, and the original source.

A recent Pew survey on gun control sentiment shows that even highly respected polling and research organizations can make mistakes

Recently, one of my favorite polling outfits, the Pew Research Center, came under scrutiny for poorly worded survey questions about public attitudes on gun control in a survey titled “Growing Public Support for Gun Rights” (full disclosure: I was employed by Pew from 2008 to 2013).

The survey results showed an apparent rise in public support for gun rights and were—predictably—hailed by those who favored the results and lambasted by those who didn’t.

Gun control graphic by Pew Research Center

Headlines from news outlets with differing political views told two different stories about the same gun control survey:

The progressive Mother Jones: Is Protecting Gun Rights Really a Growing Priority for Americans?

The conservative Washington Times: Support for gun rights at highest point in two decades

That’s par for the course in surveys, which are the go-to political football of pundits and the American public.

Putting politics aside, however, it’s fair to say that critics of the survey had a point when they claimed that the survey questions were poorly worded.

The survey questions asked respondents whether it was more important to “control gun ownership” or to “protect the right of Americans to own guns.” But that was a false dichotomy: the questions present two options as mutually exclusive when they are not (one can obviously be in favor of gun control AND support the right of Americans to bear arms).

Media Matters (a self-described progressive non-profit that monitors conservative media) and the New York Times were among the many media outlets that questioned whether the survey questions were properly worded. Both reported the comments of Daniel Webster, the director of the Center for Gun Policy and Research at Johns Hopkins:

“I could not think of a worse way to ask questions about public opinions about gun policies.”

“Pew’s question presents one side emphasizing the protection of individual rights versus restricting gun ownership. The question’s implicit and incorrect assumption is that regulations of gun sales infringe on gun owners’ rights and control their ability to own guns.”

“The reality is that the vast majority of gun laws restrict the ability of criminals and other dangerous people to get guns and place minimal burdens on potential gun purchasers such as undergoing a background check. Such policies enjoy overwhelming public support.” —Daniel Webster, Johns Hopkins Center for Gun Policy and Research.

I’m not sure what the moral of this story is. I do know that it can be tough for a layperson bombarded by so-called “data” to discern the facts. Data literacy can help. And by scrutinizing surveys that produce unclear results, reporters and those who influence the public can make that job a little easier.

Resources on proper survey methods

If you’re interested, the National Council on Public Polls produced guidelines for journalists covering polls (20 Questions A Journalist Should Ask About Poll Results), a pretty good and often-cited resource. There’s plenty of other material out there too, including the survey guidelines put out by the good folks at the Pew Research Center themselves. I also learned quite a lot from a post called The Good, the Bad, and the Ugly of Public Opinion Polls, written by a retired professor of political science.