Digital & Social Media

Call Me, Maybe?


Polling has become ubiquitous in modern life, from insta-polls online to political push-polls, as well as those surveying consumer satisfaction with everything from hardware store experiences to meals at fancy restaurants. At the same time, the polling industry has transitioned from polls administered by professionals to those taken by consumers themselves. Many polls today are based on results from Facebook or Twitter users, or come from news organizations asking people to call, write, or tweet an answer. That makes some polling less expensive and seemingly more open to a wide range of people. But in fact, the result has been that it’s harder – and more expensive – for reputable researchers to find out what people really think.

Neither Internet nor mobile phone polling reaches as broad an audience as the landline method, according to Virginia Sapiro, professor of political science at Boston University. “Until recently, a remarkably high proportion of Americans had landline telephones,” she said. “Saturation was pretty high. But now, fewer people have landline telephones than ever before. More use caller ID. More people seem to be refusing to participate in public opinion polls and surveys.” Indeed, the Pew Research Center, a nonpartisan think tank that conducts data-driven social science research, found that the response rate to its telephone surveys had fallen from 36 percent in 1997 to just 9 percent in 2012.

Accuracy can be another issue with modern polling. “(Internet or Tweet-based) polls are never accurate representations of opinion in general,” Sapiro said. “Nor is ‘push polling’ — polls that rig or bias the way a question is asked or that try to change people’s views.”


Additionally, it turns out that the very act of asking a poll question online can change the answer. This difference is called a “mode effect”: a difference in responses to a survey question attributable to the mode in which the question is administered, according to the Pew Research Center. In a 2014 study, Pew found the mean difference to be 5.5 percentage points, but in some areas the gap between online and person-to-person answers was as large as 18 percentage points. Differences of that size make it harder to track trends over time once the mode of interview has changed, and to reconcile inconsistencies when combining data gathered in different modes, according to Pew.
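
For readers who want to see the arithmetic, the Python sketch below shows how a per-question mode effect is computed and how a summary figure like Pew’s 5.5-point mean emerges from averaging the absolute gaps. The questions, percentages, and variable names here are all invented for illustration; they are not Pew’s data.

    # Hypothetical illustration of a "mode effect": the same question asked
    # in two modes yields different estimates. All numbers are invented.

    # Percent answering "yes" to the same three questions, by survey mode
    phone_pct = {"q1": 52.0, "q2": 74.0, "q3": 31.0}
    online_pct = {"q1": 49.0, "q2": 61.0, "q3": 35.0}

    # Per-question mode effect, in percentage points
    effects = {q: online_pct[q] - phone_pct[q] for q in phone_pct}
    print(effects)  # {'q1': -3.0, 'q2': -13.0, 'q3': 4.0}

    # A summary figure like Pew's 5.5-point mean averages the absolute gaps
    mean_abs = sum(abs(e) for e in effects.values()) / len(effects)
    print(f"mean absolute mode effect: {mean_abs:.1f} points")  # 6.7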

Some data-driven organizations have embraced online surveys. At Georgetown University’s Center for Applied Research in the Apostolate (CARA), a nonprofit that conducts social scientific studies about the Catholic Church, “we remove the interviewer and have respondents do self-administered surveys onscreen,” said Mark Gray, director of CARA Catholic Polls and a senior CARA researcher. Allowing survey panelists to mark their own on-screen questionnaires reduces social desirability bias, such as over-reports of church attendance or giving to charity, “and provides more honest data for a subset of questions,” Gray said. “We do this using a large national panel that uses probability-based sampling – random phone and mail selections.” That’s not to say Gray thinks all polls are equal. “Don’t get me wrong – there are bad polls out there,” he said. “The worst kind are the internet opt-in panels where people take surveys for free toasters and the like.”

Still, the real problem for veteran pollsters is getting a sample good enough to meet best scientific practices. “The diminishing use of landlines can have some effect on the quality of sampling, but probably a bigger threat is the unwillingness of people to respond,” Sapiro said. “For the legitimate pollsters – that is, the ones trying to follow best practices – the biggest challenge is getting a good sample, which is increasingly a matter of whether people respond to the poll.”

The Pew Research Center found that better-educated people tend to be more willing to do surveys and that white respondents are somewhat overrepresented in current polls. But that doesn’t necessarily taint the data, since “most of these biases can be corrected through demographic weighting of the sort that is nearly universally used by pollsters,” according to the Pew study. Scientists at Pew also found that “high-effort” polling was more successful at gaining a response, achieving a 22 percent response rate, well above the 9 percent Pew’s standard telephone surveys now draw. The problem is that reaching that 22 percent is time-consuming: it means multiple call-backs to consumers and ensuring that a generous mix of cell-phone respondents is included in the poll. Of course, the extra effort costs extra money.
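
Demographic weighting itself is simple arithmetic. The Python sketch below illustrates the basic cell-weighting idea with invented population shares, sample shares, and “yes” rates (real pollsters weight on many variables at once): each respondent group gets a weight equal to its population share divided by its sample share.

    # Minimal sketch of demographic (post-stratification) weighting.
    # All shares and rates below are invented for illustration.

    # Benchmark population shares (e.g., from census figures)
    population_share = {"white": 0.60, "nonwhite": 0.40}

    # Shares observed among survey respondents: white respondents
    # overrepresented, as Pew describes
    sample_share = {"white": 0.72, "nonwhite": 0.28}

    # Each group's weight is its population share over its sample share
    weights = {g: population_share[g] / sample_share[g] for g in population_share}
    print(weights)  # {'white': 0.83, 'nonwhite': 1.43} (rounded)

    # Suppose 50% of white and 70% of nonwhite respondents say "yes"
    yes_rate = {"white": 0.50, "nonwhite": 0.70}
    unweighted = sum(sample_share[g] * yes_rate[g] for g in yes_rate)
    weighted = sum(sample_share[g] * weights[g] * yes_rate[g] for g in yes_rate)
    print(f"unweighted: {unweighted:.3f}, weighted: {weighted:.3f}")
    # unweighted: 0.556, weighted: 0.580 -- the weighted figure matches the
    # true population rate (0.60 * 0.50 + 0.40 * 0.70 = 0.58)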

But Gray said as long as the decision to respond to a survey is random, scientific polls will remain accurate. “As long as the responders and non-responders are relatively random, there will be virtually no effect,” he said. “If instead we found that there are ‘survey takers’ in the population and ‘survey avoiders’ as distinct and permanent groups, then there’d be concern.”
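
Gray’s point can be illustrated with a small simulation. In the hypothetical Python sketch below (all rates invented), a 9 percent response rate that falls evenly across the population recovers the true figure, while a response rate that depends on the opinion being measured does not.

    # Simulation of Gray's point: random nonresponse leaves estimates roughly
    # unbiased; nonresponse tied to opinion does not. All rates are invented.
    import random

    random.seed(1)
    N = 100_000
    population = [random.random() < 0.40 for _ in range(N)]  # 40% hold "yes"

    def rate(xs):
        return sum(xs) / len(xs)

    # Case 1: everyone responds with the same 9 percent probability
    random_resp = [p for p in population if random.random() < 0.09]

    # Case 2: "survey avoiders" -- people who hold "yes" respond half as often
    skewed_resp = [p for p in population
                   if random.random() < (0.045 if p else 0.09)]

    print(f"true rate:          {rate(population):.3f}")   # ~0.400
    print(f"random nonresponse: {rate(random_resp):.3f}")  # close to 0.400
    print(f"skewed nonresponse: {rate(skewed_resp):.3f}")  # near 0.250, biased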

Sapiro’s advice for those who depend on polling results to understand what real people think is to pay attention to who did the work, what standards they use, and how to interpret the results wisely. “Be careful consumers and critical readers,” she said. “If you are hiring the services of a survey or polling outfit, do some research on the standards they use for their work. Look at the credentials of the people who will lead the project. And be aware: Good survey research isn’t cheap.”


About the author

Maggie Sieger is an award-winning journalist and former Time Magazine correspondent, published also by Reuters, the Chicago Tribune, Entertainment Weekly, Realtor Magazine and Reader’s Digest, among others. She is the author of Deep in the Heart: The First 50 Years of Duchesne Academy. Sieger currently works as a freelance writer and media consultant in St. Louis, Mo.