
2023: The fetish and fallacies of election opinion polls in Nigeria (II)

The paradox of choice

While there is increasing evidence that non-response and misrepresentation bias may explain why polls fail to match actual election results, these are not the only problems with traditional polling methods.

Traditional surveys in heavily polarised campaigns are also affected by social-desirability bias (also called the Bradley effect): the tendency of survey respondents to conceal their support for controversial candidates because admitting it could expose them to social ostracism.

Monitoring social networks, the research further posited, represents an alternative to polls for capturing people’s opinions, since it overcomes the low-response-rate problem and is less susceptible to social-desirability biases.

“Indeed, social network users continuously express their political preferences in online discussions without being exposed to direct questions. One of the most studied social networks is the microblogging platform Twitter. Twitter-based studies generally consist of three main steps: data collection, data processing, and data analysis and prediction.

“The collection of tweets is often based on the public API of Twitter. It is common practice to collect tweets by filtering according to specific queries, for example, using the names of the candidates in the case of elections. Data processing includes all the data-curation techniques that aim to guarantee the credibility of the Twitter dataset.

“Examples include bot detection and spam removal. Data analysis and prediction, the core of all these studies, can be grouped into four main approaches: volume analysis, sentiment analysis, network analysis, and artificial intelligence, including machine learning and natural language processing.”
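
As a loose illustration of that three-step pipeline, here is a minimal Python sketch. Everything in it is an assumption for demonstration only: the posts are hard-coded rather than pulled from Twitter’s API, the bot filter is a crude posts-per-day cutoff, and the sentiment lexicon holds just a handful of words; none of these stand-ins come from the study quoted above.

```python
import re

# Step 1: data collection -- hard-coded posts filtered by candidate name.
# (Real pipelines query the platform's public API instead.)
posts = [
    {"posts_per_day": 2,   "text": "Candidate A has a solid plan, I support it"},
    {"posts_per_day": 900, "text": "Candidate A great great great"},  # bot-like account
    {"posts_per_day": 5,   "text": "Candidate A is dishonest, a bad choice"},
    {"posts_per_day": 1,   "text": "Support for Candidate A growing, good news"},
]
mentions = [p for p in posts if "candidate a" in p["text"].lower()]

# Step 2: data processing -- a crude bot filter: drop hyperactive accounts.
credible = [p for p in mentions if p["posts_per_day"] < 100]

# Step 3: data analysis -- volume plus a toy lexicon-based sentiment score.
POSITIVE = {"support", "solid", "good", "great", "growing"}
NEGATIVE = {"dishonest", "bad", "corrupt", "weak"}

def sentiment(text: str) -> int:
    """Count positive minus negative lexicon words in a post."""
    words = re.findall(r"[a-z]+", text.lower())
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

print(f"volume: {len(credible)} credible mentions")
print(f"net sentiment: {sum(sentiment(p['text']) for p in credible):+d}")
```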

Sheldon R. Gawiser and G. Evans Witt, in a National Council on Public Polls report, rightly argue that the only polls that should be reported are “scientific” polls. Unscientific pseudo-polls, they warned, are widespread and sometimes entertaining, but they never provide the kind of information that belongs in a serious report. Examples, they noted, include 900-number call-in polls, man-on-the-street surveys, many Internet polls, shopping mall polls, and even the classic toilet tissue poll featuring pictures of the candidates on each roll.

One major difference between scientific and unscientific polls is who picks the respondents for the survey. In a scientific poll, the pollster identifies and seeks out the people to be interviewed. In an unscientific poll, the respondents usually “volunteer” their opinions, selecting themselves for the poll. The results of a well-conducted scientific poll “provide a reliable guide to the opinions of many people in addition to those interviewed”.

The results of an unscientific poll tell us nothing beyond what those respondents say.

Critical questions to ask before accepting a poll result, according to Gawiser and Witt, include: Who did the poll? The first question to ask is what polling firm, research house, political campaign, or other group conducted the poll.

“If you don’t know who did the poll, you can’t get the answers to all the other questions. If the person providing poll results can’t or won’t tell you who did it, the results should not be reported, for their validity cannot be checked. Reputable polling firms will provide you with the information you need to evaluate the survey. Because reputation is important to a quality firm, a professionally conducted poll will avoid many errors.”

Secondly, it is important to know who paid for the survey, “because that tells you and your audience who thought these topics were important enough to spend money finding out what people think. Polls are not conducted for the good of the world.

“They are conducted for a reason: either to gain helpful information or to advance a particular cause. It may be that the news organisation wants to develop a good story. It may be that the politician wants to be re-elected. It may be that the corporation is trying to push sales of its new product. Or a special-interest group may be trying to prove that its views are the views of the entire country”.

Similarly, they noted, private polls conducted for a political campaign are often unsuited for publication. “These polls are conducted solely to help the candidate win and for no other reason.

“The poll may have very slanted questions or a strange sampling methodology, all with a tactical campaign purpose. A campaign may be testing out new slogans, a new statement on a key issue or a new attack on an opponent.

“But since the goal of the candidate’s poll may not be a straightforward, unbiased reading of the public’s sentiments, the results should be reported with great care”.


In addition, because polls give approximate answers, the more people interviewed in a scientific poll, the smaller the error due to the size of the sample, all other things being equal. A common trap to avoid is assuming that “more is automatically better”. While it is true that the more people interviewed in a scientific survey, the smaller the sampling error, other factors may be more important in judging the quality of a survey.
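
To see the diminishing returns concretely, here is a back-of-the-envelope sketch assuming simple random sampling and the textbook 95 percent margin-of-error formula for a proportion near 50 percent; the sample sizes are arbitrary examples, not figures from any named poll.

```python
import math

def margin_of_error(n: int) -> float:
    """Approximate 95% margin of error for a proportion near 50%,
    assuming a simple random sample: 1.96 * sqrt(0.25 / n)."""
    return 1.96 * math.sqrt(0.25 / n)

for n in (250, 500, 1000, 2000, 4000):
    print(f"n = {n:>4}: +/- {100 * margin_of_error(n):.1f} points")

# Quadrupling the sample from 1,000 to 4,000 interviews only halves
# the error (about +/-3.1 to +/-1.5 points); a biased way of choosing
# respondents can shift results by far more than that.
```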

These include how the respondents were chosen. The key reason that some polls reflect public opinion accurately while others are unscientific junk is how the people were chosen to be interviewed. In scientific polls, the pollster uses a specific statistical method for picking respondents.

In unscientific polls, respondents pick themselves to participate. The method pollsters use to pick interviewees rests on a bedrock of mathematical reality: when the chance of selecting each person in the target population is known, then and only then do the results of the sample survey reflect the entire population.

This is called a random sample or a probability sample, and it is why interviews with 1,000 American adults can accurately reflect the opinions of more than 210 million American adults.
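
A small simulation illustrates the point. The population size and the 54 percent support rate below are invented purely for demonstration; what matters is that every member has a known, equal chance of selection, which is what a probability sample guarantees.

```python
import random

random.seed(42)  # reproducible runs

# Hypothetical population: one million voters, 54% supporting a candidate.
TRUE_SUPPORT = 0.54
population = [1] * 540_000 + [0] * 460_000

# Draw several independent probability samples of 1,000 voters each and
# compare the sample estimates with the true population figure.
for trial in range(1, 6):
    sample = random.sample(population, 1000)
    estimate = sum(sample) / len(sample)
    print(f"trial {trial}: sample estimate {estimate:.1%} (truth {TRUE_SUPPORT:.0%})")

# Every estimate lands within a few points of 54% -- the guarantee a
# self-selected online poll cannot make, however many people respond.
```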

Further questions to ask before legitimising and reporting any poll result include: How many people were interviewed for the survey? How were those people chosen? What area (nation, state, or region) or what group (teachers, lawyers, Democratic voters, etc.) were these people chosen from?

Are the results based on the answers of all the people interviewed? Who should have been interviewed and was not? Or do response rates matter? When was the poll done? How were the interviews conducted? What about polls on the Internet or World Wide Web?

What is the sampling error for the poll results? Who’s on first? What other kinds of factors can skew poll results? What questions were asked? In what order were the questions asked? What about “push polls”? What other polls have been done on this topic? Do they say the same thing?

If they are different, why are they different? What about exit polls? What else needs to be included in the report of the poll? And finally, I’ve asked all the questions. The answers sound good. Should we report the results?