The 2016 US presidential election provides a good test case of the accuracy of public opinion polls. Natalie Jackson, polling editor for The Huffington Post, explained:
There are lots of opportunities for things to go wrong in polls. Survey experts generally point to five areas where things can go awry:
- Sampling: the error produced by interviewing a random sample rather than the entire population whose opinion you are seeking.
- Coverage: the ability to sample from the entire population ― for example, an online poll can only reach people with internet access, and a poll that uses only landline telephone numbers can only reach people with a landline.
- Nonresponse: all the people pollsters try to reach but can't.
- Measurement: whether the questions asked and answered actually measure what pollsters are trying to get at.
- Post-survey: anything analysts do with the data after collecting it, including weighting and likely-voter selection.
These five areas make up the “total survey error” that researchers seek to understand.
The “margin of error” that most (but not all) polls report only addresses the potential for sampling error… Based on the sample size, statistics and a few other factors, the pollster can calculate the margin of sampling error. This describes how close the sample’s results likely come to the results that would have been obtained by interviewing everyone in the population — in theory — within plus or minus a few percentage points.
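The sampling-error calculation described above can be sketched with the standard formula for a simple random sample. This is an illustrative example, not the method of any particular pollster: it assumes a 95% confidence level (z = 1.96) and uses a proportion of 0.5, the conservative choice that maximizes the margin.

```python
import math

def margin_of_error(sample_size, proportion=0.5, z=1.96):
    """Margin of sampling error for a simple random sample.

    Assumes a 95% confidence level (z = 1.96) and defaults to a
    proportion of 0.5, which yields the largest (most conservative)
    margin for a given sample size.
    """
    return z * math.sqrt(proportion * (1 - proportion) / sample_size)

# A typical national poll of 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # → 3.1 percentage points
```

Doubling the sample size does not halve the margin — the error shrinks with the square root of the sample size, which is why polls rarely interview many more than about 1,000 people. And as the passage stresses, this number captures sampling error only; coverage, nonresponse, measurement, and post-survey errors are not reflected in it.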