By Jim Gagen

Old School – More Critical Thinking and Analysis of Polls

Election Year Reminder

By now almost everyone has seen at least one, if not hundreds, of polls on the Presidential Race. Recalling that nearly all of the major polls were wrong about the 2016 race, I thought it would be interesting to dig into why the polls were wrong then and look at what pollsters are doing now to avoid being inaccurate again.


Background

First, let's look at some polling history. As we know, in 2016 most pollsters were wrong. Specifically, the following polls had Hillary Clinton winning the Presidency: ABC News/Washington Post, AP, CBS News, Fox News, Monmouth, Moody's, NY Times, Princeton, Rasmussen, Reuters and Sabato. Rasmussen was the most accurate polling service, coming within 0.3% of the actual vote, but it still called the final outcome incorrectly. Going back further, from 1988 to 2008 another top pollster, Pew Research, had anywhere from a 3.4 to 10.5 point error in predicting the outcome (about 3 points is the typical margin of error, so these misses were all outside it).


Why So Wrong?

Basically, it's complicated, and polls have to take a number of things into account. Probably most importantly, pollsters have to develop a valid sample, meaning the survey will measure what it's supposed to measure. They need a split between Democrats, Independents and Republicans that reflects each group's percentage of registered or likely voters in the population. Also, many polls are national polls and do not sample based on the distribution of electoral votes, which determine the outcome of Presidential Elections. In addition, the samples need age, income and education breakdowns among registered or likely voters similar to the general population. I don't profess to know the sampling methodology of each poll, but from the numerous articles I've read on the subject, it seems that a number of these polling services do quick phone surveys without drilling down on many of these details. In fact, one reason some analysts say the polls were wrong in 2016 is that the state polls didn't screen for education. Because of this, the state polls were inaccurate, which in turn skewed the electoral counts and the projections of the election winner.
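To make the weighting idea concrete, here is a minimal sketch in Python. All the numbers are invented for illustration; this is not any pollster's actual method. The idea is simply to count each party group at its assumed share of likely voters rather than its (skewed) share of the sample:

# Hypothetical illustration of party weighting; all numbers are invented.
sample_share = {"Dem": 0.40, "Rep": 0.27, "Ind": 0.33}      # share of poll respondents
population_share = {"Dem": 0.32, "Rep": 0.35, "Ind": 0.33}  # assumed likely-voter shares
support_for_X = {"Dem": 0.92, "Rep": 0.05, "Ind": 0.45}     # assumed support for Candidate X

# Unweighted topline: each group counts at its (skewed) sample share.
raw = sum(sample_share[p] * support_for_X[p] for p in sample_share)

# Weighted topline: each group counts at its likely-voter share instead.
adjusted = sum(population_share[p] * support_for_X[p] for p in population_share)

print(f"Raw poll: {raw:.1%} for X; weighted: {adjusted:.1%}")

With these made-up numbers, over-sampling one party moves Candidate X from about 53% down to about 46%, which is exactly the kind of distortion the party and education screens are meant to catch.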


What Else?

Beyond the sampling challenges, there are a number of things the polling companies can control and a few things they cannot. The latter include respondents who are undecided, who do not answer truthfully for whatever reason, or who favor one candidate but do not actually vote. Another factor that has come into play, given today's politically charged environment, is the "shy" respondent. According to a July 22, 2020 Cato Institute survey, 62% of Americans are afraid to share their political views. A Bloomberg study reported on August 28, 2020 that 11.7% of Republicans, 10.5% of Independents and 5.4% of Democrats will not give their opinions in phone surveys. All of these factors skew the poll results, making them less accurate.
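As a back-of-envelope illustration of the "shy" respondent effect, here is a hedged sketch in Python. The refusal rates are the Bloomberg figures cited above; the party shares and candidate support are invented, and the assumption that refusers would answer like the rest of their party is mine:

# Hypothetical sketch: groups that refuse to answer more often end up
# under-represented in the sample, nudging the topline. Refusal rates
# are the Bloomberg figures cited above; everything else is invented.
refusal = {"Dem": 0.054, "Rep": 0.117, "Ind": 0.105}
population = {"Dem": 0.32, "Rep": 0.35, "Ind": 0.33}   # assumed likely-voter shares
support_X = {"Dem": 0.92, "Rep": 0.05, "Ind": 0.45}    # assumed support for Candidate X

# Share of each group actually reached by the phone survey.
reached = {p: population[p] * (1 - refusal[p]) for p in population}
total = sum(reached.values())
observed = {p: reached[p] / total for p in reached}    # what the sample looks like

naive = sum(observed[p] * support_X[p] for p in observed)
actual = sum(population[p] * support_X[p] for p in population)
print(f"Poll reads {naive:.1%} for X; true value would be {actual:.1%}")

Even with these modest refusal rates, the poll overstates Candidate X by about a point, before any of the other error sources are considered.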


What Have Polling Companies Done to Correct These Problems?

After hearing a number of political pundits say that nothing has really changed in the surveying, I did what any curious person does: I searched Google. After reading multiple articles on the subject, I confirmed that, surprisingly, the polling services don't appear to have done much to change their methodology. Some say they were statistically close nationally in 2016 but that the state numbers were off. Others say their samples accurately reflect 2016 voter information. Still others offer what sound like excuses rather than corrections. I wonder why these people still get paid, as it seems there are some simple corrections they could make; but corrections take time, and everyone wants information NOW.


Quick Fix (If You Want to Score at Home)

While this may not be fully researched or completely statistically valid, simple correction factors for inaccurate sample percentages can be applied to the final numbers to make them potentially more accurate. For example, I recently heard an interview in which a campaign spokesperson said the sample for Party X was 40%, while the actual percentage of likely Party X voters in that state was only 32%. That is a huge disparity; logic suggests it inflates Party X's numbers by 25% (40/32 = 1.25). In this poll, Candidate X was leading 49% to 40%. Suppose Party Y was correspondingly under-sampled at 27% against an actual 35% of likely voters. We can then scale each candidate's number by the ratio of the actual party share to the sampled party share. In this case:

· Party X: 32/40 × 49 ≈ 39

· Party Y: 35/27 × 40 ≈ 52

This completely flips the results and doesn't even take into account the "shy" respondents mentioned above.
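For anyone who wants to score at home, the whole quick fix reduces to one ratio. Here is a minimal sketch in Python using the numbers from the example above (the function name is mine):

# The quick-fix adjustment: scale a candidate's reported number by the
# ratio of the party's actual likely-voter share to its sampled share.
def adjust(reported_pct, sampled_share, actual_share):
    return reported_pct * actual_share / sampled_share

print(round(adjust(49, 40, 32)))  # Party X: 49 scaled down to 39
print(round(adjust(40, 27, 35)))  # Party Y: 40 scaled up to 52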


In Closing

As I've stated before, research and statistics are great, but you also have to use some common sense when analyzing data. While I am not a researcher, I have managed marketing and media research departments, and I understand that the key to reliable and valid research is an accurate sample. Common sense says that if one party is over-sampled and the other is under-sampled, the survey is invalid. Therefore, the sample needs to be corrected or adjustment factors need to be applied. The lesson here is to look at any research with a critical eye, dig into the details, analyze it with some common sense, and then decide whether what they are telling you is valid.


If my math is wrong or my simple adjustments are statistically unsound, feel free to comment or challenge them.
