The most compelling action in the 2016 presidential campaign (before any actual votes have been cast) is on the Republican side. Most polling seems to reflect the same trend: Donald Trump leads with a plurality of support among Republicans, followed by Texas Sen. Ted Cruz, with the “establishment” candidates flailing.
But on the key question of who will actually win, it’s anybody’s guess. Why? First, polls are only a “snapshot in time.” Pollsters say this all the time, partly out of self-preservation, but it’s also true. Things can change quickly in campaigns, and intervening events can make a poll completed on Monday largely outdated by Friday.
In Iowa, where Cruz and Trump are competitive in the February 1 contest, it’s proving difficult to predict a winner ahead of time. I’d bet on Cruz, because I suspect his Get Out the Vote (“GOTV”) effort, crucial in getting supporters to the polls in the state’s quirky caucus system, is better. Like Ben Carson, Trump is likely to learn that winning campaigns are a lot harder than they look, and that a presidential campaign is a tough place to start your political career.
The much more important consideration in determining the “accuracy” of a poll, however, is the methodology used to conduct it. This can largely be broken down into two big questions –
1) Who are you asking? and
2) How are you collecting the data?
Here’s how that plays out in Iowa. The best way to answer the question “Who will win?” – which is really “Who would win if the election were today?” – is for a human being to ask a person likely to vote in the caucus who they plan to vote for. Both of these elements – a live interviewer and targeting a likely voter – are crucial. Anything less than this is the first step on a quickly descending staircase of credibility and value.
Unfortunately, polls conducted like this are hard to find because they are so expensive. Why?
• You need to have a thoughtful way to determine who, really, is likely to vote;
• It takes time and real expertise to draft a sound, useful questionnaire;
• In conducting the poll, you need good callers who understand the questions and ask them the right way;
• You need callers who can be trained to pronounce candidate names and place names correctly;
• Polls that dig deeper for more information – usually those conducted by candidates – are long, and it’s difficult to keep participants engaged all the way to the end;
• It’s harder to find voters in the era of the cell phone, and to determine a valid mix of cell phone and landline numbers;
• It takes time and expertise to interpret the data, weighting it to reflect voter demographics in the area (a rough sketch of that weighting step follows this list).
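To make that last point concrete, here is a minimal sketch of the kind of demographic weighting (often called post-stratification) a pollster might apply. The age groups, population shares, and responses below are entirely made up for illustration – they are not drawn from any actual poll.

```python
# A minimal sketch of demographic weighting (post-stratification),
# using made-up numbers purely for illustration -- not real poll data.

# Hypothetical share of the likely-voter population in each age group.
population_share = {"18-34": 0.20, "35-54": 0.35, "55+": 0.45}

# Raw poll responses: (age_group, candidate) -- a toy sample that
# over-represents older voters, as phone polls often do.
responses = [
    ("55+", "A"), ("55+", "A"), ("55+", "B"), ("55+", "A"), ("55+", "B"),
    ("35-54", "A"), ("35-54", "B"), ("35-54", "B"),
    ("18-34", "B"), ("18-34", "B"),
]

n = len(responses)

# Share of the raw sample that falls in each age group.
sample_share = {g: sum(1 for grp, _ in responses if grp == g) / n
                for g in population_share}

# Each respondent's weight = population share / sample share for their group,
# so under-represented groups count for more.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Weighted support for each candidate.
totals = {}
for group, candidate in responses:
    totals[candidate] = totals.get(candidate, 0.0) + weights[group]

weighted_share = {c: t / sum(totals.values()) for c, t in totals.items()}

print("Unweighted:", {c: sum(1 for _, cand in responses if cand == c) / n
                      for c in totals})
print("Weighted:  ", weighted_share)
```

In this toy sample the weighting nudges the numbers only a little, but with a badly skewed sample – or with more demographic dimensions in play – the adjustment (and the judgment calls behind it) can move the headline number substantially, which is part of why good polling is expensive.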
Since few public polls approach this standard, they should all be greeted with healthy skepticism. Media outlets used to conduct polls with the necessary rigor, but few can afford it anymore, with some national exceptions (NY Times, CBS, etc.). Local media outlets are, for the most part, unwilling to spend the money it takes to get really good data, which is important to remember as we approach November. Instead, they usually settle for online polls or automated telephone polls. (I think my dog Buster responded to one of those the other day.) What you end up with is very cheap data that barely passes the accuracy laugh test – but is sometimes reported by the media with a straight face. Meanwhile, the best information is usually found in the hands of well-funded candidates and is jealously guarded like the precious commodity that it is.
All of this may help explain the post-Iowa GOP landscape in a few weeks if/when the results from the caucuses do not reflect the “results” of the “national polls” published over the last few months.