[Image: A close-up shot of pliers, wrenches, and other well-used, slightly greasy tools.]

What’s Running Under the Hood of Election Science?

A MEDSL miniseries on data sources

A great deal of the attention paid to elections (including the upcoming midterms) focuses entirely on the political races and candidates: who said what, which candidate might be ahead or behind expectations, updates on polling numbers and odds. This is, of course, entirely relevant and useful information. But focusing on the politics misses a huge piece of the picture — namely, how those elections are being run.

(Running with the art metaphor, you might say that if elections are a painting, polls and politics are the oils that give it form and meaning, but election management is the canvas without which the rest would not exist.)

[Image: Bob Ross painting a sunset mountain landscape, captioned "We're going to make some big decisions in our little world."]

Art and elections: surprisingly alike!

The last two years in particular have seen an increasing amount of attention paid to election administration. Hot-button issues, from voter ID and voter list maintenance to election security on and off the internet, have brought election management further into public consciousness and conversation.

But good policy — and good research — depend on good information, and that means good data. Just as tracking statistics and the impact of policy changes had far-reaching effects on public health (see, for instance, the decline in New York City’s infant mortality rate in the early 1900s), so too can data about election processes and policies pave the way for positive effects on elections and voters. The work that has been done since 2012 on voters’ long waits at the polls is one excellent example: a concerted effort to understand those long lines gave scholars and election officials the foundation to understand why the lines were forming and what it would take to address them.

We need data about elections. Where can we find them?

Election data come from — where else? — elections themselves. Information is collected by state governments, federal agencies, local election officials, external surveys, academics, and others. Sometimes data collection efforts are focused on a specific topic or issue of interest; sometimes they take a broad sweep across many aspects of election administration.

As the days tick down to the midterm elections on November 6, election geeks like us are preparing to collect as much data as we possibly can to get a clear look at how election administration fares in 2018. (You can read about our own plans for data collection here.)

In that light, we’ve put together a miniseries of blog posts about the data sources we (and others) rely on — sources like the Election Administration and Voting Survey, the US Census, election returns, and a few specific public opinion surveys. These sources inform projects like the Elections Performance Index, and in general have proven themselves more or less indispensable for assessing the state of election administration, policy, and processes in the US.

Public Opinion Surveys

First up: let’s talk public opinion.

Public opinion surveys are an important window into the inner workings of an election. While other methods of data collection might show the mechanical innards of election processes more clearly, surveys provide a way to examine the behavior and experiences of voters themselves. As with any tool, surveys have their drawbacks — we’ll get to those — but overall, they’re a valuable source of information that would otherwise be quite difficult to get.

While many public opinion surveys include a focus on elections in one way or another, we’ll focus on one of the most prominent: the Survey of the Performance of American Elections.

The Survey of the Performance of American Elections (SPAE, for less of a mouthful) is the only public opinion project in the country dedicated explicitly to understanding how voters themselves experience the election process. In doing so, it provides a comprehensive, nationwide dataset documenting election issues as experienced by voters at the state level.

The SPAE has been conducted after each federal election since 2008, when it was launched by the Pew Charitable Trusts. Bright and early on the Wednesday morning directly following Election Day, it’s sent out into the field across the US. A total of 200 registered voters in each of the 50 states (as well as Washington, DC) are interviewed. This sample size makes it possible to paint an accurate portrait of voters’ experiences in each state, to chart the effects of changing election laws at the state level, and to make comparisons across state lines.
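To get a rough sense of what a 200-person state sample buys you, here’s a back-of-the-envelope margin-of-error calculation. This is a minimal sketch assuming simple random sampling within each state; the SPAE’s actual sample design and weighting are more sophisticated.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a sample proportion
    under simple random sampling."""
    return z * math.sqrt(p * (1 - p) / n)

n = 200  # SPAE respondents per state
for p in (0.5, 0.9, 0.98):
    print(f"estimate of {p:.0%}: +/- {margin_of_error(p, n):.1%}")

# estimate of 50%: +/- 6.9%  (the worst case)
# estimate of 90%: +/- 4.2%
# estimate of 98%: +/- 1.9%
```

In other words, 200 respondents per state is enough to pin down common behaviors and attitudes to within a few percentage points, which is what makes meaningful state-to-state comparisons possible.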

In general, the questions in the SPAE focus on three broad issue areas:

  1. Voting that takes place outside of a polling place (absentee voting, for example)
  2. Voter behaviors and experiences that states often track in different ways, like turnout
  3. Voter attitudes toward their own experience with voting (regardless of whether they cast an absentee ballot or voted at a polling place)

Data from the SPAE can tell a range of stories, from the very broad — what percentage of voters nationwide used a particular method to cast their ballot? — to the specific — how confident were voters in Nebraska that their vote was counted correctly?
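One subtlety in the nationwide numbers: because every state contributes roughly the same number of respondents, small states are overrepresented in the raw sample, so national estimates should weight each state by its share of the electorate. The sketch below illustrates the idea with entirely made-up numbers; it is not the SPAE’s actual weighting scheme, which also adjusts for demographics.

```python
# Hypothetical per-state estimates of the share of voters casting mail
# ballots, and hypothetical voter counts (in millions). All figures are
# invented for illustration only.
state_estimates = {"CA": 0.55, "TX": 0.10, "NE": 0.08, "WA": 0.90}
state_voters = {"CA": 14.0, "TX": 9.0, "NE": 0.8, "WA": 3.3}

total_voters = sum(state_voters.values())

# Weight each state's estimate by its share of the electorate.
national = sum(
    share * state_voters[state] / total_voters
    for state, share in state_estimates.items()
)

# Compare with a naive average that treats every state equally.
naive = sum(state_estimates.values()) / len(state_estimates)

print(f"population-weighted: {national:.1%}")  # ~42.9%
print(f"naive state average: {naive:.1%}")     # ~40.8%
```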

In 2016, the SPAE showed that for most voters, their Election Day experience went off with barely a hitch: 98% of respondents said it was “very” or “fairly” easy to find their polling place, and 95% said that poll workers’ performance at that polling place was either “excellent” or “very good.” It also showed that polling place lines — which had been a significant concern in 2012 — were minimal for most voters in 2016.

On the flip side, the SPAE also highlights potential concerns or bumps in the road. In 2016, it found uneven implementation of voter identification laws within states: 78% of respondents in states that accept any form of identification (not just photo ID), for example, reported being asked specifically to show photo identification. The number of voters reporting problems of any sort, from finding the polling place to encountering an issue with their registration, a voting machine, or a poll worker, was quite low — however, as the survey’s summary report points out, this “may represent a substantial problem” in cases where such problems might complicate close races or disputed ballot counts.

Overall, the SPAE’s quick deployment after each federal election means that it can shed light quickly on a number of important issues, including:

  • The average voting wait-time, as well as the time of day voters went to cast their ballots
  • Voters’ level of confidence that votes were counted as they were intended, and their beliefs about the prevalence of voter fraud
  • Levels of support for reform proposals (e.g., automatic voter registration, voter identification, and no-excuse absentee balloting)
  • Non-voters’ reasons for abstaining
  • The number of voters who had problems voting

Another important survey: CCES

In addition to the SPAE, the Cooperative Congressional Election Study (CCES) has also been a useful resource for researchers and election officials. This survey, which is administered online, goes out annually to over 50,000 people. (In election years, it is administered in two waves, which are conducted before and after the election.)

Along with its core questionnaire, the CCES also allows participating research teams to design and add modules that are administered to a small subset of respondents. Often, these modules focus on a specific issue. MEDSL director Charles Stewart, for example, has written here about research he’s done with a CCES module on voters’ perceptions of voting machines.

As with any data tool, public opinion surveys have drawbacks, and they are most useful when applied with a clear understanding of what those drawbacks are.

Perhaps the most obvious drawback is the human tendency to over-report behavior that we think will make us look good. For surveys like the SPAE and CCES, this could influence many of the results — for example, non-voters who want to give a “better,” more “desirable” answer might blame their failure to vote on problems with election management or officials, rather than saying they forgot or chose not to vote.

Additionally, although the SPAE is fielded immediately following Election Day, it’s possible that respondents (especially those who voted early or by mail) will have difficulty remembering exactly how they behaved or what they experienced as they voted. Small sample sizes can also pose a problem; while the nationwide SPAE sample is quite large, only a small number of respondents may report an issue, making it difficult or impossible to draw conclusions or comparisons on a state-level basis.
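To see how quickly state-level comparisons become shaky for rare events, consider a hypothetical sketch: two states in which 4 and 9 of 200 respondents, respectively, report a problem. The 95% confidence intervals (here, Wilson score intervals) overlap heavily, so the apparent difference between the states could easily be noise.

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96):
    """95% Wilson score confidence interval for a sample proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

# Hypothetical counts of respondents reporting a problem, out of 200 per state.
for state, reports in (("State A", 4), ("State B", 9)):
    low, high = wilson_interval(reports, 200)
    print(f"{state}: {reports/200:.1%} reported a problem, "
          f"95% CI [{low:.1%}, {high:.1%}]")

# State A: 2.0% reported a problem, 95% CI [0.8%, 5.0%]
# State B: 4.5% reported a problem, 95% CI [2.4%, 8.3%]
```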

Public opinion surveys have played an important role in gathering otherwise inaccessible data about election administration and its impact on the voters it serves. They have helped inform us where things are going right, or where states and voters may be facing challenges.

A new survey, hot off the presses at the Pew Research Center, showcases this usefulness in action. The one-time survey focuses on voter confidence and election security — not entirely new issues, perhaps, but ones of particular urgency at the moment. By compiling and analyzing Americans’ views on the subject, it gives election officials, academics, and the rest of us a close-up look we would not otherwise have at the nuances of the public’s concerns. It’s an excellent example of a survey that, on an ad hoc basis, dives deeply into a current issue of election management.

Importantly, because surveys like the SPAE and CCES are centered on content that has remained relatively stable from one round to the next, they allow us to track issues and trends across time. They provide cornerstones for new research, supplying essential historical context for the long-term issues of election management. Yet they also adjust to reflect of-the-moment concerns, keeping pace with new developments rather than sliding into obsolescence. (The 2008 SPAE, for example, did not include questions on voting by mail — but by 2012, as the practice rose in prominence, those questions had been added.)

In the future, new questions — and new surveys — will continue to come and go as needed, allowing us to gain a better understanding of voters’ knowledge, attitudes, and experiences on old and new concerns alike.

[Image: Headshot of Claire DeSoi]

Claire DeSoi is the communications director for the MIT Election Data + Science Lab.
