[Image: A page from a census report dated June 5, 1900, from Manhattan, New York.]

Under the hood of election science, again

The U.S. Census as a source of data on elections

This is a continuation of our pre-midterm miniseries on the sources of data on election administration. For more information, read the first article of the series.


In 1964, for the first time, the monthly Current Population Survey (CPS) administered by the U.S. Census Bureau included a supplement on voting. Since then, this Voting and Registration Supplement (VRS) has gone out with the CPS in November of every other year, gathering data from citizens on voting and registration (including, since the late 1970s, data on congressional voting).

The questions weren’t complicated; they focused on whether individuals were registered to vote, and whether they had voted in specific national elections. In some iterations of the survey, these questions were the only ones included. Other years saw additional questions added — for example, about the time of day a voter went to the polls, where they went to the polls (or to register), or whether respondents had registered specifically for that year’s election. The VRS has also begun to include questions on the reasons non-voters did not cast a ballot — but more on that later.

Why the Census?

The CPS is conducted by the U.S. Census Bureau in coordination with the Bureau of Labor Statistics. Its core purpose “is to gather information about the workforce in the United States”; it’s the tool that allows us to calculate the monthly labor force statistics. How does voting fit into that?

Perhaps the answer is already obvious to you. Besides being one of the oldest surveys in the country, it’s also one of the largest. The CPS is administered to a sample of 50,000 households, which are selected to give an accurate representation of the United States. The data it collects can be analyzed to describe what’s happening nationally, as well as on a state-by-state basis. While it’s a tool that seeks to ‘understand the workforce,’ there is a breathtakingly wide span of information that must feed into that understanding — of which voting is a piece.

Using the VRS

For just over half a century, the VRS has been gathering information on whether (and how) voters cast ballots and registered to vote; if someone did not vote, it asks what their reason was for staying home. The longevity and consistency of the supplement (and the resulting depth of its data) open some very important doors for exploring and answering questions about voting in the United States. Consider, for example:

  • What effects do changes in election law or regulations have on voter behavior?
  • What influence do personal characteristics have on voter participation and registration?
  • How frequently do non-voters cite certain reasons for not voting?
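
To make the last of those questions concrete, here is a minimal sketch of how a researcher might tabulate weighted reasons for not voting from a public-use VRS extract. The file name and the columns used below (voted, reason_not_voting, person_weight) are illustrative placeholders, not the official CPS variable names, which are documented in the supplement’s technical materials.

```python
import pandas as pd

# Minimal sketch: weighted tabulation of self-reported reasons for not voting
# from a hypothetical public-use CPS Voting and Registration Supplement extract.
# The file name and the columns "voted", "reason_not_voting", and
# "person_weight" are illustrative placeholders, not official CPS variable names.

vrs = pd.read_csv("vrs_extract_2016.csv")

# Restrict to respondents who reported not voting and gave a reason.
nonvoters = vrs[(vrs["voted"] == "no") & vrs["reason_not_voting"].notna()]

# Estimates from the CPS should use the supplement's person weights, so each
# reason's share is its weighted count divided by the weighted total.
weighted_counts = nonvoters.groupby("reason_not_voting")["person_weight"].sum()
shares = (weighted_counts / weighted_counts.sum()).sort_values(ascending=False)

print(shares.round(3))
```

The same weighted-share approach extends to the first two questions as well: group by a state, year, or demographic column to compare registration and turnout before and after a change in the law, or across personal characteristics.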

These can be difficult questions to get a handle on, scientifically speaking. The VRS, though, has laid the groundwork for meaningful research on them. Let’s take an example that did a great deal to shake up the elections world in the US: the National Voter Registration Act.

Enacted in 1993, the NVRA (or, if you prefer, the “Motor Voter Act”) was written to “enhance voting opportunities for every American” by making it easier to register to vote and to keep registrations up to date. It made significant changes to the ways states could offer voter registration and maintain their voter lists. After changes of that magnitude, it’s reasonable to want research that checks whether the policy is having its desired effect and identifies any unintended consequences.

Enter the VRS. From 1996 to 2004, the supplement asked respondents whether they had registered to vote after January 1st, 1995, collecting valuable data that would allow researchers to assess what effect the NVRA was having, if any, on voter registration and behavior.

For a more recent example, we need look no further than the VRS’s questions about respondents’ reasons for not voting. One of the answers respondents can give is that they were deterred from voting by an “illness or disability (own or family’s).” Here again, this is an important question, not only for the obvious reason (everyone who is eligible should be able to vote) but also because a bevy of requirements in election law and state policy are intended to ensure that voting is accessible.

Tools like the Elections Performance Index, which includes an indicator on the issue, use data from the VRS to highlight where states have succeeded or struggled in addressing the problem. Researchers can use the data to measure the effects of laws like the Help America Vote Act, which included provisions on access for people with physical disabilities.

Data like these can also advance research on other issues, especially when they point to a need for further investigation. Lisa Schur, for example, recently contributed an article here that uses the VRS (via the EPI) to analyze the “voting gap” and barriers to voting for people with disabilities.

Just as with the SPAE and CCES in our last post, the VRS data are best used with an understanding of the survey’s limitations. Not least among these is social desirability bias: the tendency of us fallible humans to give the answer we think is “desirable,” such as claiming we voted when in truth we did not.

While the VRS boasts impressive depth of data because of its longevity, it’s also limited in scope. After all, it’s only a handful of questions on a supplement to a bigger survey. That often constrains its ability to target current issues in voting and registration, so other surveys (such as this recent one) are needed to fill those gaps. On a similar note, the VRS is generally not the most useful tool for research on local election practices below the state level, since the CPS often does not record the respondent’s county of residence.

There are a few other quirks to keep in mind because of the VRS’s format as part of the CPS. Because the CPS gathers data through household interviews rather than from each individual respondent, a significant percentage of responses are given by “proxy,” with one person answering for other household members who may not be present. There are arguments on both sides about whether this makes the data more or less accurate, but without going into those weeds, we can safely say it’s an important limitation to keep in mind.
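
One practical consequence for anyone working with the microdata: it can be worth checking whether self-reported and proxy-reported answers tell the same story. The sketch below assumes a hypothetical is_proxy_response flag alongside the same placeholder voted and person_weight columns used above; it simply compares weighted turnout estimates for the two groups.

```python
import pandas as pd

# Sketch: compare weighted turnout estimates for self-responses vs. proxy
# responses in a hypothetical VRS extract. "is_proxy_response", "voted", and
# "person_weight" are illustrative placeholder columns, not official CPS names.

vrs = pd.read_csv("vrs_extract_2016.csv")

# Weighted indicator for reporting a vote (True/False coerces to 1/0).
vrs["weighted_vote"] = (vrs["voted"] == "yes") * vrs["person_weight"]

summary = vrs.groupby("is_proxy_response")[["weighted_vote", "person_weight"]].sum()
summary["turnout_estimate"] = summary["weighted_vote"] / summary["person_weight"]

print(summary["turnout_estimate"])  # one weighted estimate per response type
```

A gap between the two estimates wouldn’t settle which one is closer to the truth, but it does indicate how much the proxy design could be shaping a given result.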

Final Thoughts

Overall, however, the VRS remains an important, nonpartisan source of information about voting and registration in the United States. As it enters a new round of data-gathering with the 2018 elections, we look forward to the information it gleans, and we’re excited to see what new research and insights on election administration grow out of those data.


Claire DeSoi is the communications director for the MIT Election Data + Science Lab.
