
The Election Administration and Voting Survey

A critical tool for researchers in election science

The Election Assistance Commission (EAC) has just published its biennial Election Administration and Voting Survey (EAVS), a study that provides seven of the seventeen indicators for the Elections Performance Index (EPI). The 2020 release gives us more insight into the effect of COVID-19 on the administration of the 2020 election. Here, I will give a broad overview of the EAVS and briefly discuss some of the responses to this year’s survey and their relevance to the 2020 EPI.

History of the EAVS

The EAVS provides data about election administration at the local level that was previously unavailable. The EAVS was born in the aftermath of the 2002 Help America Vote Act (HAVA), which itself was passed in response to the 2000 presidential election. HAVA built on the 1993 National Voter Registration Act (NVRA), also known as the “motor voter law,” which set forth national requirements for voter registration at motor vehicle agencies. Six states (Idaho, Minnesota, New Hampshire, North Dakota, Wisconsin, and Wyoming) are exempt from NVRA requirements because they either do not require voter registration or offer election day registration. As part of the NVRA, the Federal Election Commission was required to report to Congress after each federal election on the effect of the NVRA on election administration.

HAVA went beyond the NVRA, charging the newly created EAC to serve “as a national clearinghouse and resource for the compilation of information and review of procedures with respect to the administration of Federal elections.” As part of this original charge, the EAC is required to provide information on voluntary guidelines for HAVA requirements, certification of voting system hardware and software, and HAVA payments and grants.

Crucially, HAVA tasked the EAC with carrying out studies relating to election administration and the compilation of federal and state laws relating to election administration and voting. These tasks are reflected in the EAVS and the Election Administration Policy Survey (formerly the Statutory Overview). The EAVS has been published biennially since 2004; the Statutory Overview has been included with the survey since 2008.

The EAVS itself is divided into six sections, labeled A through F, covering voter registration, voting under the Uniformed and Overseas Citizens Absentee Voting Act (UOCAVA), domestic absentee voting, election administration, provisional ballots, and Election Day activities. Each section relates either to specific statutory requirements, such as Section B’s coverage of UOCAVA voting, or to best practices in election administration. While the initial years of the survey struggled with data completeness, the current EAVS has a much more complete dataset.
The EAVS is a source for standardized data from administrators that is unavailable elsewhere. As we saw in the 2020 election, turnout by mode is not provided by every state’s official or unofficial counts, but the EAVS provides it, and does so at the level of the local jurisdiction. The EAVS also provides information on provisional and mail ballots, voting machines and scanners, and poll workers.
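To illustrate, here is a minimal sketch of how a researcher might tabulate ballot-mode shares from a jurisdiction-level, EAVS-style file using Python and pandas. The column names and figures below are hypothetical placeholders, not the EAC’s actual variable codes or data.

```python
import pandas as pd

# Hypothetical jurisdiction-level data for illustration only; the real
# EAVS files use EAC variable codes that should be checked against the
# official data documentation.
eavs = pd.DataFrame({
    "jurisdiction": ["Example County", "Sample Township"],
    "total_ballots": [250_000, 40_000],
    "mail_ballots": [120_000, 10_000],
    "early_in_person_ballots": [60_000, 5_000],
})

# Share of turnout cast by mail and early in person in each jurisdiction.
eavs["mail_share"] = eavs["mail_ballots"] / eavs["total_ballots"]
eavs["early_share"] = eavs["early_in_person_ballots"] / eavs["total_ballots"]

print(eavs[["jurisdiction", "mail_share", "early_share"]].round(3))
```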

Improvements to the EAVS

The EAVS is indispensable and much improved, but it still reflects some struggles. Many of these struggles relate to translating widely varying state and local practices into the survey’s standardized response categories. In recent years, the contractor that implements the EAVS, Fors Marsh Group, has partnered with the EAC to surmount some of these challenges. Efforts have included clarifying questions that confused administrators, adding questions to distinguish different election practices, improving the voting technology section, introducing a changelog to track changed questions, and applying logic checks to the data. Much of this effort is reflected in this year’s EAVS report, which is chock-full of footnotes that provide nuance to the responses.

The EAVS is still a work in progress, as any heavy user of the data knows. One example is that some states do not track advance votes separately as “mail” and “early in-person” ballots. This failure is one of state recordkeeping, not of the EAVS per se. Yet, at a time when the use of early voting modalities is one of the top policy questions in election administration, the EAVS cannot answer with certainty the basic question, “how many mail ballots were cast in each state and local jurisdiction in 2020?”
Another example of how the EAVS is still a work in progress is found in Section A, the section central to assessing the effects of state and federal laws on list maintenance. For instance, it is almost never the case that the total number of registered voters a state reports in one election cycle’s EAVS equals the number reported in the previous cycle, plus additions, minus removals.
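As a rough sketch of that accounting identity, the check below compares a state’s reported registration total against the previous cycle’s total plus additions minus removals. The column names and numbers are hypothetical stand-ins for the relevant Section A fields, not actual EAVS variable codes or reported figures.

```python
import pandas as pd

# Previous cycle's reported registration totals (hypothetical numbers).
prev = pd.DataFrame({
    "state": ["State A", "State B"],
    "registered_prev": [1_000_000, 500_000],
})

# Current cycle's reported totals, additions, and removals (hypothetical).
curr = pd.DataFrame({
    "state": ["State A", "State B"],
    "registered": [1_050_000, 510_000],
    "additions": [120_000, 40_000],
    "removals": [60_000, 25_000],
})

merged = curr.merge(prev, on="state")

# The accounting identity: this cycle's total should equal last cycle's
# total plus additions minus removals. In practice it rarely does.
merged["expected"] = merged["registered_prev"] + merged["additions"] - merged["removals"]
merged["discrepancy"] = merged["registered"] - merged["expected"]

print(merged[["state", "registered", "expected", "discrepancy"]])
```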

Power users of the EAVS, such as the MIT Election Lab, are aware of the balance that needs to be struck between valuing what the survey brings to the table and scrutinizing the data before using it. The survey is most valuable for getting a national perspective on how Americans vote. Still, it is no substitute for the administrative data maintained by the states themselves, which often differ from the EAVS for many justifiable reasons. As MEDSL’s director, Charles Stewart III, has written in a paper entitled “Is the EAVS a Reliable Guide to Voter List Maintenance?,” the EAVS is “close enough for social science, but not close enough for the court house.”

Bridging the gap between the data that states and local jurisdictions gather and what the EAVS requests is the most important long-term issue in improving the survey’s usefulness for understanding election administration policy in the U.S. As states have upgraded their election data management systems, many of those systems have been designed with the EAVS in mind, which is probably why the completeness of the dataset has improved each election cycle. It has also helped that the items on the survey have evolved very slowly.

The 2020 election will be the subject of scrutiny by researchers, the public, and policymakers for decades to come. With today’s release of the EAVS dataset, we anticipate that knowledge about how the election unfolded will explode and policy discussions will deepen.


Jack Williams is a Senior Research Associate at the MIT Election Data + Science Lab.
