Scraping Nevada

January 28, 2015

ICYMI, Derek Willis wrote a piece for Source about his experience scraping Nevada precinct results. Check it out!

When we released our initial dashboard for downloading election results in July, we wanted to make it easy for anyone to grab CSV files of raw results with just a browser. We’ve continued adding states to our results site, the latest being North Carolina, Florida and — for a few elections — Mississippi. Pennsylvania will be on the way soon.

But we also wanted our results to be usable by developers, and we’re taking advantage of GitHub to make that easier. Each time we publish raw results data (which hasn’t been standardized beyond geography), we publish it first to a GitHub repository for that state. For example, you can find a repository for Mississippi results that can be cloned and/or accessed via the GitHub API, avoiding manual downloads. The naming convention for the repositories is consistent: openelections-results-{state}, and you might find partial results for states that don’t yet appear on the download map (like Iowa) because they’re still in progress.
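For developers, that naming convention is enough to script access. Here is a minimal sketch that builds clone URLs from the openelections-results-{state} pattern; the `openelections` GitHub organization name is taken from the repository links above, and the state list is just an example.

```python
# Build clone URLs for the per-state raw-results repositories,
# following the naming convention openelections-results-{state}.

def results_repo_url(state_abbrev):
    """Return the git clone URL for a state's raw-results repository."""
    repo = "openelections-results-%s" % state_abbrev.lower()
    return "https://github.com/openelections/%s.git" % repo

for state in ("ms", "nc", "fl"):
    print(results_repo_url(state))
```

From there, `git clone <url>` (or the GitHub API) gets you the CSVs without any manual downloading.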


Using GitHub has two advantages for us: it maintains a history of published changes, and GitHub Pages provides a filesystem for storing the raw CSVs that power the results site downloads. And should we need to move the CSV downloads to another location, we can do that, too. All of this underscores our commitment to using existing standards and practices rather than inventing new ones.

So if you were looking for election results CSVs as part of your holiday plans, we’ve got two ways to get them. Enjoy, and Happy New Year!

By Derek Willis

OpenElections is nearly two years old, and we’re not nearly done yet. In most states we still have a lot of work to do.

As our initial funding from the Knight Foundation winds down, we wanted to provide an update on where the project is and our plans going forward. The first thing to know is this: OpenElections is here to stay. Our timetable has expanded, and we’re looking at other sources of money to boost our capacity to process election results data, but the work we’ve done so far and your interest in it has convinced us of the need.

When we started, Serdar and I had between us years of experience working with election results data in multiple formats. We both worked at news organizations that routinely dealt with different types of data and various election systems.

Still, we have been surprised by the diversity of results we’ve found. States like Pennsylvania, North Carolina and Florida have consistent and reliable data across time. Other states, like Arkansas, Colorado and Washington, have different formats and systems depending on the year. Then there are states like Mississippi and New York, which have required significant investments of time and effort.

In practice, that has meant a lot of work within individual states in order to load and process data from 2000 onward. Those efforts have taken more time than we anticipated, for two reasons. First, we have found that states have switched the systems and software they use to publish election results, in some cases multiple times in the past 15 years. We have found some abstractions – we released a separate library to handle states that use Clarity’s software – but in many cases this meant writing several different custom parsers for a single state.

Second, machine-readable data is not a universal standard, and for many states it is a recent addition to their practices. This isn’t a criticism so much as a statement of reality. Officials from nearly every state we’ve been in contact with have been helpful and even supportive of the project. But we’re not too far removed from all-paper elections, either.

In response to these factors, we’ve made some adjustments. The main one is to publish “raw” results data from states even before we standardize offices, candidates and parties. We think having election results in a fairly consistent format across a number of years is pretty useful, so we’re not going to wait until everything is done to release that. This week we’ve published raw results in North Carolina, Florida, Pennsylvania and (for recent elections) Mississippi. You can download these from our site or clone them from GitHub depending on your needs. We’ll continue to follow that path as we work on standardization.

Along the way we’ve been very fortunate to have had contributions from volunteers, who both gathered information about the state of election results and also contributed code to the project. We can’t thank all of you enough for your interest and contributions. This would be a much longer road without them, and we hope that you’ll stay involved.

We’d also like to recognize the people who have lived this project with us for most of the past two years. Geoff Hing has been the main point of contact for web development volunteers and has written the bulk of the code that powers the results loading and data display portions of the project. Geoff began a new job at The Chicago Tribune this week, although he’ll still be involved with OpenElections as a volunteer. We’re extremely grateful for his efforts.

Many more of you have emailed with or spoken to Sara Schnadt, the project manager for OpenElections. She’ll be with us through the end of the year as we plan our next steps, and her organizational skills, creative thinking and ability to wrangle two co-founders living on separate coasts have made OpenElections possible.

Investigative Reporters & Editors, a source of training and inspiration for journalists for decades, has made things easy for us by handling the accounting and grant management tasks. Both Serdar and I are proud to be “graduates” of IRE, and we’re thankful for their support of OpenElections.

The goal of OpenElections – to provide access to machine-readable, standardized election results – remains the same as when we began. The path to reach that goal is now a lot clearer than it was two years ago, and with your help we’ve learned a lot about how to get there. We’ll keep moving forward, and invite you to stay involved.



Introducing Clarify

November 26, 2014

An Open Source Elections-Data URL Locator and Parser from OpenElections

By Geoff Hing and Derek Willis, for Knight-Mozilla OpenNews Source Learning


State election results are like snowflakes: each state—often each county—produces its own special website to share the vote totals. For a project like OpenElections, that involves having to find results data and figuring out how to extract it. In many cases, that means scraping.

But in our research into how election results are stored, we found that a handful of sites used a common vendor: Clarity Elections, which is owned by SOE Software. States that use Clarity generally share a common look and features, including statewide summary results, voter turnout statistics, and a page linking to county-specific results.

The good news is that Clarity sites also include a “Reports” tab that offers structured data downloads in several formats, including XML, XLS, and CSV. The results data come in .ZIP files, so they aren’t particularly large or unwieldy. But there’s a catch: the URLs aren’t easily predictable. Here’s a URL for a statewide page:

The first numeric segment—15261 in this case—uniquely identifies this election, the 2010 primary in Kentucky. But the second numeric segment—30235—represents a subpage, and each county in Kentucky has a different one. Switch over to the page listing the county pages, and you get all the links. Sort of.

The county-specific links, which lead to pages that have structured results files at the precinct level, actually involve redirects, but those secondary numeric segments in the URLs aren’t resolved until we visit them. That means doing a lot of clicking and copying, or scraping. We chose the latter path, although that presents some difficulties as well. Using our time at OpenNews’ New York Code Convening in mid-November, we created a Python library called Clarify that provides access to those URLs containing structured election results data and parses the XML version of it. We’re already using it in OpenElections, and now we’re releasing it for others who work in states that use Clarity software.
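To make the URL anatomy concrete, here is a small sketch that pulls the election ID and subpage ID out of a Clarity-style results URL. The hostname and path below are illustrative, patterned on the segments described above; Clarify itself handles the harder part of discovering and resolving the county redirects.

```python
from urllib.parse import urlparse

def clarity_ids(url):
    """Split a Clarity results URL into (state, election_id, subpage_id).

    Assumes the path layout described above:
    /{state}/{election_id}/{subpage_id}/...
    """
    parts = urlparse(url).path.strip("/").split("/")
    return parts[0], parts[1], parts[2]

# Illustrative statewide summary URL built from the IDs in the text:
url = "http://results.enr.clarityelections.com/KY/15261/30235/en/summary.html"
print(clarity_ids(url))  # ('KY', '15261', '30235')
```

The first ID stays constant across an election; the second is what varies per county, and it is exactly the piece that only becomes known after following each county link’s redirect.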

See full piece on Source Learning


By Derek Willis

Opening election data isn’t just an American thing. Across Africa, organizations are at work gathering election results and voter data to make better tools and systems that help inform citizens about the political process.

I was a participant in a workshop organized by the Global Network of Domestic Election Monitors, which provides training and support for groups that monitor elections around the world. The three-day workshop, held in Johannesburg, South Africa, in September brought together more than a dozen representatives of organizations from across Africa as well as officials from the Electoral Commission of South Africa.

Governments in Africa publish their official results in a variety of formats, but most provide either electronic PDFs or CSV files. During the workshop, we discussed what else defined election data – in many parts of Africa, that includes not only voting locations but also details about observers, the security situation and the integrity of the voter roll. What I heard from participants like James Mwirima of Citizens’ Watch-IT in Uganda, Tidiani Togola of Mali and Chukwudera Bridget Okeke of TMG Nigeria was that election data was about so much more than the results.

Using OpenElections as an example, we talked about dealing with difficult to parse data, and even showed off the powers of Tabula for converting PDF tables into CSV files using Zimbabwean election results from 2013. Under the guidance of organizers Meghan Fenzel and Sunila Chilukuri of the National Democratic Institute, we worked on summarizing and visualizing voter registration data using Google Fusion Tables and Excel.

Since most African countries have a single national election authority, results are often collected and published in a single location. South Africa, for example, publishes detailed results data in several formats and breakdowns, including by voting district. Some of the United States may want to take note: there’s a CSV download as well.

What I found at the workshop were election monitoring organizations that wanted to use modern tools to quickly and accurately assess elections in their countries. Nigeria already has a robust effort preparing for elections next year.

A few times I was asked about the possibility of extending OpenElections outside the United States. While we’ve got our hands full with the variety of formats and results data that 50 state systems produce, there’s nothing I’d like to see more than our work being used in other places. That’s why I stressed the importance of publishing your code and data, not only so others can build upon them but so that people can see your work and evaluate its accuracy and integrity. Our elections – wherever they are – demand no less.

At ONA14 in Chicago in late September we unveiled the new OpenElections data download interface. We presented at the Knight Foundation’s Knight Village during their office hours for featured News Challenge projects, as well as during a lightning talk. OpenElections’ Geoff Hing and Sara Schnadt showed off their handiwork based on in-depth discussions and feedback from many data journos. The crowd at ONA was receptive, and the people we talked to were keen to start having access to the long-awaited data from the first few states.


As you can see from the data map view above, only three states have data available so far: Maryland, West Virginia and Wyoming, for which you can download ‘raw’ data. For our purposes, this means that you can get official data at the most common results reporting levels, with the most frequently used fields identified but without any further standardization. We will have ‘raw’ data for all states in the next few months, and will work on fully cleaned and standardized data for every state after this initial process is complete.


As things progress, you will see updates to both the map view and the detailed data view where you can see the different reporting levels that have data ready for download so far.


A pink download icon indicates available data, and a grey icon indicates that data exists for a particular race at a particular reporting level, but that we don’t yet have it online.

The race selection tool at the top of the page includes a visualization that gives an overview of all the races in our timespan, and a slider for selecting a date range to review races in the download table. For states like Maryland (shown in the full page-view above), there are only two races every two years so this slider isn’t so crucial, but for states like Florida (directly above), this slider can be useful.

We encourage you to take the interface for a spin, and tell us what you think! And, if you would like to help us get more data into this interface faster, and you are fairly canny with Python, we would love to hear from you. You can learn more about what this would entail here.

An Interview with The Pew Charitable Trusts’ Jared Marcotte

OE: You come to civic infrastructure work via previous experience in corporate technology. Can you give me a little background on the Voting Information Project, how you became involved with it, and why this work is so interesting to you?

JM: Voting Information Project (VIP) is a partnership between The Pew Charitable Trusts and Google that started in 2008. Both organizations realized that voters were having difficulty finding the answers to common elections-related questions, such as “where do I vote,” “what’s on my ballot,” and “how do I navigate the elections process”. The project encourages states to publish public information pertaining to elections–geopolitical boundaries, polling locations and early voting sites, local and state election official contact information, ballot information, and other related data–in a standardized format allowing Google to make the data available through the Google Civic Information API. Our goal is to lower the barrier of access to this information, making it easier for elections officials to concentrate on running the elections.

I’ve worked at some great companies over the years, but my work, though challenging and interesting, felt a bit disconnected. I wanted to do more to potentially solve societal problems, which led me to VIP. I’d always found elections cerebral but daunting, since it was difficult to find the information I needed to cast an informed vote. Voting is one of the most important activities in civic life, so this project fulfilled my desire to “improve the world,” so to speak. Early last year, David Becker, Director of Elections Initiatives, offered me the opportunity to manage VIP at Pew. Considering how much I loved the project, it was easy to say yes.

OE: VIP is a collaboration between The Pew Charitable Trusts and Google’s Civic Innovation project. How does this work, and what resources do each entity bring to the table?

JM: Since providing election information is a distributed data problem–meaning the data we require is held in different databases across departments and, sometimes, jurisdictions–Pew, through Democracy Works and Election Information Services, provides engineering support to states to automate and centralize the publication of this information at the state-level. Pew also creates open source tools that leverage the API and allow states, campaigns, and civic organizations to use low-cost tools. Pew works with Engage and Lewis PR to broaden the project’s reach to potential organizations that may be interested in leveraging the data or the tools.

Pew has a great working relationship with Google. They offer an understanding of elections coupled with technical infrastructure and engineering that few others could match at scale. Additionally, they created the Voter Information Tool, which provides a single source of election information to voters and is one of the most visible artifacts of the project. Anthea Watson Strong, my counterpart at Google, has extensive experience with the project and campaigns, making her uniquely suited to manage Google’s role with this initiative.

OE: Can you describe how VIP impacts an individual voter, and how it eases their participation in the elections process?

JM: Though there are numerous tools, at its core, VIP allows a voter to enter their address and find their polling location and ballot information for every major election without ever providing any personally identifiable data. At Pew, we try to cover a number of different access points beyond Google’s Voter Information Tool. We’re working with Azavea to develop a white-label, accessible iOS application and a companion Android application that allows users to find election information. In the interest of bridging the digital divide, we’re also developing an SMS-based service to look up polling location information and registration status. Because the Civic Information API is accessible to the general public, civic organizations and individual developers can use the data in ways that we may not cover through our own open-source applications.

VIP also publishes all of the raw data, which tech collaborators use in various ways. One of the most fun examples was when Foursquare used the geographical polling location data in their application. A voter who checked in to his/her polling location on Election Day received a virtual “I Voted” badge.

OE: What other Election initiatives are underway at Pew, and how do they all interrelate?

JM: Our core mission in election initiatives is to make elections more accessible, accurate, and cost-efficient. In addition to VIP, we have two other projects that work towards our goals.

The Upgrading Voter Registration (UVR) project partners with election officials, policy makers, technology experts, and other stakeholders to help states move towards more integrated, modern, and secure voter registration systems. This goal is accomplished through a number of initiatives, one of which is the Electronic Registration Information Center (ERIC), an independent non-profit whose membership is made up of representatives of the states that work to improve the quality of voter registration lists through a sophisticated data matching system.

Pew’s ethos is all about constant evaluation through data analysis. In keeping with the culture, the Elections Performance Index (EPI) is our measurement of elections administration based on 17 objective indicators (e.g., data completeness, turnout, voter registration rate, and others). Along with a massive amount of fascinating data and state fact sheets (e.g. Wisconsin [PDF]), the “crown jewel” of this project is the interactive. This year is also the first time that we’ve had the data to compare two presidential elections: 2008 and 2012.

OE: In light of the recent presidential report highlighting that current voting systems are at the end of their viable lifespan, are you aware of any new solutions underway?

JM: Innovation in voting technology is complicated by outdated certification requirements. Since the last time the federal standards were updated, smartphones became ubiquitous, and Apple, with the advent of the iPhone in 2007 and the iPad in 2010, changed the way we think about the capabilities of “mobile users.” Most states have state-specific certification standards, too, many of which are based closely on the federal standards. The result is an expensive and lengthy process to certify new voting technology that prevents entrepreneurs from developing new systems and limits the products available on the market. Vendors are unwilling to invest in innovative technology when there is no guarantee that there will be a market for their technology once it is certified.

Election officials are left treading water with outdated and insecure technology while waiting for new technology to be offered, knowing that the current system prevents innovation. While we are starting to think about creative solutions to the problems in the marketplace, two county-based projects are approaching this problem from their perspectives. The Travis County, Texas Elections Office is working with a number of academics to build STAR-Vote, a completely new election system. A similar initiative is also taking place in Los Angeles County called the Voting Systems Assessment Project (VSAP). VSAP is guided by a set of principles defined by its Advisory Committee, and the county is working with IDEO to create early prototypes (NB: in the interest of full disclosure, I serve on the VSAP Technical Advisory Committee).

OE: Ideally, what kinds of organizations and systems would come together to make a robust, transparent and cost-effective elections infrastructure?

JM: VSAP is a solid start. Academics, civic organizations, the private sector, and the public all take part in the process in meaningful ways. With IDEO, they take a “human-centered” approach to the problems, which I believe makes this project transformative. Ideally, elections should be about what works for each individual voter, though this philosophy does introduce a number of unique challenges. Time will tell if initiatives like VSAP and STAR-Vote will change the elections technology landscape, but I’m optimistic.


Jared Marcotte is an officer for Pew’s election initiatives, which supports states’ efforts to improve military and overseas voting; assess election performance through better data; use technology to provide information to voters; and upgrade voter registration systems.

Marcotte primarily oversees work on the Voting Information Project, a partnership with Google that improves the availability of election information for voters and civic developers while easing administrative burdens on local election officials. He also serves as an advisor on other Election Initiatives projects where technical strategy or software engineering is a component of the work.

Previously, as a senior engineer at the New Organizing Institute, Marcotte worked on the Voting Information Project, a collaboration with state and local officials, Google, and Pew to develop a nationwide dataset of election-related information. Before that, he worked at Six Apart and IBM and as an interface and interaction designer on the Election Protection Coalition’s Our Vote Live and various enterprise-grade sites. He currently serves on the technical advisory committee of the Voting Systems Assessment Project for Los Angeles County, California.

He holds a bachelor’s degree in computer science from the University of Vermont.

Eating Our Dog Food

July 15, 2014

By Derek Willis

When Serdar and I first talked about building a national collection of certified election results, we had a very specific audience in mind: the two of us. It seemed like every two years (or more frequently), one or both of us would spend time gathering election results data as part of our jobs (me at The New York Times, Serdar then at The Washington Post). We wanted to create a project that both of us could use, and we knew that if we found it useful, others might, too.

Precinct comparison

The New York Times

In the world of software development, using your own work is called eating your dog food, and we’ve done just that. While we’re nowhere near finished, I am happy to report that OpenElections data has proven useful to at least half of the original intended audience. Both last week and this week, The Upshot, a new politics and policy site at The Times that I work on, used results data from Mississippi collected by OpenElections to dig into the Republican primary and runoff elections for U.S. Senate. The analyses that Nate Cohn did on voting in African-American precincts would not have been possible using the PDF files posted by the Mississippi Secretary of State. We needed data, and we (and you) now have data.

We’ve completed data entry of precinct-level results for the 2012 general election and the 2014 Republican primary runoff elections, plus special elections from 2013, and we’re working on converting more files into data (we just got our first contributions from volunteers, too!). These are just the raw results as the state publishes them; we haven’t yet published them out using our own results format (but that’s coming soon for Maryland and a few other states). We provide the raw results for states that have files requiring some pre-processing – usually image PDFs or other formats that can’t be pulled directly into our processing pipeline.

The Mississippi example is exactly the kind of problem that we hoped OpenElections would help solve, and it’s only the beginning for how election results data could be used. Once we begin publishing results data, we’d love to hear how you use it, too. In the meantime, if you have some time, there’s more Mississippi data to unlock!

Home Screenshot

As we get the first few states’ data processed and ready to release, we are building an interface to deliver it to you, and to show our progress as we go. The live site (above) now shows metadata work to date, and the volunteers involved. Soon you will be able to toggle between this view and a map of the current condition of results data (below). Clicking on each state will show you details on the most cleaned version of available data.

Data Map

The color coding on this new map will change as we get more states online with ‘raw’ results and fully cleaned data (what you see here is hypothetical, just to illustrate how the map will work). When we say ‘raw’, we really mean results that reflect the data provided by state elections officials. These results are only available at the reporting levels provided by the states, and fields like party and candidate names have not been standardized. These results do have standardized field names, though, so you will be able to more easily load and analyze the data across states. We will get as many states fully cleaned as we can, but our baseline goal for this year is to wire up and make available most of the data in a ‘raw’ state.
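Shared field names are what make even ‘raw’ data usable across states: one loader works everywhere. A toy sketch, with invented rows and assumed column names (office, candidate, party, votes) purely for illustration:

```python
import csv
import io

# Two hypothetical raw-results files from different states. The shared
# header row is the point: the same loader works for both.
MD_CSV = "office,candidate,party,votes\nGovernor,Smith,DEM,1200\n"
WV_CSV = "office,candidate,party,votes\nGovernor,Jones,REP,900\n"

def load_rows(text):
    """Parse a raw-results CSV string into a list of dicts keyed by field name."""
    return list(csv.DictReader(io.StringIO(text)))

# Combine rows from both states and analyze them with one set of field names.
rows = load_rows(MD_CSV) + load_rows(WV_CSV)
governor_votes = sum(int(r["votes"]) for r in rows if r["office"] == "Governor")
print(governor_votes)  # 2100
```

Candidate and party values would still need per-state cleaning, which is exactly the standardization work described above.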

As we build the data interface, we would love to know what you think. Is the terminology we are using clear to you? Is the interaction clear? Is there anything else you would like to see here?

Download Page

If you click on the ‘Detailed Data’ link on the data map page, you will get to this download page showing all the races for the state you have chosen. You can download results at a variety of reporting levels, depending on what is available for this state. We will include rows in this view for all processed data (clean and raw) as well as any races we haven’t processed yet, just so that you know they exist.

Above the download table there is a slider that both gives you an overview of all the races available for a state, and a way to select just a specific date range for which to browse detailed results. You can filter results by race type – such as President, Governor, State Legislative races, etc.. If there are any other ways that you need to access the data, or if anything about this interface could be clearer, please let us know!

We will be building out a preliminary version of this interface in the next couple of weeks, and will revise it further based on what we hear from you.  

To tell us what you think, comment on interface elements here, or email us at

An Interview with IEEE Voting Systems Standards Committee’s John Wack and Sarah Whitt


OE: Can you describe your current work with elections standards, who the collaborators are, and how this fits into the larger context of what the Institute of Electrical and Electronics Engineers (IEEE) does?

JW: In the Voting Systems Standards Committee (VSSC), we are currently working to produce several standards and guidelines, including for election results reporting, for election management system export, for event log export, and hopefully soon for voter registration database export.  The collaborators include various election officials, voting system vendors, some people in industry, and others in academia. This fits quite naturally into IEEE’s framework.

SW: I joined the IEEE VSSC as an election official interested in the elections technology standards that IEEE and National Institute of Standards and Technology (NIST) were working on.  The VSSC is developing several standards related to elections: a standard for blank ballot distribution to military voters was published before I joined the team; and we are finishing up an Election Results Reporting standard, for which I am the working group chair.  The VSSC includes a wide range of participants, including election officials like myself, voting system vendors, academics, folks from NIST and the Elections Assistance Commission, election activists, media such as the AP, technologists, interested citizens, etc.  This is the first IEEE project I have been involved with, but it seems like a natural fit given the other technology standards that IEEE issues.

I have also been an active participant in the Pew-sponsored Voting Information Project (VIP), which works with states to provide election data such as polling places and sample ballots in a common data format for consumers like Google and Microsoft to use in their search engines and other tools to assist voters.

OE: How does this work relate to recent efforts to improve voting systems nationally?

JW: The IEEE was engaged in producing voting system standards prior to the passage of the Help America Vote Act (HAVA) in 2002.  The EAC and NIST then began producing voluntary voting system guidelines.  In recent years, though, the EAC has become somewhat inactive because of the absence of commissioners, and thus new voting system standards have not been approved.  NIST then began working with the IEEE as a pathway to developing needed voting system standards that can be adopted voluntarily by states.

SW: I have not been involved in national work related to voting systems; however, Wisconsin and other states were involved in the voting system testing and approval process run by the National Association of State Election Directors prior to the Help America Vote Act of 2002.

OE: How did you both come to be doing this work?

JW: I had been managing some of the voting system standards development and wanted to work on common data format-related standards because I personally felt that it was important to build this sort of capability into voting systems and into voting system operations.  Transparency of data is very important for a number of reasons, including for testing, for security, and for public access of election data.  My role at NIST gave me the opportunity and freedom to work with IEEE and focus on this material.  It has been very gratifying in that I have become acquainted with a number of election officials and voting system vendors who are of the highest caliber and have contributed greatly to this overall project.

SW: I heard about the IEEE standards work through several elections IT colleagues I worked with on the VIP Project.  I help manage Wisconsin’s Statewide Voter Registration System, so the mission of the VSSC to create interoperability between elections IT systems was very attractive to me.  Election officials today use multiple IT systems for various purposes that don’t necessarily communicate with each other, so the ability to exchange data more easily between systems saves time and money, increases the accuracy of the data in all systems, and allows synergy between systems for better analysis of data and ultimately better decision making.

OE: Can you describe your working group’s digital voting standards initiative, and how it will potentially impact voters, election organizers, and the people who design election systems?

JW: Having election data in a common format means that the format of the data is documented publicly and thus it is available, open to anyone.  The format is not proprietary, which frees small developers and election staff themselves to use commonly available tools such as web browsers to read the data.  When elections staff are designing new systems or trying to make systems interoperate with each other, having the data in a publicly-documented format is a huge advantage.
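To illustrate the point about commonly available tools, here is a minimal sketch of reading a results file in a publicly documented XML format using only Python's standard library. The element and attribute names below are invented for this example; they are not taken from the actual IEEE/NIST schema.

```python
import xml.etree.ElementTree as ET

# Hypothetical results snippet; the structure is illustrative only,
# not the actual common data format defined by the standard.
sample = """
<ElectionReport>
  <Contest name="Governor">
    <Candidate name="Smith" party="IND" votes="1204"/>
    <Candidate name="Jones" party="IND" votes="987"/>
  </Contest>
</ElectionReport>
"""

root = ET.fromstring(sample)
for contest in root.findall("Contest"):
    for cand in contest.findall("Candidate"):
        # Because the format is documented, any consumer knows
        # exactly where to find these fields.
        print(contest.get("name"), cand.get("name"), cand.get("votes"))
```

Because nothing here is proprietary, the same few lines work against any producer that emits the documented format.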

SW: The VSSC is not working on digital voting standards per se, but we are working on standards for IT systems used in the elections arena.  As I stated above, the first standard to come out of this group was for distributing blank ballots to military voters.  This standard will help states implement systems that deliver ballots to military and overseas voters electronically instead of on paper.

This reduces transit time and helps enfranchise America’s military and overseas civilians. The election results reporting standard we are finishing up will help the media and other groups that use election results.  Having results reported in a consistent way across states and jurisdictions will allow for easier aggregation of election results, which provides faster results reporting on election night as well as better analysis of election results after the election.

The standard also encourages reporting more data than is reported today.  Better analysis of election results data, and a more complete dataset, will help drive better policy.  There are many groups out there developing election administration tools that use common data formats (Open Source Election Technology, TurboVote, OpenElections, etc).  Today the only common data formats out there are really VIP and EML.  The VSSC is trying to create standards that fill the gaps where common data formats have not been available in the past.  These standards will also meet the needs of the diverse stakeholders involved in elections, which is truly the benefit we reap from having such diverse constituencies on the team.

OE: You have slightly different positions on the value and role of consistent and standardized national voting administration systems vs. a diverse and interoperable ecosystem of tools. Can you both talk about this, and why you hold the positions you do?

JW: I believe that it makes sense to have a national testing and certification program and a relatively high degree of uniformity among the states in the basic information technology.  This doesn’t mean that every state has to do it the same way, but it does mean that, regardless of equipment manufacturer and state, the data is in a publicly-documented format and can be tested as such across all states and territories.

The common data format standards can be likened somewhat to electrical code.  Having a uniform electrical code doesn’t necessarily mean that buildings need to look or feel the same – it just means that the electrical outlets and so forth are consistently uniform and licensed electricians can freely work on the electrical systems without having to understand proprietary information.  This is exactly the same with a common data format – voting systems can be different, they can be used differently across different states, they can be interconnected in different ways, they can have different user interfaces – but the underlying data formats are all publicly documented and uniform.  Anyone can write tools to operate on the data.

SW: America proudly carries on its tradition of a federated election system, which includes a national framework of laws to guide election administration across the country but also provides for states to set election policies that best meet the needs of their individual states.  A state’s ability to set its own laws for election administration is one of the great strengths of American democracy, but it does add complexity to the system.  The Help America Vote Act brought consistency and technology across states through mandatory statewide voter registration systems and accessible voting equipment.

In the post-HAVA world, state and local election offices have various technology systems to register voters, manage elections, facilitate voting, tabulate, report and certify election results, and track campaign finance information.  These systems come from various vendors or are home-grown.

Having some national baseline standards, and having common data formats for easy interchange allows for states and localities to purchase or build whatever systems best meet their needs, and allows those systems to interoperate.  If systems can interoperate regardless of who builds them, this allows for innovation in the marketplace, and can help reduce costs of individual systems.  But if standards are too burdensome or too literal, innovation is stifled and the vendor pool shrinks, offering states fewer choices at higher prices.  So there is really a sweet spot for standards where some integrity is assured, but election administrators have choices and systems are reasonably priced.

OE: What are the pros and cons of digital voting, and when is it more or less relevant and useful?

JW: I think in a perfect world, election day for both primaries and general elections would be national holidays – we would all vote on paper, election officials would have adequate time to make the instructions very clear, and election officials would have adequate time to carefully count all the paper, perform audits, and issue the results.  I could take this further, but of course we don’t live in that perfect world.

Digital voting makes lots of sense for many reasons, including that computerized voting interfaces can help voters to vote more accurately and prevent them from making common mistakes.  Computerized voting makes it much easier for election officials to administer elections – and to administer them accurately.  Yes, there are issues when voting electronically and there is no paper audit trail – but this has to be balanced against other factors, such as those I’ve mentioned.

SW: As a person working in an election administration office, I don’t really have a comment on digital voting.  That is really an issue for legislatures to determine.  If digital voting is made law at the state or federal level, we will administer the law to the best of our ability.

OE: What is your time-frame for finalizing a new national election results reporting standard, and how will it improve on the way things function now?

JW: The timeframe I am working towards includes getting the election results reporting standard out for public review by end of June, 2014, and having the final standard ready for IEEE approval in late fall/early winter. I expect that we will receive a good number of comments from various states – and this will be good – but at the same time it will require a fair amount of work to respond to the comments and we will no doubt need time to make some changes and improvements in the standard and the XML schema.

SW: We are hoping to finish up the election results reporting standard yet this year.  Once we are finished with the draft standard, it goes through the IEEE balloting process before it is officially released, which takes some time, so we are trying to get the draft ready for balloting as soon as possible so we can get it released in 2014.

This standard will improve things in several critical ways:

1. It provides a common format for reporting so that results can be more easily aggregated across jurisdictions.  This allows for faster results reporting, it allows more groups than just the media to report results, and it allows for better analysis of data.
2. It provides additional data elements that are not always reported with election results, which results in better analysis of the data and easier auditing of results.
3. It supports three use cases: pre-election (i.e., election set-up information), election night reporting, and post-election reporting (i.e., certified results or results for performing audits).

Supporting all three use cases allows for interoperability between election management systems, voting systems, results reporting systems and canvassing systems, which saves time and money in elections offices, as well as improving data accuracy.  So within an elections office, we can save time and money.  For people who consume election results, more groups will be able to consume results, they can get results faster, and do better analysis, which results in better policy. For voters, they find out winners sooner, and enjoy the benefits of better elections policy.
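The aggregation benefit described above can be sketched very simply: once every jurisdiction reports the same fields in the same layout, rolling results up to a statewide total becomes mechanical. The dictionary layout and numbers below are invented stand-ins; the real standard defines a much richer XML structure.

```python
from collections import Counter

# Invented per-county reports sharing one layout; in practice each
# county's file would follow the standard's common data format.
county_reports = [
    {"county": "Dane",      "results": {"Smith": 5200, "Jones": 4100}},
    {"county": "Milwaukee", "results": {"Smith": 9800, "Jones": 11050}},
]

statewide = Counter()
for report in county_reports:
    # Same keys everywhere, so the rollup needs no per-county parsing code.
    statewide.update(report["results"])

print(dict(statewide))  # {'Smith': 15000, 'Jones': 15150}
```

Without a common format, each of these counties might require its own parser before any total could be computed, which is exactly the cost the standard removes.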

Standards may seem dry to some, but I think this is really exciting.  These are real, tangible benefits that will come out of this work.  I just feel grateful to be a part of it.

OE: What has the process of developing this standard been like? Who have been the stakeholders and has this been a new kind of collaboration in this space?

JW: Developing this particular standard was at first difficult.  We initially worked with a relatively large group of people, roughly 20 in size, and progress was very slow.  In particular, some people who had good intentions nonetheless impeded progress by focusing more on the process than on the need to get something done in a reasonable timeframe.  I convened a smaller group composed of election officials, vendors, and data modeling experts, and pushed for Sarah to be chair of the working group.

I feel very strongly that the work in IEEE should be managed by election officials, who above all understand elections and need the equipment to work well for them as well as for voters.  At the same time, vendors have a broad understanding of how elections are run across the United States, as well as organizations such as the Associated Press.  Working with a smaller group made things work much more smoothly and has resulted in a standard that, I believe, is much more applicable across all states.

SW: When I joined the VSSC (at that time it was just Project 1622), I was invited by colleagues in other election offices because they noticed there weren’t a lot of folks who actually work in elections administration on the team.  I have since invited other election officials to join as well to help balance the group out.  The team we have right now working on election results reporting is really kind of a dream team — we have both major voting system vendors (Dominion and ES&S), the Associated Press, folks from state elections offices (WI, OH, WV), industry experts like Kim Brace and the folks at NIST, and interested parties in academics and the audit communities.  The kind of expertise that this broad stakeholder base has brought really improved the standard.  We applied a use-case approach to the standard so we could walk through real world scenarios for how this data is produced — what systems it comes from, what government level, at what time in the process, etc.  I think that’s how we were able to have it be so comprehensive.

We looked at the total election results reporting picture from the angles of the elections office producing the files, the vendors of the systems they will be using, and the consumers who will be using the data.  I think this represents a different type of collaboration than I have seen in the past.  We also used an inclusive approach to membership instead of exclusive — if you are interested in this standard or have opinions on how you think it should be done, join the team!

Anyone can sit at the table if they want to, and everyone at the table gets a voice.  We have a leadership structure through the working group chair and the standards editor to help filter through the comments, and we put a lot of things up for vote. So it’s a very democratic system.  This prevents the group from ignoring interested constituencies, and helps balance views from very different communities.

OE: What do you think of the recent presidential commission on elections administration? Will it affect your work in any way?

JW: I can’t comment much on the presidential commission.  I do think that their report is imperfect – I wish they had gone into much greater detail and provided more specific recommendations in a number of areas.  However, they had a lot of work to do and there were many stakeholders besides me.  All in all, I believe that they worked hard and tried to do the right thing and mostly produced a good report that should be paid attention to – I was particularly gratified that they didn’t find much evidence of voter fraud to warrant voter ID laws that will result in needless litigation and state taxes spent on lawsuits.  They did end up validating our work.

SW: I am very excited about the Presidential Commission’s report.  The bipartisan nature of the commission and the report takes a lot of the controversy out of the recommendations and gives election officials solid choices for how to improve their processes.  The commission report likely won’t impact our common data formats work very much, but as an IT person in an elections office, it was great for me to see the focus on improving elections technology in the report.  Some recommendations may require legislative changes in some states, and not all recommendations are a good fit for all localities, but the scope of the recommendations is broad enough that I feel like there is something in there for everyone.

I personally think these kinds of bipartisan efforts that focus on research and provide options are a great way for the federal government to drive policy in a way that is not as heavy-handed.  The report itself appeared to be very well researched, well written, and overall of excellent quality.  As a taxpayer, I appreciate the quality of work this team did.

OE: What do you think is the best way forward to continue to innovate in this space? What kinds of relationships and models? And do you see any particularly pressing needs currently?

JW: The election results reporting standard was produced by first creating a UML model.  One advantage of producing a model is that one can focus on the data definitions and relationships rather than on the format, e.g., XML.  Now, I believe that the data model should be abstracted upwards so that a higher-level model can be created of election data in general.  This would help to provide a foundation for producing common data formats for various applications.  While the applications could be quite different, the format could be consistent and, as a result, systems will still interoperate.

Some of the more important areas to work in are, I believe, tablets and pure electronic devices.  While I am personally not a fan of Internet voting for the general public, I do believe that Internet voting for overseas military and individuals with disabilities is acceptable and that common data formats for ballot data should be developed to make systems more transparent and auditable.

SW: I completely concur with John on the need for an overall model of election systems.  While states run elections differently, we all have common sets of data that flow between common IT systems.  This type of high level modeling is critical to move towards better interoperability between elections IT systems.  Having a common understanding of election systems and data makes it easier for vendors (and homegrown state systems) to build interoperability into the next generation of systems — whether voting systems, statewide voter registration systems, voter information portals, online ballot delivery systems, e-poll books, election results reporting systems, campaign finance systems, the list goes on and on.  Ultimately with a common data model, we can also move towards more common formats for reporting data to the public.
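One way to picture the kind of high-level model both interviewees describe is a small set of shared entities that every election IT system could map onto. The class names and fields below are purely illustrative; the actual UML model behind the standard is far more detailed.

```python
from dataclasses import dataclass, field

# Illustrative entities only; invented for this sketch, not drawn
# from the standard's real UML model.
@dataclass
class Candidate:
    name: str
    party: str

@dataclass
class Contest:
    title: str
    candidates: list = field(default_factory=list)

@dataclass
class Election:
    name: str
    date: str  # ISO date string for simplicity
    contests: list = field(default_factory=list)

# A voter registration system, an e-poll book, and a results reporter
# could all exchange data by mapping onto shared entities like these.
gov = Contest("Governor", [Candidate("Smith", "IND"), Candidate("Jones", "IND")])
election = Election("General Election", "2014-11-04", [gov])
print(election.contests[0].title)
```

The point of such a model is not the specific classes but the agreement: once systems share the entities and their relationships, concrete formats (XML, JSON, etc.) can be generated from the same source of truth.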


John P. Wack is a researcher at the National Institute of Standards and Technology in the area of elections standards. He chairs several standards groups within IEEE and is managing the standardization of a common data format for election systems, working in conjunction with election officials, manufacturers, and others in the community. He is also an assessor for the National Voluntary Laboratory Accreditation Program and visits voting system test laboratories regularly to check compliance with requirements and standards. With the EAC’s TGDC, he has managed the development of the 2007 VVSG Recommendations to the EAC and the 2005 VVSG. Prior to working in elections, he authored and managed a variety of IT and network security guidance and assistance activities for NIST. His goals in the elections area are to make voting systems easier to manage by election officials, easier to use accurately by voters, and more transparent to test by election officials and testing labs.

Sarah Whitt is an IT professional with the Wisconsin Government Accountability Board, the state’s chief election agency.  She joined the agency in 2003 to help establish Wisconsin’s first statewide voter registration system, and is currently overseeing the modernization of that system.  She is chair of the IEEE Voting System Standards Committee’s Election Results Reporting working group, which is working on a common data format for publishing election results and election definition information.  Through her experiences with elections and IT, she has learned that technology is of no use unless it is harnessed for good public policy.  She serves as a bridge between IT staff and policy makers to help ensure the public’s work is done effectively and efficiently.