Project description

Transparency and methodological rigor are core values of FiveThirtyEight. We don't just want to provide answers; we want to show how we got them and to justify every step along the way. Open data is an essential piece of this mission, and in the past year we have continued to publish our data and evaluate our own work as openly and transparently as possible. We also maintain a database of polls and polling data that is unsurpassed (especially as outlets like HuffPo Pollster cut back on resources) and is a crucial resource for many other journalists and outlets. We publish our data on GitHub, but we also maintain an updating, reader-facing page for those who are not familiar with GitHub; we want our data to be accessible to all. Additionally, we just published an interactive app that evaluates every forecast FiveThirtyEight has made since 2008, more than 1.3 million observations in all. We ask readers to trust us to predict a lot of things, and we thought it was only fair and necessary to look back and see how we did. We also published all the data from that project: every prediction we made and what actually happened, so others can evaluate our work for themselves.

What makes this project innovative?

We are providing a dataset and a project that will consistently evaluate the work we do and keep us accountable to our readers. While it is in keeping with our long-standing commitment to open data and transparency, our Checking Our Work project represents a true innovation, both for us as an organization and as an example for other outlets that publish predictions and claim their models work as well as or better than others.

What was the impact of your project? How did you measure it?

525,600 minutes. How can you measure a year? How about ... love? Though the project launched only yesterday, we have early signs that readers are finding it useful. They're spending an above-average amount of engaged time on the page, according to Chartbeat, and the tweets announcing the project received hundreds of likes and retweets.

Source and methodology

For "How Good Are FiveThirtyEight Forecasts?", we read many academic papers as well as a textbook about forecast verification. We settled on using calibration plots and Brier skill scores with bootstrapped confidence intervals.

Technologies Used

D3, Node, Ruby, R, GitHub, SQL

Project members

Jay Boice, Laura Bronner, Aaron Bycoffe, Rachael Dottle, Ritchie King, Ella Koeze, Dhrumil Mehta, Mai Nguyen, Andrei Sheinkman, Nate Silver, Gus Wezerek, Julia Wolfe

Link

Additional links
