My portfolio over the last year reflects a big year both for FiveThirtyEight and for me. I designed and built two interactive cartograms for our large election forecast applications (for the U.S. House and U.S. Senate), and designed and built individual team pages for our NBA predictions dashboard, which reflect significant changes to our NBA model. I was also the primary designer for FiveThirtyEight's live election night graphics, which drew our biggest night of traffic since Election Night 2016.
In addition to contributing to these large site-wide projects, I also worked on many projects of my own, or projects where I was the only graphics contributor. The two I am most proud of from the past year are “One Way To Spot A Partisan Gerrymander” and “The Midwest Is Getting Drenched, And It’s Causing Big Problems.” The first is a scroll-based explainer of a metric that can help test for partisan gerrymandering, which is shortly going before the U.S. Supreme Court. I did all the design and development on this project, and it was well received by people in the field and by our readers as a clear explanation of a complicated measure. The second is a story about how people in the Midwest are adapting to increased flooding due to climate change. I wrote and reported the story and produced the static graphics, taking each from raw data to final product. The charts demonstrate a clear trend in flooding that is written about less often than coastal flooding but can have equally devastating impacts.
What makes this project innovative?
Over the last year I have innovated in data visualization form and technique, even while working under tight constraints. For example, I was tasked with designing live election night graphics that would fit in the 300 pixels of our live blog’s right rail and would give both an overall picture of the night and highlight individual races. In the section titled “Chances of winning in every race,” I used a waffle chart in which every race was arranged by its pre-election probability. As races began showing results, their squares filled in with proportional shares of red and blue. As races were called, they went fully red or fully blue. When you hovered over a square, you could see the race and its latest odds. Though simple in form, this waffle chart was able to show 1) which races were reporting results, 2) which way they were leaning, 3) whether they were leaning with or against expectations, and 4) at the end of the night, how well our predictions had performed. And all in 300 pixels. More generally, I think I consistently innovate in finding opportunities to use visuals to explain complicated ideas and measures rather than simply trends or patterns (not that those are not also important to visualize!). From the partisan bias metric in my story “One Way To Spot A Partisan Gerrymander,” to how different measures of statistical significance work in the story “How Shoddy Statistics Found A Home In Sports Research,” to how player ratings factor into our NBA predictions, I am consistently more interested in using visuals to answer the “why” and “how” rather than simply the “what.”
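The fill logic described above is simple enough to sketch. The real graphic was built for FiveThirtyEight's live blog (presumably in JavaScript); this is a hypothetical Python re-implementation, and the names (`Race`, `fill_shares`) are illustrative, not from the actual codebase:

```python
# Hypothetical sketch of the waffle-square fill logic: squares are ordered by
# pre-election probability, fill proportionally as results report, and snap to
# a solid color once a race is called. Names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional


@dataclass
class Race:
    name: str
    pre_election_dem_prob: float   # used only to order squares in the waffle
    dem_votes: int = 0
    rep_votes: int = 0
    called_for: Optional[str] = None  # "D", "R", or None if uncalled


def fill_shares(race: Race) -> tuple:
    """Return (blue_share, red_share) for one square, each between 0 and 1."""
    if race.called_for == "D":
        return (1.0, 0.0)          # called races go fully blue...
    if race.called_for == "R":
        return (0.0, 1.0)          # ...or fully red
    total = race.dem_votes + race.rep_votes
    if total == 0:
        return (0.0, 0.0)          # no results yet: square stays empty
    return (race.dem_votes / total, race.rep_votes / total)


# Squares sorted by pre-election probability, most Democratic-leaning first
races = [Race("A", 0.9, 120, 80), Race("B", 0.4, called_for="R"), Race("C", 0.6)]
ordered = sorted(races, key=lambda r: r.pre_election_dem_prob, reverse=True)
for r in ordered:
    print(r.name, fill_shares(r))
```

A square's position (set before any results arrive) encodes expectation, while its fill encodes the reported result, which is what lets a single 300-pixel chart show leaning with or against expectation at a glance.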
What was the impact of your project? How did you measure it?
I do not have access to analytics at FiveThirtyEight, so I cannot offer quantitative data of either the superficial or the significant variety. But I know that some of the projects I work on are among the most highly trafficked and used pages at FiveThirtyEight, including our NBA predictions and our election forecasts and election night pages. The impact of these pages is not just in clicks but in use: readers using our probabilities to inform their understanding of the world (and maybe also their sports betting). I measure the impact of the smaller projects I’ve worked on differently, in that they are often aimed much more narrowly. We want any reader to get value and understanding from them, but we are also hoping to affect a specific debate or engage with a specific community. One example is the story “How Shoddy Statistics Found A Home In Sports Research,” which was directly cited by some academic journals in explaining why they are no longer accepting submissions that use the suspect measure MBI to validate their results. Another is the story “One Way To Spot A Partisan Gerrymander,” which has been described by experts in the field of gerrymandering and redistricting, such as Dave Wasserman (an analyst) and Nicholas Stephanopoulos (a lawyer), as a clear explainer of a complicated metric that is currently being presented to the Supreme Court. These kinds of responses suggest high engagement with my work and the ability of my visuals to shape opinion and understanding.
Source and methodology
The sources and methodologies vary widely for each project and story that I work on (as you'd expect). We extensively document our sources and methodologies (sometimes even in separate methodology stories). If additional information on any particular story would be useful, please let me know. As an example, I will describe how I collected and worked with the data for the story “The Midwest Is Getting Drenched, And It’s Causing Big Problems.” Each graphic element had a different source. The first chart’s data came from the Minnesota Department of Natural Resources, via my source there (who is also quoted in the story). I did not have to do much analysis on this data because my source was able to provide it fairly cleanly and explain how it was collected and how he had used it for similar purposes. In my experience, talking to someone about data is always better than trying to guess what it is! The second chart was more complex. The data come from the U.S. Geological Survey, which operates streamflow gauges across the country. Up-to-date data from these gauges is made available via an R package, which was extremely helpful to me. Nevertheless, I spent time speaking with an expert on this data from the University of North Dakota to make sure I knew what it could be used for, and to help prune the full set of gauges down to the sample of 200 I used. The gauges I selected all have data going back at least 100 years, which was important for this analysis. I then, using R, found the regression coefficient for peak streamflow at each gauge over that time period. To verify my analysis, our quantitative editor looked over my work and performed the same regression to make sure the results could be repeated. The final graphic’s data came from NOAA. This data is provided as a raster file, and I did not have to perform any analysis to generate the 100-year flood estimates (thankfully).
However, I did use QGIS to add the color scale and contours so that the actual amounts could be read on the map. I was able to check my map against the “finished” version provided by NOAA to make sure the ranges were correct. Before using this data I also spoke to a data analyst at NOAA. I was not worried about analyzing the raw data incorrectly in this case, but did want to make sure I didn’t mischaracterize it.
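The per-gauge trend analysis above can be sketched briefly. The original work was done in R against USGS peak-streamflow records; this is a Python sketch with synthetic data (the true underlying trend and gauge record are made up for illustration):

```python
# Illustrative version of the per-gauge analysis: fit a least-squares line to
# annual peak streamflow vs. year, and report the slope (the trend) for each
# gauge. Data below is synthetic; the original analysis used USGS records in R.
import numpy as np


def peak_flow_trend(years, peaks):
    """Least-squares slope of annual peak streamflow vs. year (cfs per year)."""
    slope, _intercept = np.polyfit(years, peaks, deg=1)
    return slope


# Synthetic gauge with 100 years of record: peaks rise ~10 cfs/year plus noise
rng = np.random.default_rng(0)
years = np.arange(1919, 2019)
peaks = 5000 + 10 * (years - 1919) + rng.normal(0, 200, size=years.size)

# The fitted slope should land close to the true trend of 10 cfs/year
print(round(peak_flow_trend(years, peaks), 1))
```

A positive slope at a gauge indicates that annual peak flows there have been rising over the period of record, which is the trend the chart in the story visualizes across the sampled gauges.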
Ella Koeze. I had many collaborators, depending on the specific project at hand. I could never have accomplished so much this year without the help of everyone at FiveThirtyEight.