Project description

Since October 2018, The Economist’s data team has been showcasing the best of its data journalism in its own new print section, Graphic detail. Each week, a data journalist collaborates with a data visualiser to tell a data story primarily through graphics but with textual analysis too. Where suitable, a developer reimagines the print article as an interactive piece online. While our audience of Economist readers is generally pretty chart-literate, we don’t shy away from more unconventional chart types if they benefit the story. We also try to reach new readers on social media through threads, animated gifs and other innovative ways of repackaging the story.

What makes this project innovative?

In our weekly print publication, space is limited and we usually condense our data visualisations to just small charts that tell you one thing very clearly. Now that we have our own page to fill each week, we are developing new techniques to illustrate a data story effectively without overwhelming the reader. We try to feature a large graphic that draws you in and to balance data density so that there is a clear visual trend that stands out, and more nuanced findings that reveal themselves as you spend time exploring the graphic.

What was the impact of your project? How did you measure it?

It is almost impossible to measure accurately how many readers engage with the print section each week, although we are conducting reader surveys to give us an insight into how people are reading the page. We do, however, know that the online incarnation of each page regularly features among the ten most-read pages on our website, and that dwell time far outstrips articles with a similar amount of text, implying that readers are spending time digesting the charts. We have also seen many readers sharing screenshots of our interactive features, such as the "Build an American voter" and "Build a British voter" widgets.

Source and methodology

In general, our data journalists have sought to combine at least two data sources to find an interesting angle that hasn't been covered before. This can involve building web scrapers or liaising with academics, and the data-cleaning and data-combining processes alone can take weeks. They then analyse the data, usually in R, and involve a visualiser as early as possible in the process to ensure that their story will work visually. The sources for each story are given below the chart in print or at the bottom of the page online but they include YouGov, IMDb, Chapel Hill Expert Survey, Expedia and the World Bank. Where possible, we also make our cleaned datasets available on GitHub so that others can explore the data too.
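The data-combining step described above can be sketched in a few lines. This is an illustrative example only, with hypothetical data and invented column names, showing how two sources might be joined on a shared key before analysis:

```python
# Hypothetical sketch: joining two data sources on a shared key.
# The data, column names and join key are invented for illustration.
import pandas as pd

polling = pd.DataFrame({
    "country": ["Britain", "France", "Germany"],
    "support_pct": [42, 35, 48],
})
economic = pd.DataFrame({
    "country": ["Britain", "Germany", "Spain"],
    "gdp_growth": [1.4, 1.1, 2.3],
})

# An inner join keeps only countries present in both sources;
# unmatched rows are a common thing to check during data cleaning.
combined = polling.merge(economic, on="country", how="inner")
print(combined)
```

In practice the real work lies upstream of a line like this: reconciling inconsistent country names, dates and units between sources, which is why the cleaning stage can take weeks.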

Technologies Used

We often use scrapers built in Python or R to get hold of data, and most of our data exploration takes place in R. Where appropriate, we also build models in R. Our visualisers generally use ggplot to visualise the data and export an SVG or PDF that is imported into Illustrator and styled up according to our visual style guide. If we are not creating an interactive version of the piece, we'll use ai2html to ensure that all readers including those on mobile devices get the best possible version of the graphics. If we do create an interactive, we use d3 and ensure that it works well across all devices.

Project members

Dan Rosenheck, Matt McLean, James Fransham, James Tozer, Elliott Morris, Graham Douglas, Evan Hensleigh, Martin Gonzalez

