Daniel worked with a source to gain access to 1,683 images of Lego heads. He first published a piece to collect data on them, asking readers: What emotion do you see in this face? Later he compiled the thousands of responses he received from our readers and used them to do the opposite, answering: What type of Lego face do you want to see?
What makes this project innovative?
We had few doubts about the research this story is based on; however, as a way to engage our readers and confirm the study's findings, we devised a form where readers could rate the emotions of Lego Minifigures and then view the aggregate results. It's part poll, part quiz, and entirely innovative. In the end we received thousands of responses. They confirmed the study and provided data for us to use in future stories. To top it off, Daniel was able to provide a fun service back to our readers: using machine-learning algorithms, he worked the system in reverse. Give us an emotion and we'll give you a matching Lego head.
What was the impact of your project? How did you measure it?
We measured the success of this project through reader engagement: thousands of readers told us what they saw in our set of Lego faces, and even more came back to see their data put to work.
Source and methodology
We based our initial piece on the work of Dr. Christoph Bartneck. We replicated his study so that it could be tested on a much larger scale. For the follow-up piece we used t-SNE and other machine-learning algorithms to match emotional inputs with the closest-matching Lego head.
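The matching step can be illustrated with a minimal sketch. The actual pipeline was not published, so everything below is an assumption: hypothetical face names, made-up aggregate ratings, and a plain nearest-neighbor lookup in the rating space (t-SNE, as named above, would additionally project those vectors into 2-D for visualization, which is omitted here).

```python
import numpy as np

# Hypothetical aggregate reader ratings: for each Lego head, the fraction
# of respondents who saw each emotion (columns: happy, sad, angry, scared).
EMOTIONS = ["happy", "sad", "angry", "scared"]
FACE_RATINGS = {
    "smiling_classic": np.array([0.90, 0.02, 0.03, 0.05]),
    "frowning_pirate": np.array([0.05, 0.30, 0.60, 0.05]),
    "worried_clerk":   np.array([0.10, 0.35, 0.05, 0.50]),
}

def closest_face(target, faces=FACE_RATINGS):
    """Return the face whose rating vector is nearest (Euclidean) to target."""
    return min(faces, key=lambda name: np.linalg.norm(faces[name] - target))

# "Give us an emotion": a reader asking for an angry face is a one-hot vector.
angry = np.array([0.0, 0.0, 1.0, 0.0])
print(closest_face(angry))  # -> frowning_pirate
```

The key design point is that reader responses turn each face into a point in emotion space, so "working the system in reverse" reduces to a distance query against those points.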