Roll model: how can we predict the new President?

by Lucy Rycroft-Smith, 08 November 2016

Tomorrow is a day that will go down in history. Think carefully about your plans for tomorrow, because it’s likely someone will ask you at some point in the future what you were doing when you found out… and I can’t finish that sentence yet, because right now we’re in the hypnagogic Schrödinger’s stage of the US election, where neither and both of the main candidates (sorry, Gary Johnson) are the President simultaneously. We can picture the headlines. We can imagine the reactions. We can taste the disappointment or the relief. But in between these human narratives, and the rants and exequies and jeremiads across the media, there’s the steady hum of sleek machines doing the very opposite – rationally crunching the numbers. We are living in a time when we know how to do a lot more with data than just polling voters, so in theory we should be able to predict elections with much greater accuracy than ever before. Of course, mathematical models don’t care about feelings, ethics, or politics. They are (when used correctly) a near-faithful mirror to voting behaviour, however irrational. Right?

It’s definitely not that simple. There is territory between fact and opinion. During the final days of the US election, these two worlds have spectacularly collided in a modern Clash of the Titans-style maths-off – because not all models are equal. The choices made, the level of sophistication, and the adjustments to the modelling process will obviously affect the result. Maths, at its purest, cannot lie – but models must be built, evaluated and modified by people, and they rest on assumptions (see Bonini’s Paradox) that are always up for debate. Nate Silver, a prominent political forecaster at FiveThirtyEight, and Ryan Grim, the Washington bureau chief of the Huffington Post, even took their argument to Twitter this week, each heatedly calling the other’s model ‘irresponsible’, ‘guessing’, or claiming it ‘didn’t reflect the data’. The difference is not trivial: Silver’s model gave Clinton a 65% chance of winning; Grim’s put her chances at 98%. Who is right?
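
To see how two reasonable-sounding models can land that far apart, here is a minimal sketch – not either forecaster’s actual method, and the 3-point lead and both error figures are invented for illustration. Treat the final margin as normally distributed around the current poll lead, and watch the headline win probability swing with the assumed size of polling error:

```python
import math

def win_probability(poll_lead, error_sd):
    """P(true final margin > 0), assuming the final margin is
    normally distributed around the current poll lead."""
    z = poll_lead / error_sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical numbers: the same 3-point lead under two different
# assumptions about how far the polls might be off.
lead = 3.0
print(f"Cautious model (sd = 7.5):  {win_probability(lead, 7.5):.0%}")  # ~66%
print(f"Confident model (sd = 1.5): {win_probability(lead, 1.5):.0%}")  # ~98%
```

The real disagreement is subtler than a single standard deviation, of course, but the principle is the same: identical polling averages plus different uncertainty assumptions can produce both headline numbers.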

We could wait for the result. But we could also evaluate the methods used – and judge for ourselves. The problem with polling data is that things change very quickly – data from a few weeks ago rapidly becomes stale, so it is difficult to know how to incorporate it into a model without skewing it completely. Silver used a method of adjusting past poll results in order to project them forward; this is called ‘trendline adjustment’ and is the big sticking point of this debate. Another difference is that Silver’s model assumes a wider range of possible outcomes than Grim’s. If you’re interested in diving deeper, you can read an excellent comparison of the two models here.
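
As a rough illustration of the idea behind trendline adjustment (a toy version, not FiveThirtyEight’s actual procedure, and all the poll numbers below are made up): fit a trend to national poll margins over time, then shift a stale state poll by however much the national picture has moved since that poll was taken.

```python
import numpy as np

# Made-up national poll margins (percentage points), observed at the
# given number of days before the election.
days_out        = np.array([30, 25, 20, 15, 10, 5])
national_margin = np.array([6.0, 5.5, 4.8, 4.0, 3.5, 3.0])

# Fit a linear national trend: margin ≈ a * days_out + b.
a, b = np.polyfit(days_out, national_margin, 1)

def trendline_adjust(state_margin, poll_age_days):
    """Project a stale state poll forward: shift it by how much the
    national margin has moved since the poll was taken."""
    national_shift = -a * poll_age_days  # margin(today) - margin(then)
    return state_margin + national_shift

# A state poll taken 20 days out showed a 5-point lead;
# projected to election eve it is roughly a 2.5-point lead.
print(f"{trendline_adjust(5.0, 20):.1f}")
```

The controversy is over exactly this kind of move: projecting forward lets old data stay useful, but it also means the modeller’s choice of trend, not the raw polls, drives the answer.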

There are also predictions based on market behaviour, economic indicators, and the usual cast of crystal-ball stories where people claim they have never been wrong (there’s a nice list here). Often people evaluate a model simply by its success on the most salient question – forgetting that a good model performs consistently over time, and that judging the success of the model is not just about the binary election result (after all, the models almost universally agree on that), but about the many individual predictions that comprise it. Ultimately, it is about much more than just a ‘right’ or ‘wrong’ model – and there is some truth in what Ezra Klein tweeted last week: ‘Sad reality: we will never really know whose election forecast was right because they are all probabilistic’.
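
One standard way to judge probabilistic forecasts across many predictions is the Brier score: the mean squared difference between the probabilities a forecaster published and what actually happened. A small sketch, with entirely invented state-level probabilities and outcomes:

```python
def brier_score(forecasts, outcomes):
    """Mean squared error between predicted probabilities and outcomes
    (1 = event occurred, 0 = it did not). Lower is better."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical state-level win probabilities from two forecasters,
# scored against made-up actual outcomes.
outcomes  = [1, 0, 1, 0, 1]
cautious  = [0.65, 0.40, 0.70, 0.80, 0.60]
confident = [0.98, 0.10, 0.99, 0.95, 0.05]

print(f"Cautious:  {brier_score(cautious, outcomes):.3f}")
print(f"Confident: {brier_score(confident, outcomes):.3f}")
```

Notice how the bold forecaster scores brilliantly on the calls it gets right and is punished severely on the ones it gets badly wrong – over many predictions, calibration matters as much as confidence.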

However, another fascinating feature of election models is that releasing them publicly changes voting behaviour, in a political pseudo-version of the observer effect (measuring something changes the very thing being measured). This means ever-more sophisticated models take into account their own effect, which is somewhat self-referential of them (imagine a Russian doll of effect-measuring). Voters get complacent – or incensed – when they read the figures, and might behave differently. Something else to bear in mind is that some research suggests simply asking people who they expect to win is surprisingly more accurate than vote-intention polls, prediction markets, quantitative models, and expert judgment.
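
You can caricature that Russian doll in a few lines. In this toy (the ‘complacency’ response below is entirely invented, as is the 3-point lead), publishing a high win probability dents the leader’s effective margin, which changes the forecast, which changes the behaviour… so the model iterates until the number it publishes is consistent with the behaviour it provokes:

```python
import math

def win_prob(lead, sd=6.0):
    """Normal-approximation win probability, as in the earlier sketch."""
    return 0.5 * (1 + math.erf(lead / sd / math.sqrt(2)))

def self_consistent_forecast(underlying_lead, complacency=0.5):
    """Toy observer-effect model: a high published win probability
    shrinks the leader's effective margin (a made-up behavioural
    response), which feeds back into the forecast. Iterate to a
    fixed point where the published number matches the behaviour
    it causes."""
    p = win_prob(underlying_lead)
    for _ in range(100):
        effective_lead = underlying_lead * (1 - complacency * (p - 0.5))
        p_next = win_prob(effective_lead)
        if abs(p_next - p) < 1e-9:
            return p_next
        p = p_next
    return p

print(f"Naive forecast:          {win_prob(3.0):.0%}")
print(f"Self-consistent version: {self_consistent_forecast(3.0):.0%}")
```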
