11 May 2018
As seems to be the case with everything in education, there are two opposing schools of thought when it comes to teaching probability. You might not even be aware of it, but stats education has its own version of the 'trad vs prog' debate: the classical versus the frequentist approach to probability. While most teachers would agree that developing both calculation and empiricism is important in teaching probability, which comes first or appears most often will shape students' conception of the topic, and the two approaches contain potentially conflicting ideas.
Classical probability (Batanero et al., 2005) starts from the idea that the probability of an event can be defined mathematically for any experiment in which a complete set of equally likely outcomes can be established. This is a useful model and can be confined to situations easily reproduced in a classroom with dice and coins, but it has two major drawbacks. Firstly, it is very difficult to relate to the randomness and probability students will have experienced in the real world, because it necessarily limits examples to highly artificial and simplistic situations. Secondly, it may reinforce some of the misconceptions associated with the 'law of small numbers': the idea that probability is deterministic in the short run. It can be very difficult to get students to let go of the idea that 'I should throw exactly one two in every six rolls of a die'.
Frequentist probability, on the other hand, begins with the idea of probability values emerging from some experiment in which the proportions of each outcome are recorded. This has the advantage that it can be applied to situations far beyond students' abilities to calculate with classical probabilities, and it demonstrates with every additional trial the unpredictability of chance in the short run and its enduring stability in the long run. Unfortunately, it is much harder to work with a probability whose relative frequency is 2796/5992, and while confident mathematicians will happily model this using a value of 1/2, this kind of thinking is challenging for students.
Clearly a balance between the two approaches is needed, and in particular tasks that relate the two ideas by drawing on the strengths and fundamental ideas of both. A simple activity that does this well involves flipping a coin multiple times and exploring the behaviour of the graph created by plotting the proportion of heads against the number of flips. For a small number of flips the graph will be very spiky, but it will stabilise around the value predicted by classical probability as the number of flips increases. Happily, while it always gets close to this value eventually, it is easy to see that some variation remains even for large numbers of trials. Students could create their own graphs by conducting the experiment with a coin for a small number of throws, before moving on to a simulation that allows very large numbers of trials to be carried out almost instantly.
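For readers who prefer a scripted alternative to a spreadsheet, the running-proportion graph described above can be sketched in a few lines of Python (the function name and structure here are my own, not from any particular teaching resource):

```python
import random

def running_proportions(n_flips, seed=None):
    """Flip a fair coin n_flips times and return the proportion of
    heads after each flip (the y-values of the graph described above)."""
    rng = random.Random(seed)
    heads = 0
    proportions = []
    for i in range(1, n_flips + 1):
        heads += rng.randint(0, 1)  # 1 = heads, 0 = tails
        proportions.append(heads / i)
    return proportions

props = running_proportions(10_000, seed=1)
# Early values jump around ('spiky'); later values settle near 0.5.
print(props[:10])
print(props[-1])
```

Plotting `proportions` against the flip number reproduces the spiky-then-stable shape; re-running without a fixed seed gives a different spiky start each time, which is itself a useful talking point.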
In Excel this can be done using the function =RANDBETWEEN(0,1) to generate either a 1 or a 0 which can be assigned to heads or tails respectively. With a bit of creative cell work (see below) the simulation can be set up and refreshed by simply hitting F9.
Once set up, the number of trials can easily be increased by dragging the formulae down to fill more rows and re-inserting the chart for the new data. This will allow students to get a sense of the long-term behaviour, as well as a good idea of the minimum number of trials needed for the pattern of stability to emerge.
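One way to probe that "minimum number of trials" question more systematically is to repeat the whole experiment many times at each sample size and watch how the spread of outcomes shrinks. A rough Python sketch (my own illustration, not part of the Excel activity):

```python
import random

def spread_of_proportions(n_flips, n_repeats=50, seed=0):
    """Repeat the coin-flip experiment n_repeats times and return the
    range (max - min) of the final proportion of heads across repeats."""
    rng = random.Random(seed)
    finals = []
    for _ in range(n_repeats):
        heads = sum(rng.randint(0, 1) for _ in range(n_flips))
        finals.append(heads / n_flips)
    return max(finals) - min(finals)

for n in (10, 100, 1000, 10000):
    print(n, round(spread_of_proportions(n), 3))
```

The spread shrinks markedly as the number of flips grows, which gives students a concrete, numerical handle on long-run stability rather than just an impression from one graph.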
As an alternative, I have coded a simple app, available here, that allows you to do the same thing. By moving the slider, you can increase or decrease the number of trials more easily than in the Excel simulation.
Reference: Batanero, C., et al. (2005). The nature of chance and probability. In Exploring Probability in School. Springer.