One of the primary studies continually cited by the research contributors to the PhET project (see this page for a list of articles) is one carried out by Keller, Finkelstein, Perkins and Pollock in 2006. As part of a larger study, they pre- and post-tested a control group, who were given a standard physics demonstration, and an experimental group, who interacted with a simulation instead of the demonstration. Both groups substantially increased their scores on the post-test, with the experimental group outperforming the control by 47% in terms of percentage change.
The raw scores alone call into question the validity of the conceptual questions used, with both groups scoring nearly 60% before the demonstration. No mention was made of whether these results were statistically significant, which is problematic. It is, frankly, a very odd comparison to make in the first place. An enormous body of research suggests that interactive engagement methods are on the order of twice as effective as traditional methods in conceptual physics education (Hake, 1998). A more telling comparison would be between the use of a simulation and another interactive teaching method, such as an inquiry-based experiment.
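For readers unfamiliar with how such pre/post comparisons are usually normalized, Hake's "twice as effective" claim is stated in terms of the average normalized gain, ⟨g⟩ = (post − pre) / (100 − pre). A minimal sketch of that metric, using hypothetical scores (not figures from any of the studies discussed here):

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain <g> = (post - pre) / (100 - pre),
    with both scores expressed as percentages of the test maximum.
    It measures the fraction of the available headroom actually gained."""
    if pre_pct >= 100:
        raise ValueError("pre-test score must be below 100%")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Hypothetical numbers for illustration only: both groups start near 60%.
control_g = normalized_gain(60, 70)       # gains 10 of 40 available points -> 0.25
experimental_g = normalized_gain(60, 75)  # gains 15 of 40 available points -> 0.375
```

The point of the normalization is that a group starting at 60% has far less room to improve than one starting at 30%, so raw percentage-change comparisons between groups with different pre-test scores can mislead.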
Squire, Barnett, Grant and Higginbotham (2004) performed an interesting study with year 8 science students studying electrostatics. They took one teacher's five classes and assigned two to a control group, with the remaining three forming the experimental group. Both groups were tested before and after a unit on electrostatics. The control group received "inquiry-based" teaching (the exact nature of which is left to the reader's imagination) consisting of lectures, experiments and demonstrations, while the experimental group spent most of its class time playing a simulation-based game. For my purposes, the game aspect muddies the water: it is unclear to what degree the motivational pull of the game component changed the nature of the learning experience. The results were remarkable, with the experimental group outperforming the control to a statistically significant degree.
There were major problems with the design of the experiment (no attempt was made to randomize), and the degree to which the game monopolized class time was also puzzling. The overall results were pretty depressing, with relatively small improvements in both groups. It is also unclear how much of the effect came from the simulation game itself versus the opportunities for discussion it created in class. Still, the study points to the possibility of a powerful pedagogical tool in simulation games.
A study by Zacharia and Anderson (2003) used a different strategy. They took a group of pre- and in-service teachers who were not trained in physics and tested the usefulness of simulations as a precursor to doing an experiment, in place of extra practice problems. They found that the combination of lectures and practice problem sets had no (!) statistically significant influence on conceptual understanding. In contrast, when a period of simulation replaced some of the practice problems, a very significant gain in understanding was achieved. After the experiment itself, however, the group without the simulation experience tended to catch up, nearly equalling the test scores of those who had used a simulation. No mention was made of whether those post-experiment differences were statistically significant.
For once, trials were properly randomized, but the sample size was laughably small: 13 students, each taking part in four separate trials. Nonetheless, this is the soundest study I have seen so far comparing simulation with other methods, and although the sample is tiny, it seems likely that simulations can, in the right context, lead to conceptual understanding on par with a real-life physics experiment.
Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.
Keller C., Finkelstein N., Perkins K. and Pollock S. (2006) ‘Assessing the effectiveness of computer simulation in introductory undergraduate environments,’ in McCullough L., Hsu L. and Heron P. (Eds.), AIP Conference Proceedings Volume 883: 2006 Physics Education Research Conference, pp. 121-124, Syracuse, USA: American Institute of Physics.
Squire K., Barnett M., Grant J. and Higginbotham T. (2004), ‘Electromagnetism supercharged!: Learning physics with digital simulation games,’ in Kafai Y., Sandoval W. and Enyedy N. (Eds.), ICLS ’04: Proceedings of the 6th International Conference on Learning Sciences, pp. 513-520, International Society of the Learning Sciences.
Zacharia Z. and Anderson O.R. (2003), ‘The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics,’ American Journal of Physics, vol. 71 no. 6, pp. 618-629.