Physics simulations as a tool for interactive engagement

Developers of physics simulations promote them as a valuable tool for interactive engagement (IE) (Wieman & Perkins, 2005), a framework for teaching physics that is well established in the literature as superior to traditional methods (Hake, 1998; Redish & Steinberg, 1999).  There have been many studies on the effectiveness of physics simulations, yet their status as a tool for promoting conceptual change in physics courses remains controversial.  Unfortunately, the majority of investigations have been case studies, with relatively few experimental studies undertaken.  While the type of simulation and the contexts in which it is used are certainly important, the question remains whether simulations can be used effectively in support of interactive engagement teaching strategies.  I argue that they can, as a tool for aiding observation, but that the implementation and possible side effects of their use need to be considered carefully.

The term interactive engagement refers to a range of different teaching techniques that use “heads-on (always) and hands-on (usually) activities” (Hake, 1998, p. 65) and which emphasize the construction of knowledge by students and the teacher’s role as a facilitator of learning.  They also directly address students’ pre-existing non-scientific conceptions (Knight, 2004).  Hake (1998) performed a meta-study of introductory physics courses, finding that traditional methods (instructor-centric teaching with lectures, tutorials and laboratories) tended to produce remarkably consistent and embarrassingly poor results on the Force Concept Inventory (FCI), a test developed by physics education researchers to probe understanding of Newtonian mechanics.  His measure of comparison was the percentage of total possible improvement, or normalized gain[i].  He found that traditional methods produced a consistent normalized gain of approximately 23% on the FCI after a semester of instruction in mechanics (e.g. a student entering the class with a pre-test score of 30% would, on average, improve to 46%).  This result was independent of pre-test scores and instructor.  IE methods consistently led to normalized gains in the region of 30-70%, with an average of 48% – a two standard deviation effect.  A further problematic feature of traditional methods is that they consistently promote counterproductive beliefs about physics.  Redish and Steinberg (1999) used a test designed to discover student attitudes on a scale of “independence/authority, coherence/pieces, and concepts/equations” (p. 29) and found that a single semester of a traditional physics course led to a regression away from “expert” (p. 33) beliefs of approximately one standard deviation.  Meanwhile, IE methods resulted in improvements of 2.5 standard deviations.

IE physics courses are typically ICT-intensive, with the most common technologies being data loggers, video-analysis software, motion detectors, force probes and computers (Knight, 2004).  For resource-poor physics classrooms, simulations appear to be an attractive alternative to purchasing extra lab equipment and developing new activities.  Or are they?

The literature on the effectiveness of physics simulations is full of controversy.  Some studies have suggested that physics simulations are powerful agents of conceptual change (Keller, Finkelstein, Perkins & Pollock, 2006; Squire, Barnett, Grant & Higginbotham, 2004; Zacharia & Anderson, 2003), while others have shown no benefit over alternative methods (Ronen & Eliahu, 2000; Steinberg, 2000).  The vast majority of these studies suffer from significant research design flaws, e.g. failing to adequately isolate the method of instruction in Squire, Barnett, Grant and Higginbotham (2004), or being conducted on too small a scale to detect measurable effects, as in Zacharia and Anderson (2003).  Effectiveness studies have almost exclusively focused on comparisons with traditional methods.  An exception is Steinberg (2000), who found no difference in effectiveness compared with IE methods.  This study also suggested – based on casual, qualitative observations – that simulations may promote authoritarian views of physics.  Unfortunately, no quantitative research has investigated this issue.

A recent study by Trundle and Bell (2010) used a quasi-experimental design to test the effectiveness of computer simulations in teaching pre-service teachers about lunar phases.  They compared three groups: the first made observations of nature, the second used computer simulations for observation, and the third combined the two.  Observations were supported by a research-backed IE teaching method.  They found no measurable differences in conceptual understanding between these groups.  While at first this seems a disappointing result, it is strong evidence that simulations, when used to support well-researched IE-style teaching methods, can replace other types of observation that may be difficult or impossible in a resource-limited environment.  It is also worth noting that all three types of observation resulted in the average study participant achieving mastery of the concept of lunar phases.

At this point it is worth considering how, from a theoretical perspective, conceptual change is brought about in physics.  It is well understood that students come into an introductory physics class with very strong alternative (non-scientific) conceptions of physical processes (Halloun & Hestenes, 1985a), which are often very similar to various non-scientific beliefs that prevailed through much of history (Halloun & Hestenes, 1985b).  A brief look at human history indicates how difficult it is to break these conceptions.  A first step is to use these conceptions to make a prediction about a physical phenomenon, followed by a careful observation.  This will tend to put students into a state of cognitive dissonance (Tao & Gunstone, 1999) when they attempt to explain their observation, which can in turn lead to the adoption of new, scientific conceptions.  This is the primary mechanism of IE techniques (Wells, Hestenes & Swackhamer, 1995).  Tao and Gunstone (1999) found that when physics simulations are used to induce cognitive dissonance, they tend to promote conceptual change, but that the change is difficult to maintain and generalize.

Used in isolation, physics simulations are unlikely to be any more effective than traditional methods.  They are, however, a technology that appears to be very good at promoting the careful observation of visualisations of physical phenomena.  It seems likely that in this role they can play a very important part in the “predict-observe-explain” cycle (Tao & Gunstone, 1999, p. 859; Trundle & Bell, 2010).  In an IE classroom, their use fits naturally as an activity after students have been asked to make a prediction about the physical phenomenon in question.  The observation phase can then be followed by discussions where alternative conceptions are explicitly confronted, and by collaboration among students to build new models to explain their observations, which can then be tested.  Where possible, other visualisations of phenomena should also be used to support connections to the physical world and aid generalization of the concept.  Not all simulations are created equal, and they need to be measured against criteria assessing their ability to confront common alternative conceptions and to support considered observation.  We should also assess their likelihood of promoting authoritarian views of physics, and avoid those that encourage a rapid-fire trial-and-error approach geared towards obtaining the “correct” answer.  Clearly, our understanding of the role physics simulations can play in IE teaching methods needs further development, especially for physics topics with a known high resistance to change.  Finally, further research is needed on how physics simulations affect attitudes towards physics and whether or not they undermine the beneficial effects IE methods have on these attitudes.


Hake R (1998) “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,” American Journal of Physics, 66(1):64-74.

Halloun I & Hestenes D (1985a) “The initial knowledge state of college physics students,” American Journal of Physics, 53(11):1043-1055.

Halloun I & Hestenes D (1985b) “Common sense concepts about motion,” American Journal of Physics, 53(11):1056-1065.

Keller C, Finkelstein N, Perkins K & Pollock S (2006) “Assessing the effectiveness of computer simulation in introductory undergraduate environments,” in McCullough L, Hsu L & Heron P (Eds.), AIP Conference Proceedings Volume 883: 2006 Physics Education Research Conference, 121-124, Syracuse, USA: American Institute of Physics

Knight R (2004) Five Easy Lessons: strategies for successful physics teaching, San Francisco, USA: Addison Wesley

Redish E & Steinberg R (1999) “Teaching physics: figuring out what works,” Physics Today, 52:24-30.

Ronen M & Eliahu M (2000) “Simulation – A bridge between theory and reality: The case of electric circuits,” Journal of Computer Assisted Learning, 16:14-26.

Squire K, Barnett M, Grant J & Higginbotham T (2004) “Electromagnetism supercharged!: Learning physics with digital simulation games” in Kafai Y, Sandoval W, Enyedy N (Eds.), ICLS ’04 Proceedings of the 6th international conference on Learning sciences, 513-520, International Society of the Learning Sciences.

Steinberg R (2000) “Computers in teaching science: to simulate or not to simulate?” American Journal of Physics, 68(7):S37-41.

Tao P & Gunstone R (1999) “The Process of Conceptual Change in Force and Motion during Computer-Supported Physics Instruction,” Journal of Research in Science Teaching, 36(7):859-882.

Trundle K & Bell R (2010) “The use of a computer simulation to promote conceptual change: a quasi-experimental study,” Computers & Education, 54:1078-1088.

Wells M, Hestenes D & Swackhamer G (1995) “A modelling method for high school physics instruction,” American Journal of Physics, 63(7):606-619.

Wieman C & Perkins K (2005) “Transforming Physics Education,” Physics Today, 58:36-48.

Zacharia Z & Anderson O (2003) “The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics,” American Journal of Physics, 71(6):618-629.

[i] The normalized gain is defined as (post-test % – pre-test %)/(100% – pre-test %).
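The footnote's formula is simple enough to compute directly.  The sketch below (the function name `normalized_gain` is my own, not from Hake) reproduces the worked example from the text, where a pre-test score of 30% and a post-test score of 46% correspond to Hake's typical traditional-course gain of roughly 23%:

```python
def normalized_gain(pre_pct: float, post_pct: float) -> float:
    """Hake's normalized gain: the fraction of the total possible
    improvement actually achieved, given pre/post scores in percent."""
    if pre_pct >= 100:
        raise ValueError("pre-test score must be below 100%")
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# A student moving from 30% to 46% on the FCI:
g = normalized_gain(30, 46)
print(round(g, 2))  # 0.23
```

Because the gain is normalized by the room left to improve, it allows courses with very different pre-test scores to be compared on a common scale, which is what makes Hake's traditional-versus-IE comparison meaningful.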