Physics simulations as a tool for interactive engagement

Developers of physics simulations promote them as a valuable tool for interactive engagement (IE) (Wieman & Perkins, 2005), a framework for teaching physics that is well established in the literature as superior to traditional methods (Hake, 1998; Redish & Steinberg, 1999).  There have been many studies of the effectiveness of physics simulations, yet their status as a tool for promoting conceptual change in physics courses remains controversial.  Unfortunately, the majority of investigations have been case studies, with relatively few experimental studies undertaken.  While the type of simulation and the contexts in which simulations are used are certainly important, the question remains whether they can be used effectively in support of interactive engagement teaching strategies.  I argue that simulations can be used effectively in support of IE methods as a tool for aiding observation, but that the implementation and possible side effects of their use need to be considered carefully.

The term interactive engagement refers to a range of teaching techniques that use “heads-on (always) and hands-on (usually) activities” (Hake, 1998, p. 65), emphasize the construction of knowledge by students, and cast the teacher as a facilitator of learning. They also directly address students’ pre-existing non-scientific conceptions (Knight, 2004).  Hake (1998) performed a meta-study of introductory physics courses, finding that traditional methods (instructor-centric teaching with lectures, tutorials and laboratories) tended to produce remarkably consistent and embarrassingly poor results on the force concept inventory (FCI), a test developed by physics education researchers to probe understanding of Newtonian mechanics.  His measure of comparison was the percentage of total possible improvement, or normalized gain[i].  He found that traditional methods produced a consistent normalized gain of approximately 23% on the FCI (e.g. an average student entering the class with a score of 30% would improve to 46%) after a semester of instruction in mechanics.  This result was independent of pre-test scores and instructor. IE methods consistently led to normalized gains in the region of 30-70%, with an average of 48%, a two-standard-deviation effect.  A further problem with traditional methods is that they consistently promote counterproductive beliefs about physics.  Redish and Steinberg (1999) used a test designed to probe student attitudes on a scale of “independence/authority, coherence/pieces, and concepts/equations” (p. 29) and found that a single semester of a traditional physics course produced a regression of approximately one standard deviation away from “expert” (p. 33) beliefs.  Meanwhile, IE methods resulted in improvements of 2.5 standard deviations.

IE physics courses are typically ICT intensive, with the most common technologies being data loggers, video analysis software, motion detectors, force probes and computers (Knight, 2004).  For resource-poor physics classrooms, simulations appear to be an attractive alternative to the purchase of extra lab equipment and the development of new activities.  Or are they?

The literature on the effectiveness of physics simulations is full of controversy.  Some studies have suggested that physics simulations are powerful agents of conceptual change (Keller, Finkelstein, Perkins & Pollock, 2006; Squire, Barnett, Grant & Higginbotham, 2004; Zacharia & Anderson, 2003), while others have shown no benefit over alternative methods (Ronen & Eliahu, 2000; Steinberg, 2000).  The vast majority of these studies suffer from significant research design flaws, e.g. failing to adequately isolate the method of instruction in Squire, Barnett, Grant and Higginbotham (2004), or being conducted on far too small a scale to detect measurable effects, as in Zacharia and Anderson (2003).  Effectiveness studies have almost exclusively focused on comparisons with traditional methods.  An exception is Steinberg (2000), who found no difference in effectiveness compared with IE methods.  This study also suggested – based on casual, qualitative observations – that simulations may promote authoritarian views of physics.  Unfortunately, no quantitative research has investigated this issue.

A recent study by Trundle and Bell (2010) used a quasi-experimental design to test the effectiveness of computer simulations in teaching pre-service teachers about lunar phases. They compared three groups, the first of which used observations of nature, the second computer simulations for observation, and the third a combination of the two.  Observations were supported by a research-backed IE teaching method.  They found no measurable differences in conceptual understanding between these groups.  While at first this seems a disappointing result, it is strong evidence that simulations, when used to support well-researched IE-style teaching methods, can replace other types of observation that may be difficult or impossible in a resource-limited environment.  It is also worth noting that all three types of observation resulted in the average study participant achieving mastery of the concept of lunar phases.

At this point it is worth considering how, from a theoretical perspective, conceptual change is brought about in physics.  It is well understood that students come into an introductory physics class with very strong alternative (non-scientific) conceptions of physical processes (Halloun & Hestenes, 1985a), which are often very similar to non-scientific beliefs that prevailed through much of history (Halloun & Hestenes, 1985b).  A brief look at human history indicates how difficult these conceptions are to break.  A first step is to use these conceptions to make a prediction about a physical phenomenon, followed by a careful observation.   This tends to put students into a state of cognitive dissonance (Tao & Gunstone, 1999) when they attempt to explain their observation, which can in turn lead to the adoption of new, scientific conceptions.  This is the primary mechanism of IE techniques (Wells, Hestenes & Swackhamer, 1995).  Tao and Gunstone (1999) found that when physics simulations are used to induce cognitive dissonance, they tend to promote conceptual change, but the change is difficult to maintain and generalize.

Used in isolation, physics simulations are unlikely to be any more effective than traditional methods.  They are, however, a technology that appears to be very good at promoting the careful observation of visualisations of physical phenomena.  It seems likely that in this role they can play a very important part in the “predict-observe-explain” cycle (Tao & Gunstone, 1999, p. 859; Trundle & Bell, 2010).  In an IE classroom, their use fits naturally as an activity after students have been asked to make a prediction about the physical phenomenon in question.  The observation phase can then be followed by discussions in which alternative conceptions are explicitly confronted, and by collaboration among students to build new models that explain their observations and can then be tested.  Where possible, other visualisations of phenomena should also be used to support connections to the physical world and to aid generalization of the concept.  Not all simulations are created equal, and they need to be measured against criteria assessing their ability to confront common alternative conceptions and to support considered observation.  We should also assess their likelihood of promoting authoritarian views of physics, and avoid those that encourage a rapid-fire trial-and-error approach geared towards obtaining the “correct” answer.  Clearly, our understanding of the role physics simulations can play in IE teaching methods needs further development, especially for physics topics with a known high resistance to change.  Finally, further research is needed on how physics simulations affect attitudes towards physics and whether or not they undermine the beneficial effects IE methods have on those attitudes.

References

Hake R (1998) “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,” American Journal of Physics, 66(1):64-74.

Halloun I & Hestenes D (1985a) “The initial knowledge state of college physics students,” American Journal of Physics, 53(11): 1043-1055

Halloun I & Hestenes D (1985b) “Common sense concepts about motion,” American Journal of Physics, 53(11): 1056-1065

Keller C, Finkelstein N, Perkins K & Pollock S (2006) “Assessing the effectiveness of computer simulation in introductory undergraduate environments,” in McCullough L, Hsu L & Heron P (Eds.), AIP Conference Proceedings Volume 883: 2006 Physics Education Research Conference, 121-124, Syracuse, USA: American Institute of Physics

Knight R (2004) Five Easy Lessons: strategies for successful physics teaching, San Francisco, USA: Addison Wesley

Redish E & Steinberg R (1999) “Teaching physics: figuring out what works,” Physics Today, 52:24-30

Ronen M & Eliahu M (2000) “Simulation – A bridge between theory and reality: The case of electric circuits,” Journal of Computer Assisted Learning, 16:14-26.

Squire K, Barnett M, Grant J & Higginbotham T (2004) “Electromagnetism supercharged!: Learning physics with digital simulation games” in Kafai Y, Sandoval W, Enyedy N (Eds.), ICLS ’04 Proceedings of the 6th international conference on Learning sciences, 513-520, International Society of the Learning Sciences.

Steinberg R (2000) “Computers in teaching science: to simulate or not to simulate?” American Journal of Physics, 68(7):S37-41

Tao P & Gunstone R (1999) “The Process of Conceptual Change in Force and Motion during Computer-Supported Physics Instruction,” Journal of Research in Science Teaching, 36(7):859-882.

Trundle K & Bell R (2010) “The use of a computer simulation to promote conceptual change: a quasi-experimental study,” Computers & Education, 54:1078-1088

Wells M, Hestenes D & Swackhamer G (1995) “A modelling method for high school physics instruction,” American Journal of Physics, 63(7): 606-619

Wieman C & Perkins K (2005) “Transforming Physics Education,” Physics Today, 58:36-48

Zacharia Z & Anderson O (2003) “The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics,” American Journal of Physics, 71(6):618-629.


[i] The normalized gain is defined as (Post-test % – Pre-test %)/(100% – Pre-test %)
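To make the footnote concrete, here is a minimal Python sketch (my own illustration, not anything from Hake's paper) that computes the normalized gain and reproduces the 30% → 46% example used above.

```python
# Minimal sketch of the normalized gain defined in footnote [i]:
# g = (post% - pre%) / (100% - pre%), i.e. the fraction of the possible
# improvement that was actually achieved.

def normalized_gain(pre_percent: float, post_percent: float) -> float:
    return (post_percent - pre_percent) / (100.0 - pre_percent)

# Worked example from the text: a student entering at 30% and leaving at 46%
# has achieved roughly the 23% gain typical of traditional instruction.
print(normalized_gain(30, 46))      # ~0.23

# For comparison, Hake's average IE gain of 48% from the same starting point
# would correspond to a post-test score of about 30 + 0.48 * (100 - 30).
print(30 + 0.48 * (100 - 30))       # 63.6
```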


It’s all about engagement and context matters!

Steinberg (2000) observed all sorts of variable behavior in how simulations were used.  Some students used a trial-and-error method, with virtually no cognitive input, in order to find the correct answer.  This description reminded me of the familiar sight of students keeping a finger in the back-of-the-book solutions to a set of textbook problems while employing the plug-n-chug method so common in introductory physics classes (for anyone unfamiliar, this refers to students plugging the numbers given in a formulaically written question into the correct formula, with little to no conceptual understanding).  What I’m getting at is that there is a common theme in all of the research I’ve read on this topic: simulations are successful as a learning tool to the extent that they engage the student in an active learning process, a view supported by Hake’s (1998) substantial meta-study of interactive engagement versus traditional teaching methods.

Physics students face a perhaps uniquely challenging task: they must confront powerful personal misconceptions about the way the universe works.  The extent to which simulations are used to force the breakdown of these misconceptions through the powerful cycle of “predict-observe-explain” (Tao and Gunstone, 1999, p. 859) seems largely to determine their usefulness.  Simulations are certainly not unique in their potential to achieve this type of conceptual change, but they seem to offer promise as a powerful tool towards that end when used properly.

Tao and Gunstone (1999) attempted to explain the process by which conceptual change is attained through the use of computer simulations.  They regularly interviewed 12 Year 10 physics students throughout a unit on force and motion about their conceptual understanding of these topics and their interactions with the simulations.  Their main finding was that conceptual change is both very fragile and context dependent.  Students may accept a new explanation for a given scenario when confronted with the failure of their previous idea, but may revert to their old explanation at a later time or fail to carry the conceptual change over to a new context.

How does this fit in with the other research I’ve discussed?  It may go some way to explaining how difficult it is to attain broad conceptual change in physics education, for one.  It also raises other possible problems with substituting simulations for real-world experiments.  If conceptual change really is so context dependent, might it be dangerous to base a lot of conceptual physics education on a computer-based simulation?  In a world where students already fail to see the relevance of classroom physics to their everyday lives, could we be widening this gap of perceived relevance by using a tool so detached from everyday experience?  Then again, maybe it couldn’t get much worse!

References

Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.

Steinberg R. (2000), ‘Computers in teaching science: to simulate or not to simulate?’ American Journal of Physics, vol. 68 no. 7, pp. S37-41.

Tao P.K. and Gunstone R.F. (1999), ‘The Process of Conceptual Change in Force and Motion during Computer-Supported Physics Instruction,’ Journal of Research in Science Teaching, vol. 36 no. 7, pp. 859-882.

Playing devil’s advocate: claims that simulations don’t lead to conceptual change

There have been a few studies suggesting no benefit to simulations in terms of conceptual understanding.  Ronen and Eliahu (2000) performed a study on Year 9 students studying electric circuits.  Four classes took part; two of them were given an introduction to a program designed to present simulation-based activities and were given the program to take home.  Surprisingly, no record of the students’ actual use of the program was kept.  When it came time to post-test the groups, it should have come as no surprise that no differences between the groups were observed.  In a true Mythbusters moment (you know, when they pull out the C4 because the original way of blowing something up has failed), the researchers bizarrely decided to run a completely different trial in which students could use the simulation software to test their ideas before being assessed on the function of their circuit design.  The study wasn’t allowed to fail.

A much more interesting study was done by Steinberg (2000).  He took three tutorial groups from the same university-level introductory physics class, all of which used methods informed by physics education research.  Two tutorials used computer simulations, while the third used paper-and-pencil techniques in a similar fashion.  Students were pre- and post-tested on their understanding of air resistance.  No difference in conceptual understanding was seen between the simulation and paper-and-pencil groups, though both saw very significant gains in their conceptual understanding of air resistance.

This finding really shouldn’t come as any surprise: of course it matters what teaching method we are comparing with.  There is nothing magical about simulations! This article brought up a number of really interesting issues, which I will expand on in my next and final post on this topic.

References

Ronen M. and Eliahu M. (2000), ‘Simulation – A bridge between theory and reality: The case of electric circuits,’ Journal of Computer Assisted Learning, vol. 16 pp. 14-26.

Steinberg R. (2000), ‘Computers in teaching science: to simulate or not to simulate?’ American Journal of Physics, vol. 68 no. 7, pp. S37-41.

What sort of evidence is there on the effectiveness of simulations?

One of the primary studies continually brought up by the research contributors to the PhET project is one carried out by Keller, Finkelstein, Perkins and Pollock in 2006. As part of a larger study, they pre- and post-tested a control group who were given a standard physics demonstration, as well as an experimental group who, instead of the demonstration, interacted with a simulation. Both groups substantially increased their scores on the second testing, with the experimental group outperforming the control by 47% in terms of percentage change.

The raw scores alone call into question the validity of the conceptual questions used, with both groups scoring nearly 60% before the demonstration. No mention was made of whether or not these results were statistically significant, which is problematic. It, frankly, seems like a very odd comparison to make in the first place. There has been an enormous amount of research suggesting that interactive engagement methods are on the order of twice as effective in conceptual physics education as traditional methods (Hake, 1998). A more telling comparison might be made between the use of a simulation and another interactive teaching method, such as an inquiry-based experiment.
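As an aside on the missing significance test, the sketch below shows, with entirely made-up numbers (not data from Keller et al., 2006), the kind of check one would want to see reported: comparing the two groups’ gains with a Welch t-test. The group sizes, spreads and mean gains are all hypothetical; only the roughly 60% pre-test level is taken from the discussion above.

```python
# Entirely hypothetical data -- NOT from Keller et al. (2006) -- illustrating
# the kind of significance check the paper could have reported.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 80  # hypothetical class sizes

# Hypothetical pre/post percentages; both groups start near 60%, as in the study.
pre_demo = rng.normal(60, 10, n)
pre_sim = rng.normal(60, 10, n)
post_demo = pre_demo + rng.normal(8, 10, n)   # hypothetical demonstration-group gain
post_sim = pre_sim + rng.normal(12, 10, n)    # hypothetical simulation-group gain

gain_demo = post_demo - pre_demo
gain_sim = post_sim - pre_sim

# Welch's t-test: is the difference in mean gains larger than chance would give?
t, p = stats.ttest_ind(gain_sim, gain_demo, equal_var=False)
print(f"mean gains: sim {gain_sim.mean():.1f}, demo {gain_demo.mean():.1f}, p = {p:.3f}")
```

With real data one would also want effect sizes and, ideally, normalized gains rather than raw percentage changes.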

Squire, Barnett, Grant and Higginbotham (2004) performed an interesting study with Year 8 science students studying electrostatics. They took one teacher’s five classes and assigned two to a control group, with the remaining three put into the experimental group. Both groups were pre- and post-tested on a unit on electrostatics. The control group received “inquiry based” teaching (the exact nature of which is left to the reader’s imagination) consisting of lectures, experiments and demonstrations, while the experimental group spent most of its class time playing a simulation-based game. For my purposes, the game aspect muddies the water, as it is unclear to what degree the motivational aspects of the game component may have changed the nature of the learning experience. The results of the study were remarkable, with the experimental group outperforming the control to a statistically significant degree.

There were major problems with the design of the experiment (no attempt was made to randomize), and the degree to which the game monopolized class time was also puzzling. The overall results were pretty depressing, with relatively small improvements in both groups. It is also unclear how much of an effect the simulation game itself had versus the opportunities for discussion it created in the class. The study certainly points to simulation games as a potentially powerful pedagogical tool.

A study by Zacharia and Anderson (2003) used a different strategy. They took a group of pre- and in-service teachers who were not trained in physics and investigated the usefulness of simulations as a precursor to a laboratory experiment, in place of extra practice problems. They found that the combination of lectures and practice problem sets had no (!) statistically significant influence on conceptual understanding. In contrast, when a period with a simulation replaced some of the practice problems, a very significant difference in understanding was achieved. After the experiment, however, the group without the simulation experience tended to catch up in conceptual understanding, nearly equaling the test scores of those who had used a simulation. No mention was made of whether or not those post-experiment differences were statistically significant.

Trials were, for once, properly randomized, but the sample size was laughably small, with 13 students taking part, each in four separate trials. Nonetheless, this is the soundest study I have seen so far comparing simulations with other methods, and although the sample size is tiny, it seems likely that simulations can, in the right context, lead to conceptual understanding on par with a real-life physics experiment.

References

Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.

Keller C., Finkelstein N., Perkins K. and Pollock S. (2006) ‘Assessing the effectiveness of computer simulation in introductory undergraduate environments,’ in McCullough L., Hsu L. and Heron P. (Eds.), AIP Conference Proceedings Volume 883: 2006 Physics Education Research Conference, pp. 121-124, Syracuse, USA: American Institute of Physics.

Squire K., Barnett M., Grant J. and Higginbotham T. (2004), ‘Electromagnetism supercharged!: Learning physics with digital simulation games’ in Kafai Y., Sandoval W. and Enyedy N. (Eds.), ICLS ’04 Proceedings of the 6th international conference on Learning sciences, pp. 513-520, International Society of the Learning Sciences.

Zacharia Z. and Anderson O.R. (2003), ‘The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics,’ American Journal of Physics, vol. 71 no. 6, pp. 618-629.

What the promoters of simulations have to say

The use of simulations in conceptual physics education has its diehard supporters. In an article (somewhat ostentatiously) titled “Transforming Physics Education,” Wieman and Perkins (2005) promote their use by arguing that simulations can effectively make students more “expert-like” in their approaches to physics. I take this to mean that they help to develop critical problem-solving skills, though this isn’t made entirely clear. They argue that a traditional physics education, on the other hand, tends to make students more “novice-like.” They casually mention that their research indicates – without referencing any particular study – that simulations outperform experiments with real equipment. One serious point of confusion here is that they apparently don’t differentiate between experiments and demonstrations (my point being that one is interactive while the other is passive, which makes a huge difference in student learning outcomes). They then try to explain the unsubstantiated “gains” of simulations over experiments by appealing to the idea of “cognitive load,” arguing that simulations impose a much smaller cognitive burden, which in turn enhances learning opportunities. Please excuse my skepticism.

This does bring up an interesting point. What exactly is the point of a physics experiment in the classroom? Is it solely to enhance conceptual knowledge and confront students’ misconceptions? On this level it could indeed be true (though I haven’t seen any evidence to support this) that simulations outperform experiments, or more likely, demonstrations. On the other hand, aren’t there a lot of other very important learning goals and skills that are part of the point of physics experiments? Wieman and Perkins note in their argument for simulations over experiments that students often waste a great amount of time worrying about the color of the insulation on the wires they use in an experiment on circuits. So are we really saying that the point of a physics experiment has nothing to do with teaching the type of independent thinking skills that might be used to figure out that the color of the plastic insulation isn’t significant to the results of a physics experiment?

References

Wieman C. and Perkins K. (2005) ‘Transforming Physics Education,’ Physics Today, vol. 58, pp. 36-48.

Effectiveness of simulations in conceptual physics education: What’s a simulation and what’s the point?

To get this off the ground, here is an example of an interactive simulation (thanks, PhET!):

The simulation is designed to illustrate the motion of a simple pendulum.  You can play with all the different variables, including the length of the pendulum, the mass and the gravitational pull, to see how each affects the motion of the pendulum.  You can even try larger-amplitude swings and observe anharmonic motion.

This is a pretty typical example of an interactive physics simulation.  You can play with lots of different variables and see what the physics throws at you.
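To give a feel for what such a simulation is doing under the hood, here is a minimal Python sketch (my own toy integrator, not PhET’s actual code) that solves the pendulum equation numerically and shows the anharmonic effect mentioned above: the period grows once the swings are no longer small.

```python
# Minimal pendulum integrator (not PhET's implementation) showing anharmonic motion:
# for large amplitudes the period exceeds the small-angle value 2*pi*sqrt(L/g).
import numpy as np

def period(theta0_deg, L=1.0, g=9.81, dt=1e-4):
    """Estimate the period by integrating d2(theta)/dt2 = -(g/L)*sin(theta)
    with the Euler-Cromer method, timing successive downward passes through zero."""
    theta = np.radians(theta0_deg)   # released from rest at this amplitude
    omega = 0.0
    t = 0.0
    crossings = []
    prev_theta = theta
    while len(crossings) < 3:
        omega += -(g / L) * np.sin(theta) * dt
        theta += omega * dt
        t += dt
        if prev_theta > 0 >= theta:      # downward zero crossing
            crossings.append(t)
        prev_theta = theta
    return crossings[-1] - crossings[-2]  # one full period between crossings

small_angle = 2 * np.pi * np.sqrt(1.0 / 9.81)   # ~2.006 s for a 1 m pendulum
print(f"small-angle formula: {small_angle:.3f} s")
for amp in (5, 45, 90, 150):
    print(f"amplitude {amp:3d} deg: period {period(amp):.3f} s")
```

Running it shows the period creeping up from the small-angle value of about 2.0 s as the amplitude grows, which is exactly the anharmonic behaviour the simulation lets students discover for themselves.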

The aim of this research journal is to work out just how effective these simulations are in teaching conceptual physics.  The idea is that they should provide students with “open learning environments” (Esquembre, 2002, p. 16) and help kids progress their knowledge of physics through “a process of hypothesis-making and idea-testing” (Esquembre, 2002, p. 16).  Sounds great, right?  The fact of the matter is, changing kids’ misconceptions about the physical world is beyond hard.  Pre- and post-testing for understanding of physical concepts reveals embarrassing results for traditional physics teaching methods (Hake, 1998).  Previous innovations – e.g. teaching in a “studio” format aimed at decreasing class size, increasing collaboration and the use of computers – haven’t fared any better (Cummings, Marx, Thornton & Kuhl, 1999).

So, are they useful?

A quick survey of the literature returns a resounding… it depends.  There have been countless studies of the effectiveness of simulations in teaching physics concepts.  The results range from no effect to declaring simulations to be the messiah of conceptual physics education.  Surprise! It turns out that the type of simulation used and the context in which they are used counts for a lot, not to mention what we are comparing with.  There is a relatively large body of physics education research out there, and quite a few different teaching methods to compare with.

In my next post I’ll examine the arguments of those declaring the second coming.

References

Cummings K., Marx J., Thornton R. and Kuhl D. (1999) ‘Evaluating innovation in studio physics,’ American Journal of Physics, vol. 67 no. 1, pp. S38-44.

Esquembre F. (2002) ‘Computers in physics education,’ Computer Physics Communications, vol. 147 nos. 1-2, pp. 13-18.

Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.