Physics simulations as a tool for interactive engagement

Developers of physics simulations promote them as a valuable tool for interactive engagement (IE) (Wieman & Perkins, 2005), a framework for teaching physics that is well established in the literature as superior to traditional methods (Hake, 1998; Redish & Steinberg, 1999). There have been many studies on the effectiveness of physics simulations, yet their status as a tool for promoting conceptual change in physics courses remains controversial. Unfortunately, the majority of investigations have been case studies, with relatively few experimental studies undertaken. While the type of simulation and the contexts in which simulations are used are certainly important, the question remains as to whether they can be used effectively in support of interactive engagement teaching strategies. I argue that simulations can be used effectively in support of IE methods as a tool for aiding observation, but that the implementation and possible side effects of their use need to be considered carefully.

The term interactive engagement refers to a range of teaching techniques that use “heads-on (always) and hands-on (usually) activities” (Hake, 1998, p. 65) and emphasize the construction of knowledge by students and the teacher’s role as a facilitator of learning. These techniques also directly address students’ pre-existing non-scientific conceptions (Knight, 2004). Hake (1998) performed a meta-study of introductory physics courses, finding that traditional methods (instructor-centric teaching with lectures, tutorials and laboratories) tended to produce remarkably consistent and embarrassingly poor results on the force concept inventory (FCI), a test developed by physics education researchers to probe understanding of Newtonian mechanics. His measure of comparison was the percentage of total possible improvement, or normalized gain[i]. He found that traditional methods produced a consistent normalized gain of approximately 23% on the FCI after a semester of instruction in mechanics (e.g. an average student entering the class with a 30% would improve to a 46%). This result was independent of pre-test scores and instructor. IE methods consistently led to normalized gains in the region of 30-70%, with an average of 48%, a two standard deviation effect. A further problematic feature of traditional methods is that they consistently promote counterproductive beliefs about physics. Redish and Steinberg (1999) used a test designed to discover student attitudes on a scale of “independence/authority, coherence/pieces, and concepts/equations” (p. 29) and found that a single semester of a traditional physics course led to a regression from “expert” (p. 33) beliefs of approximately one standard deviation. Meanwhile, IE methods resulted in improvements of 2.5 standard deviations.
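
To make the normalized gain arithmetic concrete, here is a minimal Python sketch (my own illustration, not code from any of the studies cited) that reproduces the worked example above and shows what Hake’s average IE gain of 48% would mean for the same student:

```python
def normalized_gain(pre_pct, post_pct):
    """Hake's normalized gain: the fraction of the possible improvement achieved."""
    return (post_pct - pre_pct) / (100.0 - pre_pct)

# Traditional-course example from the text: a 30% pre-test rising to 46%.
print(normalized_gain(30, 46))  # ~0.23, Hake's typical result for traditional courses

# An IE course averaging a normalized gain of 0.48 would take the same
# student from 30% to about 30 + 0.48 * (100 - 30) = 63.6%.
print(30 + 0.48 * (100 - 30))
```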

IE physics courses are typically ICT-intensive, with the most common technologies being data loggers, video analysis software, motion detectors, force probes and computers (Knight, 2004). For resource-poor physics classrooms, simulations appear to be an attractive alternative to purchasing extra lab equipment and developing new activities. Or are they?

The literature on the effectiveness of physics simulations is full of controversy. Some studies have suggested that physics simulations are powerful agents of conceptual change (Keller, Finkelstein, Perkins & Pollock, 2006; Squire, Barnett, Grant & Higginbotham, 2004; Zacharia & Anderson, 2003), while others have shown no benefit over alternative methods (Ronen & Eliahu, 2000; Steinberg, 2000). The vast majority of these studies suffer from significant research design flaws, e.g. failing to adequately isolate the method of instruction in Squire, Barnett, Grant and Higginbotham (2004), or being conducted on far too small a scale to find measurable effects, as in Zacharia and Anderson (2003). Effectiveness studies have almost exclusively focused on comparisons with traditional methods. An exception is Steinberg (2000), who found no difference in effectiveness compared with IE methods. That study also suggested – based on casual, qualitative observations – that simulations may promote authoritarian views of physics. Unfortunately, no quantitative research has investigated this issue.

A recent study by Trundle and Bell (2010) used a quasi-experimental design to test the effectiveness of computer simulations in teaching pre-service teachers about lunar phases. They compared three groups: the first used observations of nature, the second used computer simulations for observation, and the third used a combination of the two. In all groups, observations were supported by a research-backed IE teaching method. They found no measurable differences in conceptual understanding between the groups. While at first this seems a disappointing result, it is strong evidence that simulations, when used to support well-researched IE teaching methods, can replace other types of observation that may be difficult or impossible in a resource-limited environment. It is also worth noting that all three types of observation resulted in the average study participant achieving mastery of the concept of lunar phases.

At this point it is worth considering how, from a theoretical perspective, conceptual change is brought about in physics. It is well understood that students come into an introductory physics class with very strong alternative (non-scientific) conceptions of physical processes (Halloun & Hestenes, 1985a), which are often very similar to non-scientific beliefs that prevailed through much of history (Halloun & Hestenes, 1985b). A brief look at human history indicates how difficult it is to break these conceptions. A first step is to use these conceptions to make a prediction about a physical phenomenon, followed by a careful observation. This will tend to put students into a state of cognitive dissonance (Tao & Gunstone, 1999) when they attempt to explain their observation, which can in turn lead to the adoption of new, scientific conceptions. This is the primary mechanism of IE techniques (Wells, Hestenes & Swackhamer, 1995). Tao and Gunstone (1999) found that when physics simulations were used to induce cognitive dissonance, they tended to promote conceptual change, but that the change was difficult to maintain and generalize.

Used in isolation, physics simulations are unlikely to be any more effective than traditional methods. They are, however, a technology that appears to be very good at promoting the careful observation of visualisations of physical phenomena. It seems likely that in this role they can play a very important part in the “predict-observe-explain” cycle (Tao & Gunstone, 1999, p. 859; Trundle & Bell, 2010). In an IE classroom, their use fits naturally as an activity after students have been asked to make a prediction about the physical phenomenon in question. The observation phase can then be followed by discussions in which alternative conceptions are explicitly confronted, and by collaboration among students to build new, testable models that explain their observations. Where possible, other visualisations of phenomena should also be used to support connections to the physical world and to aid generalization of the concept. Not all simulations are created equal, and they need to be measured against criteria assessing their ability to confront common alternative conceptions and to support considered observation. We should also assess their likelihood of promoting authoritarian views of physics, and avoid those that encourage a rapid-fire trial-and-error approach geared towards obtaining the “correct” answer. Clearly, our understanding of the role physics simulations can play in IE teaching methods needs further development, especially for physics topics with known high resistance to change. Finally, further research is needed on how physics simulations affect attitudes towards physics and whether or not they undermine the beneficial effects IE methods have on those attitudes.

References

Hake R (1998) “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,” American Journal of Physics, 66(1):64-74.

Halloun I & Hestenes D (1985a) “The initial knowledge state of college physics students,” American Journal of Physics, 53(11): 1043-1055

Halloun I & Hestenes D (1985b) “Common sense concepts about motion,” American Journal of Physics, 53(11): 1056-1065

Keller C, Finkelstein N, Perkins K & Pollock S (2006) “Assessing the effectiveness of computer simulation in introductory undergraduate environments,” in McCullough L, Hsu L & Heron P (Eds.), AIP Conference Proceedings Volume 883: 2006 Physics Education Research Conference, 121-124, Syracuse, USA: American Institute of Physics

Knight R (2004) Five Easy Lessons: strategies for successful physics teaching, San Francisco, USA: Addison Wesley

Redish E & Steinberg R (1999) “Teaching physics: figuring out what works,” Physics Today, 52:24-30

Ronen M & Eliahu M (2000) “Simulation – A bridge between theory and reality: The case of electric circuits,” Journal of Computer Assisted Learning, 16:14-26.

Squire K, Barnett M, Grant J & Higginbotham T (2004) “Electromagnetism supercharged!: Learning physics with digital simulation games” in Kafai Y, Sandoval W, Enyedy N (Eds.), ICLS ’04 Proceedings of the 6th international conference on Learning sciences, 513-520, International Society of the Learning Sciences.

Steinberg R (2000) “Computers in teaching science: to simulate or not to simulate?” American Journal of Physics, 68(7):S37-S41

Tao P & Gunstone R (1999) “The Process of Conceptual Change in Force and Motion during Computer-Supported Physics Instruction,” Journal of Research in Science Teaching, 36(7):859-882.

Trundle K & Bell R (2010) “The use of a computer simulation to promote conceptual change: a quasi-experimental study,” Computers & Education, 54: 1078-1088

Wells M, Hestenes D & Swackhamer G (1995) “A modelling method for high school physics instruction,” American Journal of Physics, 63(7): 606-619

Wieman C & Perkins K (2005) “Transforming Physics Education,” Physics Today, 58:36-48

Zacharia Z & Anderson O (2003) “The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics,” American Journal of Physics, 71(6):618-629.


[i] The normalized gain is defined as (Post-test % – Pre-test %)/(100% – Pre-test %)

My epic failure at not whingeing

So, I’ve apparently had a catastrophic failure on the trying-not-to-kick-and-scream front mentioned in my first post on this topic. Apologies to all the IWB fanboys/girls, not to mention all the people just sick of hearing me whinge (LOL, poor Meg); part of my disdain is connected to the cost, and the rest is largely due to how I’ve seen them used so far in the classroom (damn you, ClickView). They’re just a tool after all, and surely I can find some good uses. What I really want to know is: how can I use an IWB effectively in my role as a facilitator of learning, and not in the teacher-centric transmission style of teaching?

There has been a lot of work investigating the different types of pedagogy that IWBs can promote. Kearney and Schuck (2008) performed a study investigating some of these in Australia. They found that IWBs were often used to promote “whole-class interactions” (p. 10). They acknowledged, however, that these were often teacher-centric and a “traditional authoritarian interaction” (p. 10). Not exactly what I’m after, in terms of promoting interactive engagement. On a more interesting note, they mention that IWBs could be used to reinforce the relevance of topics being taught, as real-world applications were readily available through the IWB’s interface with the web. That’s all well and good, but how does this differ from a standard projector, besides saving the teacher two steps to their computer to use the mouse?

A study by Hennessy, Deaney, Ruthven and Winterbottom (2007) investigated strategies for using IWBs to promote participation among students in secondary school science. They studied the use of IWBs by two experienced teachers who carefully designed lesson plans to integrate the IWB into their teaching style. Note the order: good pedagogy first, then integrated technology where it makes sense. Both teachers attempted to focus on getting students to interact with the IWB in a hands-on manner. In practice, the researchers noted that actual student interaction with the IWB was extremely limited, with at most two students physically interacting with the whiteboard, briefly, during any lesson they observed. Time constraints were identified as the predominant hindrance to student participation.

I’m willing to admit that I can see using an IWB to get students playing with a physics simulation, for instance, or for demonstrating how a piece of software works while students play along on computers in groups. It seems it may be possible to make IWBs “interactive” after all; it might just take some creativity to get there. I hope that in my classroom, this will happen when the pedagogy requires it, rather than the other way around.

References

Hennessy S, Deaney R, Ruthven K & Winterbottom M (2007) “Pedagogical strategies for using the interactive whiteboard to foster learner participation in school science,” Learning, Media and Technology, 32(3): 283-301

Kearney M & Schuck S (2008) “Exploring pedagogy with interactive whiteboards in Australian schools,” Australian Educational Computing, 23(1): 8-14

IWB cost/benefit – what other alternatives do we have?

For argument’s sake, let’s say for the moment that I believe IWBs actually do have a small, statistically significant effect on student motivation and engagement (I can feel my nose growing as I type). Is it worth the cost? Better yet, what other (cheaper!) alternatives are there to encourage engagement? I’ll be a bit selfish and focus on physics and mathematics here, because that’s where my knowledge base and interests lie.

IWBs vary a lot in price. Many of the ones I’ve seen in classrooms recently fall in the range of US $4000-$6000 (I hate to think what they cost in Australia!), not including the computer needed to power them. The ones at my prac school appear to be at the upper end of that range. They’re a slightly better deal when bought en masse, but still, they’re not exactly cheap. Even their biggest fans acknowledge cost as a setback (Lipton & Lipton, 2010). What other technologies could I purchase for that sort of cash? For a physics class, I could buy a class set (by which I mean around eight) of cheap, functional computers and have money left over for some digital lab equipment to use with them (e.g. wiimotes and a digital video camera). Better yet, for half the cost of a replacement bulb for the projector, I could buy a class set of small mobile (albeit analogue, LOL) whiteboards that would put an IWB to shame in terms of the interaction they facilitate in the classroom. Think of the ass-kicking pedagogy that such a simple tool supports. Students working in small groups, sharing ideas, peer instruction, the lot, all for $20 (or if you live in the US, $2).

You might (fairly) ask whether there is any research to support my implied thesis that these tools could lead to better learning outcomes. The answer is a resounding hell yes. In physics, there is a mountain of research showing that appropriate use of these kinds of tools, for instance in a modelling context, can lead to hugely enhanced learning outcomes. Quarter-of-a-standard-deviation effects be damned; try two plus. See, for example, Wells, Hestenes & Swackhamer (1995) or Hake (1998).

References

Hake R. (1998) “Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,” American Journal of Physics, 66(1): 64-74

Lipton, M and Lipton, L (2010) “Enhancing the radiology learning experience with electronic whiteboard technology,” American Journal of Roentgenology, 194: 1547-1551

Wells, M; Hestenes, D and Swackhamer, G (1995) “A modelling method for high school physics instruction,” American Journal of Physics, 63(7): 606-619

IWB use and correlations with learning outcomes

Moss et al. (2007) performed a very substantial study on the impact of the widespread introduction of IWBs in London schools. This is the only study I have seen that addresses learning outcomes. Over several years, they tracked student motivation through surveys alongside student performance via test scores. They also recorded a large number of teaching sessions with the IWBs and used these to report on the types of pedagogy being used by teachers.

They did find an increase in student motivation when IWBs were first introduced, but it quickly faded with time. More interestingly, they found only one instance of a positive year-to-year effect on test scores, in English classes. Embarrassingly, negative effects were found several times in mathematics and science classrooms. None of the effects were statistically significant. Oops!

This study suffers from the usual problems to do with self-reporting of student motivation levels. The statistical analysis of student test scores, however, appears, by design, to be fairly sound. This is essentially because they tracked the same teachers through the transition phase of introducing IWBs; thus, for the most part, they were able to cleanly isolate the IWBs as the primary change in the classrooms over this period. Obviously, I would have liked to see a control group used. There is a chance that other system-wide changes to school policies may have affected the outcomes; this wasn’t discussed in the study, however, so it is difficult to know. The effectiveness of the testing practices for measuring student learning outcomes is certainly up for debate as well.

References

Moss G, Jewitt C, Levačić R, Armstrong V, Cardini A & Castle F (2007) The interactive whiteboards, pedagogy and pupil performance evaluation: an evaluation of the Schools Whiteboard Expansion (SWE) Project: London Challenge, DfES Research Report 816, London: DfES

IWBs: enhancing classroom engagement?

Early research on IWBs focused on descriptive aspects of classroom use, on their potential as a display tool for integrating multimedia, and on their ability to demonstrate multiple representations of ideas (Higgins, Beauchamp & Miller, 2007). IWBs were seen as capable of catering to a wider range of learning styles and as a tool to quicken lesson pacing (Higgins, Beauchamp & Miller, 2007).

Despite the enormous amounts of money being invested in this technology, there hasn’t been a lot of experimental research on how IWBs might improve learning outcomes or on how to use them effectively (Cheung & Slavin, 2011). I’ll attempt to focus on experimental or quasi-experimental studies. I don’t see as much value in the multitude of qualitative case studies (some of which have been funded by IWB manufacturers) with stories of how much the kids in Ms. X’s class really felt engaged by the use of an IWB.

Torff and Tirotta (2010) performed a study on upper primary mathematics students, trying to ascertain whether IWBs affect self-reported levels of motivation. They took a relatively large group (773) of these students in a single New York school district that had had access to IWBs for a number of years and divided them into two groups based on how their teachers responded to a survey about how often they used IWBs in their classrooms. The students were given a survey on their motivation for and enjoyment of mathematics. The researchers found a very small yet statistically significant contribution – approximately ¼ of a standard deviation – to student motivation from the use of IWBs. They noted that this effect was much smaller than previous researchers had found in smaller studies.

The use of self-reported motivation level as the dependent variable is pretty suspect, if unavoidable. A more interesting study would have tried to ascertain whether or not there were any measurable learning gains from using IWBs.

More importantly, I wonder about the groupings for the study. No attempt was made to see whether there were any other differences between the teachers using IWBs and those who were not. All of the teachers had had unrestricted access to IWBs (every classroom in the district had been outfitted with an IWB three years before the study took place). I can’t help but think that any teacher who had been given access to this kind of technology and still hadn’t used it in the classroom after three years might have a higher than average likelihood of being burnt out and switched off. An image comes to mind here of a mathematics professor I once had who hadn’t changed his overheads for the classes he taught in over a decade (they were referred to amongst the student population as “the Dead Sea Scrolls”). His lack of effectiveness as an engaging teacher wasn’t due to his not using nifty technology, but rather to the fact that he had stopped trying years ago.

References

Cheung, A and Slavin, R (2011) “The effectiveness of Education Technology for enhancing reading achievement: a meta-analysis” Best Evidence Encyclopedia, Johns Hopkins University School of Education, Retrieved from http://www.bestevidence.org/word/tech_read_Feb_24_2011.pdf

Higgins S, Beauchamp G & Miller D (2007) “Reviewing the literature on interactive whiteboards,” Learning, Media and Technology, 32(3): 213-225

Torff B & Tirotta R (2010) “Interactive whiteboards produce small gains in elementary students’ self-reported motivation in mathematics,” Computers & Education, 54: 379-383

Interactive, you’re doing it wrong

Ok, it’s confession time. Call me a Luddite if you must, but lately I’ve often found myself biting my tongue while people around me talk about how great interactive whiteboards (IWBs) are for education. Don’t get me wrong, I get the whiz-bang/curb appeal completely. The question that keeps coming up for me is: what sort of pedagogy do they support? The fact of the matter is that, due to their cost and size, a classroom is only ever likely to have one IWB in it, which means they will tend to support a teacher-centric model of education, discouraging group work and collaboration. When I consider the provocation “What sort of teacher do you want to be?”, pretty much the furthest thing from my mind is an image of myself in front of a classroom with an IWB, yacking away and playing CNN host on election night in a whirl of pointless visual wizardry while students watch on in a daze. I can’t help but suspect that education departments everywhere are confusing student excitement over cool gadgetry with meaningful engagement.

Why write about them then? Because IWBs are everywhere, and it seems likely that when I start teaching next year my classroom will either have one or will be getting one soon. At the school where I’m doing my prac, most classrooms have one; those that don’t will have one installed by the beginning of next school year. A teacher at a public college in the ACT recently told me that all science classrooms at his school will have one next year. In 2007, 51% of Australian high schools had at least one IWB, and 10% of Australian year 8 science teachers used them “often” or “nearly always”, while the figure for mathematics teachers was 11% (Ainley, Eveleigh, Freeman & O’Malley, 2010). That was four years ago. The IWB industry had revenues of nearly US $1 billion in 2008, and at that time it was projected that one out of every six classrooms in the world would have an IWB by 2012 (Futuresource, 2009). While there doesn’t appear to be any more recent definitive industry-wide data available, Smart Technologies alone expects nearly US $800 million in revenue for the 2011 fiscal year (Smart Technologies, 2011). If I’m going to have one in my classroom, I need to find out what they’re capable of and gather ideas about the kind of pedagogy they can support.

So, here I am.  Trying not to kick, trying not to scream, ready to attempt to talk objectively about IWBs.

References

Ainley, J; Eveleigh, F; Freeman, C; and O’Malley, K (2010) “ICT in the Teaching of Science and Mathematics in Year 8 in Australia: report from the IEA Second International Technology in Education Study (SITES) survey,” ACER Research Monographs.

Futuresource Consulting (2009), “Interactive Whiteboard market shows no real signs of recession,” Retrieved from www.futuresource-consulting.com/…/2009-03_IWB_Update_release.pdf

Smart Technologies (2011), “SMART Reports Third Quarter 2011 Financial Results,” Retrieved from http://investor.smarttech.com/releasedetail.cfm?ReleaseID=548563

It’s all about engagement and context matters!

Steinberg (2000) observed all sorts of variable behaviour in how simulations were used. Some students used a trial-and-error method with virtually no cognitive input in order to find the correct answer. This description reminded me of the familiar sight of students keeping a finger in the back-of-the-book solutions to a set of textbook problems while employing the plug-n-chug method so common in introductory physics classes (for anyone unfamiliar, this refers to students plugging the numbers given in a formulaically written question into the correct formula, with little to no conceptual understanding). What I’m getting at is that there is a common theme in all of the research I’ve read on the topic: simulations are successful as a learning tool to the extent that they engage the student in an active learning process, a view supported by Hake’s (1998) substantial meta-study of interactive engagement versus traditional teaching methods.

Physics students face a perhaps uniquely challenging task: confronting powerful personal misconceptions about the way the universe works. The extent to which simulations are used to force the breakdown of these misconceptions through the powerful cycle of “predict-observe-explain” (Tao and Gunstone, 1999, p. 859) seems largely to determine their usefulness. Simulations are certainly not unique in their potential to achieve this type of conceptual change, but they do seem to offer promise as a powerful tool towards that end when used properly.

Tao and Gunstone (1999) attempted to find an explanation of the means by which conceptual change is attained through the use of computer simulations. Throughout a unit on force and motion, they regularly interviewed twelve year 10 physics students about their conceptual understanding of these topics and their interactions with the simulations. Their main finding was that conceptual change is both very fragile and context dependent. Students may accept a new explanation for a given scenario when confronted with the failure of their previous idea, but may also revert to their old explanation at a later time or fail to carry over the conceptual change to a new context.

How does this fit in with the other research I’ve discussed? It may go some way towards explaining how difficult it is to attain broad conceptual change in physics education, for one. It also brings up other possible problems with substituting simulations for real-world experiments. If conceptual change really is so context dependent, might it be dangerous to base a lot of conceptual physics education on computer simulations? In a world where students already fail to see the relevance of classroom physics to their everyday lives, could we be widening this gap of perceived relevance by using a tool so detached from everyday experience? Then again, maybe it couldn’t get much worse!

References

Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.

Steinberg R. (2000), ‘Computers in teaching science: to simulate or not to simulate?’ American Journal of Physics, vol. 68 no. 7, pp. S37-S41.

Tao P.K. and Gunstone R.F. (1999), ‘The Process of Conceptual Change in Force and Motion during Computer-Supported Physics Instruction,’ Journal of Research in Science Teaching, vol. 36 no. 7, pp. 859-882.