IWB use and correlations with learning outcomes

Moss et al. (2007) performed a very substantial study on the impact of the widespread introduction of IWBs in London schools. This is the only study I have seen that addresses learning outcomes. Over several years, they surveyed students about their motivation and tracked their performance via test scores. They also recorded a large number of teaching sessions with the IWBs and used these recordings to report on the types of pedagogy teachers were using.

They did find an increase in student motivation when IWBs were first introduced, but it quickly faded with time. More interestingly, they found only one instance of a year-on-year positive effect on test scores, in English classes. Embarrassingly, negative effects were found several times in mathematics and science classrooms. None of the effects were statistically significant. Oops!

This study suffers from the usual problems to do with self-reporting of student motivation levels. The statistical analysis of student test scores, however, appears by design to be fairly sound. This is essentially because they tracked the same teachers through the transition phase of introducing IWBs, so for the most part they were able to cleanly isolate the IWBs as the primary change in the classrooms over this period. Obviously, I would have liked to see a control group used. There is a chance that other system-wide changes to school policies may have affected the outcomes; this wasn’t discussed in the study, however, so it is difficult to know. The effectiveness of the testing practices for measuring student learning outcomes is certainly up for debate as well.


Moss, G; Jewitt, C; Levačić, R; Armstrong, V; Cardini, A and Castle, F (2007) The interactive whiteboards, pedagogy and pupil performance evaluation: an evaluation of the Schools Whiteboard Expansion (SWE) Project: London Challenge. DfES Research Report 816 (London, DfES).


IWBs: enhancing classroom engagement?

Early research on IWBs focused on descriptive aspects of classroom use, on their potential as a display tool for integrating multimedia, and on their ability to demonstrate multiple representations of ideas (Higgins, Beauchamp & Miller, 2007). IWBs were seen as capable of catering to a wider range of learning styles and as a tool for quickening lesson pacing (Higgins, Beauchamp & Miller, 2007).

Despite the enormous amounts of money being invested in this technology, there hasn’t been a lot of experimental research done on how IWBs might improve learning outcomes or on how to use them effectively (Cheung & Slavin, 2011). I’ll attempt to focus on experimental or quasi-experimental studies. I don’t see as much value in the multitude of qualitative case studies (some of which have been funded by IWB manufacturers) with stories of how much the kids in Ms. X’s class really felt engaged by the use of an IWB.

Torff & Tirotta (2010) performed a study on upper primary mathematics students, trying to ascertain whether IWBs affect self-reported levels of student motivation. They took a relatively large group of these students (773) in a single New York school district that had had access to IWBs for a number of years and divided them into two groups based on how their teachers responded to a survey about how often they used IWBs in their classrooms. The students were given a survey on their motivation for and enjoyment of mathematics. The researchers found a very small, yet statistically significant, contribution to student motivation from the use of IWBs – approximately ¼ of a standard deviation. They noted that this effect was much smaller than previous researchers had found in smaller studies.
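As a quick aside on what that figure means: if the quarter of a standard deviation is read as a standardized mean difference between the higher- and lower-IWB-use groups (Cohen’s d; this is my reading, not necessarily the exact statistic the paper reports), it amounts to

\[ d = \frac{\bar{x}_{\text{more IWB use}} - \bar{x}_{\text{less IWB use}}}{s_{\text{pooled}}} \approx 0.25 \]

By Cohen’s conventional benchmarks (0.2 small, 0.5 medium, 0.8 large), that sits at the bottom end of “small”: statistically detectable in a sample of 773, but not much of a motivational payoff.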

The use of self-reported motivation level as the dependent variable is pretty suspect, if perhaps unavoidable. A more interesting study would have tried to ascertain whether there were any measurable improvements in learning outcomes from using IWBs.

More importantly, I wonder about their groupings for the study.  No attempt was made to see if there were any other differences between the teachers using IWBs and those who were not.  All of the teachers had had unrestricted access to IWBs (every classroom in the district had been outfitted with an IWB three years before this study took place).  I can’t help but think that any teacher who had been given access to this kind of technology and still hadn’t used it in the classroom for three years might have a higher than average likelihood of being burnt out and switched off.  An image comes to mind here of a mathematics professor I once had who hadn’t changed his overheads for the classes he taught in over a decade (they were referred to amongst the student population as “the dead sea scrolls”).  His lack of effectiveness as an engaging teacher wasn’t due to his not using nifty technology, but rather to the fact that he had stopped trying years ago.


Cheung, A and Slavin, R (2011) “The effectiveness of Education Technology for enhancing reading achievement: a meta-analysis,” Best Evidence Encyclopedia, Johns Hopkins University School of Education. Retrieved from http://www.bestevidence.org/word/tech_read_Feb_24_2011.pdf

Higgins, S; Beauchamp, G and Miller, D (2007) “Reviewing the literature on interactive whiteboards,” Learning, Media and Technology, 32: 3, 213-225

Torff, B and Tirotta, R (2010) “Interactive whiteboards produce small gains in elementary students’ self-reported motivation in mathematics,” Computers and Education, 54: 379-383

Interactive, you’re doing it wrong


OK, it’s confession time. Call me a Luddite if you must, but lately I’ve often found myself biting my tongue while people around me talk about how great interactive whiteboards (IWBs) are for education. Don’t get me wrong, I get the whiz-bang/curb appeal completely. The question that keeps coming up for me is: what sort of pedagogy do they support? The fact of the matter is that, due to their cost and size, a classroom is only ever likely to have one IWB in it, which means they will tend to support a teacher-centric model of education, discouraging group work and collaboration. When I consider the provocation “What sort of teacher do you want to be?”, pretty much the furthest thing from my mind is an image of myself in front of a classroom with an IWB, yakking away and playing CNN host on election night in a whirl of pointless visual wizardry while students watch on in a daze. I can’t help but suspect that education departments everywhere are confusing student excitement over cool gadgetry with meaningful engagement.

Why write about them, then? Because IWBs are everywhere, and it seems likely that when I start teaching next year my classroom will either have one or will be getting one soon. At the school where I’m doing my prac, most classrooms have one; those that don’t will have one installed by the beginning of next school year. A teacher at a public college in the ACT recently told me that all science classrooms at his school will have one next year. In 2007, 51% of Australian high schools had at least one IWB, and 10% of Australian year 8 science teachers used them “often” or “nearly always”, while the figure for mathematics teachers was 11% (Ainley, Eveleigh, Freeman & O’Malley, 2010). That was four years ago. The IWB industry had revenues of nearly US $1 billion in 2008 (Futuresource, 2009), and at that time it was projected that one out of every six classrooms in the world would have an IWB by 2012 (Futuresource, 2009). While there doesn’t appear to be any more recent industry-wide data available, Smart Technologies alone expects nearly US $800 million in revenue for the 2011 fiscal year (Smart, 2011). If I’m going to have one in my classroom, I need to find out what they’re capable of and gather ideas about the kind of pedagogy they can support.

So, here I am.  Trying not to kick, trying not to scream, ready to attempt to talk objectively about IWBs.


Ainley, J; Eveleigh, F; Freeman, C and O’Malley, K (2010) “ICT in the Teaching of Science and Mathematics in Year 8 in Australia: report from the IEA Second Information Technology in Education Study (SITES) survey,” ACER Research Monographs.

Futuresource Consulting (2009), “Interactive Whiteboard market shows no real signs of recession,” Retrieved from www.futuresource-consulting.com/…/2009-03_IWB_Update_release.pdf

Smart Technologies (2011), “SMART Reports Third Quarter 2011 Financial Results,” Retrieved from http://investor.smarttech.com/releasedetail.cfm?ReleaseID=548563

It’s all about engagement and context matters!

Steinberg (2000) observed all sorts of variable behavior in how simulations were used. Some students used a trial-and-error method, with virtually no cognitive input, in order to find the correct answer. This description reminded me of the familiar sight of students keeping a finger in the back-of-the-book solutions to a set of textbook problems while employing the plug-n-chug method so common in introductory physics classes (for anyone unfamiliar, this refers to students plugging the numbers given in a formulaically written question into the correct formula, with little to no conceptual understanding). What I’m getting at is that there is a common theme in all of the research I’ve read on the topic: simulations are successful as a learning tool to the extent that they engage the student in an active learning process, a view supported by Hake’s (1998) substantial meta-study of interactive engagement versus traditional teaching methods.

Physics students face a perhaps uniquely challenging task: confronting powerful personal misconceptions about the way the universe works. The extent to which simulations are used to force the breakdown of these misconceptions through the powerful cycle of “predict-observe-explain” (Tao and Gunstone, 1999, p. 859) seems to largely determine their usefulness. Simulations are certainly not unique in their potential to achieve this type of conceptual change, but they seem to offer promise as a powerful tool towards that end when used properly.

Tao and Gunstone (1999) attempted to explain how conceptual change comes about through the use of computer simulations. They regularly interviewed 12 year-10 physics students throughout a unit on force and motion about their conceptual understanding of these topics and their interactions with the simulations. Their main finding was that conceptual change is both very fragile and context dependent. Students may accept a new explanation for a given scenario when confronted with the failure of their previous idea, but may also revert to their old explanation at a later time or fail to carry the conceptual change over to a new context.

How does this fit in with the other research I’ve discussed? For one, it may go some way to explaining how difficult it is to attain broad conceptual change in physics education. It also raises other possible problems with substituting simulations for real-world experiments. If conceptual change really is so context dependent, might it be dangerous to base a lot of conceptual physics education on computer simulations? In a world where students already fail to see the relevance of classroom physics to their everyday lives, could we be widening this gap of perceived relevance by using a tool so detached from everyday experience? Then again, maybe it couldn’t get much worse!


Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.

Steinberg R. (2000), ‘Computers in teaching science: to simulate or not to simulate?’ American Journal of Physics, vol. 68 no. 7, pp. S37-41.

Tao P.K. and Gunstone R.F. (1999), ‘The Process of Conceptual Change in Force and Motion during Computer-Supported Physics Instruction,’ Journal of Research in Science Teaching, vol. 36 no. 7, pp. 859-882.

Playing devil’s advocate: claims that simulations don’t lead to conceptual change

There have been a few studies suggesting no benefit from simulations in terms of conceptual understanding. Ronen and Eliahu (2000) performed a study on year 9 students studying electric circuits. Four classes took part; two of them were given an introduction to a program designed to present simulation-based activities and were given the program to take home. Surprisingly, no record appears to have been kept of whether the students actually used the program. When it came time to post-test the groups, it shouldn’t have come as any surprise that no differences between the groups were observed. In a true Mythbusters moment (you know, when they pull out the C4 because the original method of blowing something up has failed), the researchers bizarrely decided to run a completely different trial, in which students could use the simulation software to test their ideas before being assessed on the function of their circuit designs. The study wasn’t allowed to fail.

A much more interesting study was done by Steinberg (2000). He took three tutorial groups from the same university-level introductory physics class, all of which used methods informed by physics education research. Two tutorials used computer simulations, while the third used paper-and-pencil techniques in a similar fashion. Students were pre- and post-tested on their understanding of air resistance. No difference in conceptual understanding was seen between the simulation and paper-and-pencil groups, though both saw very significant gains in their conceptual understanding of air resistance.

This finding really shouldn’t come as any surprise: of course it matters which teaching method simulations are being compared against. There is nothing magical about simulations! This article brought up a number of really interesting issues, which I will expand on in my next and final post on this topic.


Ronen M. and Eliahu M. (2000), ‘Simulation – A bridge between theory and reality: The case of electric circuits,’ Journal of Computer Assisted Learning, vol. 16 pp. 14-26.

Steinberg R. (2000), ‘Computers in teaching science: to simulate or not to simulate?’ American Journal of Physics, vol. 68 no. 7, pp. S37-41.

What sort of evidence is there on the effectiveness of simulations?

One of the primary studies continually brought up by the research contributors to the PhET project (their research page lists the relevant articles) is one carried out by Keller, Finkelstein, Perkins and Pollock in 2006. As part of a larger study, they pre- and post-tested a control group who were given a standard physics demonstration, as well as an experimental group who, instead of the demonstration, interacted with a simulation. Both groups substantially increased their scores on the second testing, with the experimental group outperforming the control by 47% in terms of percentage change.
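To make that metric concrete (these numbers are invented for illustration, not taken from Keller et al., and this is just one reading of the “47%” figure): suppose both groups average 60% on the pre-test, the control group rises to 72% and the experimental group to about 77.6%. Then

\[ \text{control change: } \frac{72-60}{60} = 20\%, \qquad \text{experimental change: } \frac{77.6-60}{60} \approx 29.3\% \]

The ratio of those relative changes is roughly 1.47, i.e. a 47% larger percentage change, even though the gap between the two groups on the actual test is under six percentage points.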

The raw scores alone bring into question the validity of the conceptual questions used, with both groups scoring nearly 60% before the demonstration. No mention was made of whether these results were statistically significant, which is problematic. Frankly, it seems like a very odd comparison to make in the first place. There is an enormous amount of research suggesting that interactive engagement methods are on the order of twice as effective in conceptual physics education as traditional methods (Hake, 1998). A more telling comparison might be made between the use of a simulation and another interactive teaching method, such as an inquiry-based experiment.
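For context, Hake’s “twice as effective” figure comes from comparing average normalized gains, which adjust for where a class starts and so avoid some of the problems with raw percentage-change comparisons:

\[ \langle g \rangle = \frac{\%\text{post} - \%\text{pre}}{100 - \%\text{pre}} \]

In Hake’s survey, interactive-engagement courses averaged roughly \( \langle g \rangle \approx 0.48 \) against roughly 0.23 for traditionally taught courses, which is where the factor of two comes from.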

Squire, Barnett, Grant and Higginbotham (2004) performed an interesting study with year 8 science students studying electrostatics. They took one teacher’s five classes and separated them into a control group (two classes) and an experimental group (the remaining three). Both groups were pre- and post-tested on a unit on electrostatics. The control group received “inquiry based” teaching (the exact nature of which is left to the reader’s imagination) consisting of lectures, experiments and demonstrations, while the experimental group spent most of its class time playing a simulation-based game. For my purposes, the game aspect muddies the water, as it is unclear to what degree the motivational aspects of the game component may have changed the nature of the learning experience. The results of the study were remarkable, with the experimental group outperforming the control to a statistically significant degree.

There were major problems with the design of the experiment (no attempt was made to randomize), and the degree to which the game monopolized class time was also puzzling. The overall results were pretty depressing, with relatively small improvements in both groups. It is also unclear how much of the effect came from the simulation game itself versus the opportunities for discussion it created in the class. Still, the study certainly points to simulation games as a potentially powerful pedagogical tool.

A study by Zacharia and Anderson (2003) used a different strategy. They took a group of pre- and in-service teachers who were not trained in physics and examined the usefulness of simulations, used in place of some practice problems, as a precursor to performing a laboratory experiment. They found that the combination of lectures and practice problem sets had no (!) statistically significant influence on conceptual understanding. In contrast, when a period of simulation replaced some of the practice problems, a very significant difference in understanding was achieved. After the laboratory experiment, however, the group without the simulation experience tended to catch up in conceptual understanding, nearly equaling the test scores of those who used a simulation. No mention was made of whether those differences after the experiment were statistically significant.

Trials were, for once, properly randomized, but the sample size was laughably small, with 13 students taking part, each in four separate trials. Nonetheless, this is the soundest study I have seen thus far comparing simulations with other methods, and although the sample is tiny, it suggests that simulations can, in the right context, lead to conceptual understanding on par with a real-life physics experiment.


Hake R. (1998) ‘Interactive-engagement versus traditional methods: A six-thousand-student survey of mechanics test data for introductory physics courses,’ American Journal of Physics, vol. 66 no. 1, pp. 64-74.

Keller C., Finkelstein N., Perkins K. and Pollock S. (2006) ‘Assessing the effectiveness of computer simulation in introductory undergraduate environments,’ in McCullough L., Hsu L. and Heron P. (Eds.), AIP Conference Proceedings Volume 883: 2006 Physics Education Research Conference, pp. 121-124, Syracuse, USA: American Institute of Physics.

Squire K., Barnett M., Grant J. and Higginbotham T. (2004), ‘Electromagnetism supercharged!: Learning physics with digital simulation games’ in Kafai Y., Sandoval W. Enyedy N. (Eds.), ICLS ’04 Proceedings of the 6th international conference on Learning sciences, pp. 513-520, International Society of the Learning Sciences.

Zacharia Z. and Anderson O.R. (2003), ‘The effects of an interactive computer-based simulation prior to performing a laboratory inquiry-based experiment on students’ conceptual understanding of physics,’ American Journal of Physics, vol. 71 no. 6, pp. 618-629.

What the promoters of simulations have to say

The use of simulations in conceptual physics education has its diehard supporters. In an article (somewhat ostentatiously) titled “Transforming Physics Education,” Wieman and Perkins (2005) promote their use by arguing that simulations can effectively make students more “expert-like” in their approaches to physics. I take this to mean that they help to develop critical problem-solving skills, though this isn’t made entirely clear. They argue that a traditional physics education, on the other hand, tends to make students more “novice-like.” They casually mention that their research indicates – without referencing any particular study – that simulations outperform experiments with real equipment. One serious point of confusion here is that they apparently don’t differentiate between experiments and demonstrations (my point being that one is interactive while the other is passive, which makes a huge difference in student learning outcomes). They then try to explain the unsubstantiated “gains” of simulations over experiments by appealing to the idea of “cognitive load”, arguing that simulations impose a much smaller cognitive burden, which in turn enhances learning opportunities. Please excuse my skepticism.

This does bring up an interesting point. What exactly is the point of a physics experiment in the classroom? Is it solely to enhance conceptual knowledge and confront students’ misconceptions? On this level it could indeed be true (though I haven’t seen any evidence to support this) that simulations outperform experiments, or more likely, demonstrations. On the other hand, aren’t there a lot of other very important learning goals and skills that are part of the point of physics experiments? Wieman and Perkins note in their argument for simulations over experiments that students often waste a great amount of time worrying about the color of the insulation on the wires they use in an experiment on circuits. So are we really saying that the point of a physics experiment has nothing to do with teaching the type of independent thinking skills that might be used to figure out that the color of the plastic insulation isn’t significant to the results of a physics experiment?


Wieman C. and Perkins K. (2005) ‘Transforming Physics Education,’ Physics Today, vol. 58, pp. 36-48.