
Evaluating expert advice on schools and learning

Time to read: 3 minutes

Note: I’ve already blogged about the criteria I use to assess the quality of formal education research. This post builds on those ideas to present an ethical method for evaluating education ideas that come from other sources.

There has been a bit of discussion lately about whose research to believe. Some have suggested that Kevin Donnelly and Wiltshire shouldn’t be considered experts. Others have suggested that teachers shouldn’t be considered researchers. I don’t like these suggestions. I don’t like them at all. Firstly, I don’t think whether Donnelly and Wiltshire are experts, or whether teachers are researchers, changes the validity of their arguments whatsoever. “Expert” and “researcher” are labels and have nothing to do with the arguments a person is making. Secondly, evaluating arguments and findings based on the person making them, as opposed to the arguments themselves, leaves us at the whims of the experts and researchers we put our faith in. It also prevents us from learning from people we disagree with.

Also, a few weeks ago during a somewhat robust discussion on Twitter, someone quizzed me on my experience as a teacher. Their argument was that if I hadn’t taught, my opinions on schools shouldn’t be listened to. When I refused to answer their questions about my experience, they took it as me admitting that I’d never taught, which was untrue. I refused to respond not because I hadn’t taught (I have), but because I don’t think anyone’s arguments should be evaluated by anything except their strength. (Note: if you’re still curious about my teaching experience and are desperate to know, you’ll find me on LinkedIn, but please don’t try to use my experience to win an argument against me! If you want to argue with me, argue against my beliefs, not against who I am or what I have or haven’t done.)

My point is this: if we need to resort to discrediting someone just because we disagree with them, then the ethical thing to do is to construct better arguments. While it might be beneficial (for the majority in schools) in the short term to have Donnelly and Wiltshire labelled as non-experts and their ideas dismissed out of hand, the better response is to understand where Donnelly and Wiltshire are coming from and what the fundamental differences are that we have with them. Not only does this approach enable us to articulate a counter-narrative, it also gives us a clearer understanding of what we believe about learning and teaching. Of course, the third, and often better, option is simply to ignore them!

Of course constructing a counter-narrative can be hard, particularly when many commentators, such as Donnelly and Wiltshire, hide their true beliefs behind outrageous language and exaggerated examples.

Knud Illeris provides a useful model for understanding and comparing different approaches to learning with his Three Dimensions of Learning. There are a few variations of Illeris’s model, but essentially he suggests that how we believe people learn can be mapped against two axes: content to incentive, and individual to social. You simply pick the point within the triangle that best represents what you believe about learning.


Illeris’s model is useful in that it enables us to identify some of the core differences between what different people believe about learning (and teaching). While I wouldn’t recommend using this model to fully explain the differences between the various educational theories and theorists, as some try to do, it does help us start identifying where we agree and disagree with others, which in turn enables us to construct and articulate a well-formed argument.

So where do Donnelly and Wiltshire sit on Illeris’s Three Dimensions of Learning?

Again, pulling some quotes from the article that has incited so much angst:

“kumbaya” is childish and emotive, but it suggests they don’t view learning as being social, but rather as solely individual.

“progressive new age fads” just ignore this one; it’s emotive and meaningless.

“blames on the fact students have been handed autonomy” again suggests an individual view of learning, but also suggests they see content as far more important than incentives and emotions.

“It’s a good idea to have self-discovery but kids need to have knowledge” more evidence that content matters far more to them than emotions/incentives, though maybe the authors aren’t at the absolute extreme?


Reading that newspaper article, I think it is safe to place Donnelly and Wiltshire as viewing learning as an individual pursuit that focusses entirely on content. As such, rather than attempting to argue with them about whether students are “rolling around on the floor” and whether schools are too “kumbaya”, we would be better off arguing about why we believe emotions and motivations are crucial to learning, or why we believe learning requires social interaction, or both. After all, that’s what really matters, and what Donnelly and Wiltshire fail to see when they see kids rolling on the floor and teachers at the side rather than at the front.

Further, while Donnelly and Wiltshire are quick to critique other approaches, they are not as open about their own view of how schools should operate. Sure, they mention the teacher at the front (which fits with where I’ve placed them on Illeris’s model), but what else do they want our schools to be? If they don’t want kids rolling on the floor, what do they want? Kids in rows? In silence? Reading solely from textbooks and/or doing dictation? Placing them on a model like this helps us to anticipate what they might believe and to ask those questions of them.

For me, these questions are much more important than whether they are experts or not!

The purpose of this article isn’t to unpack and disprove Donnelly and Wiltshire’s ideas, so I’ll leave that to others. I hope that I have presented a framework that you may find useful. This approach might be less useful, however, when discussing learning and teaching with those whose beliefs are similar to our own. In that case other models might be more useful; for instance, Tom has some really nice work unpacking different beliefs about inquiry learning, which I’ll try to post about soon.

As for subtle differences in highly instructional models, sorry, you’re on your own, maybe ask Donnelly and Wiltshire!



Oh, one last thing: if you can’t work out where an expert fits in Illeris’s model, my bet is that they’re being deliberately ambiguous and can probably be placed up there near Donnelly and Wiltshire…

The toxic myth of good and bad teachers

Time to read: 13 minutes

There are a number of claims made by various people about the effect on a student of having a good teacher versus having a bad teacher. Most of these claims are nonsensical, and rather than increasing the likelihood of improvement in schools, they do a great deal of damage to teachers, students and schools, and make school improvement much less likely.


Because there aren’t many good teachers and there aren’t many bad teachers, most teachers are just average. We know this because we know that teacher quality, measured across all teachers, results in a normal distribution, a bell curve. Sure, we’d find a few high-performing teachers at the top end and a few low-performing teachers at the bottom, but the vast majority would be in the middle with not much separating them. If you’re a teacher reading this post, I’ve got bad news for you: you’re almost certainly an average teacher. Just as I am almost certainly an average teacher. While we’d like to think that we’re high-performing compared to our colleagues, the evidence points to the contrary.

It would be the same if we measured the quality of carpenters, golfers, doctors, lawyers, public servants, scientists, whoever… but for some reason we don’t complain about the quality of other professions. Teacher bashing has become a convenient excuse for far too many critics.

A few weeks ago, The Age newspaper identified some of the A+ teachers helping students to scores of 40+ in VCE here in Victoria, Australia. In this article, five teachers whose VCE results stand out clearly above other teachers’ are identified as great teachers. One teacher had ten of the top 14 students in VCE Sociology, another taught 17 of the top 33 students in Business, and another taught 2 of the top 8 students in Australian History. The results these teachers have achieved are exceptional, and clearly it is impossible to believe that every teacher could produce these kinds of results, but it is also clearly wrong to suggest that teachers who aren’t producing these kinds of results are bad teachers.

Obviously, every parent whose child is undertaking VCE would love to have teachers who produce these results. Yet to believe that it is possible for the majority of teachers to produce results similar to those of the teachers in this article is plainly wrong. It is impossible for most teachers to have a number of students in the top bracket. Of course it shouldn’t be surprising that there are teachers who produce results like these; rather, it is to be totally expected, because when we consider the distribution of teacher quality, its distribution is shaped like a bell curve.


This is why statements from people like John Hattie are so misleading. According to Hattie, “teachers account for a variance of 30% in student achievement.” I’m not convinced this is true, but even if it is, is Hattie describing the maximum variance between the two limits of the bell curve? If he is, then the 30% variance only applies to a tiny fraction of exceptionally good teachers compared with a tiny number of exceptionally poor teachers. For the vast majority of teachers, whose quality lies in the middle of the curve, the variance will be close to non-existent. Sure, there may be a theoretical maximum 30% difference between the absolute best and the absolute worst, but for the vast majority the variance will be close to zero.


Looking at the graph above, Hattie’s 30% maximum variance between good and bad teachers, even if it is true, is vastly overstated. If we centre the distribution on a mean of 15% and take a standard deviation of 3%, then three standard deviations from the mean fall at 6% and 24%, which covers the middle 99.7% of teachers. As such there is only an 18% difference within the middle 99.7% of teachers. Looking at two standard deviations, which account for 95% of all teachers, slims the variance to 12%! And the middle 68% of teachers (one standard deviation) only shows a 6% variance in student achievement, a far cry from the stated 30%!
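The arithmetic above is easy to check for yourself. This is only a sketch of my reading, not Hattie’s actual model: it assumes teacher impact is normally distributed with a mean of 15% and a standard deviation of 3% (my assumptions, not figures Hattie gives), and applies the empirical (68–95–99.7) rule:

```python
# Illustrative only: assumes teacher impact on achievement is normally
# distributed with mean 15% and standard deviation 3% (my assumption,
# not a figure from Hattie).
mean, sd = 15.0, 3.0

# Empirical rule: 1, 2 and 3 standard deviations either side of the mean
# cover roughly 68%, 95% and 99.7% of teachers respectively.
for k, coverage in [(1, "68%"), (2, "95%"), (3, "99.7%")]:
    lower, upper = mean - k * sd, mean + k * sd
    print(f"middle {coverage}: {lower:.0f}% to {upper:.0f}% "
          f"(spread of {upper - lower:.0f} percentage points)")
```

The spreads it prints, 6, 12 and 18 percentage points, are the figures quoted above, and all of them fall well short of the headline 30%.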

Of course, Hattie might not believe that teacher quality fits a normal distribution, but if so he needs to justify why he believes this and how many good and bad teachers there are in our schools. He may also suggest that the absolute maximum difference is more than 30%, and that the 30% figure corresponds to the third or even the second standard deviation from the mean, but if that were the case, then surely that would be the figure promoted, or he would explain how many teachers are subject to this variance. He doesn’t, so I believe it is safe and proper to fit a normal distribution to his variance claims.

Dylan Wiliam similarly tries to promote the myth of good and bad teachers. Speaking at the ALT-C conference in 2007, he said, “If you get one of the best teachers, you will learn in six months what an average teacher will take a year to teach you. If you get one of the worst teachers, that same learning will take you two years.” Again, I’m not agreeing with these figures; in fact I highly doubt them, unless of course Wiliam is speaking about pure memorisation and direct instruction, which he well may be.

Wiliam gives us a little more information than Hattie, though. His figures imply that his data doesn’t fit a normal distribution, where the average would sit midway between the lower and upper bounds. If Wiliam’s data fitted a normal distribution, the average would be 15 months instead of 12 months, as 15 months is 9 months more than the lower bound of 6 months and 9 months less than the upper bound of 24 months. As such, Wiliam’s assertion fits what is called a positively skewed distribution, as shown below.


By graphically representing Wiliam’s figures, it is obvious that his claims are overblown. It is clear that the vast majority of teachers produce about the same results, and the teachers at the lower and upper ends of the impact scale are in a tiny minority. Also, according to Wiliam, most teachers, that is more than half, are not producing a year’s worth of learning in a year! While the mean is 12 months, in a positively skewed distribution the median (and the mode) will be less than the mean. In Wiliam’s world, more than 50% of teachers don’t produce a year’s worth of learning in one year… and somehow it is their fault??
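A quick simulation makes the point about the median concrete. This is only a sketch: Wiliam gives no actual distribution, so I’ve assumed a shifted lognormal with a floor at his 6-month lower bound and parameters hand-picked so the mean lands at about 12 months. The point is simply that in any positively skewed distribution like this, more than half of teachers fall below the mean:

```python
import math
import random
import statistics

random.seed(42)

# Hypothetical positively skewed distribution of "months of learning
# produced per year": a lognormal shifted up by 6 months (Wiliam's lower
# bound). Choosing mu = ln(6) - sigma^2/2 makes the lognormal part
# average ~6 months, so the overall mean is ~12 months.
sigma = 0.75
mu = math.log(6) - sigma ** 2 / 2

samples = [6 + random.lognormvariate(mu, sigma) for _ in range(100_000)]

mean_months = statistics.fmean(samples)
median_months = statistics.median(samples)
share_below_mean = sum(s < mean_months for s in samples) / len(samples)

print(f"mean   ≈ {mean_months:.1f} months")
print(f"median ≈ {median_months:.1f} months")
print(f"share of teachers below the mean ≈ {share_below_mean:.0%}")
```

Under these made-up parameters the median comes out around 10.5 months and roughly two thirds of teachers sit below the 12-month mean, which is exactly the absurdity: taken literally, Wiliam’s figures quietly write off more than half the profession as below “a year’s worth of learning”.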


Even if the maximum difference between the best and the average teaching is 18 months, the actual variance among average teachers, and the variance within the vast majority (1, 2 or 3 standard deviations from the mean), would be much, much smaller, and, just like Hattie’s 30% variance, extremely overstated for the majority of teachers and students. Even if Hattie’s and Wiliam’s figures are correct, the point of their message must be that this difference does not occur regularly in our classrooms; rather, it is an extremely rare exception.


The cumulative effect of Hattie, Wiliam and others suggesting that these rare and extreme examples of teacher quality variance are in fact common occurrences is that teacher quality is viewed as a much bigger problem than it is. Yes, if their figures are accurate, it should be a concern that roughly 0.15% of teachers impact student learning outcomes much less than others (probably measured solely through test scores), but it is a rare problem rather than the systemic problem many people believe it to be, and it should be seen and treated as such. It should also be recognised that variance in professional quality is not unique to teaching; it occurs with the same distribution and to the same degree in every profession.

Atul Gawande posed the question “What happens when patients find out how good their doctors really are?” in his 2004 article The Bell Curve. Gawande describes the efforts of 117 cystic fibrosis clinics across the US over the last 60 years. We’d like to think that when we go to hospital we would get the same quality of care and have the same expected outcomes regardless of which hospital we attend and which doctor attends to us. Yet Gawande tells us that isn’t the case at all; there are good doctors and bad doctors, with the best hospitals in 1997 reporting life spans 16 years above the average for cystic fibrosis patients! Gawande points us to the bell curve and reports that the vast majority of doctors and hospitals, however, are average, and their patients have much shorter life expectancies.

Gawande then explores how hospitals reacted to the news that the care they were providing was average, and the efforts they made to lift the quality of the average majority. While there have been substantial improvements, Gawande insists that the bell curve remains, and will always remain, and that the difference in life expectancy for cystic fibrosis patients (and patients with other life-threatening illnesses) will always depend on the quality of the care they receive.

Gawande finishes his article by examining himself as a surgeon. What if he found out that he was just average, or worse? For Gawande, however, the problem of being average isn’t as big as settling for being average, something he suggests he would rather quit surgery than do.

So do the doctors and hospitals who provide average-quality care for cystic fibrosis patients at their clinics want to improve? One would hope so, but simply identifying them as average doesn’t mean they are happy being average; there is no evidence to suggest this. The bell curve in itself does not and cannot distinguish those who want to improve from those who don’t. In fact, Gawande points to patients who chose to stay with their average doctors because of the relationships of care built up over a number of years.

It is distressing for teachers to acknowledge the bell curve. After all, we all want to view ourselves as good teachers as opposed to average ones, but a realistic understanding of how skills and knowledge are distributed across a cohort forces us to face this unwelcome truth.

Of course it would be easier if we could actually measure teacher quality, which would allow us to measure, identify and quantify good, bad and average teachers. The problem is that we don’t have a universal way of understanding teacher quality; while various groups have tried, they haven’t done a good job of it. The previous government here in Victoria unsuccessfully tried to implement a system where school principals would rate their teachers from 1 to 5 in order to identify 20 to 40% of them as underperforming and ineligible for pay progression. Clearly those suggesting this system believed that teacher quality in Victorian schools didn’t fit a normal distribution but rather a negatively skewed one. Of course, the only reason they had for this was budgetary.

This is where the toxic nature of talking about good and bad teachers is revealed. After all, which matters more: the actual distribution of teacher quality, or what people believe the distribution looks like? What happens when a myth is propagated that teacher quality doesn’t fit a bell curve but rather a negatively skewed distribution?

Furthermore, in the absence of appropriate data we do what most people do: we assume that we are a good teacher and therefore that we are the definition of a good teacher. And if we’re not teachers ourselves, we base our view of good teachers on the teachers we had when we were at school. It’s almost as if we say, “I might not know what teacher quality is, but I know a great teacher when I see one.” Which might sound reasonable… but in reality these ideas rest on an incredibly narrow view of what a teacher is, and quickly descend into discrimination and teacher bashing.

Discrimination and teacher bashing? How?

Well, some people believe that to be a mythical great teacher you need to be a highly passionate, caring teacher. In this narrative great teachers are in the mould of Miss Honey from Roald Dahl’s “Matilda”, with a rare gift to inspire and connect with their students. These people point to inspirational teachers who taught them when they were at school, or to the inspirational teacher they believe themselves to be.

This narrow understanding of teacher quality creates unrealistic expectations; it really is impossible for every teacher, in most schools, to have an amazing rapport with each and every student. As a result, quality teaching comes to mean a teacher who displays their passion by working long hours and making teaching their only real priority. Someone who is always positive and never has a bad day!

We’d all like every teacher to be passionate about teaching, but discrimination happens when we expect every teacher to think only about teaching and to be willing to put in every hour they can. Single parents, and others who for a range of reasons are unable or unwilling to devote every waking hour to teaching, are quickly labelled as bad teachers who should be moved on, overlooked for promotion or discriminated against in other ways. People with problems in their personal lives, or who suffer from medical conditions, might not always project the image of the inspirational teacher, and when we’re on the hunt for bad teachers these people can soon be in our sights…

Others believe that to be a mythical great teacher you need high-level knowledge and skills. A good teacher is so much smarter and more knowledgeable than a bad teacher. Pretty soon, though, we’re lining up those teachers we don’t think are knowledgeable enough and moving them on. Tests have recently been proposed here in Australia to check that new teachers are literate enough to teach, despite their having passed their teaching degree and all their school teaching placements.

Older teachers who are not up with technology might be the first to go. Next might be women who have taken maternity leave and have a big gap in their experience, or who are not able to (in our eyes) balance family and work. Next might be those who aren’t on Twitter day and night, or attending professional conferences whenever they can to keep their skills up to date.

We’re all too quick to blame and label those teachers who aren’t just like us. Rather than celebrating diversity and considering what it might offer our students and our education system, we see diversity as undesirable. We see diversity as different from good, we blame people for not being exactly like our picture of an ideal teacher, and we make erroneous assumptions about them.

Again, this is something that Dylan Wiliam gets really wrong when he says, “if we create a culture where every teacher believes they need to improve, not because they are not good enough but because they can be even better, there is no limit to what we can achieve.” While this might sound reasonable, sort of, where is Wiliam’s evidence that every teacher doesn’t currently want to improve? My confident guess is that teachers’ desire to improve is also distributed as a bell curve, and that Wiliam’s implication that many teachers don’t want to improve is misguided and overstated.

Wiliam’s attempt to link a teacher’s desire to improve to the variance in teacher quality is also false. You cannot overcome the bell curve by wishing it away, any more than every golfer could play as well as a professional if only they wanted to improve! It is silly. And where does Wiliam’s faith in limitless potential derive from? Surely finding better approaches to learning and teaching is where limitless potential might be found, such as via new pedagogical approaches afforded by modern technologies?

But those who talk about good and bad teachers don’t want to find new pedagogical approaches; they’re happy with the system we’ve got. And shame on anyone who can’t be a good teacher in their system and can’t reap good results using their approaches. According to these experts, it’s not the bell curve that’s the problem, it is the teachers themselves.

Not only does this lead to discrimination, with anyone who doesn’t fit the mould being labelled a bad teacher; it also leads to us not focussing on what could actually improve student learning outcomes. While we try to narrow the quality gap, whether it be Hattie’s 30% or Wiliam’s year and a half, we’re not asking why all teachers can’t successfully teach in Hattie and Wiliam’s systems. We’re not looking for pedagogical approaches (constructivism anyone? inquiry anyone?) that might be less susceptible to variance in teacher quality.

Consider the Measures of Effective Teaching (MET) project, whose goal is to identify effective teaching. I’m still at a loss as to why you wouldn’t just use test scores as a predictor of future test scores, unless of course you’re trying to pretend that student learning isn’t just about test scores. Of course, if you pretend that you can measure effective teaching beyond test scores, you can then appear to agree that learning isn’t just about test scores, which I guess is why MET suggests approaches that weight test scores at somewhere between 33% and 50%…

In order for every student to achieve success we need learning and teaching approaches that are suitable for average teachers. We need to recognise that the education of our students is far more than test scores. That is the first step, and until we’ve taken it we need to lay off teacher quality. If Hattie, Wiliam and others do believe that education is all about test scores, then they need to be honest and upfront about that before we start labelling teachers as good and bad.

How many good and bad teachers there actually are matters a lot. Take for example the report Great Teaching, Inspired Learning: What does the evidence tell us about effective teaching?, where the authors say: “Modelling by the US economist Eric Hanushek estimates that if a student had a good teacher as opposed to an average teacher for five years in a row, the effect would be sufficient to close the average performance gap associated with low-socioeconomic status.”

But how likely is it that a student has a good teacher, as opposed to an average teacher, five years in a row? If we want the results that Hattie and Wiliam suggest the best of the best teachers can achieve, then we’re looking at teachers above the third standard deviation, roughly 0.15% of teachers. How likely is it that a student would have these teachers for five years in a row? We can work this out by multiplying 0.0015 (0.15% as a proportion) by itself five times:

0.0015 × 0.0015 × 0.0015 × 0.0015 × 0.0015 ≈ 0.00000000000076%

This is so unlikely you wonder why Hanushek would even bother suggesting this.

If we believe that teacher quality fits a normal distribution, how many standard deviations are we going to use to identify good teachers; that is, where do we set the bar? Say we set the middle 68% as average (one standard deviation), which means the top 16% are good teachers. How likely is a student to have a good teacher five years in a row? About 0.01% (0.16 multiplied by itself five times). Alternatively, if we believe that 80% of teachers are good, then only about a third of students (0.8⁵ ≈ 33%) will have a good teacher five years in a row. And where do Hattie, Wiliam and their peers set the bar? What tolerance of teacher quality do their pedagogical approaches work within?
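The arithmetic for these scenarios is quick to check. A minimal sketch, which assumes (as the argument above does) that which teacher a student gets each year is an independent draw, for three different places we might set the bar for “good”:

```python
# Probability of drawing a "good" teacher five years in a row, assuming
# each year's teacher is an independent draw. The three cut-offs are the
# scenarios discussed above.
scenarios = {
    "top 0.15% (beyond 3 standard deviations)": 0.0015,
    "top 16% (beyond 1 standard deviation)": 0.16,
    "top 80% ('most teachers are good')": 0.80,
}

for label, p in scenarios.items():
    five_in_a_row = p ** 5
    print(f"{label}: {five_in_a_row * 100:.2g}% of students")
```

The middle scenario gives about 0.01% of students, and even the very generous 80% bar leaves only about a third of students with a “good” teacher five years running.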

We have two choices. First, we follow the path of Hattie, Wiliam and their peers, who think our pedagogical approaches are set in stone and appropriate and that teacher variance is the problem. Or we can decide that teacher variance shouldn’t impact student learning, and that our pedagogical approaches should instead ensure all students equally experience learning success. Make no mistake, a focus on teacher quality is incompatible with a focus on pedagogical innovation and improvement, and conversely a focus on pedagogical innovation and improvement is incompatible with a focus on teacher quality. We need to choose which focus we believe offers the bigger gains in student learning and equity.

I believe that we need to find, and that we can find, learning and teaching approaches that work for almost all (99.85% of) teachers. If we can find pedagogical approaches that work for 99.85% of teachers, then 99.25% of students will have access to exemplary learning experiences five years in a row. This will result not only in better learning outcomes but also in a system that is more inclusive, equitable and diverse.
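The 99.25% figure follows directly from the same five-independent-years assumption:

```python
# If an approach works for 99.85% of teachers, the share of students who
# get a teacher it works for five years running is 0.9985 raised to the
# fifth power (assuming independent draws each year).
teachers_covered = 0.9985
students_covered_five_years = teachers_covered ** 5
print(f"{students_covered_five_years:.2%}")  # 99.25%
```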

Improving our pedagogical approaches so that they work effectively for all students and teachers is a complex task, and one that we won’t be able to solve while we continue to apportion blame to bad teachers.

For me, the choice is clear. We need to stop speaking about good and bad teachers, stop worrying about teacher variance, and instead focus on what might actually make a difference in the lives of our students: developing higher-quality learning and teaching approaches that are not limited by the variance in teacher quality.


Footnote: By the way, the same bell curve applies to good and bad school leaders. Sure, there may be a tiny few great school leaders, and a tiny few terrible ones, but most of them are just part of the average majority… as for doctors, public servants, politicians, car drivers, golfers, singers, …


Update: Feedback from Andrew Worsnop suggests that I’m misusing Hattie’s 30% figure, so I’ve expanded the section on Wiliam’s figures, which makes the same point. I haven’t changed my writing on Hattie’s 30%, though, as I’m not sure that I agree with Andrew that I am misconstruing what Hattie is saying about the 30% variance in teacher quality/impact.


Image credit:  A visual representation of the Empirical (68-95-99.7) Rule based on the normal distribution. Creative Commons Attribution-Share Alike 4.0 International license.

What can Visible Learning effect sizes tell us about inquiry-based learning? Nothing.

Time to read: 8 minutes

I haven’t read Visible Learning; I’ve only skimmed through it a couple of times, largely because the book isn’t aimed at me. Visible Learning and its effect sizes are only useful for informing learning and teaching in highly instructional settings, and I’m not interested in highly instructional settings. I think they’re a poor substitute for authentic learning; I think students would be much better served by inquiry-based learning and teaching approaches. If Visible Learning were about that, I’d read it. It isn’t, so I haven’t and I won’t.

Which is why I was very disappointed to read Dan Haesler’s reporting on his interview with John Hattie, here, here, and here, where John reportedly critiques student-centred learning, inquiry, 21st century skills, and constructivism. Except for constructivism, where a poor paper is cited as evidence (I’ll explain why the paper is extremely poor later), nothing quoted suggests to me that the claims John makes come from anywhere other than Visible Learning’s meta-analysis. These statements, about inquiry-based learning and 21st century skills (though it isn’t my favourite term), have compelled me to write this post to challenge their validity.

Statements like this from John deeply worry me:

“We have a whole rhetoric about discovery learning, constructivism, and learning styles that has got zero evidence for them anywhere.” Note, I’m not defending learning styles!

…and this next statement is worrying as well…

 “I’m just finishing a synthesis on learning strategies, it’s not as big [as others he’s done] there’s only about 15 – 20 million kids in the sample, and one of the things that I’ve learnt from the learning strategies, and a lot of them include the 21st Century skill strategies is that there’s a dirty secret.” 

If the synthesis John is speaking about uses the same approach as Visible Learning’s meta-analysis, then the dirty little secret is that the research is invalid and can’t be trusted.

There has been a bit of talk about the maths in Visible Learning; to me this is a distraction from the real problem with the book. You only have to get to the second page of the preface to find the first huge problem, as we read…

“This is not a book about qualitative studies. It only includes studies that include basic statistics (means, variances, sample sizes.) Again this is not to suggest that qualitative studies are not important or powerful just that I have had to draw lines around what can be accomplished in a 15 year writing span.” Visible Learning (preface xi)

The next section outlines an even bigger, insurmountable problem with Visible learning’s design and findings…

“It is not a book about criticism of research, I have deliberately not included much about moderators of research findings based on research attributes (quality of study, nature of design) again not because they are unimportant (my expertise is measurement and research design), but because they have been dealt with elsewhere by others.” Visible Learning (preface xi)

If you’re not interested in instructional teaching approaches and you reach a passage similar to either of the two above, my advice would be to put the book down and walk away, and to advise others to do the same.

These two decisions, to omit qualitative studies and not to require the study design to match the object of study, render the book’s findings virtually useless. Firstly, they restrict the definition of impact to numerical results (which almost always means test scores), and secondly they allow these numbers (test scores) to measure the impact of things that may not even be seeking that impact. In short, test scores are being used to measure things that may not be measurable meaningfully with test scores. Furthermore, other research that measures these same things against their actual claims is disregarded. In Visible Learning all that matters is (presumably badly designed, more on that later) test scores; in classrooms we know that is simply not true. Visible Learning’s design makes it incompatible with a large number of learning theories, approaches and strategies, yet unfortunately it doesn’t admit this, and it still calculates an effect size for them.
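For context, the “basic statistics (means, variances, sample sizes)” the preface mentions are exactly the ingredients of a standardised effect size such as Cohen’s d. A sketch of that standard formula (the numbers are hypothetical, and this is the generic calculation, not anything specific to Hattie’s synthesis):

```python
import math

def cohens_d(mean_a, var_a, n_a, mean_b, var_b, n_b):
    """Standardised mean difference (Cohen's d) between two groups,
    using the pooled standard deviation."""
    pooled_var = ((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)
    return (mean_a - mean_b) / math.sqrt(pooled_var)

# Hypothetical example: an intervention group scoring 75 on some test
# (variance 100, n=30) versus a control group scoring 70 (variance 100,
# n=30) gives an effect size of 0.5.
print(round(cohens_d(75, 100, 30, 70, 100, 30), 2))  # 0.5
```

Notice that nothing in this calculation knows or cares whether the test was relevant to what the approach under study was actually trying to achieve; the formula will happily produce an effect size either way.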

In Visible Learning’s world of research,the impact of inquiry-based approaches can be measured by how well a student does on a badly designed and irrelevant test. Does that mean the impact of 21st century skills can be measured by an irrelevant test? Does that mean that the impact of constructivism can be measured against an irrelevant test? It is as if that all that matters is the test.

Visible Learning's study design chooses not to require the object of study (eg inquiry-based learning, 21st century skills) to be evaluated against its own benefits. Instead it allows inquiry-based learning, 21st century skills and whatever else to be tested against what the researcher deems important, say for example the ability to pass a maths test. Furthermore, the requirement that studies produce a number surely favours studies that align non-instructional approaches with instructional outcomes. Visible Learning then omits the surely better designed studies of non-instructional approaches by excluding qualitative research. The end result is that, for non-instructional approaches, Visible Learning must be omitting well designed research while including poorly designed research.

To see that these questions aren't just hypotheticals, let's look at an example where these two failings, 1) the reliance on numbers and an irrelevant test, and 2) a misalignment between study design and study object, cause meaningless results that are then touted as evidence. Why don't we briefly look at the paper that John himself purportedly suggests is a major investigation into constructivism: Should There Be a Three-Strikes Rule Against Pure Discovery Learning? by Richard Mayer.

Let’s specifically look at his third strike, supposedly taking down Seymour Papert’s vision of discovery learning; of course we’ll conveniently sidestep Mayer’s wrong assertion that Papert promoted constructivism when in fact he promoted constructionism. Does Mayer examine all of Papert’s and the MIT kindergarten group’s studies and seek to replicate them? Of course not; instead he makes the same design mistake that Visible Learning makes, trying to apply an instructional theoretical research approach to constructionism/constructivism/discovery learning. Mayer refers to the findings of two similarly deeply flawed studies, studies that seek to test what the researchers think the students should know against what they actually have learned. Actually that’s not true, the two studies do not seek to find out what the students learned at all; that would have been a better study…

The first study, Kurland & Pea (1985), provides no rationale that its fundamental programming concepts are indeed fundamental. Where is the authors’ explanation that the impact of constructivism/constructionism/discovery learning can be accurately measured by testing these fundamental concepts? The study is flawed because it wrongly judges the worth of the approach against a test that has nothing to do with the worth of the approach.

Where is it asserted, anyway, that LOGO is designed to teach a predefined set of fundamental programming concepts? Absolutely nowhere, the authors just made it up. They’ve used a badly designed test of students’ knowledge of recursion! Why recursion? Simply because the authors think it is important, not because the purpose of LOGO is to teach recursion (spoiler: it is not) or because the purpose of discovery learning is to learn recursion (again, spoiler: it is not). If using a test to evaluate LOGO and discovery learning wasn’t bad enough, they’ve made it even worse by limiting the test to recursion. Sure, if you want students to learn recursion quickly, do some research into that, but don’t extrapolate or misconstrue the results to make unfounded claims against LOGO and discovery learning.

A proper study would assess impact based on the overt goals of constructivism/constructionism/discovery learning, but this study didn’t; instead it wrongly measured impact against a measure suitable for instructional approaches, using a very bad test that produced very bad numbers.

The design of the second study, Fay and Mayer (1994), is even worse, and its findings should be believed even less. The researchers taught programming using two approaches but they only tested using one approach (bonus points if you guess which approach was used to measure impact).

Now there’s a radical idea, someone quick do research into the effectiveness of direct instruction to deliver on the promise of constructivism!

The third study, Lee & Thompson (1997), is a thesis and too long for me to be bothered reading, yet a flip through the first twenty pages doesn’t fill me with optimism that the research uses constructivism/constructionism/discovery learning to understand and assess the impact of constructivism/constructionism/discovery learning. In fact it again seems like a study using a guided instruction design to compare guided instruction with constructionism. One could guess that the study might be comparing the approach of LOGO with an approach of LOGO plus a worksheet… by asking the students to complete a worksheet!


The Mayer paper that John cites, and its three examples, show why the impact of constructionism and LOGO can only be adequately understood through the theoretical lens of constructionism. Anything else does a huge disservice to Seymour Papert, constructionism and educational research. To reduce LOGO to “discovery learning” and believe that its value can be assessed by testing students on a specific, externally defined knowledge of recursion is both ridiculous and poor research. I don’t know whether these particular studies contributed to the Visible Learning effect sizes, but I’m sure many studies like them did, studies which supposedly prove that LOGO, constructivism, constructionism, discovery learning, and everything else outside of instruction don’t work as well as instruction.

If Visible Learning’s effect sizes did take study design into account, they would not be open to these errors and would be more believable. They don’t, and therefore I cannot see how anyone can have any confidence in them.

Visible Learning and its effect sizes probably adequately report on the impact of instructional approaches, but they cannot possibly adequately, or in good faith, report on the effect on student learning of using LOGO, constructivism, constructionism, discovery learning, or anything else based on a learning theory that is not built on instruction. If you want to know whether research finds evidence for any of these things, go and find research that uses the underlying theory as its lens of understanding; there is lots out there. Unfortunately this well designed and trustworthy research isn’t included in Visible Learning because it is qualitative research (it has to be, by the nature of the object of research), and is therefore excluded from Visible Learning’s meta-analyses and does not contribute to its effect sizes.

The same is true for John’s claim that his forthcoming meta-analysis illuminates the “dirty little secret” that 21st century skills don’t work. If, in this forthcoming study, 21st century skills are not evaluated against the purpose of 21st century skills, then the study is flawed and should not be trusted. If the forthcoming study uses the same study design as Visible Learning then it too will be deeply flawed, and John’s claims about 21st century skills should be ignored.


Finally, just in case you’re not convinced by my critique of these research papers and the importance of using the theory of learning to measure the impact of the theory of learning, let’s examine a more everyday critique of inquiry-based learning. I’m not sure if it was the author, the sub-editor or John Fleming the interviewee who came up with the opening paragraphs of the article Schools cool to direct instruction as teachers ‘choose their own adventure‘, but they are gold…

“WHEN teaching your four-year-old to tie their shoelaces, do you give them four pairs of shoes and tell them to try different techniques until they work it out? Or do you sit down and show them how to do it: make the bunny ears and tie the bow, watch while they try it, lending a helping finger if required, and then let them practise until they can do it on their own?

The two approaches illustrate different teaching styles used in classrooms. The first describes a constructivist method in which a child “constructs” their own understanding through discovery or activities, also referred to as student-centred learning.”

Of course, it is absurd that any rational person, let alone a teacher, wouldn’t use direct instruction to teach a child to tie their shoelaces. Unless of course you’re this guy, and you might point out that most people tie their shoelaces incorrectly.


Now, if your assessment of a child’s ability to tie their shoelaces is whether they can replicate the adult’s (wrong) method, say by testing them, then our use of direct instruction is working wonderfully well. If your assessment of the success of direct instruction is based on how many kids are running around the school grounds with their laces undone, or how often they have to stop and retie them, then maybe direct instruction isn’t doing so well.

If direct instruction cannot even teach children to tie their shoelaces correctly, what can it possibly be trusted to get right? If direct instruction fails to make it obvious that the teacher is teaching shoelace-tying incorrectly, what else are teachers using direct instruction getting wrong? In maths? In English? In everything?


Would a better study design show a more accurate effect size for direct instruction? A study design that looked beyond a simple numbered value produced by a test? For us to have any faith in academic research, I’d like to believe yes.


What if kids did use an inquiry approach to learn to tie their shoelaces? What if we did as the above article suggests and gave kids four pairs of shoes and asked them to work out the best method?

My bet is that we’d see benefits beyond everyone actually being able to tie their shoelaces correctly. Maybe we’d see students who were less accepting that there is a single right way? Maybe we’d see kids less inclined to believe that when something goes wrong it’s because they hadn’t followed the proper process accurately? Maybe we’d see kids being more critical consumers of the purportedly correct information they’re presented with?

Maybe we’d see a whole range of things… but while we continue to use instructional measures (predefined narrow tests) to measure impact we’ll never know.


Note: I have updated this post for clarity since publishing, and I will probably make further updates over the next couple of weeks, as I receive feedback.

Pre-service Teachers and the Terrible, Horrible, No Good, Very Bad Test

Time to read: 3 minutes

Yesterday, the Teacher Education Ministerial Advisory Group released a new report on pre-service teacher education titled Action Now: Classroom Ready Teachers. For mine, this is a very flawed report, mainly because it is based largely on assumptions and biases that are not disclosed, its findings largely benefit the authors of the report, and it lays the blame squarely on others. The authors do a very good job of pretending that this is a fair and balanced report, though, as they continually claim that their recommendations are all evidence-based. So much so that the word evidence is used 146 times in the whole document! While the report authors do provide a definition of “evidence-based teaching practice” in the glossary (interestingly this complete term is only used twice, one of those being a heading), the authors do not at any stage define what good and bad evidence is, despite their reliance on it.

If only there was a way that we could infer what the authors may view as good (or even acceptable) evidence. I could propose my own definition of good evidence and then check whether each of their uses meets my criteria, but I think there is possibly a better (and much quicker) way. Take for example Recommendation 13: “Higher education providers use the national literacy and numeracy test to demonstrate that all pre-service teachers are within the top 30 per cent of the population in personal literacy and numeracy.” A recommendation that, according to the explicit goals of the report, has “been chosen on the basis that they are practical, based in evidence and calculated to succeed.”

Let’s examine Recommendation 13 against the report’s own criteria…

… they are practical…

In the scheme of things, making universities administer a test seems reasonably practical; there might even be a couple of research papers in it! Of course, no evidence is given that a single current or future pre-service teacher wouldn’t test in the top 30 per cent, so it could be argued that administering a test that is passed 100% of the time is practical.

So, is Recommendation 13 practical? I’ll give this criterion half a mark.

Talking about evidence leads us to the next criterion.

… based in evidence…

Unfortunately Recommendation 13 doesn’t look as promising when we apply their based in evidence criterion. There is no evidence in the report (or that I could find elsewhere) supporting the notion that teachers need to be in the top 30 per cent in order to be classroom ready. I’m not sure you’d find many people who would argue that teachers don’t need to be literate and numerate, but where is the evidence for using the top 30 per cent as a benchmark? Also, there is no evidence that teachers currently teaching in our schools who are deemed to be high quality are in the top 30 per cent. Surely the first piece of evidence on which this recommendation would be based is a strong correlation between teacher quality and scores in the proposed literacy and numeracy test.
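For what it’s worth, “top 30 per cent of the population” is simply a percentile-rank check against a population distribution. A minimal sketch (the function names and scores below are invented for illustration; this is not the actual test’s scoring method):

```python
def percentile_rank(score, population):
    """Fraction of the population scoring at or below `score`."""
    at_or_below = sum(1 for s in population if s <= score)
    return at_or_below / len(population)

def in_top_30(score, population):
    # "Top 30 per cent" means a percentile rank of at least 0.70.
    return percentile_rank(score, population) >= 0.70

population = list(range(1, 101))   # invented scores, 1..100
print(in_top_30(71, population))   # True  (percentile rank 0.71)
print(in_top_30(69, population))   # False (percentile rank 0.69)
```

Note that the threshold itself (0.70) is exactly the arbitrary input the report never justifies with evidence.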

So, is Recommendation 13 based on evidence? Sadly I’d have to say no, zero marks.

… calculated to succeed…

Given the lack of evidence for the test in the first place, predicting its success is obviously difficult. I assume that success wouldn’t be measured by higher education’s ability to administer a test; in that case it would be a no-brainer to agree that the recommendation would succeed. Yet I don’t think that is how success would or should be measured! How else could success be measured? Success could be better measured by pre-service teachers who are in the top 30 per cent at the time of taking the test remaining in the top 30 per cent over time. Again, I can’t see how a one-off test could possibly guarantee or predict that a teacher will remain in the top 30 per cent of the population in literacy and numeracy over their entire career, especially given the short-term knowledge that these types of tests measure.

Sorry, but for Recommendation 13, based on the calculated to succeed criterion, again I can’t give any marks.

My total score for Recommendation 13: “Higher education providers use the national literacy and numeracy test to demonstrate that all pre-service teachers are within the top 30 per cent of the population in personal literacy and numeracy.” Based on their own criteria of “is it practical, is it based on evidence, and is it calculated to succeed”, I can only give it half a mark out of three. It is fair to say that the authors’ understanding of evidence falls far short of my understanding of good evidence.


Paradoxically, of course, there is also no evidence that pre-service teachers who might not test as being in the top 30 per cent at the time of graduation would not later test as being in the top 30 per cent. Given the focus and time teachers spend on literacy and numeracy based work, I assume that all teachers would perform well on this sort of test. This highlights the most terrible, horrible, no good, very bad part of this recommendation: beyond failing its own criteria, it displays a fundamental lack of understanding of how people learn. Teachers will increase their proficiency in literacy and numeracy as they teach, because they engage in a wider range of authentic activities that rely heavily on the fundamentals of literacy and numeracy. A single test can only test the immediate past and cannot possibly give any insight into the future, and understanding a person’s development in terms of a ranking against peers is even worse.


However you look at Recommendation 13, using the report’s own criteria, or what we all know about learning and teaching, this is a terrible, horrible, no good, very bad test.

Teacher Quality and the Purpose of School

Time to read: 1 minute

Disclaimer: This post is not intended to be a deconstruction of the purpose of school and the three roles of schooling, I’ll save that post for another time!

To round out this series of posts on teacher quality I thought I should address the final elephant in the room that those who speak of good and bad teachers need to acknowledge… the purpose of school. If you haven’t read the other posts you may wish to read Who are the Best Teachers?, Assessing Teacher Development (or anything really) and A Better Understanding of Teacher Quality.

Those who bash teachers and talk about teacher quality like to pretend that schools have a singular, universal purpose: that a school’s sole purpose is to prepare students for their professional lives by teaching them what they need to know, and how to think.

Yet when we reflect on the purpose of school without thinking about teacher quality, we realise that the purpose of school isn’t so narrow. Sure, preparing students for the workforce is a major role, but so is developing the social aspects of our students: developing in them a moral compass, an appreciation of culture and history, and the ability to build constructive and meaningful relationships. Students engage in leadership programs, work and play in teams, and participate in many other activities and programs for the sole purpose of social and emotional development. There is also a third aspect to the purpose of schools: to develop students as unique individuals. Yes, we believe we need to teach specific forms of knowledge and ways of thinking; yes, we believe that we need to develop the student socially; but we also believe that students are individuals with unique talents and aspirations. As such, as teachers we provide opportunities for student choice. We highlight that there are multiple paths rather than a single preferred path to tackle a problem. We encourage students to reflect on and assess their own learning. We have student-led conferences rather than teacher-directed reporting.

Yet when experts, school leaders, and teachers talk about teacher quality, they almost always ignore the three facets of the purpose of schooling and focus solely on the teaching of content, skills, and ways of thinking. This is because assessment focusses solely on this singular aspect of schooling, while in reality the other two roles of schooling might be held in high regard by a teacher, the school, and the wider school community. Making pronouncements about the quality of a teacher without understanding how the teacher and their school view the purpose of their school is completely unfair, bordering on immoral. When talking about teacher quality in this way, we devalue the other two roles of school as well as devaluing teachers and teaching.

When we talk about teacher quality we can only do so ethically in light of what they and their school community believe about the purpose of school, and the weighting of the three roles of school. If we don’t properly understand the purpose of a specific school we cannot possibly determine whether an individual teacher is “good or bad.”

A better understanding of quality teaching

Time to read: 2 minutes

Disclaimer: This post is not intended to be an explanation of developmental stages of learning to read, I’m sure many others cover that ground…

I’ve written about how most of the rhetoric around teacher quality is unhelpful and punitive, so I thought it was about time I offered a suggestion of a better way to understand teacher development.

Here in Victoria, Australia, we use VELS (Victorian Essential Learning Standards) to guide curriculum and assess student learning, or maybe it is now called AUSVELS thanks to the Australian Curriculum, but anyway. VELS and AUSVELS divide (student) development into levels. The problem is that the names they’ve applied to these levels are meaningless (Level 1, Level 2, …)

Think about students learning to read, in VELS levels 1 and 2. A much better alternative to calling these Level 1 and Level 2 would be to give the stages descriptive names, because that is how learning to read, or developing as a reader, is better understood. Developmental stages go something like non-reader, beginning reader, and then independent reader. Naming the developmental stages as such would much better clarify the developmental process of reading. These labels make sense because they are descriptive: we know what kids do at each of the stages. A non-reader can’t tell which way up the book is meant to be. A beginning reader makes up their own story. An independent reader engages with the writing. We could build out a much richer picture of what each of these stages looks like, but the point is that we could watch a student reading (or attempting to read) a book and get a pretty good idea of which stage of development (non-reader, beginning reader, or independent reader) they are at. We can’t say the same about VELS Level 1 and VELS Level 2; they don’t have the same rich meaning. VELS levels also make learning to read a jumble of independent skills and concepts that need to be mastered, rather than seeing development as a transition from one stage to the next.

There are many more implications of understanding which stage of development a learner/reader is at but this is not the point of this post….

So what about teacher development? Are there similar developmental stages that teachers progress through? If so, what are they? And please don’t suggest Beginning or Graduate Teacher, Expert Teacher, Leading Teacher as developmental stages, as these titles are also meaningless as far as development is concerned. I don’t really have an opinion about what the stages of teacher development are, so unfortunately this is where I’ll leave this discussion, but the point is that if we’re going to talk about quality teaching then we need to understand the stages of teacher development. These stages need to be described descriptively, with the characteristics of each stage being unique. Think of the characteristics of each of the stages of reading development: understanding how a book is technically constructed doesn’t apply as a descriptor at every stage, although it certainly does at the beginning reader stage. So please, don’t suggest that teacher/leader quality frameworks that simply list similar but increasingly complex statements are useful for understanding the stages of teacher development.

One last thing…

Think about the rhetoric around teacher quality. When people say “we want every teacher to teach like the best”, this is punitive, intending to separate the best from the rest. We don’t say we want every student to read like the best, or talk about quality reading! We don’t say we want every student to read like Lily (obviously she is the best reader in the class); instead we say we want every student to be an independent reader. Maybe if we had a better understanding of the stages of teacher development we would see the end of teacher bashing…

Assessing teacher development (or anything really)

Time to read: 3 minutes

I was pointed to “Licensed to Create: Ten essays on improving teacher quality,” as part of the current conversation on twitter about teacher quality and teacher bashing. According to the website,  “Licensed to Create is a collection of essays from some of the leading thinkers in education. 11 authors offer their unique perspective from practice, policy and academia on how we can improve teacher quality.”

I haven’t read any of these essays, and I don’t intend to… here’s why.

A quick search finds the word evidence is used eighty-four times (admittedly this includes its use in the various references). I guess this is pretty good; after all, we don’t want to just rely on someone’s opinion, we want hard evidence, not conjecture. But wait a minute… on what is this evidence based? Let’s check. Whoops, the word theory is used a single time, and that single occurrence (p. 45) is not concerned with the theoretical underpinnings on which the numerous points of evidence are based. The words pedagogy, pedagogies, and pedagogical are used sixteen times combined; again, none of these uses illuminate the theoretical or pedagogical underpinnings of the frequently cited evidence. For all we know, the educational theory and pedagogical beliefs underpinning these ten essays were obtained from the fish and chip shop owner down the street! Although I do admit this is highly unlikely… I’ve spoken to her and she doesn’t think much of teachers!

My advice is that you’d be foolish to believe any educational evidence that doesn’t explicitly define and detail how it uses educational theory to validate that evidence. Take all ten of these essays with a big grain of salt. And when you read any other essays, blog posts, scholarly papers, or whatever, first assess the educational theory, and only if you’re satisfied that the theory provides a solid basis should you then assess the supposed evidence. If not, treat the writing as an opinion piece, despite what it tries to make you believe.


So how might educational theory be used to understand teacher quality? Let’s look at two possible (of many) approaches: instructivist theory and cultural historical theory.

Instructivist Theory

Instructivist theory suggests that learning is a change in the student’s long-term memory, enabled by the teacher providing concepts, procedures and strategies. This knowledge needs to be explicitly taught rather than discovered. If nothing changes in long-term memory, then nothing has changed and the student can be considered to have learnt nothing. Identifying learning is therefore relatively simple: we pre-test what the learner knows and then post-test after the learning sequence to assess what the learner now knows. The difference between the two shows what they have learnt.
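Under this instructivist view, measuring learning reduces to simple arithmetic. A minimal sketch, with invented scores:

```python
def learning_gain(pre_scores, post_scores):
    """Instructivist measure of learning: post-test minus pre-test, per student."""
    return [post - pre for pre, post in zip(pre_scores, post_scores)]

# Invented pre- and post-test scores for three students.
pre = [40, 55, 60]
post = [70, 65, 60]
gains = learning_gain(pre, post)
print(gains)  # [30, 10, 0]
# A gain of zero is read as "nothing learnt" -- so the measure is only
# ever as good as the test that produces the scores.
```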

Assessing teacher quality using an instructivist approach should therefore be pretty easy: we simply test their knowledge, or observe them teaching, inferring that what they do is what they know. With the instructivist approach it is also easy to tell who has stored the most learning in long-term memory; we could even call them the best, if we wanted to. We could also find the student who has stored the least in their long-term memory, and label them the worst, if we were that way inclined. And if a learning theory is not explicitly given by an author, it is a pretty good guess they are using instructivist theory… but they should be explicit about this.

Yet, there is an obvious problem with this view of learning… what if the test is wrong? What if we’ve misunderstood what is important? What if we’ve required student to learn the wrong things? The test is the weak point, and it is very fragile to say the least.

Cultural Historical Theory

Cultural historical theory suggests that a study of fragments of student development (as in the instructivist approach’s study of only long-term memory) doesn’t allow us to understand the development of the whole student. In fact, cultural historical theory suggests that understanding the process of development is far more important than understanding the results of development. This is because development occurs through a series of stages; consider, for example, the stages of physical development, with puberty being a particularly obvious stage we all go through.

When assessing teacher development using cultural historical theory, we study the teacher functions for which development has not yet started, the teacher functions that are currently developing, and finally the teacher functions that are fully developed. Cultural historical theory argues that development can only be understood by looking at the whole development cycle. Think of the gardener tending plants: looking only at the plants in full fruit, and ignoring the budding plants, fails to give an accurate picture of development.

In the same way, looking only at the fruits of a teacher’s development (eg what they currently do) does not give an accurate way of understanding teacher development and/or teacher quality. Further, comparing the stage of development of one teacher to another is futile. To suggest one teacher is better than another is like suggesting that a child entering puberty is better than a child who has not yet entered puberty. They are simply at different stages of development. (Obviously, we could explore whether a teacher is at the expected stage of development, and then seek to understand why not if that is the case.)



Obviously there are many other theoretical approaches the authors of the ten essays could have used to make their arguments about teacher quality. My hope is that this post has encouraged you to question the educational approaches and beliefs that underpin the evidence that teachers are exposed to. I also hope that educational experts and other authors take up the challenge of making explicit the educational theory that underpins their evidence.

Maybe it has also caused you to question whether an instructivist approach is capable of assessing teacher quality.

Who are the best teachers?

Time to read: 1 minute

Apparently, Maxine McKew said something like “How do we get more teachers teaching like the best?” at a Bastow seminar. I wasn’t there, but I assume something similar to what was reported was said, because experts say these sorts of things regularly.

This is a very poor question, and one that I suggested on Twitter was teacher bashing, but let’s for a minute consider that it was not…

Many argue that there is nothing wrong with this rhetoric: “Don’t you believe in or want teachers to improve?”, “I’ve met lots of teachers who have given up!”, “There is nothing wrong with having high expectations of teachers.” And so on.


So, if we want to get more teachers teaching like the best, it must be asked who are the best teachers and how are they teaching?

For me this is a no brainer. The best teachers are those using play or project-based learning approaches. I believe that these approaches are the most pedagogically suitable, I believe that these approaches lead to the most effective learning in students.

So naturally I must believe that the worst teachers are those not using play or project based approaches… except I don’t.


Because labelling teachers as the best suggests that they are fully in control of everything they are and everything they do as a teacher; that they must be able to pull themselves up by their bootstraps so that they too can be like the best teachers. Firstly, suggesting that teachers are free agents, able to use the most pedagogically appropriate approaches in their school, doesn’t account for the accountability pressure that teachers are under. Even if teachers were free agents, suggesting that teachers can somehow spontaneously learn (that play and project-based learning approaches are most pedagogically appropriate) despite their everyday experiences, their history as teachers and their school environment goes against everything we as educators know about how people learn and develop. Learning and development doesn’t happen independently and spontaneously; it needs the right environment. Pleading for teachers to just improve is useless, about as effective as pleading with them to spontaneously grow bunny ears!


Of course, you probably disagree with me about who the best teachers are… and you might even think that teaching like the best (your way) is easy. If so, I really think you're either underselling the role of teachers or dismissing the complexity of teaching.


So why do experts and others continue labelling teachers as bad teachers rather than defining what makes the best teachers the best? Who knows? But I wish they'd stop, because they are damaging the profession and making substantial changes in the way students learn in our schools all the less likely.

[I have edited the last sentence of this post to clarify that I am suggesting student learning needs to change, not the teaching profession.]

Plans and changes for 2014

I know it is not exactly the start of the year anymore, but it's about time that I publicly state my intentions for 2014.

Short story, I’ve started a new company and enrolled to do my PhD.


The longer story is that the company is the natural evolution of the work that I've been doing at ideasLAB and on my own for the last five years. It has become increasingly apparent that the major stumbling block to successfully re-imagining learning and teaching is our lack of pedagogical understanding. This has been proved to me over and over as I've been working on and presenting about the Principles of Modern Learning over the last year. This doesn't mean that I'm dropping the Principles though, far from it; for those waiting, the book will be out soon!

The company that I've founded is called Validated Learning, and its first, and flagship, product is the Modern Learning Canvas. The Modern Learning Canvas builds on the best ideas from the business and entrepreneurial world. Basically, the idea is this: if entrepreneurs can have a business model to describe, understand and design innovative businesses, why can't and don't educators have a learning model to describe learning and teaching? Why don't we have a framework for understanding, describing and designing for innovative learning and teaching? Well, now we do!

It enables teachers to move beyond a pedagogical understanding based around the lesson plan and the how and what of learning and teaching, to a pedagogical language and understanding of the why of learning and teaching. In doing so, it enables teachers, even those with completely different practices, to engage in meaningful pedagogical conversations.

The Modern Learning Canvas is still in closed beta, but it will be launched in a matter of weeks, so stay tuned! In the meantime, check out the canvas on the site and download the free Creative Commons printable version.

The second big change is my decision to pursue a PhD, something that I was certain I would never do. In September, the Modern Learning Canvas started working really well and I began writing a (soon to be released) white paper on it. At that time I began considering whether academic research would be beneficial for the development of the Modern Learning Canvas. In speaking to universities about it, I became more and more convinced that it was a natural and sensible decision. So, just a few weeks ago, I enrolled at Monash University. Initially I've signed up to do a Master of Research, but the plan is to switch over to the PhD program later in the year. (I couldn't enrol directly in the PhD program as I didn't have a research component in my original Master of Education!)

My supervisors are Dr Phillip Dawson and Professor Marilyn Fleer. So a big thank you to them for seeing merit in the idea!

Yes, a big year is on the cards…. stay tuned!

Why everyone should learn to code [eventually]

Will Richardson pointed to Bret Victor's highlighting of Seymour Papert's writing, in which Papert laments the misinterpretation of his book, Mindstorms, as a claim that programming in itself is worthwhile.

Spoiler alert: It is not. Programming without the purpose of exploring ideas is just learning to code. Nothing more, nothing less.

Case closed… I guess?

But what about reading? And writing?

Take the passages from Papert's "What's the Big Idea?" paper that Bret Victor has highlighted and replace the word "program" with the word "read" or "write." Is it any less true that reading does not "in itself have consequences for how children learn and think"? Is it any less true that writing does not "in itself have consequences for how children learn and think"?

In a lot of ways, whether programming in itself is necessary for students to learn is the wrong question. The bigger question is: what do students need to learn, period? If we're throwing things such as programming in and out of some supposed curriculum and an unspoken (for Bret) but crucial pedagogical framework, then we need to put everything on the table: reading, writing, programming, quadratic equations, surds, history, geography… everything.

So what do students need to learn?

Students need to learn what they need to learn, just like us. We need to learn what we need to learn when we have to solve real problems. Students are no different. Curriculum is designed to predict need. We can do better now. We no longer need to teach programming, reading, writing or anything else just in case. Having said that, over time, through a variety of experiences our students will most likely cover all the good bits of the old curriculum.

Of course, using those criteria, I doubt there is any debate that reading and writing would make the cut: quite early on, the ability to read and write would become necessary for whatever the students were exploring.

I’ll leave it to the mathematics, history and geography teachers to argue the rest of my list above…

Since 2000, when Seymour Papert wrote the quoted paper, the world has gone digital. In the space of 14 years, we (and our students) have come to deal with digital information and content continually. Programming gives us (and our students) powerful tools to play with, explore and make sense of this information. Whether it be a simple function to manipulate data in a spreadsheet, a script to scrape data from a website that thinks it "owns" our data, a website to communicate with the world, or many other cases, without the ability to program our students will be constrained, limited and frustrated.
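To make the spreadsheet example concrete, here is a minimal sketch in Python. Everything in it is hypothetical and purely illustrative: the class-scores data, the `scores_above` function and the threshold are my own inventions, just to show how a few lines of code let a student interrogate their own data rather than only read it.

```python
import csv
import io

# Hypothetical CSV export of class test scores -- illustrative data only.
raw = """name,score
Alice,82
Ben,67
Carla,91
"""

def scores_above(csv_text, threshold):
    """Return the names of students scoring above the given threshold."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [row["name"] for row in rows if int(row["score"]) > threshold]

print(scores_above(raw, 80))  # -> ['Alice', 'Carla']
```

The point isn't the code itself; it's that a student who can write something this small can ask their own questions of the data, instead of being limited to whatever the spreadsheet's menus allow.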

Authentic problem solving, ideation and play in our digital world require the ability to program. I can't see how any student can go through the schooling experience without being faced with that fact. If our students are working on meaningful problems and projects, if our students are seeking to understand their world and explore big ideas, programming in some shape or form is essential.



Note 1: Yes, I probably agree with Bret Victor's caricatures of the rich and famous' reasons for learning to code, but that doesn't mean I agree (and I'm not sure what Bret actually thinks) that students shouldn't learn to code.

Note 2: This is not meant to be a critique of Bret Victor, I love his ideas and work. Bret suggests that you read Mindstorms if you haven’t already, I second that.