Pre-service Teachers and the Terrible, Horrible, No Good, Very Bad Test

Time to read: 3 minutes

Yesterday, the Teacher Education Ministerial Advisory Group released a new report on pre-service teacher education titled Action Now: Classroom Ready Teachers. For mine, this is a very flawed report: it is based largely on assumptions and biases that are not disclosed, its findings largely benefit the authors of the report, and it lays the blame squarely on others. The authors do a very good job of pretending that this is a fair and balanced report, though, continually claiming that their recommendations are all evidence-based. So much so that the word evidence is used 146 times in the whole document! While the report’s authors do provide a definition of “evidence-based teaching practice” in the glossary (interestingly, this complete term is only used twice, one of which is a heading), at no stage do they define what good and bad evidence is, despite their reliance on it.

If only there were a way to infer what the authors view as good (or even acceptable) evidence. I could propose my own definition of good evidence and then check whether each of their uses meets my criteria, but I think there is a better (and much quicker) way. Take, for example, Recommendation 13: “Higher education providers use the national literacy and numeracy test to demonstrate that all pre-service teachers are within the top 30 per cent of the population in personal literacy and numeracy.” According to the explicit goals of the report, the recommendations have “been chosen on the basis that they are practical, based in evidence and calculated to succeed.”

Let’s examine Recommendation 13 against the report’s own criteria…

… they are practical…

In the scheme of things, making universities administer a test seems reasonably practical; there might even be a couple of research papers in it! Of course, there isn’t any evidence given that a single current or future pre-service teacher wouldn’t test in the top 30 per cent, so it could be argued that administering a test that is passed 100% of the time is practical.

So, is Recommendation 13 practical? I’ll give this criterion half a mark.

Talking about evidence leads us to the next criterion.

… based in evidence…

Unfortunately, Recommendation 13 doesn’t look as promising when we apply their based in evidence criterion. There is no evidence in the report (or that I could find elsewhere) supporting the notion that teachers need to be in the top 30 per cent in order to be classroom ready. I’m not sure you’d find many people who would argue that teachers don’t need to be literate and numerate, but where is the evidence for using the top 30 per cent as a benchmark? Nor is there any evidence that the teachers currently teaching in our schools who are deemed to be high quality are in the top 30 per cent. Surely this would be the first piece of evidence on which this recommendation would be based: a strong correlation between teacher quality and scores on the proposed literacy and numeracy test.

So, is Recommendation 13 based on evidence? Sadly, I’d have to say no. Zero marks.

… calculated to succeed…

Given the lack of evidence for the test in the first place, predicting its success is obviously difficult. I assume that success wouldn’t be measured by higher education’s ability to administer a test; in that case it would be a no-brainer to agree that the recommendation would succeed. Yet I don’t think that is how success would, or should, be measured! How else could success be measured? A better measure would be whether pre-service teachers in the top 30 per cent at the time of taking the test remain in the top 30 per cent over time. Again, I can’t see how such success could possibly be guaranteed: a one-off test cannot predict that a teacher will remain in the top 30 per cent of the population in literacy and numeracy over their entire career, especially given the short-term knowledge that these kinds of tests measure.

Sorry, but judged on the calculated to succeed criterion, Recommendation 13 again earns no marks from me.

My total score for Recommendation 13, “Higher education providers use the national literacy and numeracy test to demonstrate that all pre-service teachers are within the top 30 per cent of the population in personal literacy and numeracy,” judged against the report’s own criteria of “is it practical, is it based on evidence, and is it likely to succeed,” is half a mark out of three. It is fair to say that the authors’ understanding of evidence falls far short of my understanding of good evidence.


Paradoxically, of course, there is also no evidence that pre-service teachers who might not test as being in the top 30 per cent at the time of graduation would not later test as being in the top 30 per cent. Given the focus and time teachers spend on literacy- and numeracy-based work, I assume that all teachers would perform well on this sort of test. This highlights the most terrible, horrible, no good, very bad part of this recommendation: beyond failing its own criteria, it displays a fundamental lack of understanding of how people learn. Teachers will increase their proficiency in literacy and numeracy as they teach, because they engage in a wider range of authentic activities that rely heavily on the fundamentals of literacy and numeracy. A single test can only test the immediate past and cannot possibly give any insight into the future, and understanding a person’s development in terms of a ranking against peers is even worse.


However you look at Recommendation 13, whether through the report’s own criteria or through what we all know about learning and teaching, this is a terrible, horrible, no good, very bad test.