Is there evidence that Positive Education improves academic performance? No


Time to read: 5 minutes

Lately there has been quite a bit of talk in education circles about the social aspects of learning, particularly well-being, grit, growth and other mindsets, positive psychology, and other social and emotional programs.

My personal opinion is that these are a tacit recognition by proponents of direct instruction that their belief that learning and development is a linear cognitive process of memorising skills is insufficient. Maybe they are starting to understand that development is highly individual in nature; that it is neither linear nor maturational; and that it is a complex transition to qualitatively new understandings of concepts, new motivations, new relationships with others and the world, new directions, and new results?

Unfortunately, rather than re-examining the more appropriate learning theories of Vygotsky, Piaget and other dialectical approaches to development, these instructionists blindly continue down their misguided path, co-opting bits and pieces into their flawed framework. Rather than designing learning and teaching so that it IS social, they attempt to teach the social as if it were a discrete unit separate from other learning.

One such model is the Visible Wellbeing Instructional Model. Rather than admitting that direct instruction (Visible Learning) and performativity (Visible Thinking) don’t work, its proponents have misunderstood the fundamental idea from Vygotsky and Piaget that all learning is social, and have instead tried to stuff it into their broken Visible Learning and Visible Thinking model in the hope that it will fix it.

How do they justify this? Well, according to them, Positive Education has been shown to increase student academic results by 11 percentile points.

Unfortunately for the Visible Wellbeing Instructional Model, this is simply untrue.


In 2011, Durlak, Weissberg, Dymnicki, Taylor, and Schellinger released their meta-analysis of social and emotional interventions. Notice that their paper is concerned with school-based interventions, not with social and emotional practices embedded in standard learning and teaching practice. Their finding that is widely reported as evidence that these programs improve academic results appears in the abstract, where they write:

“Compared to controls, SEL (Social Emotional Learning) participants demonstrated significantly improved social and emotional skills, attitudes, behavior, and academic performance that reflected an 11-percentile-point gain in achievement.”

Seems clear-cut, right? Wrong!

If you, like me (and seemingly the subsequent researchers who quote this research), took “compared to controls” to mean compared to those who didn’t participate in these programs, you’d be wrong, because that’s not at all what they are saying… Let’s read the paper further.

In Table 5, they specify the results of their meta-analysis:
Skills: 0.57
Attitudes: 0.23
Positive Social Behaviours: 0.24
Conduct: 0.22
Emotional Distress: 0.24
Academic Performance: 0.24

Though I’m not a fan of effect sizes, as I believe they are completely flawed, consider what John Hattie says about them in his book Visible Learning:

“Ninety percent of all effect sizes in education are positive (d > .0) and this means that almost everything works. The effect size of d=0.4 looks at the effects of innovations in achievement in such a way where we can notice real-world and more powerful differences. It is not a magical number but a guideline to begin discussion about what we can aim for if we want to see student change.”
(Hattie, pp. 15-17, quoted at http://visiblelearningplus.com/content/faq)

You might notice that all except one of Durlak et al.’s effect sizes fall below Visible Learning’s guideline for even beginning a discussion about them. The only exception is Skills (0.57), so according to these figures the only worth of social and emotional interventions is in developing social and emotional skills. Everything else, attitudes (0.23), positive social behaviours (0.24), conduct (0.22), emotional distress (0.24), and academic performance (0.24), falls a fair way below the Visible Learning cut-off.
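To make that comparison concrete, here is a minimal sketch (in Python, purely for illustration) that checks the effect sizes quoted above against Hattie’s d = 0.4 guideline:

```python
# The effect sizes quoted above (figures as given in this post), checked
# against Hattie's d = 0.4 guideline from Visible Learning.
effect_sizes = {
    "Skills": 0.57,
    "Attitudes": 0.23,
    "Positive Social Behaviours": 0.24,
    "Conduct": 0.22,
    "Emotional Distress": 0.24,
    "Academic Performance": 0.24,
}

HATTIE_GUIDELINE = 0.4

for outcome, d in effect_sizes.items():
    verdict = "above" if d >= HATTIE_GUIDELINE else "below"
    print(f"{outcome}: d = {d:.2f} ({verdict} the 0.4 guideline)")
```

Run as-is, only Skills comes out above the guideline; everything else falls below it.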

You’re probably wondering where the 11-percentile-point gain in academic performance comes from, in light of its small effect size. To solve this one, we need to keep reading the paper.

“Aside from SEL skills (mean ES = 0.57), the other mean ESs in Table 2 might seem ‘small.’ However, methodologists now stress that instead of reflexively applying Cohen’s (1988) conventions concerning the magnitude of obtained effects, findings should be interpreted in the context of prior research and in terms of their practical value (Durlak, 2009; Hill, Bloom, Black, & Lipsey, 2007).”
Durlak, Joseph A., et al. “The impact of enhancing students’ social and emotional learning: A meta-analysis of school-based universal interventions.” Child Development 82.1 (2011): 416.

The mean effect sizes in Table 2 (which contains the same figures as above, broken down into further groups, such as delivery by classroom teachers versus non-school personnel) do seem “small,” because they are small! Very small; so small that Hattie would no doubt suggest you ignore social and emotional programs altogether, unless you’re teaching social and emotional “skills” (0.57).
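For reference, percentile-point gains are commonly derived from effect sizes via the normal distribution (Cohen’s U3). Here is a minimal sketch of that standard conversion, assuming normally distributed outcomes; this is how such figures are typically computed, not a claim about Durlak et al.’s exact procedure:

```python
# One standard conversion from Cohen's d to a percentile-point gain
# (Cohen's U3): assuming normally distributed outcomes, the average
# participant's percentile rank in the control distribution is the
# normal CDF of d; subtract the 50th-percentile baseline.
from scipy.stats import norm

for d in (0.57, 0.27, 0.24):
    rank = norm.cdf(d) * 100  # average participant's percentile rank
    print(f"d = {d:.2f} -> {rank - 50:.1f} percentile-point gain")
```

Run as-is, this prints gains of roughly 21.6, 10.6 and 9.5 percentile points for d = 0.57, 0.27 and 0.24 respectively.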

But what do the authors mean when they say “instead of reflexively applying Cohen’s (1988) conventions”? I looked up the definition of reflexively… the Merriam-Webster dictionary gives the following meaning:

“showing that the action in a sentence or clause happens to the person or thing that does the action, or happening or done without thinking as a reaction to something”

Now, I’m not a methodologist like Durlak, whose other paper is provided as one of the references for why the effect size of a social and emotional intervention shouldn’t be interpreted by convention but by its practical value. Yet it does seem a bit of a stretch (to a non-methodologist) that the methodologist gets to decide what counts as an appropriate method of determining that practical value.

[Table 5 from Durlak et al. (2011)]

What the authors did, as far as I can tell as a non-methodologist, in order to “interpret the practical value of social and emotional interventions”, was to compare the results to other social and emotional interventions.

I’ll say that again: the 11-percentile-point improvement in academic results is not a gain compared to control groups who had no intervention at all; it is a gain over students in other social and emotional type programs, and it tells us nothing about how these students fared compared to those who did not participate in social and emotional programs at all.

We can see clearly from the last line of the table that the 11-percentile-point figure was produced by comparing the effect size of 0.27 to four other studies with effect sizes of 0.29, 0.11, 0.30 and 0.24.

I’ve taken a quick look at these studies. They describe: 1) Changing Self Esteem in Children, 2) Effectiveness of mentoring programs for youth, 3) Primary prevention mental health programs for children and adolescents, and 4) Empirical benchmarks for interpreting effect sizes in research.

I must admit (as a non-methodologist) that I don’t understand why or how the fourth study, “Empirical benchmarks for interpreting effect sizes in research”, fits the criteria of “prior research”, given that, as far as I can tell, it has nothing to do with social and emotional programs. What that particular paper does describe is that typical effect sizes are 0.24 for elementary school and 0.27 for middle school. On that research alone, the effect sizes here are either level with or slightly above what is expected: hardly a ringing endorsement, nor a source of much faith in the 11 percentile points of academic improvement.

A rudimentary understanding of mathematics also suggests that the extremely low effect size (0.11) of the study into the “Effectiveness of mentoring programs for youth” greatly increased the difference between the study in question and the “prior research”. I’d suggest that if that study had been deemed not to fit the “practical value” criteria, the 11-percentile-point figure would have been much lower.
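Here is a quick back-of-the-envelope sketch of that point, assuming a simple unweighted mean of the benchmark effect sizes (which may not be how the authors actually combined them):

```python
# Back-of-the-envelope check: the 0.27 academic-performance effect size
# against the four "prior research" benchmarks, with and without the
# 0.11 mentoring study.
benchmarks = [0.29, 0.11, 0.30, 0.24]
durlak_es = 0.27

mean_all = sum(benchmarks) / len(benchmarks)       # 0.235
trimmed = [b for b in benchmarks if b != 0.11]
mean_trimmed = sum(trimmed) / len(trimmed)         # about 0.277

print(f"Benchmark mean (all four):     {mean_all:.3f}")
print(f"Benchmark mean (without 0.11): {mean_trimmed:.3f}")
print(f"Durlak et al. academic ES:     {durlak_es:.2f}")
```

With the mentoring study included, 0.27 sits above the benchmark mean of 0.235; drop it, and 0.27 actually falls slightly below the remaining mean of about 0.277.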

So, it seems clear to me that the 11 percentile points of academic improvement were determined by comparing the intervention to previous, similar studies that didn’t work as well. Any other measure would not have produced the same result.


Of course, to Vygotsky or Piaget these results would not be surprising, for they knew that you can’t reduce learning and development to individual traits; we can only understand it as a complex system. Maybe the Visible Wellbeing Model is trying to move towards Vygotsky and Piaget? If so, they’re doing it wrong. By attempting to identify and promote three traits (teacher effectiveness, teacher practice, and wellbeing), they’re not seeing them as a system but rather as three individual traits placed side by side. Yet at the same time they’re only measuring one trait: test scores. And when you only measure one trait, guess what, the only trait that matters is that trait!

For Positive Education and wellbeing to ever produce a substantial effect size, what is measured would need to change, just as it did to produce the contrived 11-percentile-point figure. But can what Visible Learning’s effect sizes deem important change? Could its proponents decide what matters while still believing in “evidence”?

Such is the conundrum in which the Visible Wellbeing Model finds itself: theoretically baseless, considering only test scores worthwhile, and finding that what it measures as worthwhile isn’t what its proponents know to be worthwhile… No wonder most of us still listen to Vygotsky and Piaget!


Personally, I believe that learning and development are social, so this post is not meant to belittle the wellbeing movement, but rather to suggest that reducing the social and emotional to skills to be learned through programs and interventions is, in my opinion, a missed opportunity. Further, to think we can bolt on wellbeing in order to improve test scores is to misunderstand how our students actually learn and develop.


Incidentally, inquiry-based learning comes in at 0.35 in the incredibly flawed Visible Learning meta-analysis; maybe it’s time it replaced Positive Education, with its effect size of 0.27, as one of the three components of their model?
