
Opinion Blog

Classroom Q&A

With Larry Ferlazzo

In this EdWeek blog, an experiment in knowledge-gathering, Ferlazzo will address readers' questions on classroom management, ELL instruction, lesson planning, and other issues facing teachers. Send your questions to lferlazzo@epe.org. Read more from this blog.

Teaching Opinion

Correlation? Causation? Effect Sizes? What Should a Teacher Trust?

By Larry Ferlazzo | June 10, 2025 | 5 min read

Today's post is the third, and final, one in a series providing guidance to teachers on how to interpret education research.

Who Cares About Effect Sizes?

Cara Jackson currently serves as the president of the . She previously taught in the New York City public schools and conducted program evaluations for the Montgomery County public schools in Maryland:

Education leaders need to know how much to expect a program or practice to improve student outcomes. Such information can inform decisions about what programs to invest in and which ones to stop, saving teachers鈥 time and energy for programs with the most potential.

In this post, I discuss what "effect sizes" are, why effect sizes from well-designed studies are not the same as correlational evidence, and why that matters.

What is an "effect size," and how is it measured?

An effect size is a standardized measure of how large a difference or relationship is between groups. Researchers measure the difference in standard deviation units. While researchers may translate standard deviation units into "days of school" or "months of learning" for a practitioner audience, research suggests these translations can lead to unreliable and improbable conclusions.

Translations can make an effect that is small in standard deviation units appear large: a tiny effect might be presented in days, weeks, or months of learning to make the intervention look good.

One study reported that, compared with traditional public school students, charter school students' performance is equivalent to 16 additional days of learning in reading and six days in math. But as others have pointed out, these are quite small differences when expressed in standard deviation units.

For that reason, I focus here on interpreting the standard deviation metric. If you see an effect size presented in "days of school" or "months of learning," be aware that this could be misleading.
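To see how such a translation can flatter a tiny effect, here is a hypothetical back-of-the-envelope conversion. The 0.50 SD-per-year growth assumption and 180-day school year are illustrative numbers I chose, not figures from the article.

```python
def to_days_of_learning(effect_sd, annual_growth_sd=0.50, school_days=180):
    """Convert an effect in SD units to 'days of learning' by dividing by
    an assumed typical year of growth and scaling to a school year."""
    return effect_sd / annual_growth_sd * school_days

# A very small 0.02 SD effect becomes a headline-friendly number of days:
print(to_days_of_learning(0.02))  # 7.2
```

The conversion depends entirely on the assumed annual growth: halve that assumption and the same 0.02 SD effect doubles to 14.4 "days of learning," which is why the standard deviation metric is the safer one to read.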

Why does "correlation, not causation" matter for effect sizes?

In studies designed to identify the causal effect of a program, effect sizes as low as 0.10 standard deviations are considered large. This may come as a surprise to fans of Hattie's Visible Learning, which argues that the "zone of desired effects" is 0.40 and above. But that benchmark is based on making no distinction between correlation and causation.

As noted in the previous post, the correlation between some program or practice and student outcomes can reflect a lot of different factors other than the impact of the program, such as student motivation. If we want to know whether the program causes a student outcome, we need a comparison group that:

  1. hasn鈥檛 yet received the program, and
  2. is similar to the group of students receiving the program.

The similarity of groups matters because any difference between them offers an alternative explanation for the relationship between the program and student outcomes. For example, we would want both groups to have similar levels of academic motivation, because differences in motivation could explain differences in outcomes. Correlational studies can control for some characteristics of students that we can observe and measure, but they do not rule out all alternative explanations.
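A toy simulation (my own illustration, not from any study cited here) makes the point concrete: give a program zero true effect, let motivation drive both enrollment and scores, and a naive comparison still shows a sizable gap, while random assignment does not.

```python
import random

random.seed(0)

def mean(xs):
    return sum(xs) / len(xs)

# Scenario 1: a program with ZERO true effect, but motivated students opt in.
enrolled, not_enrolled = [], []
for _ in range(10000):
    motivation = random.gauss(0, 1)
    score = 50 + 5 * motivation + random.gauss(0, 5)  # no program term at all
    if motivation + random.gauss(0, 1) > 0:  # self-selection into the program
        enrolled.append(score)
    else:
        not_enrolled.append(score)
print(f"Gap with self-selection: {mean(enrolled) - mean(not_enrolled):.1f} points")

# Scenario 2: random assignment of the same kinds of students.
treat, control = [], []
for _ in range(10000):
    motivation = random.gauss(0, 1)
    score = 50 + 5 * motivation + random.gauss(0, 5)
    (treat if random.random() < 0.5 else control).append(score)
print(f"Gap with random assignment: {mean(treat) - mean(control):.1f} points")
```

The first gap is several points even though the "program" does nothing; the second hovers near zero. That is the whole case for comparison groups that are similar before the program starts.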

One guide for reading a research paper recommends looking for certain keywords in the methods section to distinguish between correlation and causation. In studies designed to make causal inferences, the methods section will likely mention one or more of the following words: experiment, randomized controlled trial, random assignment, or quasi-experimental.

Look for a table that describes the students who receive the program and students not receiving the program. Particularly if the study is quasi-experimental, it鈥檚 important to know whether students are similar prior to participating in the program. For example, a study of a program implemented with 4th grade students might use 3rd grade standardized-test scores to assess whether the groups are similar. This helps rule out alternative explanations for the findings.
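A reviewer can check this kind of baseline equivalence with a standardized difference in prior-year scores. The sketch below uses invented 3rd grade scores; the rough benchmark that a difference near zero (commonly, within about 0.05 SD) indicates well-matched groups follows What Works Clearinghouse practice.

```python
import math

def baseline_difference(group_a, group_b):
    """Standardized difference in baseline scores between two groups."""
    def mean(xs):
        return sum(xs) / len(xs)

    def var(xs):
        m = mean(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    pooled_sd = math.sqrt((var(group_a) + var(group_b)) / 2)
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Invented 3rd grade scores for students who later did / did not get the program
program = [410, 425, 398, 440, 415]
comparison = [408, 430, 400, 435, 418]
print(f"Baseline difference: {baseline_difference(program, comparison):.2f} SD")
```

Here the difference is about -0.04 SD, so the groups look similar before the program; a much larger baseline gap would mean any later "effect" has a ready-made alternative explanation.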

In "The Princess Bride," Inigo Montoya says, "You keep using that word. I do not think it means what you think it means." While effect sizes are influenced by many factors, distinguishing between correlation and causation is fundamental to a shared understanding of the meaning of the word "effect." And that meaning has implications for effect-size benchmarks.


Why do effect-size benchmarks matter?

It's not that I simply dislike effect sizes larger than 1.0. As noted by past contributors to EdWeek, "Holding educational research to greater standards of evidence will very likely mean the effect sizes that are reported will be smaller. But they will reflect reality."

Confusing correlation and causation may lead decisionmakers to have unrealistic expectations for how much improvement a program can produce. These unrealistic expectations could leave educators disappointed and pessimistic about the potential for improvement. Education leaders may avoid implementing programs, or stop programs with solid evidence of effectiveness, because they perceive the potential improvement as too small.

Key takeaways

Questionable translations of research findings and presenting correlations as "effects" can mislead people about whether a program causes an impact on student outcomes. Here are three things to look for in different sections of a study.

  • Methods: Does the study include a comparison group of students who did not receive the program or practice?
  • Findings: Does the study describe the groups in the study and whether they looked similar prior to the program or practice being implemented?
  • Results or technical appendix: Does the study include the effect size in standard deviation units?

Thanks to Cara for contributing her thoughts!

Consider contributing a question to be answered in a future post. You can send one to me at lferlazzo@epe.org. When you send it in, let me know if I can use your real name if it's selected or if you'd prefer to remain anonymous and have a pseudonym in mind.

You can also contact me on Twitter.

Just a reminder: You can subscribe and receive updates from this blog. And if you missed any of the highlights from the first 13 years of this blog, you can see a categorized list here.


The opinions expressed in Classroom Q&A With Larry Ferlazzo are strictly those of the author(s) and do not reflect the opinions or endorsement of Editorial Projects in Education, or any of its publications.
