Steve Lee, Research Analyst
Teacher effectiveness is the most important in-school factor affecting student learning. Consequently, measuring it properly is an important step toward creating effective classrooms. In 2013 the Maryland State Department of Education (MSDE) established a teacher and principal evaluation (TPE) system for Maryland public schools to measure teacher and principal effectiveness using a combination of indicators of professional practice and student growth. In the spring of 2016, CNA Education evaluated the accuracy of the TPE system by examining correlations between observable teacher and principal characteristics and TPE evaluation scores. If the TPE system properly measures teacher and principal effectiveness, certain characteristics should correlate with TPE scores in expected ways (for example, more experienced teachers should have higher scores). Our analysis largely supported the validity of the TPE system: teacher and principal characteristics correlated with TPE evaluation outcomes in theoretically expected ways. We found:
Characteristics associated with lower effectiveness ratings:
- Teaching mathematics.
- Teaching at a high-minority/high-poverty school rather than a high-poverty-only school.
Characteristics associated with higher effectiveness ratings:
- Tenure and experience (for both teachers and principals).
- Working under an effective principal.
We then investigated whether the teacher effectiveness scores correlated with student performance. If the TPE system is measuring teacher effectiveness, we would expect to find an association between effective teaching and student performance. To protect teacher privacy, TPE scores for math and English teachers were aggregated at the school level (that is, a single teacher performance score was calculated for each school). In middle, high, and combined schools, we estimated average teacher effectiveness among math or English teachers only. In elementary schools, which often lack a dedicated math or English teacher, we calculated the average across all teachers. In addition, we controlled for a variety of school and district characteristics to mitigate the influence of outside factors that may affect student performance.
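The school-level aggregation can be sketched as follows. This is a minimal illustration with invented school IDs, subjects, and scores; the actual TPE data and processing are confidential and more involved.

```python
from statistics import mean

# Hypothetical records: (school_id, subject, tpe_score) for individual teachers
teachers = [
    ("MS-01", "math",    72.0),
    ("MS-01", "math",    65.0),
    ("MS-01", "english", 80.0),
    ("ES-02", "all",     70.0),
    ("ES-02", "all",     74.0),
]

def school_average(records, school_id, subjects):
    """Average TPE score for one school, restricted to the given subjects."""
    scores = [score for sid, subj, score in records
              if sid == school_id and subj in subjects]
    return mean(scores)

# Middle, high, and combined schools: average over math (or English) teachers only
ms_math = school_average(teachers, "MS-01", {"math"})  # (72 + 65) / 2 = 68.5

# Elementary schools: average over all teachers
es_all = school_average(teachers, "ES-02", {"all"})    # (70 + 74) / 2 = 72.0
```

The subject filter is what distinguishes the two cases in the text: secondary schools average only the subject-specific teachers, while elementary schools pool everyone.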
The results of our analysis show that teacher effectiveness at the school level correlates positively with student performance in most cases. Student performance was measured by 2016 Partnership for Assessment of Readiness for College and Careers (PARCC) scale scores and a student growth percentile (SGP) score, which is based on PARCC scores. The PARCC scale score results show that a 10-point increase in teacher effectiveness correlates with a 3.2-point increase in English scores and a 5.6-point increase in mathematics scores. The SGP results show that a 10-point increase in teacher effectiveness is associated with a 1.8-point increase in mathematics scores among all students.
Effective teaching seems to be particularly important for high-poverty, high-minority schools, where we see the largest effect on SGP scores. A 10-point increase in teacher effectiveness is associated with a 3.8-point increase in the mathematics SGP score among high-poverty, high-minority schools. The English SGP effect sizes are smaller: a 10-point increase in teacher effectiveness is associated with less than a 1-point change in SGP score among all students and various high-poverty and high-minority subgroups. The sequenced, discrete nature of mathematics instruction may make student improvement more directly measurable.
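Because these are linear associations, the reported effect sizes scale directly with the size of the TPE change. A minimal sketch of that arithmetic, where the per-point coefficients simply restate the findings above and the function and key names are our own:

```python
# Per-point coefficients implied by the reported effects
# (each finding was stated per 10-point TPE increase).
COEF = {
    "english_parcc": 0.32,  # 3.2 PARCC points per 10 TPE points
    "math_parcc": 0.56,     # 5.6 PARCC points per 10 TPE points
    "math_sgp": 0.18,       # 1.8 SGP points per 10 TPE points (all students)
    "math_sgp_hphm": 0.38,  # 3.8 SGP points per 10 TPE points
}                           # (high-poverty, high-minority schools)

def predicted_change(outcome, tpe_change):
    """Predicted change in an outcome for a given change in school-level TPE."""
    return COEF[outcome] * tpe_change

math_gain = predicted_change("math_parcc", 10)      # about 5.6 PARCC points
hphm_gain = predicted_change("math_sgp_hphm", 5)    # about 1.9 SGP points
```

Note these are associations from a school-level model with controls, not per-teacher causal effects.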
These results should be treated with caution, however. Without linked teacher and student data, our analysis can detect only associations rather than causal relationships between teacher and student performance. For example, a school with five mathematics teachers may have one effective teacher and four very ineffective teachers. In this statistical analysis, the school’s low average teacher effectiveness score would obscure the effectiveness of that one teacher. In our next phase of work with MSDE, we will further investigate the relationship between teacher and student performance and combine all previous findings into a single report.
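The masking problem in the five-teacher example above can be seen with a few invented numbers (the scores below are hypothetical):

```python
from statistics import mean

# Hypothetical TPE scores: one effective teacher among four ineffective ones
tpe_scores = [90.0, 40.0, 42.0, 38.0, 45.0]

school_score = mean(tpe_scores)
print(school_score)  # 51.0 -- the strong teacher's 90.0 is invisible
```

A school-level analysis sees only the 51.0 average; linked teacher-student data would be needed to recover the individual signal.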
Dr. Steve Lee is a quantitative analyst who studies college readiness, teacher evaluation, and career and technical education programs. He is currently a lead researcher assessing the validity of the Maryland State Department of Education teacher and principal evaluation system. Since joining the team in 2014, Steve has played an essential role in analyzing data for CNA’s five-year grant to evaluate Florida’s College and Career Readiness Initiative and for an Investing in Innovation (i3) Fund grant to East Tennessee’s Niswonger Foundation.
Steve’s recent publications include Instructional Quality and Enrollment in Northeast Tennessee i3 Consortium Courses and Expansion of Online, Distance Learning, Advanced Placement, and Dual Enrollment Courses in the Northeast Tennessee i3 Consortium. He has presented his research at conferences of the Association for Education Finance and Policy, the American Educational Research Association, and the Association for Public Policy Analysis and Management.
Steve holds a Ph.D. in sociology from Vanderbilt University, an M.S. in applied sociology from Clemson University, and a B.A. in sociology from James Madison University.