Ronald Gallimore
Everyone's a teacher to someone (John Wooden)

Focus more on processes to achieve valued outcomes

Brad Ermeling alerted me to this blog post by Daniel Markovitz in the Harvard Business Review.

Markovitz points out several potential dangers of so-called “stretch goals,” including sapping motivation, fostering unethical behavior, and encouraging excessive risk-taking.
Better, he argues, to set reasonable goals and focus on process improvement. Process improvement means examining what is required to achieve a goal, sometimes described as opening the “black box” to see how something works in order to make it work better.

He closed the blog with two sentences written for a business and industrial audience that could just as well be addressed to education, medicine, and behavioral interventions, among other fields.

“The heavy lifting has to be done at the outset — a deep understanding of the current condition is a prerequisite for true improvement. This approach also requires a subtle — but critical — shift in focus from improving outcome metrics to improving the process by which those outcomes are achieved.”

Education reformers and policymakers are now engaged in a great debate about the value of standards and assessment. Once in a while someone alludes to Markovitz’s point that setting standards and developing outcome metrics accomplish little unless the mediating process is examined and improved. In education that mediating process is what transpires in classrooms: teaching that provides effective learning opportunities for students.

Promising method for assessing teaching effectiveness

An Arizona and California research team (Kersting, Givvin, Thompson, Santagata, & Stigler, 2012) reported a novel and promising approach to assessing teaching effectiveness. Teachers were asked to analyze thirteen 3- to 5-minute classroom video clips from fraction lessons and write detailed comments on each. The researchers rated the written comments for how attentive teachers were to the mathematical content and student thinking portrayed in the video, and for the degree to which teachers made suggestions for instructional improvement. They also rated the depth of teachers’ analyses, e.g., whether the written response was purely descriptive or evaluative versus connecting analytic points to form a cause-effect argument. The team defined these four dimensions as reflections of a teacher’s usable knowledge for teaching fractions. The clips covered topics such as part-whole relationships, equivalency, and operations with fractions.

But does a teacher’s “usable knowledge for teaching” transfer into the classroom? The research team addressed that question as well. Teachers who completed the video analysis were videotaped in their own classrooms teaching a fractions lesson, which was scored for instructional quality. Based on an extensive review of research on mathematics teaching, teaching quality was defined as developing concepts, using representations appropriately to explain algorithms, and connecting concepts and topics. And the answer? Yes, a teacher’s usable knowledge for teaching is correlated with the quality of the instruction they deliver in a classroom lesson.

But do these assessments of teaching knowledge and quality lead to more student learning? Yes. Students of the 36 teachers who participated in the study completed pre- and post-instruction fractions quizzes. Teachers who did better on the video-analysis task had higher scores on classroom teaching quality, and their students showed larger gains on the post-test quiz. Usable knowledge predicted better classroom teaching, and together these two assessments of teaching quality predicted greater student learning. Few studies have attempted to connect these three dots of knowledge, practice, and achievement, and even fewer have reported positive correlations.

The approach used by Kersting et al. (2012) is a promising alternative to the questionable approaches currently pursued at the national level. In the last several years, several major policy efforts have focused on assessing teaching quality as part of the standards-and-accountability reform. This latest wave of reform acknowledges that improved teaching is critical to improved student achievement. To help teachers, reformers have been developing teaching assessments based on live observation of classroom instruction. Armed with powerful psychometric development strategies, researchers have been struggling to find a cost-effective way for educators to assess teachers based on a few, or even a single, classroom observation. This approach is questionable on several counts. Getting a reliable, accurate assessment of an individual teacher’s classroom practices probably requires multiple observations over at least a unit of instruction. The cost is prohibitive, and hardly appealing to schools already strapped for resources.

A second limitation of live observation methods is the complexity of behavior to be captured:

“Live observations are limited to whatever an observer can record. Checklists can be useful, but it is possible for a live observer to make only a limited number of reliable judgments at the speed required for classroom research. There simply is too much going on. Video, on the other hand, can be paused, rewound, and watched again. Two observers can watch the same video, independently, and go back to re-play and discuss those parts that they saw differently. Videos can be coded multiple times, in passes that require only limited judgments by an observer on any single pass. This makes it easier to train observers and enables reliable coding of complex events. The most important advantages of video derive from its concrete, vivid, and ‘raw, un-analyzed’ nature (i.e., the categories can be derived from the data rather than vice versa, leaving the data open to a vast array of analyses)” (Stigler, Gallimore, & Hiebert, 2000, p. 90).

It’s premature to argue that video-clip analysis is a workable, scalable alternative to live observations. Although this newly published study replicates earlier work by Kersting and colleagues, so far only mathematics instruction has been investigated, and only with secondary school samples. But given the stakes, I hope that national policymakers will not become so wedded to live observations that the nation spends massive resources on a single approach when such a promising alternative is available. It is possible to imagine that, rather than an army of classroom observers, knowledge useful for teaching could be assessed using modern technologies at a fraction of the cost of live observations (Gallimore & Stigler, 2003).

The Kersting et al. study was published in the June 2012 issue of the American Educational Research Journal.

Kersting, N. B., Givvin, K. B., Thompson, B. J., Santagata, R., & Stigler, J. W. (2012). Measuring usable knowledge: Teachers’ analyses of mathematics classroom videos predict teaching quality and student learning. American Educational Research Journal, 49(3), 568–589.