XChange - Publications and Resources for Public School Professionals

XPress

  1. Research for High-Quality Urban Teaching: Defining It, Developing It, Assessing It
  2. Problems with the use of student test scores to evaluate teachers
  3. Using Classroom Artifacts to Measure Instructional Practice in Middle School Science: A Two-State Field Test
  4. Overview of the Instructional Quality Assessment

Research for High-Quality Urban Teaching: Defining It, Developing It, Assessing It

Author(s): Jeannie Oakes, Megan Loef Franke, Karen Hunter Quartz and John Rogers

Abstract:

Drawing on research focused on urban schools, the authors articulate the need for expanding the definition of urban teacher quality, understanding teacher learning within the context of urban schools, and developing processes and structures that support urban teachers. They conclude with a call to develop ways to gauge the success and impact of efforts to build urban teacher competencies that go beyond teacher retention rates and student achievement data.

APA Citation:
Oakes, J., Franke, M. L., Quartz, K. H., & Rogers, J. (2002). Research for high-quality urban teaching: Defining it, developing it, assessing it. Journal of Teacher Education, 53(3), 228-234.

This copy is attached as a pre-print and is used with permission of the publisher, Journal of Teacher Education, and the authors.

Attachment
Oakes et al 2002 Research for High-Quality Urban Teaching.pdf — PDF document, 321 KB


Problems with the use of student test scores to evaluate teachers

Author(s): Eva L. Baker, Paul E. Barton, Linda Darling-Hammond, Edward Haertel, Helen F. Ladd, Robert L. Linn, Diane Ravitch, Richard Rothstein, Richard J. Shavelson, and Lorrie A. Shepard

Abstract:

Reviewing the technical evidence on value-added modeling (VAM), the authors present reasons to question the validity of using student test scores to evaluate teachers. The analysis covers the limitations of this model, its possible unintended consequences, and recommendations.

APA Citation:
Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., et al. (2010). Problems with the use of student test scores to evaluate teachers (EPI Briefing Paper No. 278). Washington, D.C.: Economic Policy Institute.

Link: http://epi.3cdn.net/b9667271ee6c154195_t9m6iij8k.pdf


Using Classroom Artifacts to Measure Instructional Practice in Middle School Science: A Two-State Field Test

Author(s): Hilda Borko, Brian M. Stecher, Felipe Martinez, Karin L. Kuffner, Dionne Barnes, Suzanne C. Arnold, Joi Spencer, Laura Creighton, and Mary Lou Gilbert

Abstract:

This report presents findings from two investigations of the use of classroom artifacts to measure the presence of reform-oriented teaching practices in middle school science classes. It complements previous research on the use of artifacts to describe reform-oriented teaching practices in mathematics. In both studies, ratings based on collections of artifacts assembled by teachers following directions in the “Scoop Notebook” are compared to judgments based on other sources of information, including direct classroom observations and transcripts of discourse recorded during classroom observations. For this purpose, we developed descriptions of 11 dimensions of reform-oriented science instruction, and procedures for rating each on a dimension-specific five-point scale.

Two investigations were conducted. In 2004, data were collected from 39 middle school science teachers in two states. Each teacher completed a Scoop Notebook, each was observed by a single observer on two or three occasions, and eight of the teachers were also audio-taped, allowing us to create transcripts of classroom discourse. In 2005, 21 middle school mathematics teachers participated in a similar study, in which each teacher was observed by a pair of observers, but no audio-taping occurred.

All data sources were rated independently on each of 11 dimensions. In addition, independent ratings were made using combinations of data sources. The person who observed in a classroom also reviewed the Scoop Notebook and assigned a “gold standard” rating reflecting all the information available from the Notebook and the classroom observations. Combined ratings were also assigned based on the transcripts and notebooks, and based on the observations and transcripts.

The results of these field studies suggest that the Scoop Notebook is a reasonable tool for describing instructional practice in broad terms. For example, it could be useful for providing an indication of changes in instruction over time that occur as a result of program reform efforts. There was a moderate degree of correspondence between judgments of classroom practice based on the Scoop Notebook and judgments based on direct classroom observation. Correspondence was particularly high for dimensions that did not exhibit great variation from one day to the next. Furthermore, judgments based on the Scoop Notebook corresponded moderately well to our “gold standard” ratings, which included all the information we had about practice.

APA Citation:
Borko, H., Stecher, B. M., Martinez, F., Kuffner, K. L., Barnes, D., Arnold, S. C., et al. (2006). Using classroom artifacts to measure instructional practice in middle school science: A two-state field test (CSE Technical Report No. 690). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

Link: http://www.cse.ucla.edu/products/download_report.asp?r=690


Overview of the Instructional Quality Assessment

Author(s): Brian Junker, Yanna Weisberg, Lindsay Clare Matsumura, Amy Crosson, Mikyung Kim Wolf, Allison Levison, and Lauren Resnick

Abstract:

Educators, policy-makers, and researchers need to be able to assess the efficacy of specific interventions in schools and school districts. While student achievement is unquestionably the bottom line, it is essential to open up the educational process so that each major factor influencing student achievement can be examined; indeed, as a proverb often quoted in industrial quality control goes, “That which cannot be measured, cannot be improved.” Instructional practice is certainly a central factor: if student achievement is not improving, is it because instructional practice is not changing, or because changes in instructional practice are not affecting achievement? A tool is needed to provide snapshots of instructional practice itself, before and after implementing new professional development or other interventions, and at other regular intervals, to help monitor and focus efforts to improve instructional practice. In this paper we review our research program building and piloting the Instructional Quality Assessment (IQA), a formal toolkit for rating instructional quality based primarily on classroom observation and student assignments. In the first part of the paper we review the need for, and some other efforts to provide, direct assessments of instructional practice. In the second part we briefly summarize the development of the IQA in reading comprehension and in mathematics at the elementary school level. In the third part we report on a large pilot study of the IQA, conducted in Spring 2003 in two moderately large urban school districts. We conclude with some ideas about future work and future directions for the IQA.

APA Citation:
Junker, B., Weisberg, Y., Matsumura, L. C., Crosson, A., Wolf, M. K., Levison, A., et al. (2006). Overview of the instructional quality assessment (CSE Technical Report No. 671). Los Angeles: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).

Link: http://www.cse.ucla.edu/products/reports/r671.pdf



UCLA Center X
1320 Moore Hall, Box 951521
Los Angeles, CA 90095-1521
(310) 825-4910