Bibliography: Selected Resources Related to Measuring Effectiveness of Teachers in Tested Grades and Subjects
WestEd_ Selected Sources of Information_1.17.2102

Alliance for Excellent Education. (2008, March). Measuring and improving the effectiveness of high school teachers. Washington, DC: Author.
Amrein-Beardsley, A. (2008). Methodological concerns about the education value-added assessment system. Educational Researcher, 37(2), 65–75. Retrieved from http://proquest.umi.com/pqdweb?did=1447864391&sid=3&Fmt=3&clientId=47400&RQT=309&VName=PQD
Andrejko, L. (2004). Value-added assessment: A view from a practitioner. Journal of Educational and Behavioral Statistics, 29(1), 7–9.
Assessment and Accountability Comprehensive Center (AACC). (2011, May). Strategies for measuring teacher effectiveness across all grade and content areas: A system of measures intended to be used in strategic combination. Draft report for AACC. San Francisco, CA: WestEd.
Baker, E. L., Barton, P. E., Darling-Hammond, L., Haertel, E., Ladd, H. F., Linn, R. L., . . . Shepard, L. A. (2010, August). Problems with the use of student test scores to evaluate teachers (Briefing Paper #278). Washington, DC: Economic Policy Institute. Retrieved from http://www.epi.org/publication/bp278/
Ballou, D., Sanders, W., & Wright, P. (2004). Controlling for student background in value-added assessment of teachers. Journal of Educational and Behavioral Statistics, 29(1), 37–65.
Betebenner, D. (2004, March). An analysis of school district data using value-added methodology (CSE Report 622). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Bock, R. D., Wolfe, R., & Fisher, T. H. (1996). A review and analysis of the Tennessee Value-Added Assessment System. Nashville, TN: Office of Education Accountability.
Bratton, S. E., Jr. (1998). How we’re using value-added assessment. School Administrator, 55(11), 30–32.
Braun, H. I. (2004, December). Value-added modeling: What does due diligence require? Princeton, NJ: Educational Testing Service.
Braun, H. I. (2005, September). Using student progress to evaluate teachers: A primer on value-added models. Princeton, NJ: Educational Testing Service, Policy Information Center.
Buckley, K., & Marion, S. (2011, June). A survey of approaches used to evaluate educators in non-tested grades and subjects. Retrieved from http://colegacy.org/news/wp-content/uploads/2011/10/Summary-of-Approaches-for-non-tested-grades_7-26-11.pdf
Burris, C. C., & Welner, K. G. (2011, July). Letter to Secretary of Education Arne Duncan concerning evaluation of teachers and principals (NEPC Policy Memo). Boulder, CO: School of Education, University of Colorado at Boulder.
Cantrell, S., & Scantlebury, J. (2011). Effective teaching: What is it and how is it measured? Retrieved from Cantrell_&_Scantlebury,_2011.pdf
Clotfelter, C. T., Ladd, H. F., & Vigdor, J. L. (2007, September). Teacher credentials and student achievement in high school: A cross-subject analysis with student fixed effects. Durham, NC: Duke University, Sanford Institute of Public Policy.
Council of Chief State School Officers (CCSSO). (2011). Introduction to CCSSO’s technical expertise exchange’s information requests. Washington, DC: Author.
Crane, J. (2002, November). The promise of value-added testing. Washington, DC: Progressive Policy Institute.
Danielson, C. (1996). Enhancing professional practice: A framework for teaching. Retrieved from http://www.umatilla.k12.or.us/NCLB/Char_Danielson.htm
Darling-Hammond, L. (2010, October). Evaluating teacher effectiveness: How teacher performance assessments can measure and improve teaching. Washington, DC: Center for American Progress.
Dosset, D., & Muñoz, M. A. (2003). Classroom accountability: A value-added methodology. Paper presented at the annual meeting of the American Educational Research Association, Chicago, IL.
Doran, H. C., & Lockwood, J. R. (2006). Fitting value-added models in R. Journal of Educational and Behavioral Statistics, 31(2), 205–230.
Ferguson, R. F. (2011, February). Tripod classroom-level student perceptions as measures of teaching effectiveness. Presented at Cambridge Education webinar.
Glazerman, S., Goldhaber, D., Loeb, S., Raudenbush, S., Staiger, D. O., & Whitehurst, G. J. (2011, April). Passing muster: Evaluating teacher evaluation systems. Washington, DC: Brown Center on Education Policy, The Brookings Institution.
Goe, L., Bell, C., & Little, O. (2008, June). Approaches to evaluating teacher effectiveness: A research synthesis. Washington, DC: National Comprehensive Center for Teacher Quality.
Goe, L., & Holdheide, L. (2010, December). Measuring teacher contributions to student learning growth for “the other 69 percent.” Washington, DC: National Comprehensive Center for Teacher Quality.
Goe, L., & Holdheide, L. (2011, May). Evaluating all teachers: Measuring student growth in nontested subjects and for teachers of at-risk students. Presented at National Comprehensive Center for Teacher Quality webinar.
Goe, L., Holdheide, L., & Miller, T. (2011, May). A practical guide to designing comprehensive teacher evaluation systems: A tool to assist in the development of teacher evaluation systems. Washington, DC: National Comprehensive Center for Teacher Quality.
Goldschmidt, P., & Choi, K. (2008). The practical benefits of growth models for accountability and the limitations under NCLB. Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Gordon, R. (2008, May). Value-added measures: Implications for policy & practice. Madison, WI: WCER.
Harris, D. H. (2008, June). Would accountability based on teacher value-added be smart policy? An examination of the statistical properties and policy alternatives. Retrieved from VAMandPolicy_DHarris.pdf
Harris, D., & Sass, T. R. (2006). Value-added models and the measurement of teacher quality. Draft report to the Institute of Education Sciences.
Herman, J. L., Heritage, M., & Goldschmidt, P. (2011). Developing and selecting assessments of student growth for use in teacher evaluation systems. Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Herman, J. L., Yamashiro, K., Lefkowitz, S., & Trusela, L. A. (2008, September). Exploring data use and school performance in an urban public school district (CRESST Report 742). Los Angeles, CA: University of California, National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Kane, T. J., & Staiger, D. O. (2008). Estimating teacher impacts on student achievement: An experimental evaluation (Working Paper 14607). Cambridge, MA: National Bureau of Economic Research. Retrieved from http://www.dartmouth.edu/~dstaiger/Papers/w14607.pdf
Kelly, S., & Monczunski, L. (2007). Overcoming the volatility in school-level gain scores: A new approach to identifying value added with cross-sectional data. Educational Researcher, 36(5), 279–287.
Kupermintz, H. (2003). Teacher effects and teacher effectiveness: A validity investigation of the Tennessee Value Added Assessment System. Educational Evaluation and Policy Analysis, 25(3), 287–298.
Lissitz, R. (2005). Value added models in education: Theory and practice. Maple Grove, MN: JAM Press.
Lockwood, J. R., Louis, T. A., & McCaffrey, D. F. (2002). Uncertainty in rank estimation: Implications for value-added modeling accountability systems. Journal of Educational and Behavioral Statistics, 27(3), 255–270.
Marion, S., & Buckley, K. (2011, September). Approaches and considerations for incorporating student performance results from “non-tested” grades and subjects into educator effectiveness determinations. Retrieved from http://www.nciea.org/publications/Considerations%20for%20nontested%20grades_SMKB2011.pdf
Marsh, J. A., Pane, J. F., & Hamilton, L. S. (2006). Making sense of data-driven decision making in education: Evidence from recent RAND research. Santa Monica, CA: RAND Corporation.
Martineau, J. A. (2006). Distorting value added: The use of longitudinal, vertically scaled student achievement data for growth-based, value-added accountability. Journal of Educational and Behavioral Statistics, 31(1), 35–62.
McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A., & Hamilton, L. (2004a). Models for value-added modeling of teacher effects. Journal of Educational and Behavioral Statistics, 29(1), 67–101.
McCaffrey, D. F., Lockwood, J. R., Koretz, D., Louis, T. A., & Hamilton, L. (2004b). Let’s see more empirical studies on value-added modeling of teacher effects: A reply to Raudenbush, Rubin, Stuart and Zanutto, and Reckase. Journal of Educational and Behavioral Statistics, 29(1), 139–143.
Measures of Effective Teaching (MET) Project. (2010a, June). Working with teachers to develop fair and reliable measures of effective teaching. Seattle, WA: Bill & Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/met-framing-paper.pdf
Measures of Effective Teaching (MET) Project. (2010b, October). Danielson’s framework for teaching for classroom observations. Seattle, WA: Bill & Melinda Gates Foundation.
Measures of Effective Teaching (MET) Project. (2010c, December). Learning about teaching: Initial findings from the Measures of Effective Teaching Project. Seattle, WA: Bill & Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/Preliminary_FindingsResearch_Paper.pdf
Measures of Effective Teaching (MET) Project. (2010d, December). Learning about teaching: Initial findings from the Measures of Effective Teaching Project (Policy Brief). Seattle, WA: Bill & Melinda Gates Foundation.
Measures of Effective Teaching (MET) Project. (2011, November). Title. Retrieved from CCSSO_cantrell_110311.pptx
Measures of Effective Teaching (MET) Project. (2012). Gathering feedback for teaching: Combining high quality observations with student surveys and achievement gains. Seattle, WA: Bill & Melinda Gates Foundation. Retrieved from http://www.metproject.org/downloads/MET_Gathering_Feedback_Practioner_Brief.pdf
Meyer, R. H. (2000, June). Value-added indicators: A powerful tool for evaluating science and mathematics programs and policies (NISE Brief Vol. 3, No. 3). Madison, WI: National Institute for Science Education.
Meyer, R. H., & Dokumaci, E. (2009, December). Value-added models and the next generation of assessments. Paper presented at the Exploratory Seminar: Measurement Challenges Within the Race to the Top Agenda.
National Academy of Education. (2011). Improving teacher quality and distribution (Education Policy Briefing Sheet). Washington, DC: Author.
National Association of State Boards of Education. (2005, October). Evaluating value added: Findings and recommendations from the NASBE study group on value-added assessments: Executive summary. Alexandria, VA: Author.
National Council on Measurement in Education (NCME). (2011, June). Newsletter, 19(2).
National Council on Teacher Quality. (2011, October). State of the states: Trends and early lessons on teacher evaluation and effectiveness policies. Washington, DC: Author.
New York State Department of Education. (2011). Appendix A: The framework for teaching (2011 revised edition). New York, NY: Author.
Pennsylvania Clearinghouse for Education Research (PACER). (2011, September). Teacher effectiveness: The national picture and Pennsylvania context (Issue Brief). Philadelphia, PA: Author.
Potemski, A., Baral, M., & Meyer, C. (2011, May). Alternative measures of teacher performance (Policy-to-Practice Brief). Washington, DC: National Comprehensive Center for Teacher Quality.
Rabinowitz, S. (2011). Reliable and valid indicators to support the measurement of educator effectiveness. Presented at AACC NEI Educator Effectiveness Webinar.
Rabinowitz, S. (2011, January). Measuring student growth. Presented at Measuring Educator Effectiveness to Improve Teaching and Learning, Phoenix, AZ.
Raudenbush, S. W. (2004). What are value-added models estimating and what does this imply for statistical practice? Journal of Educational and Behavioral Statistics, 29(1), 121–129.
Raudenbush, S. W., & Bryk, A. S. (2002). Hierarchical linear models: Applications and data analysis methods (2nd ed.). London: Sage.
Reckase, M. D. (2004). The real world is more complicated than we would like. Journal of Educational and Behavioral Statistics, 29(1), 117–120.
Rockoff, J. E. (2004, May). The impact of individual teachers on student achievement: Evidence from panel data. American Economic Review, Papers and Proceedings.
Rockoff, J. E., & Speroni, C. (2011). Subjective and objective evaluations of teacher effectiveness: Evidence from New York City. Labour Economics, 18, 687–696.
Rothstein, J. (2007). Do value-added models add value? Tracking, fixed effects, and causal inference. Cambridge, MA: National Bureau of Economic Research.
Rothstein, J. (2009, January). Student sorting and bias in value added estimation: Selection on observables and unobservables. Princeton, NJ: Princeton University, Industrial Relations Section.
Rowan, B., Correnti, R., & Miller, R. J. (2002, November). What large-scale, survey research tells us about teacher effects on student achievement: Insights from the Prospects study of elementary schools (CPRE Research Report Series RR-051). Philadelphia, PA: CPRE Publications, Graduate School of Education, University of Pennsylvania.
Rubin, D. B., Stuart, E. A., & Zanutto, E. L. (2004). A potential outcomes view of value-added assessment in education. Journal of Educational and Behavioral Statistics, 29(1), 103–116.
Rumberger, R. W., & Palardy, G. J. (2004). Multilevel models for school effectiveness research. In D. Kaplan (Ed.), Handbook of quantitative methodology for the social sciences (pp. 235–258). Thousand Oaks, CA: Sage Publications.
Sanders, W. L. (1998). Value-added assessment. School Administrator, 55(11), 24–27.
Sanders, W. L. (2000). Value-added assessment from student achievement data: Opportunities and hurdles. Journal of Personnel Evaluation in Education, 14(4), 329–339.
Sanders, W. L. (2004, June). A summary of conclusions drawn from longitudinal analyses of student achievement data over the past 22 years (1982–2004). Presented at the Governors Education Symposium, Asheville, NC.
Sanders, W. L. (2006, October). Comparisons among various educational assessment value-added models. Paper presented at The Power of Two—National Value-Added Conference.
Sanders, W. L. (2010). EVAAS statistical models (SAS White Paper). Cary, NC: SAS Institute Inc.
Sanders, W. L., & Horn, S. P. (1994). The Tennessee Value-Added Assessment System (TVAAS): Mixed-model methodology in educational assessment. Journal of Personnel Evaluation in Education, 8, 299–311.
Sanders, W. L., & Horn, S. P. (1998). Research findings from the Tennessee Value-Added Assessment System (TVAAS) database: Implications for educational evaluation and research. Journal of Personnel Evaluation in Education, 12(3), 247–256.
Sanders, W. L., & Rivers, J. C. (1996, November). Cumulative and residual effects of teachers on future student academic achievement. Knoxville, TN: University of Tennessee Value-Added Research and Assessment Center.
Sanders, W. L., Saxton, A. M., & Horn, S. P. (1997). The Tennessee Value-Added Assessment System: A quantitative, outcomes-based approach to educational assessment. In J. Millman (Ed.), Grading teachers, grading schools: Is student achievement a valid evaluation measure? (pp. 137–162). Thousand Oaks, CA: Corwin Press.
Sanders, W. L., & Wright, S. P. (2009). A response to Amrein-Beardsley (2008), “Methodological concerns about the education value-added assessment system” (SAS White Paper). Cary, NC: SAS Institute Inc.
Sanders, W. L., Wright, S. P., Rivers, J. C., & Leandro, J. G. (2009, November). A response to criticisms of SAS® EVAAS® (SAS White Paper). Cary, NC: SAS Institute Inc.
Sartain, L., Stoelinga, S. R., & Brown, E. R. (2011, November). Rethinking teacher evaluation in Chicago: Lessons learned from classroom observations, principal-teacher conferences, and district implementation. Chicago, IL: Consortium on Chicago School Research at the University of Chicago.
Schochet, P. Z., & Chiang, H. S. (2010). Error rates in measuring teacher and school performance based on student test score gains. Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education.
Stewart, B. E. (2006). Value-added modeling: The challenge of measuring educational outcomes. New York, NY: Carnegie Corporation of New York.
Tekwe, C. D., Carter, R. L., Ma, C.-X., Algina, J., Lucas, M. E., Roth, J., . . . Resnick, M. B. (2004). An empirical comparison of statistical models for value-added assessment of school performance. Journal of Educational and Behavioral Statistics, 29(1), 11–36.
Thomas, B. E. (2011, February). Using student growth data in teacher evaluation. Presented at CCSSO ASR Meeting, Atlanta, GA.
Wainer, H. (2004). Introduction to a special issue of the Journal of Educational and Behavioral Statistics on value-added assessment. Journal of Educational and Behavioral Statistics, 29(1), 1–3.
Wake County Public School System. (2009, March). Comparison of SAS® EVAAS® results and WCPSS effectiveness index results (E&R Report No. 09.11). Raleigh, NC: Author.
Webster, W. J. (2005). The Dallas school-level accountability model: The marriage of status and value-added approaches. In R. W. Lissitz (Ed.), Value added models in education: Theory and applications (pp. 233–271). Maple Grove, MN: JAM Press.
Working Group on Teacher Quality. (2007, October). Roundtable discussion on value-added analysis of student achievement: A summary of findings. Washington, DC: Author.
Wright, S. P., Sanders, W. L., & Rivers, J. C. (2006). Measurement of academic growth of individual students toward variable and meaningful standards. Cary, NC: SAS Institute Inc. Retrieved from http://www.sas.com/resources/asset/measurement-of-academic-growth.pdf