Assessing Research-Doctorate
Programs:
A Methodology Study
Committee Task
• Review and revise the methodology used to assess
the quality and effectiveness of research doctoral
programs.
• Explore new approaches and new sources of
information about doctoral programs and new
ways of disseminating these data.
• Recommend whether to conduct a full assessment
using the methodology developed by the
committee.
History of NRC Assessments
• 1982: “Assessment of Research-Doctorate
Programs in the United States”
Lyle V. Jones (Co-Chair), Gardner Lindzey (Co-Chair)
• 1995: “Research-Doctorate Programs in the
United States: Continuity and Change”
Marvin L. Goldberger (Co-Chair), Brendan Maher (Co-Chair)
Perceived Strengths of Prior
NRC Assessments
• Authoritative source
• Comprehensive
• Clearly stated methodology
• Temporal continuity
• Widely quoted and utilized
Perceived Weaknesses of Prior
NRC Assessments
• Spurious precision of program rankings
• Confounding of research reputation and
educational quality
• Soft criteria for assessments of programs
• Ratings based on old data
Weaknesses continued…
• Poor dissemination of results for some
audiences
• Taxonomy categories out of date
• Validation of data inadequate
Design of the Methodology Study
• Formation of a committee. Definition of tasks.
• Panel meetings to define questions, discuss
methodology. Panels:
– Taxonomy and interdisciplinarity
– Quantitative measures
– Student processes and outcomes
– Reputation and data presentation
• Pilot trials of questionnaires, taxonomy.
Recommendations
• Spurious precision issue:
The committee recommends a new statistical
methodology to make clear the probable range of
ranking for each assessed academic unit.
Alternative Approach to Rankings to
Convey Rating Variability
• Draw ratings at random.
• Calculate rating for that draw.
• Repeat process enough times to reach
statistical reliability.
• Present the distribution of ratings from all the
draws (a minimal sketch follows below).
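
A minimal Python sketch of this resampling idea, assuming each program comes with a set of individual rater scores on a 1-5 scale. The program names, scores, number of draws, and the 90% interval are hypothetical illustrations, not details taken from the committee's report.

import random
import statistics

# Hypothetical rater scores (1-5 scale) for three programs; illustrative only.
ratings = {
    "Program A": [4, 5, 3, 4, 4, 5, 2, 4, 3, 5],
    "Program B": [3, 4, 4, 3, 5, 3, 4, 2, 4, 3],
    "Program C": [5, 4, 5, 4, 3, 5, 4, 5, 4, 4],
}

def rank_ranges(ratings, n_draws=1000, seed=0):
    """Resample each program's ratings with replacement on every draw,
    rank programs by mean resampled rating, and collect the ranks."""
    rng = random.Random(seed)
    ranks = {name: [] for name in ratings}
    for _ in range(n_draws):
        # Draw ratings at random; calculate each program's rating for this draw.
        means = {name: statistics.mean(rng.choices(scores, k=len(scores)))
                 for name, scores in ratings.items()}
        # Rank programs for this draw (1 = highest mean rating).
        for rank, name in enumerate(sorted(means, key=means.get, reverse=True), 1):
            ranks[name].append(rank)
    return ranks

# Present the distribution: here, the middle 90% of ranks across all draws.
for name, rs in rank_ranges(ratings).items():
    rs.sort()
    lo, hi = rs[int(0.05 * len(rs))], rs[int(0.95 * len(rs))]
    print(f"{name}: rank range {lo}-{hi}")

Reporting such a range of ranks, rather than a single rank, is what avoids the spurious precision criticized above.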
Recommendations continued…
• Research versus education issue:
– Drop the reputational estimate of educational quality,
since it is not independent of the reputational estimate
of program quality.
– Add quantitative indicators of educational offerings
and outcomes.
Program Measures and a Student
Questionnaire
• Questions to programs
– Size
– Student characteristics and financing
– Attrition and time to degree
– Competing programs
Program Measures and a Student
Questionnaire continued…
• Questions to students in selected fields
– Employment plans
– Professional development
– Program environment
– Infrastructure
– Research productivity
Recommendations continued…
• Soft criteria issue:
Add quantitative measures concerning research
output, citations, student support, time to degree,
etc.
Examples of Indicators
• Publications per faculty member (a simple ratio; see the sketch after this list)
• Citations per faculty member
• Grant support and distribution
• Library resources (separating out electronic media)
• Laboratory space
• Interdisciplinary centers
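
Several of these indicators reduce to simple per-faculty ratios. A toy Python sketch with hypothetical numbers (the field names and values are illustrative assumptions, not data from the study):

# Hypothetical program record; field names and values are illustrative only.
program = {
    "faculty": 25,
    "publications": 180,        # over the assessment window
    "citations": 2400,
    "grant_support_usd": 6_500_000,
}

per_faculty = {
    "publications_per_faculty": program["publications"] / program["faculty"],
    "citations_per_faculty": program["citations"] / program["faculty"],
    "grant_support_per_faculty": program["grant_support_usd"] / program["faculty"],
}
for indicator, value in per_faculty.items():
    print(f"{indicator}: {value:,.1f}")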
Recommendations continued…
• Poor dissemination issue:
– Add analytic essays to archival book output.
– Add updateable current web output.
– Add electronic assessment tools.
– Add links from professional societies.
Recommendations continued…
• Taxonomy issue:
– Update 1995 taxonomy.
– State clear criteria.
– Consult professional societies, administrators and
faculty.
– Allow for two academic categories (rated
programs and emerging fields).
– Name subfields to help universities classify their
programs.
– Allow faculty to be in more than one program.
– Include two sub-threshold humanities fields
(classics and German) to maintain continuity.
Recommendations continued…
• Validation issue:
Conduct pilot studies and institute checks, both by
institutional respondents and by external societies.
Pilot Institutions
• University of Maryland
• Michigan State University
• Florida State University
• University of Southern California
• Yale University
• University of Wisconsin at Milwaukee
• University of California, San Francisco
• Rensselaer Polytechnic Institute
What’s next
• Obtain financing for the full study from
both federal and foundation sponsors.
• If funding is obtained:
– Full study would begin in spring 2004.
– Data collection in 2004/2005, covering the previous
academic year.
– Final report in summer 2006.
Conclusion
The study that the Committee recommends is a BIG
undertaking in terms of survey cost and of the time
required of graduate programs and their faculty. Why
is it worth it?
It will give faculty, students, and those involved with
public policy an in-depth look at the quality and
characteristics of the programs that produce our
future scientists, engineers, and those who help us
understand the human condition.
Committee
Jeremiah Ostriker, Princeton (Astrophysics), Chair
Elton Aberle, U. of Wisc (Ag)
John Brauman, Stanford U. (Chem)
George Bugliarello, PolyNY (Eng)
Walter Cohen, Cornell U. (Hum)
Jonathan Cole, Columbia U. (Soc Sci)
Ronald Graham, UCSD (Math)
Paul Holland, ETS (Stat)
Earl Lewis, U. of Michigan (History)
Joan Lorden, U. of Alabama-Birmingham (Bio)
Louis Maheu, U. de Montréal (Soc)
Lawrence Martin, SUNY-Stony Brook (Anthro.)
Maresi Nerad, U. Wash (Sociology & Education)
Frank Solomon, MIT (Bioscience)
Catharine Stimpson, NYU (Hum)
Subcommittee – Panels
• STUDENT PROCESSES AND OUTCOMES
Joan Lorden (Chair), University of Alabama-Birmingham
• QUANTITATIVE MEASURES
Catharine Stimpson (Chair), New York University
• TAXONOMY AND INTERDISCIPLINARITY
Walter Cohen (Co-Chair), Cornell University
Frank Solomon (Co-Chair), Massachusetts Institute of Technology
• REPUTATIONAL MEASURES AND DATA PRESENTATION
Jonathan Cole (Co-Chair), Columbia University
Paul Holland (Co-Chair), Educational Testing Service
Additional Panel Members
STUDENT PROCESSES AND OUTCOMES
• Adam Fagen, Harvard Univ. (Bioscience, grad. student)
• George Kuh, Indiana Univ. (Education)
• Brenda Russell, Univ. of Illinois-Chicago (Bioscience)
• Susanna Ryan, Indiana U. (English, Woodrow Wilson Fellow)
QUANTITATIVE MEASURES
• Marsha Moss, Univ. of Texas (Institutional Research)
• Charles E. Phelps, Univ. of Rochester (Provost & Econ.)
• Peter D. Syverson, Council of Graduate Schools
Additional Panel Members
TAXONOMY AND INTERDISCIPLINARITY
• Richard Attiyeh, UCSD (Econ.)
• Robert F. Jones, AAMC (Bioscience)
• Leonard K. Peters, VPI (Computer Science)
REPUTATIONAL MEASURES AND DATA PRESENTATION
• David Schmidly, Texas Tech (President & Bioscience)
• Donald Rubin, Harvard (Statistics)
Project website:
http://www7.nationalacademies.org/resdoc/index.html