Evaluation of Efforts to Broaden STEM Participation: Results from a Two-Day Workshop

Planning Committee: Bernice Anderson, Elmima Johnson, Beatriz Chu Clewell, Norman Fortenberry
Presenters: Patricia B. Campbell & Veronica Thomas

Evaluation of Efforts to Broaden STEM Participation: Workshop Goals
To develop and validate a strategy by which to demonstrate the value of NSF's investment in broadening participation (BP).
To negotiate answers to two questions:
1. What metrics should be used for project monitoring?
2. What designs and indicators should be used for program evaluation?

Evaluation of Efforts to Broaden STEM Participation: The Workshop Report
The Policy Context for NSF Programs for Broadening Participation (Fortenberry)
Measuring Success and Effectiveness in NSF's Broadening Participation Programs (Clewell)
Outcomes and Indicators Related to Broadening Participation (Campbell, Thomas, & Stoll)
Evaluating Efforts to Broaden Participation (Campbell, Stoll, & Thomas)
Implications of the NSF Broader Impacts Statement (Nelson & Bramwell)

The Policy Context: Historically
NSF's goal of broadening participation has been shaped by a variety of policy actions of the legislative and executive branches of government. Within the agency itself, policies articulated by the National Science Board (NSB) and the Committee on Equal Opportunities in Science and Engineering (CEOSE) have informed NSF's approach and strategy for addressing this goal, as referenced in major policy documents issued by NSF.

The Policy Context: Currently
Broadening Participation at the National Science Foundation: A Framework for Action (May 2008) outlines the NSF-wide broadening participation plan. It provides guidelines for broadening participation both externally and internally, through:
• Expanding the reviewer pool
• Training NSF staff and reviewers
• Enforcing accountability for NSF staff and principal investigators
• Communicating promising practices
• Maintaining and monitoring a portfolio of relevant programs

The Policy Context: A Core Value and a Strategic Goal
Broadly Inclusive: seeking and accommodating contributions from all sources while reaching out especially to groups that have been underrepresented; serving scientists, engineers, educators, students, and the public across the nation; and exploring every opportunity for partnerships, both nationally and internationally.
Source: Investing in America's Future: Strategic Plan FY 2006-2011, NSF 06-48, National Science Foundation, Arlington, VA, 2006.

Measuring Success: NSF BP Programs
• Broadening Participation Focused Programs (28 programs; 17 require evaluations)
• Programs with an Emphasis on Broadening Participation (17 programs; 8 require evaluations)
• Programs with Broadening Participation Potential (16 programs; 9 require evaluations)
• Other Broadening Participation Efforts (5 programs)
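As a quick illustration, the portfolio counts above can be rolled up into a single monitoring figure. A minimal Python sketch, assuming the "Other" efforts carry no stated evaluation requirement (the report does not say):

    # Tally evaluation coverage across the NSF BP portfolio listed above.
    # The 0 for "Other BP Efforts" is an assumption, not from the report.
    portfolio = {
        "BP Focused Programs": (28, 17),  # (total programs, programs requiring evaluations)
        "Emphasis on BP": (17, 8),
        "BP Potential": (16, 9),
        "Other BP Efforts": (5, 0),
    }
    total = sum(n for n, _ in portfolio.values())
    with_eval = sum(e for _, e in portfolio.values())
    print(f"{with_eval} of {total} programs ({with_eval / total:.0%}) require evaluations")
    # -> 34 of 66 programs (52%) require evaluations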
Measuring Success: Suggested Monitoring Metrics
Institution-Focused Targeted Programs
Goal: Increase research capability and teaching effectiveness
Baseline data: collaborative relationships, funding distribution, % URM students, total enrollment
Follow-up data: collaborative relationships established, funding support obtained, teaching reforms effected

Measuring Success: Recommendations
It is recommended that the NSF:
• Conduct periodic evaluations, including external reviews ranging from the program level to larger cross-sections of the portfolio
• Develop a common framework requiring that BP projects collect uniform data
• Review all funded programs to determine whether program funds serve a representative proportion of members of underrepresented groups or institutions, and whether positive outcomes of programs are distributed equitably among all groups of participants or institutions

Broadening Participation (BP): Critical Issues Related to Indicators and Outcomes
• Developing shared understanding and clarifying meaning
• Addressing "success" at multiple levels

Important Distinctions
• Inputs
• Outputs
• Process
• Outcomes

Inputs
Resources, contributions, and investments that go into the project. Input indicators measure resources, contributions, and investments such as:
• Staff
• Volunteers
• Funding
• Materials
• Facilities
• Investments made to support BP

Outputs
Units of services and goods provided by the project. Output indicators measure things such as the scope/size of activities, services, events, and products reaching underrepresented groups:
• Numbers of students served
• Numbers of workshops

Process
Ways in which project services and goods are provided. Process indicators measure the extent to which BP projects, programs, and strategies are delivered as intended (alignment).

Outcomes
Things the project hopes to achieve; the actual benefits, impacts, or changes. Outcome indicators are expressed in terms of changes for individuals, groups, communities, institutions, and systems:
• Knowledge, attitude, and skill changes
• Behavior changes
• Value changes
• Policy, procedural, and practice changes
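A minimal sketch of how these four indicator categories might be encoded for uniform project monitoring. The type names and example entries are hypothetical, not from the workshop report:

    from dataclasses import dataclass
    from enum import Enum

    class IndicatorType(Enum):
        INPUT = "input"      # resources invested: staff, funding, facilities
        OUTPUT = "output"    # services/goods delivered: students served, workshops held
        PROCESS = "process"  # fidelity: delivered as intended (alignment)
        OUTCOME = "outcome"  # changes in knowledge, behavior, values, policy

    @dataclass
    class Indicator:
        name: str
        kind: IndicatorType
        value: float
        unit: str

    # Illustrative entries only; an actual BP project would define its own.
    indicators = [
        Indicator("NSF funds invested", IndicatorType.INPUT, 250_000, "USD"),
        Indicator("Students served", IndicatorType.OUTPUT, 120, "students"),
        Indicator("Workshops delivered as planned", IndicatorType.PROCESS, 0.9, "fraction"),
        Indicator("Gain in STEM interest", IndicatorType.OUTCOME, 0.4, "points, 5-pt scale"),
    ]

A common record shape like this is one way to act on the earlier recommendation that BP projects collect uniform data.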
Considering BP Success at Multiple Levels
Level 1: Having access to the benefits of STEM knowledge
Level 2: Having access to STEM knowledge
Level 3: Studying STEM
Level 4: Working in STEM areas
Level 5: Generating STEM knowledge

Problems in Determining "Success"
• Defining it in terms of an increase in absolute number or percentage
• Defining it in terms of an increase in both number and percentage
• Defining it in terms of the end point being "parity" (absolute number)

Other Considerations in Defining Success
• Defining "parity" as a range
• Achieving parity as more people participate overall
• Considering the discipline/field size to which the definition of success applies
• Integrating qualitative indicators (e.g., broadening and transforming perspectives)

Other Indicators of Success in Broadening Participation
• Individual level indicators
• Institutional level indicators
• Foundation level indicators

Individual (Student) Level Indicators
• Participation
• Retention, persistence, and success
• Experiences
• Attitudes

Institutional Level Indicators
• Staffing
• Policies, programs, and institutional commitment
• Accountability and rewards
• Monitoring, tracking, and using data for improvement
• Collaborations

Foundation Level Indicators
• Inclusion of information about the importance of BP
• Review and monitoring of Foundation policies/practices in terms of their potential to broaden participation
• Diversity of professionals involved with the Foundation
• Foundation resources devoted to BP
• Improvements to the knowledge base about broadening participation
• Implementation of effective strategies at the Foundation level to broaden participation

Evaluating BP: Research vs. Evaluation
Goal. Research: to move the knowledge base forward. Evaluation: to assess quality and effectiveness.
Outcome. Research: why something does or doesn't work. Evaluation: whether something does or doesn't work.
Focus. Research: the research itself. Evaluation: the program or intervention.
Designs, measures, and analysis: no difference.

Evaluating BP: Longitudinal Tracking
"Being able to follow students longitudinally is the key to any sophisticated understanding of how colleges are doing and what's happening to students." (Thomas R. Bailey, 2008)
Without longitudinal data, the generation and testing of causal models tied to successful participation in STEM for diverse populations will be difficult if not impossible.

Evaluating BP: Using Comparison Groups

Evaluating BP: Selecting Designs
Considerations in selecting an evaluation design:
• The appropriateness of the fit between the design of the program or "intervention" and the requirements of more rigorous evaluation methodologies
• The timing of the evaluation
• The balance between the level of investment in the evaluation and the level of investment in, and the intensity of, the intervention
• The level of evidence expected given the nature of the intervention
• The strength of rival hypotheses

Evaluating BP: Selecting Designs (Examples)

Study Type: Quantitative Case Study
Design: One-shot Post-test Only Design
Representation: X O
Typical question answered: After attending a preview weekend, are at least 50% of the students planning to apply to the institution?

Study Type: Quasi-experimental Study
Design: One-shot Pre-test/Post-test Design
Representation: Oa X Ob
Typical question answered: Does working with a role model increase girls' interest in science careers?
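To make the design notation concrete (X = the intervention; O = an observation, with Oa taken before and Ob after), here is a minimal Python sketch of the one-shot pre-test/post-test design. The scores are invented for illustration:

    # One-shot Pre-test/Post-test design: Oa X Ob.
    # Oa = observation before the intervention X; Ob = observation after.
    pre_scores  = [3.1, 2.8, 3.5, 2.9, 3.2]   # Oa: interest in science careers (1-5 scale)
    post_scores = [3.6, 3.4, 3.9, 3.1, 3.8]   # Ob: same students after role-model sessions (X)

    mean_pre = sum(pre_scores) / len(pre_scores)
    mean_post = sum(post_scores) / len(post_scores)
    print(f"Mean change (Ob - Oa): {mean_post - mean_pre:+.2f}")

With no comparison group, a positive change is suggestive rather than causal: rival hypotheses such as maturation, history, or testing effects remain uncontrolled, which is why the considerations above weigh comparison groups and the strength of rival hypotheses.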