COMMON ASSESSMENT TERMS

Academic Program
An academic program is a program of study over a period of time that leads to a degree (e.g., BBA-Marketing, BA-History, BS-Biology, MBA-Finance, MS-Statistics, PhD-Environmental Science and Engineering).

Academic Success Measures
Outcomes are not to be confused with "success," which is usually associated with measures of retention, grades, and transfer rates; these success measures do not give us information about what students have learned.

Assessment (of Student Learning)
Assessment of student learning involves the systematic collection, review, and use of information about educational programs, undertaken for the purpose of improving student learning and development. Assessment answers the question, "How do we know what students are learning, and how well they are learning it?" As a process, it has six steps:
1) specify learning objectives;
2) identify learning opportunities;
3) select assessment methods;
4) gather data on student learning;
5) evaluate the data; and
6) make decisions and implement them in order to make improvements.

Assessment Plan
An assessment plan describes the student learning outcomes to be achieved, the direct and indirect assessment methods used to evaluate the attainment of each outcome, the intervals at which evidence is collected and reviewed, and the individual(s) responsible for the collection and review of evidence.

Assessment Instruments
Assessment instruments are used to gather data about student learning. These instruments can be either quantitative or qualitative, and may include traditional tests (multiple choice, essay, or other formats) as well as alternative forms of assessment such as oral examinations, group problem solving, performances and demonstrations, portfolios, and peer observations. The most important factor in choosing an instrument is ensuring that it (a) gathers information about the desired outcome, not something else, and (b) yields results that can be used to make improvements.

Benchmark
A description or example of student or institutional performance that serves as a standard of comparison for evaluating or judging quality. Benchmarks can be "internal," i.e., comparing performance against past performance, or "external," i.e., comparing performance against the performance of another institution, department, or program.

Bloom's Taxonomy of Cognitive Objectives
Six levels arranged in order of increasing complexity (1=low, 6=high):
1. Knowledge: Recalling or remembering information without necessarily understanding it. It includes behaviors such as describing, listing, identifying, and labeling.
2. Comprehension: Understanding learned material; includes behaviors such as explaining, discussing, and interpreting.
3. Application: The ability to put ideas and concepts to work in solving problems. It includes behaviors such as demonstrating, showing, and making use of information.
4. Analysis: Breaking down information into its component parts to see interrelationships among ideas. Related behaviors include differentiating, comparing, and categorizing.
5. Synthesis: The ability to put parts together to form something original. It involves using creativity to compose or design something new.
6. Evaluation: Judging the value of evidence based on definite criteria. Behaviors related to evaluation include concluding, criticizing, prioritizing, and recommending.
Criterion-Referenced Assessment
Expectations for the level of performance are based on specific learning outcomes. In criterion-referenced assessment, evaluation is based only on how one does relative to what one needs to know. To use a grading analogy, everyone can get an "A" if everyone meets the expectations for learning that define an A.

Norm-Referenced Assessment
The level of success is based on the relative standing of a person or program within a larger group, independent of what one actually knows. To use the grading analogy, only a certain proportion of students can receive an A.

Curriculum Map
Presented in rows and columns, a curriculum map relates program-level student learning objectives (usually listed in individual rows) to the courses and/or experiences that students take as they progress toward graduation (usually listed in columns). Example: a map might show that a "written communication" objective is introduced in a first-year writing course, reinforced in mid-level seminars, and assessed in the senior capstone.

Direct Measures of Learning
Direct methods of assessing student learning are those that provide tangible and visible evidence of whether or not a student has achieved the intended learning, can perform a certain task, exhibits a particular skill, or holds a particular value. Examples of direct methods are tests (linked to specific learning outcomes), rubrics, portfolios, capstone projects, field supervisor ratings, employer ratings, and scores on licensure exams.

Indirect Measures of Learning
Indirect methods of assessing student learning involve data that are related to the act of learning, such as perceptions about learning, but do not reflect learning itself. Examples of indirect methods are surveys, focus groups, course evaluations, graduate school acceptance rates, internship evaluations, and alumni honors and awards.

Embedded Assessment
A means of gathering information about learning at the program level that uses assessment methods already in use for individual student learning in courses. Classroom assignments used as part of the method for assigning students a grade in a course can then also be used for program assessment purposes.
Example 1: As part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a university-wide outcome to demonstrate information literacy).
Example 2: Faculty members teaching multiple sections of an introductory course might include a common pre-test to determine student knowledge, skills, and dispositions in a particular field at program admission.

Formative Assessment
Formative assessment methods are used to make improvements, not to come to a final conclusion about performance or knowledge. For example, an instructor can use these methods during a class to evaluate whether students are understanding the material (this is called "classroom assessment methodology"). At the program level, formative assessment means the principle of using whatever results are obtained to improve the program in the future.

Summative Assessment
Accountability-oriented assessment: the use of data assembled at the end of a particular sequence of activities to provide a macro view of teaching, learning, and institutional effectiveness.

Learning Goal
Statements that describe broad learning concepts, for example clear communication, problem solving, and ethical awareness; a description of what our students will be, or what our students will have, upon successfully completing the program.
Example: Know and apply basic research methods in psychology, including research design, data analysis, and interpretation.

Learning Outcome
Statements that describe, in precise terms, the observable behaviors or actions by which students will demonstrate that the intended learning has occurred; a description of what we intend for students to know/think (cognitive), feel (affective), or do (behavioral) when they have completed a given course of study. In short, a description of what our students will be able to do.
Example: Graduates in Communication will be able to interpret non-verbal behavior and to support arguments with credible evidence.

Locally Developed Means of Assessment
Assessment instruments such as surveys, examinations, projects, or assignments developed by the institution or by a department (rather than standardized or commercially available measures).

Methods of Assessment
Techniques or instruments used in assessment.

Qualitative Assessment Methods
Methods that rely on descriptions rather than numerical analyses. Examples are ethnographic field studies, logs, journals, participant observation, interviews and focus groups, and open-ended questions on interviews and surveys. The analysis of qualitative results requires non-statistical skills and tools.

Quantitative Assessment Methods
Methods that rely on numerical scores or ratings, such as surveys, inventories, institutional/departmental data, and departmental/course-level exams (locally constructed, standardized, etc.). Analyzing quantitative results requires either descriptive or inferential statistics.

Portfolio
An accumulation of evidence about individual proficiencies, especially in relation to learning outcomes. Examples include, but are not limited to, samples of student work such as projects, journals, exams, papers, presentations, and videos of speeches and performances. The evaluation of portfolios requires the application of evaluation tools together with the judgment of experts (faculty members).

Reliability
Reliable measures are measures that produce consistent responses over time.

Rubrics (Scoring Guidelines)
A set of categories that define and describe the important components of the work being evaluated. Each category contains levels of completion or competence, with a score assigned to each level and a clear description of the criteria that must be met to attain the score at each level. Rubrics are written and shared in advance so they can be used to judge performance, to differentiate levels of performance, and to anchor judgments about the degree of achievement.

Student Learning Outcome Assessment Cycle
An institutional pattern of identifying outcomes, assessing them, and making improvement plans based on the assessment.

Teaching-Improvement Loop
Teaching, learning, outcomes assessment, and improvement may be defined as elements of a feedback loop in which teaching influences learning, and the assessment of learning outcomes is used to improve teaching and learning.

Triangulation
The use of multiple assessment methods to provide a more complete description of student learning. An example of triangulation would be an assessment plan for a learning outcome that incorporated surveys, interviews, and observations of student behavior.

Validity
As applied to a test, validity refers to a judgment concerning how well the test or instrument does in fact measure what it purports to measure.