RBMS Task Force on Metrics and Assessment: Reference and Instruction
Our group examined the areas of Reference and Instruction and found that while reference services need additional attention to achieve a common professional understanding and shared best practices, the profession has even further to go in the area of instruction.
Instructional Activities [1]
Findings:
1. The profession lacks a common articulation of what the goal of instruction for special collections should be, let alone how to measure it.
2. There appear to be no widely embraced or standardized metrics for special collections instruction; we conclude that existing metrics are generally inadequate.
3. Existing methodologies are generally inadequate for measuring value, impact, and outcome.
a. ARL's broad metrics on instruction are captured at the library level; whether libraries are even reporting special collections instruction within that number, or in ways that could be broken out for evaluation, has not been determined.
b. SAA and RBMS, the two leading U.S.-based special collections professional organizations, have not provided community standards, guidelines, or definitions (as far as we have been able to determine).
c. Archival Metrics' Teaching Support Toolkit [2] is aimed at instructors and assesses how archives support their teaching (rather than students' learning). While this might measure the archives' value to teachers, it does not get at the impact and learning outcomes for students, especially those articulated by the archives. The student's experience can be evaluated using the "Student Researchers" questionnaire, but that tool mixes instruction and reference qualitative assessment.

[1] Our task force's working definition of instruction is: providing instruction to classes in a wide variety of disciplines related to and/or using collections; giving topical presentations related to and/or using collections to a range of audiences. Activities encompassed in this area can include:
• Working with faculty to develop courses or assignments that use special collections
• Developing in-class instructional activities tailored to course learning objectives
• Selecting appropriate materials for class sessions related to collections
• Conducting in-person instruction for classes in special collections spaces or in on-campus classrooms
• Creating course-related web pages/subject guides/LibGuides highlighting special collections materials
• Contributing special collections-related content to the local course management system or web streaming
• Developing instructional videos or online modules for course-related use
• Providing tours of facilities or sessions about collections and services for visitors, community groups, students, etc.
• Developing and providing instruction to groups about using special collections services or materials
• Presenting sessions about special collections or services to groups of visitors and community members

[2] Archival Metrics (http://www.archivalmetrics.org/) provides "Toolkits" for college and university archives and special collections that are intended to be standardized, user-based evaluation tools; it is unclear how many special collections are using them. The Teaching Toolkit is a qualitative survey meant to measure the success of a teacher's interaction with special collections around a course. It is helpful for improving service in-house, but it is limited in scope to in-house activities.
Recommendation: We suggest that RBMS provide guidance, through the work of this task force, that could evolve such community standards, guidelines, and definitions; shared values and impacts need to be identified first. Development of the appropriate metrics to gather in support of those shared values and impacts can follow. Determine first what we are trying to accomplish, and then determine the best ways to measure whether or not we are meeting our objectives.
We identified a list of instructional metrics special collections may be gathering (a sketch of how such records might be logged follows the list):
• Number of presentations or classes per term or year
• Type of presentation or class: tour, orientation, course-based, team-taught, etc.
• Number of students reached by instructional activity
• Prep time for the archival staff instructor
• Type of group attending the presentation: public, undergraduate, graduate, K-12
• Type of follow-up/post-instruction assignment or activity
• Collections used (number, type, subject)
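One way to make such counts comparable across institutions is to log each session as a structured record and derive the counts from the log. The following Python sketch is purely illustrative; the record fields and function names are hypothetical inventions of our own, not anything prescribed by RBMS, ARL, or Archival Metrics:

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class InstructionSession:
    """One instruction event. Field names are illustrative, not a standard."""
    term: str              # e.g. "2013-fall"
    session_type: str      # tour, orientation, course-based, team-taught, ...
    audience: str          # public, undergraduate, graduate, k-12, ...
    students: int          # number of attendees
    prep_hours: float      # staff preparation time
    collections: list[str] = field(default_factory=list)  # collections used

def sessions_per_term(log: list[InstructionSession]) -> Counter:
    """Number of presentations or classes per term (first metric above)."""
    return Counter(s.term for s in log)

def students_reached(log: list[InstructionSession]) -> int:
    """Total number of students reached across all logged sessions."""
    return sum(s.students for s in log)
```

Deriving the per-term and attendance counts from one shared record, rather than tallying each separately, keeps the reported numbers internally consistent.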
Note, however, that these metrics are inadequate for articulating the value, impact, and outcomes we want to capture, including:
• Are we making a difference by making presentations?
• Are participants learning anything, such as skills or knowledge, from the presentation? What are they taking away? Define these; this is low-hanging fruit, since work has already started ("Carini's Curricular Concepts," etc.).
• Has a user's understanding of a subject area, research process, collection content, or departmental procedure improved because of instruction efforts?
• Is the staff time involved in preparation for presentations worth it?
• Are more people using the collections because of our instructional activities? Is that a shared goal for all departments?
• Are the disciplines that could potentially benefit from instruction in special collections all taking advantage of it? Is this a shared goal for all departments?
• Is special collections advancing the broader institution's teaching and learning goals?
• Do these presentations support students' life-long learning?
• Does this work adequately support the teaching goals of the faculty and departments?
The priority that RBMS should place on developing better metrics and related definitions and guidelines for use:
Given the gap we perceive in community standards, guidelines, and definitions for instruction in special collections, and the increased expectation that special collections should be able to articulate their value to the teaching and learning activities that occur in their parent institutions, we suggest that RBMS establish some basic metrics and methods in the area of instruction. Some straightforward first steps could include:
• RBMS could contribute a definition of "instruction" to the SAA glossary (http://www2.archivists.org/glossary) based on definitions used by the broader library profession.
• RBMS could provide an ACRL/RBMS standard and guidelines relevant to instruction activities in special collections that include expectations that certain activities are measured, along with suggestions for methods of measurement. ACRL's many information literacy competency, objectives, and standards statements <http://www.ala.org/acrl/standards> might also help craft a standards/competency statement that reflects the value, outcomes, and impact of special collections instruction.
• RBMS could advise ARL to ensure that ARL libraries are recording and reporting their special collections instruction metrics in line with those reported by the broader library. Additionally, the ARL definitions for instruction metrics (http://www.libqual.org/documents/admin/12instruct.pdf, Questions 14-15) could be tested for broader use across special collections regardless of ARL affiliation.
• RBMS could investigate the viability of using the Archival Metrics Teaching Toolkit (see footnote 2) to gather information from teachers and faculty on the success of working with special collections in their instruction activities, and from students on their success in using special collections for their research. RBMS may consider working with Archival Metrics to further develop and refine the Teaching Toolkit and the Student Researcher Toolkit, or work to develop an RBMS-approved suite of metrics similar to Archival Metrics.
• RBMS could analyze ACRL's Value of Academic Libraries report <http://www.acrl.ala.org/value> to ground our work and recommendations in the current "Value" statement of our parent (ACRL) community.
More involved would be the creation of guidelines that would direct more standardized gathering of metrics regarding the following (a sketch of controlled category values follows the list):
• Type of presentation or class: tour, orientation, course-based, team-taught, etc.
• Number of students reached by instructional activity (aligns with ARL Question 15, http://www.libqual.org/documents/admin/12instruct.pdf)
• Prep time for the instructor
• Type of group attending the presentation: public, undergraduate, graduate, K-12
• Type of follow-up/post-instruction assignment or activity
• Collections used (number, type, subject)
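Standardized gathering would also imply controlled vocabularies for categorical fields such as presentation type and audience, so that counts can be merged across institutions. A sketch, using only the example values from the list above (emphatically not an approved RBMS vocabulary):

```python
from enum import Enum

class SessionType(Enum):
    # Values taken from the examples above; not an approved vocabulary.
    TOUR = "tour"
    ORIENTATION = "orientation"
    COURSE_BASED = "course-based"
    TEAM_TAUGHT = "team-taught"

class Audience(Enum):
    PUBLIC = "public"
    UNDERGRADUATE = "undergraduate"
    GRADUATE = "graduate"
    K12 = "k-12"

def is_valid(session_type: str, audience: str) -> bool:
    """Reject free-text entries so category counts stay comparable."""
    try:
        SessionType(session_type)
        Audience(audience)
    except ValueError:
        return False
    return True
```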
Reference Transactions [3]
Findings:
1. There is more of an existing foundation for both defining and measuring reference activities in special collections.
2. However, there are no widely shared or standardized metrics for special collections reference; we suggest that existing metrics are generally inadequate.
3. Existing methodologies are generally inadequate for measuring value, impact, and outcome.
a. ARL's broad metrics on reference are captured at the library level; whether libraries are even reporting special collections reference within that number, or in ways that could be broken out for evaluation, has not been determined.
b. SAA and RBMS, the two leading U.S.-based special collections professional organizations, have not provided community standards, guidelines, or definitions (as far as we have been able to determine).
c. The Archival Metrics Toolkits for "Researchers", "Student Researchers", "Online Finding Aids", and "Websites" are user-based tools meant to evaluate the quality of archives and users' experiences in using them. The toolkits can gather qualitative feedback on user experience, but they include assessment beyond the reference interaction. They may point a direction toward qualitative metrics for measuring reference value, impact, and outcome.
Recommendation: We suggest that RBMS provide guidance, through the work of this task force, that could evolve such community standards, guidelines, and definitions, and perhaps make the statistics that are being gathered more relevant. Development of the appropriate metrics to gather in support of those shared values and impacts can follow. Determine first what we are trying to accomplish, and then determine the best ways to measure whether or not we are meeting our objectives.
We identified a list of reference metrics special collections may be gathering:
• Number of users who come into a facility or physically approach a reference desk
• Number of users or questions that do not come in physically but arrive through email, phone, fax, and other virtual paths
• Type of patron: undergraduate, general public, etc. (keeping in mind that the common tool would be for academic libraries)
• Type of reference question: directional, reference, remote, etc.
• The content of the question and the content of the answer
• Collection(s) consulted
• Purpose or eventual use of the question (are they writing a book, doing genealogy?)
• Time spent answering the question
• Time of day
• Type of staff member who answered (professional, paraprofessional, student, etc.)

[3] Our task force's working definition of reference is: providing reference assistance to patrons; answering questions regarding the nature, content, and usage of collections; matching patron topics of inquiry with relevant collections; and helping users formulate research questions, strategies, and processes. Activities encompassed in this area can include:
• Retrieving requested materials from closed stacks
• Facilitating reformatting request orders
• Providing directional or operational guidance in using the facility, equipment, or other resources
• Assisting with descriptive tools (finding aids, catalogs, indexes) in the identification of collections
• Locating materials that meet a user's question or address a research topic
• Providing consultation on research topics or methodology
• Interpreting the context, relevancy, and meaning of materials
Note, however, that these metrics are inadequate for articulating the value, impact, and outcomes we want to capture, including:
• What academic impact do reference encounters have for patrons?
• Has a user's understanding of a subject area, research process, collection content, or departmental procedure improved because of reference efforts?
• What is the long-term value of a reference encounter in special collections, for different categories of patron?
• What is the impact of our reference services on the global community of scholars?
• What is the scholarly value of our collections?
• Is the staff time it takes to provide reference in special collections worth it?
• What is the impact of the reference skills of special collections staff?
The priority that RBMS should place on developing better metrics and related definitions and guidelines for use:
Since there may already be a common understanding of what reference is, and possibly common methodologies for counting its amount, kinds, and audience, and given the increased expectation that special collections should be able to provide usage statistics and articulate their value to the teaching, learning, and research activities of their parent institutions, we suggest that RBMS articulate a refined common set of reference metrics that should be kept, along with recommended ways to capture them. Some straightforward first steps could include:
• RBMS could refer people to, or formally adopt, the definitions of "reference" provided by SAA, ARL, or other bodies.
• RBMS could provide an ACRL/RBMS standard and guidelines relevant to reference activities in special collections that include expectations that certain activities are measured, along with suggestions for methods of measurement.
• RBMS could advise ARL to ensure that ARL libraries are recording and reporting their special collections reference metrics in line with those reported by the broader library.
• RBMS could work with Archival Metrics to further refine the toolkits as a way to gather information from users on their success in using special collections in their research, or develop similar RBMS metrics. Depending on what goals the community has for evaluating reference as it relates to use, RBMS could assess and possibly adopt the Archival Metrics tools ("Researchers", "Student Researchers", "Online Finding Aids", and "Websites") as qualitative instruments for measuring aspects of "reference". Before the usefulness of the Archival Metrics tools can be determined, the community needs to determine first what it is trying to accomplish by measuring "reference".
More involved would be the creation of tools, methods, and guidelines that would direct more standardized gathering of metrics regarding the following (a sketch of such a transaction record follows the list):
• Number of users who come into a facility or physically approach a reference desk
• Number of users or questions that do not come in physically but arrive through email, phone, fax, and other virtual paths
• Type of patron: undergraduate, general public, etc. (keeping in mind that the common tool would be for academic libraries)
• Type of reference question: directional, reference, remote, etc.
• The content of the question and the content of the answer
• Collection(s) consulted
• Purpose or eventual use of the question (are they writing a book, doing genealogy?)
• Time of day
• Time spent answering the question
• Type of staff member who answered (professional, paraprofessional, student, etc.)
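As with instruction, a shared transaction record could underlie these counts. A minimal Python sketch, with field names of our own invention mapped to the metrics above (illustrative only, not an RBMS standard):

```python
from collections import Counter
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ReferenceTransaction:
    """One reference interaction. Illustrative only, not an RBMS standard."""
    when: datetime         # yields the 'time of day' metric
    channel: str           # in-person, email, phone, fax, ...
    patron_type: str       # undergraduate, general public, ...
    question_type: str     # directional, reference, remote, ...
    minutes_spent: int     # time spent answering the question
    staff_type: str        # professional, paraprofessional, student, ...
    collections: list[str] = field(default_factory=list)  # collection(s) consulted

def counts_by_channel(log: list[ReferenceTransaction]) -> Counter:
    """Separate in-person users from virtual paths (first two metrics above)."""
    return Counter(t.channel for t in log)

def counts_by_patron_type(log: list[ReferenceTransaction]) -> Counter:
    """Tally transactions by patron type."""
    return Counter(t.patron_type for t in log)
```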
Methods commonly used to collect the data required by these metrics:
• Aeon
• Libstats/LibAnswers (Springshare)
• Tick marks
• Internal circulation database
• Sign-in sheets
• Reader registrations
Less commonly used, but still sometimes employed:
• Mention of collections or staff in book prefaces, articles, etc.
• Citation analysis
• Thank-you letters
• Surveys attached for remote patrons
• Comment books in the reading room