Competency N
Melvyn Yabut
LIBR 289
Dr. Linda Main
10/17/10

Competency N

Each graduate of the Master of Library and Information Science program is able to evaluate programs and services on specified criteria.

Introduction

Evaluation is the process of measuring the performance, output, or outcomes of an implemented program or an offered service against a set of criteria (Matthews, 2007). It is an essential component of planning, managing, and marketing libraries, since it informs current and future plans for library programming and services. It is a means of gathering feedback from the community to serve as a basis for assessing the usefulness of programs and services. It is a tool librarians use to justify the continuation or elimination of programs and services. It provides the rationale for the allocation and reallocation of limited funds among the different programs and services that the library offers (McClure, 2008). And it is the basis of a library's annual report on how well it is meeting the informational, recreational, and educational needs of its community. In this essay, the student presents evidence of his competence in evaluating programs and services based on specified criteria.

Evaluation

McClure defines evaluation as the "process of determining the success, impact, results, costs, outcomes, or other factors related to a library activity, program, service, or resource use" (2008, p. 179). The focus of an evaluation can be library-centric, customer-centric, or a combination of both (Matthews, 2007). A library-centric evaluation is internal and looks at processes, functions, and services (Matthews, 2007). It measures the library's programs and services against a set of criteria that may include costs, service efficiency, and the allocation and use of human and material resources. A customer-centric evaluation focuses on discerning whether or not the library is meeting customer expectations (Matthews, 2007). A combination evaluation looks at the quality of service from both the customer's and the library's perspective (Matthews, 2007). A combination evaluation is outcome-based: it measures the quality of service provided by the library against customer expectations. It is a systematic approach to assessing the extent to which a program or service has achieved its stated objective (Matthews, 2007).

Criteria for Evaluation

In evaluating programs and services, librarians use a set of criteria to define what they want to measure. There are six criteria that librarians usually include in evaluating a service or program: extensiveness, efficiency, effectiveness, service quality, impact, and usefulness (McClure, 2008). Extensiveness refers to the quantity of a service or program a library provides in a given time frame; an example would be counting the number of reference questions answered in a week. In measuring efficiency, librarians look at the amount of resources used to provide a particular service or program. The usual units of measurement are the time and cost of the service against the quantity of items processed or the number of transactions; an example would be a cost-benefit analysis of offering LINK+ per customer. Effectiveness, on the other hand, measures how well a program or service meets its stated objective. An example would be rating the success rate of users in satisfying an information need (McClure, 2008).
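A minimal sketch of how such a success rate might be computed from exit-survey responses appears below; the field names and responses are hypothetical illustrations, not data from the readings or the attached papers.

```python
# Hypothetical exit-survey responses: did the user satisfy their information need?
# Field names and values are illustrative assumptions, not data from the readings.
responses = [
    {"question_type": "reference", "need_satisfied": True},
    {"question_type": "reference", "need_satisfied": True},
    {"question_type": "reference", "need_satisfied": False},
    {"question_type": "directional", "need_satisfied": True},
]

# Effectiveness expressed as the share of users whose information need was met.
satisfied = sum(1 for r in responses if r["need_satisfied"])
success_rate = satisfied / len(responses)
print(f"Success rate: {success_rate:.0%}")  # 75% for this sample
```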
Effectiveness is based on user expectations and usually employs a rating scale, such as rating a program that trained customers in the use of computers from not helpful to most helpful. In measuring service quality, librarians assess the level of excellence of a program or service based on organizational or user expectations. According to McClure (2008), quality of service is difficult to measure because of differing perspectives on what constitutes excellent service. In evaluating impact, librarians measure the change in attitude, behavior, skill level, and knowledge of users as a result of offering a particular service or program; impact is also referred to as outcomes (McClure, 2008). An example would be evaluating the impact of training videos on using online resources on the information-seeking behavior of students. Finally, in measuring usefulness, librarians assess the utility of a program or service to a specific group of users. Similar to quality of service, usefulness is difficult to measure and depends on how users or the library define or perceive it (McClure, 2008). An example would be evaluating the usefulness to Boomers of a program that offers memory-enhancing exercises through the use of computer software.

An Evaluation Action Plan

Matthews (2007) enumerates seven steps in conducting an evaluation of library services: identify the problem, determine the scope of the evaluation, check if the answer already exists, determine the methodology, specify the data needed, conduct the evaluation and prepare the report, and use the results to improve services (Matthews, 2007).

The first step is selecting the program or service area to evaluate. Recurring issues, budget challenges, and frequency of usage are areas librarians look at in selecting a program or service to evaluate; examples include assessing the efficiency of reference interview transactions at the reference desk or evaluating the usefulness of an afterschool craft program in relation to staffing issues.

The second step is to determine the scope of the analysis. An evaluation will not cover all aspects of a program or service, so it is important to specify the issues the assessment will address. Scope includes determining the focus of the evaluation (whether it is library-centric, customer-centric, or a combination of both), the methodology and design of the evaluation, and the purpose of conducting it.

The third step is to determine whether the answer already exists. Networking with colleagues at other branches or in another library system and conducting a literature review are just some of the ways to determine whether a particular library service has already been studied and the answer already exists. Reviewing the experience of others also helps avoid mistakes in methodology or design. Another advantage of checking whether the answer already exists is that it can reveal new areas to study by exposing gaps in the professional literature.

The next step is to determine whether to use a quantitative or qualitative approach. A quantitative approach uses techniques that gather numerical values that are then subjected to statistical analysis, while a qualitative approach gathers insight into library services. These two broad methodologies translate into different evaluation types that measure performance, output, or outcomes.

The fifth step is deciding what data to gather.
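For instance, an efficiency-focused evaluation of reference desk service might gather records such as the following and reduce them to a cost-per-transaction figure. This is a minimal sketch; the field names, hourly cost, and transaction data are hypothetical assumptions, not taken from the readings.

```python
# Hypothetical data records for an efficiency evaluation of reference desk service.
# Field names and the hourly staff cost are illustrative assumptions.
HOURLY_STAFF_COST = 30.00  # dollars per staff hour (assumed figure)

transactions = [
    {"date": "2010-10-04", "minutes": 12, "question_type": "research"},
    {"date": "2010-10-04", "minutes": 3,  "question_type": "directional"},
    {"date": "2010-10-05", "minutes": 20, "question_type": "research"},
]

# Efficiency expressed as staff cost per reference transaction.
total_minutes = sum(t["minutes"] for t in transactions)
total_cost = (total_minutes / 60) * HOURLY_STAFF_COST
cost_per_transaction = total_cost / len(transactions)

print(f"Total staff cost: ${total_cost:.2f}")
print(f"Cost per transaction: ${cost_per_transaction:.2f}")
```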
The choice of methodology also determines what data will be gathered (Matthews, 2007). It is important to be certain what data is needed before conducting the evaluation, both to avoid gathering unnecessary data and to ensure that the data needed to inform service improvements is collected.

The penultimate step is conducting the analysis and preparing the report. In this step, the data is gathered and analyzed, and a report summarizing the results and outlining recommendations is prepared. The evaluation report should also document the purpose of the evaluation, its focus, and a description of how the evaluation was conducted. It is good practice to include an executive summary of the preceding items (McClure, 2008). Statistical data can be presented through graphs or charts rather than through lengthy prose (McClure, 2008). Recommendations should be explicit and offer specific strategies for implementation (McClure, 2008). Detailed data and a sample of the data-gathering tool should be included as appendices (McClure, 2008). The language of the report should be tailored to its intended audience (Matthews, 2007); for example, a report intended for outside stakeholders should avoid, or if unavoidable explain, library jargon.

Finally, the last step is to use the results to improve library services. An evaluation should always have a potential for action and not be an end in itself (Matthews, 2007). Potential actions include allocating funds, resolving recurring issues such as delays in providing services, and retaining or eliminating a program or service.

Evidence of Competency

The student presents four pieces of evidence that demonstrate competence in evaluating programs and services on specified criteria.

The first evidence is a group paper on the evaluation of the cataloging services of an academic library, submitted to LIBR 249. The group decided to evaluate the cataloging services of the Stanford University Libraries and Academic Information Resources (SULAIR). The evaluation was based on the ten areas of cataloging services used by Sanchez (2007) in a survey of emerging issues in nine cataloging agencies in academic libraries: description of the organization and staff, cataloging productivity, new technologies and the enhancement of online catalogs, transition to metadata standards, cataloging of web sites and digital and special collections, library catalog, database maintenance, holdings and physical processing, relationship of the cataloging agency with the acquisitions department, staff education, and other issues facing library staff (Sanchez, 2007). The student, together with the two other members of the group, visited the head of the Metadata Development Unit of SULAIR to interview her regarding emerging issues in SULAIR's cataloging services. All three members of the group asked questions developed from the ten categories used by Sanchez. Aside from the interview, the student and the two other members of the group used documents posted on the SULAIR web site describing its cataloging policies and the recent redesign of the Metadata Development Unit. The final paper was a case study that described the current state of cataloging services at SULAIR.
The final paper demonstrates the student's competence in evaluating programs and services on specified criteria, since the student and the other members of the group developed questions based on the ten areas of cataloging services to examine the Metadata Development Unit's efficiency, effectiveness, and quality of service using the case study approach. The student's contributions to the group effort were participating in the group interview of the head of the Metadata Development Unit of SULAIR, editing the draft paper, and writing the introduction and the sections on the description of the organization and staff, productivity, and other issues. The paper is attached to this essay and labeled EvN01a, and the chat transcript showing the group's discussion of which member was responsible for writing each section is attached and labeled EvN01b.

The second evidence is another case study of a public library, looking at its web site, planning documents, budget, external stakeholders, and organizational structure. The paper was a group effort, the subject of the case study was the Palo Alto Main Library, and the case study was based on a set of questions provided by the professor and submitted to LIBR 204. The student was responsible for writing the section on demographics and planning and made the edits on the final draft of the group paper. The section on demographics included an assessment of the library's web site and an analysis of the library's user base. The specified criteria with regard to the web site required assessing its ease of use and its content, and the group was also required to provide recommendations for improving the web site. The evidence demonstrates the student's competence in evaluating programs and services on specified criteria, since the student assessed the web site based on its effectiveness in terms of usability and content. The paper is attached to this essay and labeled EvN02a, and a copy of the minutes of the group meeting dividing the sections of the paper among the group members is attached and labeled EvN02b. The assignment instructions and the questions provided by the professor are also attached to this essay and labeled EvN02c.

The third evidence is a best-practices report on mobilizing a library web site, submitted to the student's internship site and to LIBR 294. The report assessed how several public and academic libraries mobilized their web sites and made recommendations to the Mountain View Public Library (MVPL) for mobilizing its own site. A mobile page is a service that libraries and other information-providing entities offer to customers who use a mobile device, such as a smartphone with internet access. A mobile page differs from a regular web page in that it is selective about the amount of information it offers, taking into consideration the bandwidth and screen size of the devices accessing it. Offering a mobile page is a library service because it provides convenience, on-the-go account management, and library catalog access to mobile users. The research suggested that mobilizing can be done by building a separate site for mobile information, by using stylesheets, or by using third-party vendors that offer subscription-based mobilizing services. The student then evaluated the mobile pages and services of public and university libraries, comparing those that built their sites using mobile stylesheets with those that offered the service through third-party vendors.
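Such a comparison lends itself to a simple scoring rubric. The sketch below uses the content, features, and layout criteria described in the assessment that follows; the library names and scores are hypothetical illustrations, not figures from the attached report.

```python
# Hypothetical scoring rubric for comparing mobile library sites on content,
# features, and layout (scores out of 5); names and scores are illustrative only.
CRITERIA = ("content", "features", "layout")

sites = {
    "Library A (mobile stylesheet)": {"content": 4, "features": 3, "layout": 5},
    "Library B (third-party vendor)": {"content": 3, "features": 4, "layout": 4},
}

# Sum each site's scores across the criteria to support a side-by-side comparison.
for name, scores in sites.items():
    total = sum(scores[c] for c in CRITERIA)
    print(f"{name}: {total} of {len(CRITERIA) * 5}")
```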
The student assessed these libraries using the criteria of content, features, and layout of the mobile site and provided recommendations on prevailing practices that MVPL might want to adopt in offering the service. The paper demonstrates the student's ability to evaluate services on specified criteria by examining the practices of other libraries and using them as the basis for the introduction of a new service. The paper is also an example of a well-written evaluation report, since it includes the objectives of the evaluation, an executive summary, a set of recommendations, and an assessment of a service in anticipation of potential action (i.e., the development of an MVPL mobile page). The paper is attached to this essay and labeled EvN03.

The final evidence is a paper that evaluated ten children's web sites, submitted to LIBR 261. The paper is an assessment of five readers' advisory sites and five homework help sites devoted to children ages 9 to 12. The evaluation was based on the following parameters: layout, design, and content. The student assessed design in terms of how each site would appeal to its intended audience and the ease of using the site. The student assessed content by describing and drilling down through the headings and links provided on each site. The student also queried the sites by searching for books on the readers' advisory sites and by doing subject searches on the homework help sites. The paper demonstrates the student's competence in evaluating services offered through web sites, as it is a descriptive assessment of ten sites to determine their usefulness to their intended audience based on the parameters of layout, design, and content. The paper is attached to this essay and labeled EvN04.

Conclusion

The student presented evidence that demonstrates competence in evaluating library services on specified criteria. The evidence presented included an evaluation of the efficiency, effectiveness, and quality of service of the cataloging services of a university library; assessments of the usefulness of a public library web site and of readers' advisory and homework help sites to their intended audiences; and an evaluation of best practices in mobilizing a library site to inform the decision to offer a similar service. The criteria used to frame each evaluation and guidelines for reporting evaluations were also discussed.

Attached

EvN01a - Paper on evaluating cataloging services
EvN01b - Chat transcript of group meeting
EvN02a - Paper on evaluating a public library
EvN02b - Minutes of group meeting
EvN02c - Professor's assignment instructions and set of questions
EvN03 - Paper on best practices on mobilizing a library site
EvN04 - Paper on evaluating ten children's web sites

References

Matthews, J. R. (2007). The evaluation and measurement of library services. Westport, CT: Libraries Unlimited.

McClure, C. R. (2008). Learning and using evaluation: A practical introduction. In K. Haycock & B. E. Sheldon (Eds.), The portable MLIS (pp. 179-192). Westport, CT: Libraries Unlimited.

Sanchez, E. (2007). Emerging issues in academic library cataloging & technical services [PDF]. Primary Research Group.