LibQUAL+® 2009: CSUN Summary Data Analysis
LibQUAL+® is a Web-based survey offered by the Association of Research Libraries (ARL) that libraries can use to solicit, track, understand, and act upon users' opinions of service quality. Its creators point out that it is, as such, not a library "satisfaction" survey. The goals of LibQUAL+® are to:
Foster a culture of excellence in providing library service
Help libraries better understand user perceptions of library service quality
Collect and interpret library user feedback systematically over time
Provide libraries with comparable assessment information from peer institutions
Identify best practices in library service
Enhance library staff members' analytical skills for interpreting and acting on data
The LibQUAL+® survey evolved from a conceptual model based on the SERVQUAL® instrument, a tool for assessing service quality in the private sector.¹ The Texas A&M University Libraries and other libraries used modified SERVQUAL® instruments for several years; those applications revealed the need for a newly adapted tool that would serve the particular requirements of libraries. ARL, representing the largest research libraries in North America, collaborated with the Texas A&M University Libraries to develop, test, and refine LibQUAL+®. This effort was supported in part by a three-year grant from the U.S. Department of Education's Fund for the Improvement of Post-Secondary Education (FIPSE). To date, over 1,000 libraries in 17 countries have participated in LibQUAL+®, collecting responses from over 1,000,000 library users.
During the spring 2009 session, six CSU campuses participated in LibQUAL+® as a consortium: Northridge, Chico, San Jose, San Bernardino, San Luis Obispo, and Sacramento. Funding was provided by the Chancellor's Office to cover the costs charged by ARL. All six campuses agreed to use the same survey instrument and to draw random samples of their populations. In addition, each campus agreed to map its departments and programs to a standard list of disciplines provided by LibQUAL+®. Standardizing the survey allowed the campuses to compare their results with one another and with national norms.
At CSUN, a random sample of 3,000 undergraduates and 600 graduate students (10% of fall 2008 headcount for each group), plus 1,000 full- and part-time faculty (50% of fall 2008 headcount), received email invitations to participate in the LibQUAL+® survey, which was available on the Web from mid-March to early April 2009. Four emails were sent to the sample: the first announced the survey and its purpose, the second provided the link, and the third and fourth were reminders. An email address was also created especially
for the Library so that invitation recipients could respond with questions or provide comments without taking the LibQUAL+® survey. In addition, posters were placed at all 13 service points in the Library, and an announcement was posted on the Library's Web site. LibQUAL+® encourages the use of incentives to increase the return rate and automatically provides a random selection of 50 respondents from among those who opted to provide email addresses for the drawing. The Library provided $25 Matador Bookstore gift cards to all 50 randomly selected respondents.
Based on LibQUAL+®'s automated quality-control filters, 685 "usable" surveys were returned, for a response rate of 14.9% of the total sample of 4,600. To be "usable," a survey had to include answers to all 22 core questions and a selected "user group" (undergraduate student, graduate student, or faculty). Surveys with 11 or more "non-applicable" responses, or with excessively inconsistent responses (i.e., where respondents scored their minimum requirements higher than their desired levels on nine or more individual items), were also discarded as unusable. According to the LibQUAL+® procedure manual, a 15-20% return rate is typical for a Web-based survey, so our usable return rate of 14.9% is slightly below the minimum expected. However, because the sample was random, the results still support generalizations to a certain degree.
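To make the screening rules concrete, the sketch below applies them to one survey record. This is an illustrative reconstruction from the description above, not ARL's implementation; the record layout (a user group plus a minimum/desired/perceived triple or "n/a" for each core item) is assumed for the example.

```python
# Hypothetical sketch of the LibQUAL+ usability screening described above.
# A record is assumed to hold a user group and, for each of the 22 core
# items, either a dict of minimum/desired/perceived scores or "n/a".

def is_usable(record):
    """Return True if a survey passes the quality-control filters."""
    if record.get("user_group") not in {"undergraduate", "graduate", "faculty"}:
        return False                      # a user group must be selected
    items = record["core_items"]          # the 22 core questions
    if len(items) != 22 or any(v is None for v in items):
        return False                      # all core questions must be answered
    if sum(1 for v in items if v == "n/a") >= 11:
        return False                      # too many "non-applicable" responses
    inconsistent = sum(1 for v in items
                       if v != "n/a" and v["minimum"] > v["desired"])
    if inconsistent >= 9:
        return False                      # excessively inconsistent responses
    return True

# Illustrative record: answered everything, consistent scores.
example = {"user_group": "faculty",
           "core_items": [{"minimum": 6, "desired": 8, "perceived": 7}] * 22}
print(is_usable(example))          # True

# 685 usable surveys out of a sample of 4,600 gives the reported rate.
print(round(685 / 4600 * 100, 1))  # 14.9
```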
The LibQUAL+® 2009 survey (see Appendix 1) included 22 standardized, or core, questions that asked respondents to rate the Library across three dimensions, "Affect of Service," "Information Control," and "Library as Place," and to indicate their minimum, desired, and perceived levels of service for each question on a 9-point scale. Questions were worded as "When it comes to [insert question], my [minimum, desired, or perceived] service level is [number 1 through 9, low to high; or 'non-applicable']," so that each question had to be answered three times. Each service level was described in the survey instructions as follows:
Minimum--the number that represents the minimum level of service that you would find acceptable
Desired--the number that represents the level of service that you personally want
Perceived--the number that represents the level of service that you believe our library currently provides.
Affect of Service (The human dimension of service quality, relating to user interactions with library staff.)
AS-1 Employees who instill confidence in users
AS-2 Giving users individual attention
AS-3 Employees who are consistently courteous
AS-4 Readiness to respond to users' questions
AS-5 Employees who have the knowledge to answer user questions
AS-6 Employees who deal with users in a caring fashion
AS-7 Employees who understand the needs of their users
AS-8 Willingness to help users
AS-9 Dependability in handling users' service problems
Information Control (The collections and access tools dimension of service quality, relating to the relevance and ease of access to library resources in any format.)
IC-1 Making electronic resources accessible from my home or office
IC-2 A library Web site enabling me to locate information on my own
IC-3 The printed library materials I need for my work
IC-4 The electronic information resources I need
IC-5 Modern equipment that lets me easily access needed information
IC-6 Easy-to-use access tools that allow me to find things on my own
IC-7 Making information easily accessible for independent use
IC-8 Print and/or electronic journal collections I require for my work
Library as Place (The physical dimension of service quality, relating to the library as a place conducive to study, learning, and research.)
LP-1 Library space that inspires study and learning
LP-2 Quiet space for individual activities
LP-3 A comfortable and inviting location
LP-4 A getaway for study, learning, or research
LP-5 Community space for group learning and group study
In addition to the 22 core items, the 2009 CSU participant libraries selected five questions from an approved list, scored with the same minimum, desired, and perceived mechanism:
1. Availability of online help when using my library's electronic resources
2. Library orientations/instruction sessions
3. The multimedia (CD/DVD/video/audio) collections I need
4. Timely document delivery/interlibrary loan
5. An online catalog that is user-friendly for finding materials
In addition to the 22 core questions and five locally selected questions outlined above, a 9-point agreement scale (1 being "strongly disagree" to 9 being "strongly agree") was used to assess respondents' general satisfaction and information literacy outcomes based on the following statements (with one exception noted below*):

1. In general, I am satisfied with the way in which I am treated at the library.
2. In general, I am satisfied with library support for my learning, research, and/or teaching needs.
3. How would you rate the overall quality of the service provided by the library?*

* (In this case, the scale was nine points, 1 being "extremely poor" to 9 being "extremely good".)
1. The library helps me stay abreast of developments in my field(s) of interest.
2. The library aids my advancement in my academic discipline or work.
3. The library enables me to be more efficient in my academic pursuits or work.
4. The library helps me distinguish between trustworthy and untrustworthy information.
5. The library provides me with the information skills I need in my work or study.
Furthermore, respondents were asked about their frequency of use of the library versus the
Internet (daily, weekly, monthly, quarterly, never):
1. How often do you use resources on library premises?
2. How often do you access library resources through a library Web page?
3. How often do you use Yahoo™, Google™, or non-library gateways for information?
Respondents were also asked demographic questions about their age range, gender, discipline, and position (student level; faculty or staff status).
LibQUAL+® results were returned to each campus and to the 2009 CSU consortium in the form of large documents containing statistical tables and radar charts showing overall scores and scores broken down by patron type, depending on who was sampled (undergraduates, graduates, faculty, staff, library staff). Raw data in Excel and SPSS formats are available for download from the LibQUAL+® results Web site for further analysis. The text of the comments, along with each commenter's patron type, age, gender, and discipline, was provided in Excel format.
As shown in Figure 1 below, the radar charts show the aggregate mean data for the 22 core questions, with each axis referring back to a coded question, such as "AS-3" (Employees who are consistently courteous), and grouped by dimension: Affect of Service (AS), Information Control (IC), and Library as Place (LP). Mean scores for the minimum, perceived, and desired responses are plotted on the axes, with points closer to the center indicating lower values and points closer to the edge indicating higher values. The resulting gaps between those scores are shaded in blue, red, green, and yellow. Furthermore, the overall shape of the spirals indicates the variability of the data: the "spikier" the shape, the more variable the data.
Figure 1: Radar Charts
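For readers who want to reproduce this kind of chart from the downloadable raw data, the following is a minimal matplotlib sketch of such a radar chart. The seven axis labels and all score values below are hypothetical placeholders, not CSUN results.

```python
# Minimal radar-chart sketch in the style of the LibQUAL+ figures.
# All scores below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

questions = ["AS-1", "AS-2", "AS-3", "IC-1", "IC-2", "LP-1", "LP-2"]
minimum   = [6.5, 6.3, 7.1, 6.9, 6.8, 6.2, 6.4]   # hypothetical means
perceived = [6.9, 6.8, 7.4, 7.2, 7.0, 6.4, 6.5]
desired   = [7.8, 7.6, 8.1, 8.2, 8.0, 7.5, 7.7]

# One angle per question; repeat the first point to close each polygon.
angles = np.linspace(0, 2 * np.pi, len(questions), endpoint=False).tolist()
angles += angles[:1]

fig, ax = plt.subplots(subplot_kw={"polar": True})
for scores, label in [(minimum, "Minimum"), (perceived, "Perceived"),
                      (desired, "Desired")]:
    ax.plot(angles, scores + scores[:1], label=label)

# Shade the adequacy gap (perceived minus minimum) blue and the
# superiority gap (desired minus perceived) yellow, as in Figure 1.
ax.fill_between(angles, minimum + minimum[:1], perceived + perceived[:1],
                color="blue", alpha=0.3)
ax.fill_between(angles, perceived + perceived[:1], desired + desired[:1],
                color="yellow", alpha=0.3)

ax.set_xticks(angles[:-1])
ax.set_xticklabels(questions)
ax.set_ylim(1, 9)
ax.legend(loc="lower right")
plt.show()
```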
The distance between minimum expectations and the perceived level of current service is shaded blue and is called the positive adequacy gap (perceived is greater than minimum). The distance between desired expectations and the perceived level of current service is shaded yellow and is called the negative superiority gap (perceived is less than desired). A radar chart shaded blue and/or yellow indicates that user perceptions of service fall within what LibQUAL+® terms the "zone of tolerance."

When users' perceptions fall outside the "zone of tolerance," the graph will include areas of red and green shading. If the distance between users' minimum expectations and the perceived level of service delivery is shaded red, that indicates a negative adequacy gap (perceived is less than minimum). If the distance between the desired level of service and the perceived level of service delivery is shaded green, that indicates a positive superiority gap (perceived is greater than desired).
Therefore, service adequacy is an indicator of the extent to which we are meeting the minimum expectations of our users, and service superiority is an indicator of the extent to which we are exceeding their desired expectations. According to LibQUAL+® annual survey summaries since 2003,² the mean scores of participating college and university libraries lie somewhere between less than desired and greater than minimum, that is, within the blue and/or yellow "zone of tolerance."
As illustrated in Figure 2 below, the tables report the aggregate data for the 22 core questions in the three dimensions, the local questions, the additional questions, and the questions on library and Internet use. In the tabular data for the 22 core and five local questions, one can read the mean scores for minimum, desired, and perceived, where n = the number of respondents. The lowest possible score is 1.00 and the highest is 9.00. In addition, the adequacy and superiority mean scores are reported as positive or negative numbers and represent the following equations:
Adequacy = Perceived - Minimum
Superiority = Perceived - Desired
Both gap scores, adequacy and superiority, are scaled such that higher scores are more favorable: the closer a score is to +1.0 or above, the better. Thus, an adequacy gap score of +1.2 is better than an adequacy gap score of +1.0, and a superiority gap score of -0.5 is better than a superiority gap score of -1.0.
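These two formulas, and the color coding used in the radar charts, can be expressed as a small helper. The sketch below is illustrative only; the function names are hypothetical, and the worked example reuses the rounded AS-3 means reported later in this analysis (so the adequacy figure comes out at 0.28 rather than the 0.29 computed from unrounded means).

```python
# Sketch of the LibQUAL+ gap scores and radar-chart zone colors described
# above. Function names are illustrative, not part of the instrument.

def gap_scores(minimum, desired, perceived):
    """Return (adequacy, superiority) gap scores for one question."""
    adequacy = perceived - minimum      # positive: minimum expectations met
    superiority = perceived - desired   # negative: desires not yet exceeded
    return adequacy, superiority

def zone_color(minimum, desired, perceived):
    """Map one question's mean scores to the radar-chart shading."""
    adequacy, superiority = gap_scores(minimum, desired, perceived)
    if adequacy < 0:
        return "red"      # negative adequacy gap: perceived < minimum
    if superiority > 0:
        return "green"    # positive superiority gap: perceived > desired
    return "blue/yellow"  # within the zone of tolerance

# Worked example with the rounded AS-3 means (minimum 7.15, desired 8.14,
# perceived 7.43) discussed later in this report:
adequacy, superiority = gap_scores(7.15, 8.14, 7.43)
print(round(adequacy, 2), round(superiority, 2))   # 0.28 -0.71
print(zone_color(7.15, 8.14, 7.43))                # blue/yellow
```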
Figure 2: Sample Core Questions Summary Table
For the general satisfaction and information literacy outcomes questions, respondents rated the library only once on a 9-point agreement scale; therefore, the tables show only the mean and standard deviation.
The following statistics were provided to help determine comparability between libraries and their LibQUAL+® data.
Volumes held June 30, 2008: 1,385,864
Volumes added during year - Gross: 13,009
Total number of current serials received: 2,691
Total library expenditures (in USD): $9,706,354
Personnel - professional staff FTE: 31
Personnel - support staff FTE: 57
* Definitions for these items can be found on the ARL Statistics web site at http://www.arl.org/stats/
Table 1 shows the demographic summary for respondents, broken down by user group. Note that one of the broadcast emails generated by the CSUN IT department inviting users to participate in the LibQUAL+® study mistakenly went out to a random sample of email addresses used for a previous survey (CSU Quality Improvement Program, 2008); as a result, staff, as well as faculty and students not originally sampled for LibQUAL+® 2009, were unavoidably included. Many surveys could therefore have gone to students who had graduated or to faculty and staff who had retired. However, for the 2009 LibQUAL+® study, we decided that only
faculty and students would be invited to participate, so subsequent follow-up broadcast emails were sent to the "correct" random samples of current students and faculty. This explains the low response rate of staff (n=45) and why their data are not discussed in this report. While staff comments were excluded, their answers to the survey questions were retained so that the entire dataset would not have to be redone on account of 45 "rogue" surveys. Staff respondents are therefore reported in Table 1.
Table 1: CSUN Respondents by User Group
Most of the respondents were undergraduates (47.74%, n=327), with upper-division students alone accounting for 24.67% (n=169) of all respondents; undergraduates were followed by faculty (29.05%, n=199) and graduate students (16.64%, n=114). Chart 1 below shows the representativeness of the respondents by user group
(shown in red) as compared to the actual CSUN population (shown in blue). With the exception of a handful of the 15 sub-groups, there are definite gaps in the representativeness of the respondents versus the total CSUN population.
Chart 1: CSUN Respondents and Population by User Sub-Groups
On the other hand, as Chart 2 illustrates, generalizing results by discipline may be more useful, since there is considerably more overlap between the percentage of respondents and the campus population. However, the total number of respondents in each discipline is still too small to support generalizations about a specific sub-group. Note also that campuses participating in LibQUAL+® as a consortium cannot make a separate entry for each department when customizing the survey; all must agree on the groupings of disciplines, even when those groupings do not reflect an individual campus's situation. In addition, the discipline groupings conform to those of the National Center for Education Statistics, again for comparability across colleges and systems. What is the
significance of this? One should exercise caution in generalizing results based solely on user sub-group. This analysis therefore focuses on the more general user groups (faculty, undergraduates, and graduates) and does not further analyze data by discipline or class standing.
Chart 2: CSUN Respondents and Population by Disciplines
In addition to user group and discipline, respondents provided data on their frequency of library and Internet use. As shown in Chart 3 below, the majority of respondents access Internet search engines on a daily basis "for information" and access the physical and virtual library on a weekly basis, with virtual use slightly higher. Unfortunately, LibQUAL+® used vague wording for the question about frequency of Internet use: "How often do you use . . . for information." It would have been more useful to qualify the word "information" with "academic" or a similar term, to differentiate using a search engine to find movie times from finding information resources in support of the curricular and research needs of
students and faculty, which is probably an expectation of the role of the campus library.
Therefore, these data, while interesting, do not necessarily tell us how useful the library's resources are in support of its primary mission simply on the basis of frequency of use; rather, they confirm other studies of library use and the popularity of the open Internet as an information source.
Chart 3: Library vs. Internet Use Summary for CSUN
As shown in Chart 4 below, for all respondents and all 22 core questions within the three dimensions (Library as Place (LP), Affect of Service (AS), and Information Control (IC)), the Library was generally perceived as "less than desired," falling into the yellow portion (negative superiority) of the zone of tolerance, followed by "greater than minimum," the blue portion (positive adequacy). There was a small amount of red (negative adequacy: perceived < minimum) and no green (positive superiority: perceived > desired) among the mean scores. Again, as discussed in the section on interpreting results above, most American colleges and universities fall into the "zone of tolerance," somewhere within the blue and yellow areas.
Chart 4: CSUN Summary Radar Chart
With regard to Library as Place, the scores are more unevenly divided between the yellow ("less than desired") and the blue ("greater than minimum"). "LP-1: Library space that inspires study and learning," "LP-2: Quiet space for individual activities," and "LP-5: Community space for group learning and group study" scored lower overall within the Library as Place dimension, which is supported to some extent by the LibQUAL+® comments as well as by data gathered from previous Library surveys and anecdotal evidence.
With two exceptions, the questions regarding Information Control scored lower overall within the zone of tolerance than those in the other dimensions. "IC-1: Making electronic resources accessible from my home or office" scored similarly to the "blue" ("greater than minimum") responses in the other dimensions, while "IC-8: Print and/or electronic journal collections I require for my work" fell into the red ("less than minimum") and out of the zone of tolerance. Again, this theme recurs in the comments and in data gathered from other surveys, as well as in anecdotal evidence.
With Affect of Service, the relatively smooth spiral means that there was little variability within the zone of tolerance for each question, except in the case of "AS-1: Employees who
instill confidence in users,” which was slightly more in the blue (“greater than minimum”) than the other responses in that dimension.
So, what is the significance of this? To put these radar charts into perspective, it may be helpful to view the summary chart from LibQUAL+® 2009 for all participating American colleges and universities, which shows that the majority of responses fall into the zone of tolerance (blue and yellow).³ In comparison to the CSUN Summary Radar Chart (see Chart 4 above), however, the norm for participating American colleges and universities lies more in the "greater than minimum" or blue portion (positive adequacy) of the zone than in the "less than desired" or yellow portion. (See Chart 5 below.)
Chart 5: 2009 American Colleges and Universities Summary Radar Chart
According to LibQUAL+® experts, few libraries fall into the green ("greater than desired") or red ("less than minimum") outlier zones in any dimension; most libraries should therefore expect their scores to fall into the blue ("greater than minimum") and yellow ("less than desired") spirals of the "zone of tolerance," with some exceptions on specific questions within a dimension. For libraries that strive for excellence, however, LibQUAL+® recommends looking at similar institutions (other CSUs) or groups of institutions (non-ARL four-year college/university libraries) that scored higher in areas of importance, and searching for evidence of what they do differently on their Web sites, in the literature, etc.
Another way of examining the LibQUAL+® data is in tabular form, wherein the specific mean scores and the gaps between the desired and perceived scores (superiority gap) and between the minimum and perceived scores (adequacy gap) are represented numerically. LibQUAL+® was designed such that the higher the gap score, the better the performance, whether for adequacy or superiority. Table 2 below shows the summary of mean scores for each of the three dimensions. Not surprisingly, Information Control, or library collections and access, had the highest overall desired score and therefore the lowest
superiority score, which indicates the Library is not fulfilling our users’ needs in this area.
Furthermore, we are doing a little better at meeting users' Library as Place needs, and much better with Affect of Service. Overall, the average adequacy mean was quite low albeit positive, meaning that the perceived mean was higher than the minimum, but not by much. The superiority mean, on the other hand, was negative, meaning that the perceived means fell short of the desired means.
Table 2: CSUN Summary Mean Scores by Dimension
As shown in Table 3 below, the adequacy mean score is positive for all of the 22 core questions except "IC-8: Print and/or electronic journal collections I require for my work." That score is negative and printed in red, indicating that respondents' perceived level of service quality in this area is below their minimum level (a negative adequacy gap); it is likewise represented in red on the radar chart. The superiority mean score, calculated by subtracting the desired mean from the perceived mean, indicates the extent to which we are exceeding the desired expectations of our users. A positive superiority gap score, indicating that users' perceived level of service quality is above their desired level, is printed in green in the table and depicted in green on the radar chart. Note that we have no positive superiority mean scores, which means we are not exceeding our users' desired expectations.
Other scores that stand out include "LP-5: Community space for group learning and group study," which had the second-lowest adequacy mean (0.04) and the second-lowest superiority mean (-1.05), findings confirmed by the numerous negative comments received from undergraduates. Similarly, "LP-2: Quiet space for individual activities" also fared poorly in terms of mean scores, which was reflected in some of the comments. "IC-3: The printed library materials I need for my work" also performed badly, somewhat duplicating the findings for IC-8. Such redundancy is a feature of LibQUAL+®, as it is of other attitudinal assessment tools.
Among the more positive scores, respondents rated "AS-1: Employees who instill confidence in users" the best of the 22 core questions, with an adequacy mean of 0.51 but a still-negative superiority mean of -0.84. Perceptions of this item were lower (6.89) than the overall perceived average of 7.23 for all nine Affect of Service questions (see Table 2), but desired expectations for it were also comparatively modest (7.73 for this question, versus an average desired mean of 7.95 for the dimension). This may tell us that we are doing okay in this aspect of service, but that it is not as important to
our users as other aspects. For example, of all nine questions in the Affect of Service dimension, the one with the highest desired mean was "AS-3: Employees who are consistently courteous," which scored 8.14 out of nine. However, the perceived mean was 7.43, which lowered the superiority mean to -0.71. Furthermore, the minimum mean was only 7.15, which raised the adequacy mean to 0.29 and kept the question within the zone of tolerance.
Yet this seems to fly in the face of common assumptions: would not everyone expect (minimum mean) and/or want (desired mean) "employees who are consistently courteous"? So why the big discrepancy between the minimum (7.15) and desired (8.14) means? Does this indicate that our respondents are masochists, willing to be mistreated by rude library employees? A similar anomaly appears in "AS-4: Readiness to respond to users' questions": the minimum mean was 7.07 out of a possible nine, but the desired mean was 8.01. Wouldn't users demand answers to their questions?
While the Library as Place dimension suffered in terms of group study space and quiet, we fared better in terms of atmosphere: "LP-3: A comfortable and inviting location" (adequacy mean = 0.35, compared with 0.22 for the whole dimension) and "LP-4: A getaway for study, learning, or research" (adequacy mean = 0.32). On the other hand, the redundant question, "LP-1: Library space that inspires study and learning," fared much lower, with an adequacy mean of 0.16.
It may be more worthwhile to look at the perceived mean scores for each question within a dimension, which indicate how we are doing at a particular point in time, and then, by comparing the perceived mean to the desired mean, see where it is feasible to make changes that could improve these scores. Unfortunately, the vagueness of the questions, the hallmark of any benchmarking survey, does not give us many specifics, particularly in terms of Information Control (we know respondents are not happy with our "printed library materials" collection, but which materials?). However, the comments, particularly the raw data, which include discipline groups and sometimes specifics from respondents, might help answer these questions.
Table 3: CSUN Summary Mean Scores by Question
Table 4 and Chart 6 below show the mean scores and gap analysis for CSUN for the five additional questions agreed upon by the current CSU consortium of campuses (i.e., the "local questions"). Most of these questions scored lowest in the superiority mean (the yellow, or "perceived is less than desired," area of the zone), with "availability of online help when
using my library's electronic resources" scoring the lowest at -1.00, followed by "an online catalog that is user-friendly for finding materials" at -0.95. The other questions scored the same or lower, except for "library orientations/instruction sessions," which had the highest adequacy mean and the highest (although still negative, at -0.25) superiority mean among all 27 core and local questions combined. On the other hand, given the relatively low number of respondents to that question (n=561) compared with the total (n=685), and its lowest desired mean of any of the questions (7.39), one could argue that the scores alone do not tell the whole story. Respondents may not have had enough experience with library instruction (similar to the numbers for multimedia collections and ILL) and/or may not have cared much (the minimum mean of 6.33 for library instruction is the lowest of all 27 questions). The rest of the local questions did not fare well in terms of perceived means either.
Table 4: CSUN Local Questions Summary
Chart 6: CSUN Local Questions’ Gap Scores
Table 5 displays the mean score and standard deviation for each of the general satisfaction questions: satisfaction with treatment (mean = 7.51), satisfaction with support for learning, research, and/or teaching (mean = 7.20), and satisfaction with the overall quality of service (mean = 7.45). The first two scores were calculated from respondents rating their levels of general satisfaction on an agreement scale from one (strongly disagree) to nine (strongly agree). Since respondents did not provide minimum, desired, and perceived scores for these questions, the scores were simply averaged, without the aid of gap analysis. Unfortunately, the unlabeled 9-point scale does not allow us to perform a semantic differential and draw verbal conclusions about these scores. In other words, since respondents were not given labels for each number on the scale, we know that seven is greater than six, but we cannot conclude that a score of 7.51 means "somewhat agree"; we can only say that it is closer to 9 (strongly agree) than to 6. On the other hand, the overall perceived score of 7.22 for the combination of Affect of Service, Information Control, and Library as Place (see Table 2), compared with these general satisfaction scores, appears to corroborate those means.
Table 5: General Satisfaction Summary
Table 6 shows the mean score and standard deviation for each of the information literacy outcomes questions. These scores were calculated from respondents rating their level of agreement on an unlabeled scale from one (strongly disagree) to nine (strongly agree). Again, we face the same interpretation problems, owing to the size of the Likert scale used and the impossibility of verbalizing the nuances between scores. Taken as a group, the Library scored highest on "enables me to be more efficient in my academic pursuits or work" (mean = 7.12) and lowest on "helps me stay abreast of developments in my field(s) of interest" (mean = 6.48). However, it seems contradictory that the IL outcomes questions scored below the related general satisfaction question, "I am satisfied with library support for my learning, research, and/or teaching needs," which had a mean of 7.20 (see Table 5). Also concerning is that the two questions most clearly related to information literacy outcomes, "helps me distinguish between trustworthy and untrustworthy information" and "provides me with the information skills I need in my work or study," scored below seven (6.60 and 6.80 respectively), whereas perceived mean scores throughout the dimensions had run at about seven or above. If we compare these scores to the local question about "library orientations/instruction sessions," we may be seeing a disconnect
between the importance respondents place on information literacy outcomes and/or what is consistently covered in library instruction sessions. Alternatively, a majority of our respondents may simply never have attended a library orientation or instruction session. How much more telling about our programs it would have been had that question been asked as a simple yes/no that one could cross-tabulate with the IL outcomes responses. As LibQUAL+® stands now, even if we had scored an average of nine on each of the IL outcomes, we would have no mechanism to link that back to our instruction program. Furthermore, the vagueness of the questions ("the library helps me distinguish . . .," "the library provides me with information skills," etc.) could have contributed to this relatively low performance. It would have been more telling to ask a more specific question, such as, "When it comes to interactions with library staff at a service point or in an instructional session . . . they help me distinguish between trustworthy and untrustworthy information," etc.
Table 6: Information Literacy Outcomes Summary
Two hundred seventy-six of the 685 respondents (40.3%) provided comments. Content analysis was performed, and each respondent's comments were coded as positive, negative, or neutral. For respondents providing mixed comments (positive and negative, for example), individual comments were further separated and coded. The total number of comments analyzed was therefore 445, and the data presented below are based on that total. To read the individual comments, please see the LibQUAL+® comments spreadsheet on the Library Intranet Assessment page at http://library.csun.edu/intranet/assessment/LibQUAL_Comments_Revised_4-22-09.pdf
As shown in Chart 7, 48% of the comments were negative, 44% were positive, and 23% were neutral.
Chart 7: Total Comments Summary by Category
Furthermore, as depicted in Chart 8, comments were also classified by topic, using the dimensions represented in the LibQUAL+® survey: "Service (AS)," "Collections and Access (IC)," and "Library as Place (LP)." In addition, comments matching local questions one through five were coded as "online help," "library instruction," "media collection," "Interlibrary Loan," and "OPAC." Comments matching the five "information literacy outcomes" questions were not classified individually, as there were not enough of each to warrant it. Comments deemed "general" were those that were indistinct, such as "I love the library." Finally, comments about the survey itself were coded "LibQUAL."
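The tallies behind Charts 7 and 8 (and the per-group charts later in this section) amount to counting coded comments by category, topic, and user group. Below is a minimal sketch of that bookkeeping; the row structure and the sample rows are hypothetical, not the actual spreadsheet layout.

```python
# Sketch of tallying coded comments by category and topic, as described
# above. The rows below are illustrative placeholders; the real data
# lived in the LibQUAL+ comments spreadsheet.
from collections import Counter

coded_comments = [
    {"user_group": "undergraduate", "topic": "LP",  "category": "negative"},
    {"user_group": "faculty",       "topic": "IC",  "category": "positive"},
    {"user_group": "faculty",       "topic": "ILL", "category": "negative"},
    {"user_group": "graduate",      "topic": "AS",  "category": "positive"},
]

by_category = Counter(c["category"] for c in coded_comments)
by_topic = Counter(c["topic"] for c in coded_comments)
by_group_topic = Counter((c["user_group"], c["topic"]) for c in coded_comments)

print(by_category)       # counts per positive/negative/neutral (Chart 7)
print(by_topic)          # counts per dimension or local topic (Chart 8)
print(by_group_topic)    # basis for the per-user-group charts
```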
Chart 8: Total Comments by LibQUAL+® Topics
As shown in Chart 8, comments focused mostly on "Collections and Access" (n=100) and "Library as Place" (n=98), closely followed by "Service" (n=83) and "general" comments (n=66). Of the remaining topics, "Interlibrary Loan" received the most comments (n=26), followed by the survey itself ("LibQUAL") and "IL outcomes," with 19 and 18 comments respectively.
The next three charts summarize comments by topic and category for each user group. There were 170 comments from faculty (see Chart 9), 179 from undergraduates (see Chart 10), and 96 from graduate students (see Chart 11), again counting the number of comments rather than the number of respondents. Not surprisingly, faculty and graduate students commented mostly about the collections and/or access to them (the "Information Control" dimension). Undergraduates had the most to say about the "library as place" and were consistently negative; most of their negative comments concerned the paucity of group study rooms and the lack of noise control by library staff. Service was also important to faculty, as evidenced by the number of comments, and to a similar degree
important to graduate students. Faculty and graduate students were largely positive about library service, while undergraduates had mixed feelings. Most of the positive comments about service related to helpfulness, while the negative comments related to perceived rudeness or to a lack of training or awareness on the part of service desk staff or student workers. Generalities were another large source of comments from all three patron categories. Most general comments were positive, such as "overall, I am satisfied with the library" or "good job." The few negative general comments related to preferring another library, such as UCLA, for research; since nothing in particular was noted, one cannot ascertain whether these referred to collections, library as place, or services.
The other area receiving a proportionally large number of comments was interlibrary loan. As shown in Chart 9, faculty were almost evenly divided between positive and negative comments about ILL, with slightly more positive; the common theme seemed to be gratitude for the service but unhappiness with the perceived slowness in fulfilling requests. Graduate students, on the other hand, as shown in Chart 11, were mostly negative, for the same reasons.
Also interesting is the relatively large number of negative comments received from faculty about the LibQUAL+® survey itself, which forced the creation of a new topic in the comment analysis, separate from the content of the survey questions.
Chart 9: Summary Faculty Comments by Topic and Category
Chart 10: Summary Undergraduate Comments by Topic and Category
Chart 11: Summary Graduate Comments by Topic and Category
As previously stated, six CSU campuses participated in LibQUAL+® as a consortium during the spring 2009 session: Northridge, Chico, San Jose, San Bernardino, San Luis Obispo, and Sacramento. Courtesy of Carlos Rodriguez of CSU Sacramento, Appendix 2 provides the mean scores by campus for each of the dimensions (AS, IC, LP), including the adequacy and superiority means. Note that none of the superiority mean scores at any of the six campuses was positive, meaning that what respondents perceived fell short of what they desired everywhere, although some scores were less negative than others. What the data do not show is the variability in how samples were obtained (some randomized, some convenience) and that all campuses except ours purposely included staff, and in some cases library staff (!), in their samples. Therefore, overall scores, which at CSUN include our tiny staff sample (n=45) and would thus be somewhat skewed, should be discounted, and comparisons between the campuses should be made only for students and faculty.
It is interesting to note that San Bernardino had consistently higher mean perceived scores than the other campuses among both students and faculty in the "Affect of Service (AS)" dimension, among the student groups only in the "Information Control (IC)" dimension, and among undergraduates only in the "Library as Place (LP)" dimension. It might therefore be worthwhile, particularly in the area of service, to find out what San Bernardino does differently in this regard.
In terms of comparability, of the 2009 CSU consortium campuses, San Jose is the closest to CSUN in student enrollment, but given San Jose's unique blend of public and academic libraries, Sacramento seems a better match, albeit with a smaller student body. Across the three dimensions of the core questions, there do not appear to be major differences among the three campus libraries' mean scores, except in the "Library as Place" dimension. Here San Jose clearly wins the prize, which is not surprising given that its library is new (opened in 2003) and a unique collaboration between the San Jose public libraries and the university.
Because this consortium was one of convenience, based on which libraries were willing to participate rather than on any logical grouping, it is questionable whether comparisons between individual libraries can be made. Future collaborations should either include all CSU libraries or focus on like campuses in any given year to make the data that much more useful.
In terms of the representativeness of the results, the 14.9% usable return rate is acceptable at best. Generally, Web-based surveys can expect a minimum 15-20% return rate. Our slightly sub-minimum return rate can probably be explained in part by the numerous, resoundingly negative comments received about the survey itself, not to mention the comments received via our libqual@csun.edu email address. Furthermore, given the length and complexity of LibQUAL+®, which is estimated to take a minimum of 13 minutes to complete and requires that all questions be answered, plus the built-in mechanism for eliminating specious surveys, the usable return rate of 14.9% is not surprising. For example, these comments were received from various faculty members:
“This survey has way too many vague questions; forcing an answer to questions that are not understandable is not a good idea.”
“The survey was difficult to complete because on the fine distinction of the 9-point rating scale.”
“This survey is really confusing, what does minimum service level and desired service level mean?”
“This is the worst survey I have ever seen. I hope you did not pay anyone to create this. You should have consulted with the many professors on campus who are adept at developing such tools. I would not trust the results.”
“Please design it again so that it's easy to use, short, and to the point.”
In addition to the mechanics of the survey, while LibQUAL+®'s reliance on gap analysis is unique among library surveys and, combined with the uniformity and redundancy of the questions within the three dimensions, is touted as providing superior validity and reliability compared with less sophisticated instruments, one could argue that it is LibQUAL+®'s very
uniqueness that makes it suspect in terms of its value to a specific library. First and foremost, the definitions of the service levels, i.e., "minimum," "desired," and "perceived," are not clearly explained on the instrument itself, which in fact re-uses the term "minimum" in its very definition; this is sloppy, considering the thoughtfulness with which the rest of the survey has been developed and modified over the years. Furthermore, one could argue that the semantic difference between minimum (i.e., ". . . the minimum level of service that you would find acceptable") and desired (i.e., "the level of service that you personally want") is not clearly differentiated or even logical.
Some have questioned the logic of going beyond "perceived" at all to assess service quality,⁴ seeing benefit only in comparing it to "desired"; this begs the question of what the difference is between minimum and desired when it comes to service quality. As the adequacy mean, created by the difference between the perceived and minimum means, is what creates the so-called "zone of tolerance," the zone appears merely to throw LibQUAL+® participant libraries a bone, allaying our fears that everyone else is somehow better than we are while providing no real usable data.
This concern is also supported in the literature by Roszkowski, Baky, and Jones, who cite conflicting studies as an indication of the "[inconsistent] . . . value of the 'gap scores.'"⁵ One wonders whether, if "desired" had been defined as those services you "wish" your library provided rather than those you "personally want," and "minimum" had been eliminated as a choice, the vagueness of the "zone of tolerance" could have been avoided, providing more meaningful (if perhaps overly positive) results. In fact, Hernon and Nitecki⁶ cite several studies that question the reliability and validity of analyzing the gaps between minimum, desired, and perceived levels of service quality at all.
As noted earlier, the unlabeled 9-point scale does not allow us to perform a semantic differential and draw verbal conclusions about the scores: since respondents were not given labels for each number on the scale, we know that seven is greater than six, but we cannot conclude that a score of 7.51 means "somewhat agree" or anything similar. While some psychometricians advocate the use of 7- or 9-point attitudinal scales, others argue that respondents cannot distinguish nuances on anything above a 5-point scale. Furthermore, respondents may avoid extreme response categories, i.e., scores of one or nine, in which case most responses tend to cluster around the middle (central tendency bias).⁷ For example, given that the Affect of Service dimension measures intangibles, such as employee behaviors and attitudes toward the clientele (the "priceless" dimension of library quality), rather than tangibles, such as the largely budget-dependent Information Control dimension (e.g., "IC-5: Modern equipment that lets me easily access needed information"), why are the minimum mean scores for service so flat? Wouldn't everyone want, at a minimum, to be treated with the utmost courtesy (i.e., a 9) by library staff? These scores are therefore highly subjective for both the respondent and the researcher, and one could argue that they are not useful for informing library staff of how they are doing except relative to the topic of each question and in comparison to national or local mean scores.
Another limitation of the survey is that, except when volunteered in the comments, specific concerns relative to a campus cannot be discerned, and even questions that should be of
universal concern are left out because of the high variability among campuses, making this type of benchmark survey less than useful. For example, library hours, circulation policies, cleanliness, infrastructure to support seamless Internet access, and signage are not addressed by any of the core or local questions, yet these are areas of interest to library administrators. Furthermore, the fact that local questions are limited to five and require agreement from a convenience-based consortium of campuses with wide variations in population served, budget, and infrastructure makes it impossible to rely on LibQUAL+® for answers to these more urgent questions. Indeed, the very vagueness of the questions leaves one scratching one's head in trying to decide what the answers really tell us beyond generalities. After careful examination of the survey itself and the data it garners, one wonders whether the comments are the most valuable aspect of LibQUAL+®, despite the inability to cross-compare comments with other libraries.
Despite the many limitations of the data provided by the LibQUAL+® survey, one lone positive voice from an undergraduate probably sums up how patrons perceive library surveys in general, if they are able to spare the time and honest effort to take them: "Thank you for providing this survey. It shows us that our voices are heard." One should therefore consider LibQUAL+® not the end of the discussion of a library's service and collections delivery assessment, but only the beginning of a comprehensive assessment program and the foundation of a growing culture of assessment.
Appendix 1: LibQUAL+® 2009 Survey Instrument
We are committed to improving your library services. Better understanding your expectations will help us tailor those services to your needs.
We are conducting this survey to measure library service quality and identify best practices through the Association of Research Libraries' LibQUAL+® program.
Please answer all items. The survey will take about 10 minutes to complete. Thank you for your participation!
Please rate the following statements (1 is lowest, 9 is highest) by indicating:
Minimum -- the number that represents the minimum level of service that you would find acceptable
Desired -- the number that represents the level of service that you personally want
Perceived -- the number that represents the level of service that you believe our library currently provides
For each item, you must EITHER rate the item in all three columns OR identify the item as "N/A" (not applicable).
Selecting "N/A" will override all other answers for that item.
When it comes to... (each item is rated in three columns, "My Minimum Service Level Is," "My Desired Service Level Is," and "Perceived Service Performance Is," on a scale from 1 [Low] to 9 [High], with an "N/A" option):

1) Employees who instill confidence in users
2) Making electronic resources accessible from my home or office
3) Library space that inspires study and learning
4) Giving users individual attention
5) A library Web site enabling me to locate information on my own
6) Availability of online help when using my library's electronic resources
7) Employees who are consistently courteous
8) The printed library materials I need for my work
9) Quiet space for individual activities
10) Readiness to respond to users' questions
11) The electronic information resources I need
12) Library orientations / instruction sessions
13) Employees who have the knowledge to answer user questions
14) The multimedia (CD / DVD / video / audio) collections I need
15) A comfortable and inviting location
16) Employees who deal with users in a caring fashion
17) Modern equipment that lets me easily access needed information
18) Timely document delivery / interlibrary loan
19) Employees who understand the needs of their users
20) Easy-to-use access tools that allow me to find things on my own
21) A getaway for study, learning, or research
22) Willingness to help users
23) Making information easily accessible for independent use
24) Print and/or electronic journal collections I require for my work
25) Community space for group learning and group study
26) An online catalog that is user-friendly for finding materials
27) Dependability in handling users' service problems
Please indicate the degree to which you agree with the following statements:
28) The library helps me stay abreast of developments in my field(s) of interest.
29) The library aids my advancement in my academic discipline or work.
30) The library enables me to be more efficient in my academic pursuits or work.
31) The library helps me distinguish between trustworthy and untrustworthy information.
32) The library provides me with the information skills I need in my work or study.
33) In general, I am satisfied with the way in which I am treated at the library.
34) In general, I am satisfied with library support for my learning, research, and/or teaching needs.

(Items 28-34 are each rated from 1 [Strongly Disagree] to 9 [Strongly Agree].)

35) How would you rate the overall quality of the service provided by the library?

(Rated from 1 [Extremely Poor] to 9 [Extremely Good].)
Please indicate your library usage patterns:
36) How often do you use resources on library premises?
___ Daily
___ Weekly
___ Monthly
___ Quarterly
___ Never
37) How often do you access library resources through a library Web page?
___ Daily
___ Weekly
___ Monthly
___ Quarterly
___ Never
38) How often do you use Yahoo™, Google™, or non-library gateways for information?
___ Daily
___ Weekly
___ Monthly
___ Quarterly
___ Never
Please answer a few questions about yourself:
39) Age:
___ Under 18
___ 18 - 22
___ 23 - 30
___ 31 - 45
___ 46 - 65
___ Over 65
40) Sex:
___ Male
___ Female
41) Discipline: ___ Art / Music / Cinema & TV / Theatre
___ Asian American / Chicana/o / Central American / Pan African / Gender & Women's / Jewish Studies
___ Biology / Chemistry & Biochemistry / Geology / Physics & Astronomy / Mathematics
___ Business (all majors except Economics)
___ Child & Adolescent Development
___ Communication Disorders & Sciences
___ Communication Studies / Journalism
___ Deaf Studies
___ Economics
___ Education (all majors except Deaf Studies)
___ Engineering / Computer Science
___ English / Modern & Classical Languages & Literatures / Linguistics & TESL
___ Environmental & Occupational Health
___ Family & Consumer Sciences
___ Genetic Counseling
___ Health Sciences (all majors) / Nursing
___ History
___ Humanities Interdisciplinary Program
___ Kinesiology / Physical Therapy
___ Liberal Studies
___ Other / Special Major
___ Political Science / Urban Studies / Geography / Anthropology
___ Psychology
___ Recreation & Tourism Management
___ Religious Studies / Philosophy
___ Sociology / Social Work
___ Undecided / Undeclared
42) Position:
(Select the ONE option that best describes you.)
Undergraduate: ___ First year
___ Second year
___ Third year
___ Fourth year
___ Fifth year and above
___ Non-degree
Graduate: ___ Masters
___ Doctoral
___ Non-degree or Undecided
Faculty: ___ Adjunct Faculty
___ Assistant Professor
___ Associate Professor
___ Lecturer
___ Professor
___ Other Academic Status
Library Staff: ___ Administrator
___ Manager, Head of Unit
___ Public Services
___ Systems
___ Technical Services
___ Other
Staff: ___ Research Staff
___ Other staff positions
43) Please enter any comments about library services in the box below:
44) Enter your e-mail address in the box below if you would like to enter an optional drawing for a prize.
Your e-mail address will be kept confidential and will not be linked to your survey responses. (Not required)
Thank you for participating in this library service quality survey!
Appendix 2: Mean Scores by Campus for Each Dimension (AS, IC, LP)
¹ Parasuraman, A., Berry, L., & Zeithaml, V. (1991). Refinement and reassessment of the SERVQUAL scale. Journal of Retailing, 67(4), 420. Retrieved from Business Source Elite database; Parasuraman, A., Zeithaml, V., & Berry, L. (1988). SERVQUAL: A multiple-item scale for measuring consumer perceptions of service quality. Journal of Retailing, 64(1), 12-40. Retrieved from Business Source Elite database.
² Association of Research Libraries. [Year] LibQUAL+® survey highlights (2003-2009). Retrieved 8 June 2010 from http://www.libqual.org/Publications.aspx
³ Association of Research Libraries. 2009 LibQUAL+® survey highlights, Session I: January to May. Retrieved 8 June 2010 from http://www.libqual.org/documents/admin/LibQUALHighlights2009_SessionI.pdf
⁴ Roszkowski, M. J., Baky, J. S., & Jones, D. B. (2005). So which score on the LibQUAL+™ tells me if library users are satisfied? Library & Information Science Research, 27(4), 424-439. doi:10.1016/j.lisr.2005.08.002
⁵ Ibid.
⁶ Hernon, P., & Nitecki, D. (Spring 2001). Service quality: A concept not fully explored. Library Trends, 49, 687-708. Retrieved from Academic Search Elite database.
⁷ Brill, J. E. (2008). Likert scale. In Encyclopedia of Survey Research Methods. Retrieved from SAGE Reference Online database.