Assessing Reference and User Services in a Digital Age

About This Book

Effectively assess whether any library is making good use of the reference/user service resources available today.

Libraries need to develop standards by which they can assess their individual performances in a larger context, and Assessing Reference and User Services in a Digital Age makes significant contributions to this ongoing discussion. The book addresses its subject matter via approaches ranging from case studies of individual libraries to general discussions of best practices. The contributors explore the impact of the Internet on the field of evaluation, focusing on electronic reference and instruction. They highlight current issues, present research results, and offer expert advice on how to assess online reference and instruction. All chapters are well referenced to facilitate further study, and many include tables, appendixes, checklists, and other helpful features that make difficult information easy to access and understand.

The chapters that make up Assessing Reference and User Services in a Digital Age are as rich and varied as the backgrounds of their authors. Experienced researchers provide the results of studies conducted to determine the nature and effectiveness of the online reference services offered by various libraries. Practitioners and administrators from different institutional settings (academic libraries, public libraries, consortia, etc.) provide their perspectives on the issues facing librarians who need to assess the electronic services they provide.

In this important new book:

  • Andrew Breidenbaugh shows how a chat service can be implemented and suggests which data should be collected for it
  • Buff Hirko examines VET: the Virtual Evaluation Toolkit
  • Ruth Vondracek shares the experiences of a university library as it entered a statewide e-reference consortium, and offers advice and issues to consider before entering such a partnership
  • librarians from San Jose State University present a model for evaluating electronic reference services that can be used in public or academic libraries
  • Kathleen Kern discusses holistic evaluation
  • chat transcripts are addressed in several chapters, including Joseph Fennewald's comparisons of question categories, Lesley Moyo's analysis of the use of instruction in the virtual environment, and Caleb Tucker-Raymond's proposed set of quality measures for chat reference
  • Laurie Probst and Michael Pelikan report on the use of a "Tell Us What You Think" button to gather user feedback
  • Kirsti Nilsen and Catherine Sheldrick Ross examine a research study that asked library school students to submit a reference question online and report on their experiences
  • Melissa Gross, Charles McClure, and R. David Lankes suggest measures to determine the cost and benefits of a virtual reference service
  • librarians from Utah State University describe the development of their online instructional module

Assessing Reference and User Services in a Digital Age is designed as essential reading for library administrators, public service librarians, and researchers. It provides general advice for practitioners as well as an examination of research results and methodological issues. We urge you to consider making it part of your professional or teaching collection today.

Frequently asked questions

Simply head over to the account section in settings and click on ā€œCancel Subscriptionā€ - itā€™s as simple as that. After you cancel, your membership will stay active for the remainder of the time youā€™ve paid for. Learn more here.
At the moment all of our mobile-responsive ePub books are available to download via the app. Most of our PDFs are also available to download and we're working on making the final remaining ones downloadable now. Learn more here.
Both plans give you full access to the library and all of Perlegoā€™s features. The only differences are the price and subscription period: With the annual plan youā€™ll save around 30% compared to 12 months on the monthly plan.
We are an online textbook subscription service, where you can get access to an entire online library for less than the price of a single book per month. With over 1 million books across 1000+ topics, weā€™ve got you covered! Learn more here.
Look out for the read-aloud symbol on your next book to see if you can listen to it. The read-aloud tool reads text aloud for you, highlighting the text as it is being read. You can pause it, speed it up and slow it down. Learn more here.
Yes, you can access Assessing Reference and User Services in a Digital Age by Eric Novotny in PDF and/or ePUB format, as well as other popular books in Bildung & Bildungstechnologie. We have over one million books available in our catalogue for you to explore.

Information

Editor: Eric Novotny
Publisher: Routledge
Year: 2013
ISBN: 9781135804350
Edition: 1
Topic: Education

STANDARDS AND METHODS FOR EVALUATING VIRTUAL REFERENCE

Looking at the Bigger Picture: An Integrated Approach to Evaluation of Chat Reference Services

M. Kathleen Kern
SUMMARY. Virtual reference offers some unique new opportunities for evaluation due to the richness of the transcripts and other automatically collected data. To put evaluation of virtual reference into context, however, libraries should view and evaluate virtual reference as part of the whole of a library's reference service. Holistic evaluation pursues an integrated approach to evaluating the total of a library's reference service. doi:10.1300/J120v46n95_07 [Article copies available for a fee from The Haworth Document Delivery Service: 1-800-HAWORTH. E-mail address: <[email protected]> Website: <http://www.HaworthPress.com> © 2006 by The Haworth Press, Inc. All rights reserved.]
KEYWORDS. Evaluation, reference services, virtual reference, e-mail reference, chat reference, integration

INTRODUCTION

Virtual reference services provide the library community with a new opportunity to evaluate our services. In fact, chat reference has sparked something of a renaissance in evaluation of reference services. There are many aspects of the virtual reference transaction that can be examined; some of these types of evaluation are unique to the virtual reference environment.
Most evaluations of virtual reference have focused exclusively on chat and e-mail transactions, examining those interactions in isolation. As virtual reference becomes less of a novelty and more of a mainstream service, it is important that libraries start to evaluate their virtual services in the context of their reference services as a whole. Holistic evaluation will give us a better picture of our reference services.

TIMELESSNESS OF EVALUATION

The basics of reference evaluation are timeless and classic. We seek to answer the questions: who are our users and how do they use our service; when, how, and with whom should we staff our service; are our answers accurate; and how satisfied are our patrons with the assistance we provide? The fundamental questions, the reasons for evaluation, remain the same. Approaches to these questions, however, have varied. Librarians have used different research methodologies and have examined different aspects of the questions.
Innovations and new technologies emerge and change the patterns of what and how we evaluate reference service. The Brandeis model of tiered staffing led to evaluations of where and with whom we staff at tiered service points, as well as the efficacy of this model. Telephone reference led to evaluation of the telephone as a communication tool for reference and a separate reference service point. Seminal papers such as Dorothy Cole's 1946 study examining the types of questions asked in public, academic, and special libraries and Joan Durrance's 1989 study of the importance of user satisfaction as a measure of reference service success led to further studies that attempt to look at the same question from a different angle.i The results of the "55% study"1 led to more studies of accuracy and also prompted the more qualitative evaluations of patron satisfaction and willingness to return.
The emergence of virtual reference as a popular service has had a noticeable impact on evaluation of reference. (It is not really a new technology, having been around in some form for over 20 years, but it has only recently gained widespread implementation as a library service.) A search of Library Literatureii finds six articles on reference service evaluation for the years 2002–2003. Five of these articles were published in 2003, and only one of these is about in-person reference. None of the articles include evaluations across "traditional" and virtual reference services. Looking at Library and Information Science Abstracts, there were seventeen research articles indexed on reference service evaluation for 2002–2003.iii Eleven of these were about virtual reference services. The other six only examined in-person services, with three of these articles being from Norway (there were two articles from the UK about virtual reference). Only one article2 contained evaluation of both virtual reference and in-person reference services.
Since virtual reference is new, it has turned our heads and we have focused our evaluation efforts in this direction. What is it about evaluation of virtual reference that is distinctive and what is unchanged from the evaluation of other reference services? I will explore the answers to both of these questions, as well as the importance of a holistic approach to reference evaluation that integrates evaluation of virtual reference with other reference services.

DEFINITIONS

For an article on virtual reference services, it is necessary to define a few terms, since the terminology may be unfamiliar. Terminology in this area is not standardized, so you may see the same words used elsewhere with variations in meaning.
Chat Reference–Real-time communication between two users via computer. Chat reference allows users to communicate instantaneously with librarians, or as it is commonly described, the communication is synchronous.3
E-mail Reference–Communication via electronic mail. Patrons can send messages at any time to be answered by operators at another time. Since patrons are not interacting with the librarian in real time, this mode of communication is asynchronous.
Virtual Reference–An umbrella term that encompasses chat and e-mail reference as well as emerging reference communication technologies such as voice-over-IP and online videoconferencing. Virtual Reference focuses on the interaction between patron and librarian (operator), whereas Digital Reference is a broader term that includes online resources as well as virtual communications.
Transcript–The text of a virtual reference interaction. This may take the form of a chat transcript stored in a database of chat interactions, or the e-mail correspondence between patron and librarian. Most commonly in the virtual reference environment, the supporting software automatically collects the transcripts.
Operator–A generic term for the person on the answering end of a virtual reference interaction. The operator may be a librarian, a paraprofessional, a graduate student, a contracted employee from a virtual reference software company, or some other person designated to answer the virtual reference questions at a library or Ask-A service.

UNIQUE ASPECTS OF EVALUATING VIRTUAL REFERENCE

Aside from the newness of these services, there are some distinctive characteristics of the virtual reference environment that make evaluation of virtual reference different from evaluation of other reference services. The most distinctive, and most tantalizing, aspect of evaluating virtual reference services is the availability of a transcript of the entire reference interaction. It is perhaps this aspect, as much as the newness of virtual reference, that has engendered interest in the evaluation of virtual reference. Most commercial chat software collects and archives the transcripts for either all transactions or selected chat transactions determined by the operator. These transcripts are rich with data and opportunity. The reference interview can be examined in detail, as can the accuracy and appropriateness of the librarians' answers. Tone, typing skill, and jargon are right there in print. There is an enormous amount of information that could be mined. It is important to look at the transcripts as the last step of the research design rather than the first; you should know what questions you want to answer before you jump into the transactions as a data source. Starting from the transactions could lead to specious evaluation. For instance, examining typing errors in chat transcripts is possible, but it has questionable value.
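As a purely hypothetical illustration of starting from a research question rather than from the data, the sketch below asks one narrow question of an archive of chat transcripts: how often does the operator ask at least one clarifying question? The transcript format (plain-text files with "LIBRARIAN:" and "PATRON:" speaker labels in a local "transcripts" folder) and the simple question heuristic are assumptions made for this example, not a description of any vendor's archive or of the author's method.

```python
# Hypothetical sketch: given plain-text chat transcripts in which each line
# is labeled "LIBRARIAN:" or "PATRON:", estimate how often the operator asks
# at least one question (a crude proxy for the presence of a reference interview).
from pathlib import Path

def librarian_asked_question(transcript_text: str) -> bool:
    """Return True if any librarian line in the transcript ends with '?'."""
    for line in transcript_text.splitlines():
        if line.startswith("LIBRARIAN:") and line.rstrip().endswith("?"):
            return True
    return False

# Assumed folder of exported .txt transcripts; adjust to your own export.
transcript_dir = Path("transcripts")
transcripts = list(transcript_dir.glob("*.txt"))

with_interview = sum(
    librarian_asked_question(path.read_text(encoding="utf-8")) for path in transcripts
)

if transcripts:
    rate = with_interview / len(transcripts)
    print(f"{with_interview} of {len(transcripts)} transcripts "
          f"({rate:.0%}) contain at least one librarian question.")
```

The point of the sketch is the order of operations: the evaluation question comes first, and the transcripts serve only as the data source that answers it.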
The consortial nature of many chat reference services can add a layer of complexity to evaluation of chat reference that is not present in the evaluation of other reference services. If you are a member of a chat consortium, you need to consider whether you want to evaluate only your operators answering questions from your institution's patrons, your operators answering questions from other institutions' patrons, or both. Or perhaps you want to evaluate operators at other libraries answering questions from your patrons. There are issues of access to the consortial data, but also issues of operator privacy and inter-institutional collegiality. The drive to evaluate as a way to measure and maintain service quality can create tension with the desire for harmony within the consortium.

WHAT IS EASIER TO MEASURE?

There is also much data that is collected automatically by the virtual reference software. The exact data will differ by vendor and the preferences of the library. Commonly collected items include: time and date of the transaction, operator name, length of time spent in a chat session, user information such as affiliation or status, IP range, and Web browser. From this data a variety of reports can be run. It takes next to no effort to collect and can quickly yield much information. Some of it may be worth the time it takes to evaluate and some of it may be superfluous. Detailed reports of traffic by time of day, day of week, or week of the year can be generated to help with staffing patterns. If your patrons are asked for status or affiliation (undergraduate, public, faculty), a report can be produced to answer the timeless question of "who are our patrons." If IP range is collected, you can determine where your patrons are when they ask questions (inside the library, elsewhere on campus, off-campus). These can be useful facts for training and staffing. Again, specifics vary by vendor, but this kind of patron information is often stored separately from the transcript of the reference transaction to further ensure patron privacy.
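To make this kind of reporting concrete, the minimal sketch below summarizes a hypothetical export of automatically collected transaction data. The file name chat_transactions.csv, the column names (timestamp, operator, duration_seconds, patron_status), and the use of pandas are assumptions for illustration only; they do not correspond to any particular vendor's software or report module.

```python
# Illustrative only: assumes a CSV export of transaction data with columns
# timestamp, operator, duration_seconds, and patron_status (all hypothetical).
import pandas as pd

# Load the exported transaction log; parse the timestamp column as datetimes.
transactions = pd.read_csv("chat_transactions.csv", parse_dates=["timestamp"])

# Traffic by hour of day and by day of week, useful for staffing decisions.
by_hour = transactions["timestamp"].dt.hour.value_counts().sort_index()
by_weekday = transactions["timestamp"].dt.day_name().value_counts()

# "Who are our patrons?" -- counts by self-reported status or affiliation.
by_status = transactions["patron_status"].value_counts()

# Average session length in minutes, overall and per operator.
avg_minutes = transactions["duration_seconds"].mean() / 60
per_operator = transactions.groupby("operator")["duration_seconds"].mean() / 60

print("Questions by hour of day:\n", by_hour)
print("Questions by day of week:\n", by_weekday)
print("Questions by patron status:\n", by_status)
print(f"Average session length: {avg_minutes:.1f} minutes")
print("Average session length by operator (minutes):\n", per_operator.round(1))
```

Any spreadsheet or vendor reporting module could produce the same tallies; the point is simply that this class of question can be answered from the automatically collected data without reading a single transcript.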
One of the ways that libraries and software vendors have made collection of patron data easy is to use a form to collect data up-front (Diagram 1).iv How difficult it is to run reports from the data in your system depends on the software, where the data is stored, and what pre-scripted reports are available or the extent to which you can write your own reports to query the database of transcripts and transaction data. Some data may be collected (such as IP), but not pre-scripted into a report or stored in such a way as to make queries of this data straightforward. Extraction of what you want t...

Table of contents

  1. Front Cover
  2. Half Title
  3. Title Page
  4. Copyright
  5. CONTENTS
  6. Preface
  7. Introduction
  8. LIBRARY CASE STUDIES AND RESEARCH RESULTS
  9. STANDARDS AND METHODS FOR EVALUATING VIRTUAL REFERENCE
  10. ASSESSING LIBRARY INSTRUCTION IN AN ONLINE ENVIRONMENT
  11. Index