NEEDS ASSESSMENT TUTORIAL Printable Version
University of Arizona Library Needs Assessment and Data Management Project Team
Adam Engelsgjerd and Cathy Larson
September 2000
Table of Contents

Introduction
Test Your Knowledge
  * What is needs assessment?
  * Why do needs assessment?
  * When is the best time for needs assessment?
"How To" Reference
  1: Identifying the customer or customer group
  2: Ascertaining what the customer wants
  3: Collecting the data
  4: Analyzing the data
  5: Implementing or acting on the findings
Case Studies
  * Music Library
  * Materials Access Team
Make Your Own Plan
Part I. INTRODUCTION

How much do you know about the people who use your services and facilities? How much do you know about the people who don't?

Think about needs assessment as . . .
. . . being market research for libraries and museums.
. . . something that can help you plan your time, set your priorities, and manage your budget.
. . . an ongoing and important part of your planning.

Use this tutorial as a first step toward achieving these goals if you would like to:
☼ Learn more about what customers want from you...
☼ Retain your current customers...
☼ Gain new ones...
Part II. TEST YOUR KNOWLEDGE

Use this section to find out what you already know about needs assessment. Circle your response, then look for the answers at the end of this section.

1) WHAT IS NEEDS ASSESSMENT? Needs assessment can best be defined as . . .
A. Finding out if your customers are happy with the services you provide.
B. Determining what services you will need to provide for your customers in the future.
C. An impractical method of getting to know your customers.
2) WHY ASSESS CUSTOMERS' NEEDS?
A. Because it gives your customers a chance to voice their satisfaction with your current services.
B. Because customers expect to be asked about their wants and needs.
C. Because you need it to find out what your customers want and need from you.
3) WHEN IS THE BEST TIME FOR NEEDS ASSESSMENT?
A. There is no one good time; it must be done continually to be effective.
B. At the end of every quarter.
C. Whenever your services change.
ANSWERS

1) WHAT IS NEEDS ASSESSMENT?
Correct Response = B: Determining what services you will need to provide for your customers in the future. With the proper focus, methodology, and analysis, your needs assessment will provide you with valuable information about what your current and potential customers need, want, and expect. In the article "What Does Your Customer Really Want?"[1], Joan Fredericks and James Salter point out that, "Traditional approaches to customer satisfaction do not offer a comprehensive way to measure the range of perceptions that influence customer behavior. Without proper information about customer needs, quality improvement activities tend to lack both market focus and the proper use of internal resources to enhance business results." Use this tutorial to help focus your needs assessment activities to obtain useful information about what your customers need from you.
2) WHY DO NEEDS ASSESSMENT?
Correct Response = C: You need it to find out what your customers want and need from you. But even more importantly, you need to act on this knowledge by providing your customers with valuable, high-quality, innovative, and timely information services that will meet their requirements.
3) WHEN IS THE BEST TIME TO DO NEEDS ASSESSMENT?
Correct Response = A: There is no one good time; it must be done continually to be effective. Finding out what your customers want from you is a continual process. As your customers and environment change, you will have to alter your services to stay competitive. And to find out which services to change, which ones to bring in, and which to let go, you'll require a needs assessment. Conceptualizing needs assessment as a continuous process encourages you to take advantage of every encounter with a customer to find out more about what they want and expect.
Part III. "HOW TO" REFERENCE
Needs assessment can be broken down into five basic steps:

1. Identifying the customer or customer group
2. Ascertaining what the customer wants (planning for data collection)
3. Collecting the data
4. Analyzing the data
5. Implementing or acting on the findings
Step 1: Identifying the Customer or Customer Group

To determine your customer group, ask yourself these questions:
◆ Who are the people we serve?
◆ Which customers benefit most from the services we offer?
◆ Should we break down the customer base?
◆ What about prospective customers (i.e., those who currently are not using our services)?
◆ Who are they?
◆ How do we find them?
Step 2: Ascertaining What the Customer Wants (planning for data collection)

In planning for your data collection, it will be helpful to consider these questions:
▶ Who are our customers?
▶ Which services do our customers use?
▶ How do our customers use our services?
▶ How do our customers define success?
▶ How can we make our customers more successful?
▶ What do our customers value?
▶ What changes do our customers see coming in their environment?
▶ What do our customers see as our distinctive value to their success?
But how you have your customers respond is very important. Should you ask them to give you mostly qualitative data, quantitative data, or a mixture of both? In this next section, the tutorial will digress from the five steps to examine these two types of data.
Qualitative and Quantitative Data
The following synopsis of qualitative and quantitative research methodologies and analysis is based on information found in the literature of applied research, and the ISNAG needs assessment pilot project.
Qualitative Data

Qualitative research concentrates on words and observations to express reality as subjects perceive it. Qualitative data may be gathered through personal or telephone interviews, written or electronic surveys, focus groups, or other means where people are invited to express their thoughts. The subjects' expressed ideas, opinions, and experiences are then recorded, transcribed, or somehow notated for later analysis. Analysis often includes dividing and categorizing each expression so the researcher may gain a picture of trends, if any. Typically, qualitative research provides in-depth information about fewer cases, while quantitative research provides more breadth of information across a larger number of cases.
Quantitative Data

In contrast, the quantitative approach grows out of a strong academic tradition that places considerable trust in numbers that represent opinions or concepts. Needs assessment surveys that use questionnaires provide quantitative data: subjects select from a limited range of pre-defined options. These kinds of surveys often provide only a portion of the desired information and omit critical factors. Furthermore, needs assessment surveys tend to identify concerns that have already achieved some visibility within the community, as opposed to the less visible concerns that lie below the surface.

Needs assessment surveys that use focus groups or one-on-one interviews provide qualitative data. Focus groups or individual interviews enable the researcher to discover how respondents see reality. Respondents use their own words, as opposed to providing responses to the pre-determined terms and categories used in quantitative questionnaires.

Qualitative research can be done prior to, at the same time as, after, or without a quantitative research component. For example, focus groups that precede quantitative procedures can help a researcher learn the vocabulary or discover the thinking patterns of a target audience, and provide clues as to special problems that might develop in the quantitative phase. Focus groups used at the same time as quantitative procedures may address the same issue to confirm findings and obtain both breadth and depth of information. Focus groups that follow quantitative procedures provide insights about the meaning and interpretation of the quantitative results. Focus groups used independently of other procedures are helpful when customer insights, perceptions, and explanations are the primary factors needed by the researcher.

Conclusion: Combining qualitative and quantitative procedures yields a richer methodological mix that strengthens the research design and aids in the interpretation of data.

Back to the five steps.
Step 3: Collecting Data

There are many different ways to gather needs assessment data. In this section of the tutorial, we will cover the following:
A. Surveys
B. Automated Surveys
C. Interviews
D. Focus Groups
E. On-Site Observation
In this section, we will investigate the advantages and disadvantages of choosing the survey method. We will also look at survey formats and provide tips for constructing and conducting your survey.
Analyzing the data gathered in the survey will be presented in Step 4: Analyzing Data.

Which of the following are advantages of surveys? (circle one or more responses)
A. Can be administered to larger populations
B. Less intrusive than focus groups, interviews, or on-site observation
C. Biasing is minimized (in-person interviews can be biased by the perceptions of both interviewer and respondent)
D. Respondents are willing to invest an endless amount of time in responding
E. Simplicity will positively affect the rate of return and response accuracy

ANSWER: If you identified A, B, C, and E as advantages of surveys, you were right. Generally speaking, respondents are NOT willing to invest an endless amount of time on a survey!

So what are the disadvantages of surveys? See which of the following statements are true:

T/F Surveys require more staff resources and expertise than other needs assessment methodologies, because of the up-front work of design and deployment, and the corresponding analysis.
T/F There is more opportunity for a thoughtful response on the part of the respondent.
T/F If utilized too often, customers may grow tired of being surveyed and may not respond, or may not return accurate results.
ANSWERS: That there is more opportunity for a thoughtful response on the part of the respondent is NOT a disadvantage of surveys! In fact, it is a distinct advantage. But there are disadvantages of surveys:
◪ They require more staff resources and expertise than other needs assessment methodologies, because of the up-front work of design and deployment, and the corresponding analysis of the responses.
◪ If utilized too often, customers may grow tired of being surveyed and may not respond, or may not return accurate results.
Developing Surveys

It is usually advisable to separate user satisfaction and user needs surveys. However, a survey may be designed to measure both if the two sections are clearly distinguishable on the survey. Data collected in surveys can be both subjective and objective. Choosing the appropriate structure and adhering faithfully to what you have represented to respondents (an agreement about anonymity, for example) are critical to obtaining unbiased feedback.
Tips for Writing a Survey

Keep it simple, make it easy
· A survey should be as simple as possible, both for those taking it and for those analyzing it.
· Limit response possibilities to a range from 5 to 10.
· Offer an even number of choices to avoid the "middle-of-the-road" choice whenever possible.
· Use simple sentences: do not have two concepts or situations in the same question.
· Use language your customers will understand: avoid jargon whenever possible.
· Avoid complexity and ambiguity: we want our customers to feel comfortable answering our questions.
· The survey should be easy to return: provide pre-addressed, stamped envelopes where necessary; self-mailers have been known to increase return rates.
Make it purposeful
· The purpose of the survey should be clear to you before you begin writing.
· How the data will be used should be understood before beginning.
· How many responses do you expect?
· Will you need Excel or a statistical program such as SPSS to analyze your results?
· Does someone on the team have the skills to analyze the data?
Entice your respondent . . .
· Begin the survey with the most interesting question to pique interest.
Base your survey on solid information
· Refer to an excellent reference guide such as The Survey Research Handbook by Pamela Alreck and Robert Settle. See the "Resources" guide for additional articles.
And most importantly . . .

Always pilot your questions on a subgroup of your customer base!

Use the next section to guide your thinking on the kinds of questions you want to ask your users.
1) Open-ended questions.
EXAMPLE: How would you describe the ideal Information Center?
Advantage: Often brings up ideas not considered before
Disadvantage: Open to subjective interpretation

2) Closed-ended questions.
EXAMPLES:
a) Forced-choice questions, e.g., yes/no, true/false
b) Multiple-choice questions (choices should be mutually exclusive to avoid confusion, and as comprehensive as possible)
c) Rating questions (questions should have a direct link between the rating scale and the subject being rated).
   EXAMPLE: How would you rate the document delivery services provided?
   Excellent   Good   Fair   Poor
d) Ranking questions (best to offer fewer rather than more choices, or respondents may misunderstand or refuse to answer).
   EXAMPLE: Please rank the instruction provided by the Information Center from 1-5, with 5 being best:
   ____ lecture
   ____ group study
   ____ multimedia presentation
   ____ role playing
   ____ hands-on workshop
Advantage: Answers can be funneled into pre-defined categories
Disadvantage: Potentially stifling customer response

3) Combinations of both.
Sometimes questionnaires can combine ranking with open-ended questions, e.g., add "other" as a choice, with space for an explanation. "Other" can also be added to a multiple-choice answer.
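One payoff of closed-ended formats is that tallying is nearly automatic. As a minimal sketch (the scale comes from the document delivery rating example above; the responses themselves are invented sample data), a few lines of Python can summarize a rating question, though a spreadsheet does the same job:

```python
from collections import Counter

# Rating scale from the document delivery example; the responses below
# are invented sample data, not real survey results.
SCALE = ["Excellent", "Good", "Fair", "Poor"]
responses = ["Good", "Excellent", "Good", "Fair", "Good", "Poor", "Excellent"]

counts = Counter(responses)
total = len(responses)
for choice in SCALE:
    n = counts.get(choice, 0)
    print(f"{choice:<10} {n:>3}  ({n / total:.0%})")
```

The same tally works for any forced-choice or multiple-choice question, which is why simple question structures pay off at analysis time.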
Automated Surveys

The simplest "non-hardcopy" format is email. Responses are sent to one place and then entered into a spreadsheet program, or analyzed by dedicated individuals. There are several programs on the market designed to facilitate the design of surveys and provide basic statistical analysis. However, they all lack a mechanism for automatically dumping the raw data into the analysis program unless a centralized server is dedicated to that function. The implication is that each group, team, or organization utilizing such programs will require the technical expertise to implement them.
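To make the email-to-spreadsheet step concrete, here is a small sketch of turning emailed replies into spreadsheet-ready rows. Everything in it -- the "Qn: answer" reply convention, the question labels, and the sample replies -- is a hypothetical illustration, not a prescribed format:

```python
import csv
import io
import re

# Hypothetical convention: each emailed reply contains lines like "Q1: B".
QUESTIONS = ["Q1", "Q2", "Q3"]

replies = [
    "Q1: B\nQ2: A\nQ3: D",
    "Thanks for asking!\nQ1: C\nQ2: A\nQ3: B",
]

def parse_reply(body):
    """Pull the 'Qn: answer' pairs out of one email body."""
    answers = dict(re.findall(r"^(Q\d+):\s*(\S+)", body, re.MULTILINE))
    return [answers.get(q, "") for q in QUESTIONS]

# Write one CSV row per reply, ready for import into a spreadsheet.
out = io.StringIO()
writer = csv.writer(out)
writer.writerow(QUESTIONS)
for body in replies:
    writer.writerow(parse_reply(body))
print(out.getvalue())
```

Missing answers come through as blank cells rather than crashes, which matters given the incomplete responses discussed later in the case study.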
Research has shown that individual interviews are the most effective means for getting feedback.
In this section, we will examine the elements of one highly successful means of needs assessment: the personal interview. What are the advantages of personal interviews? Take a few minutes to write what you think some advantages of personal interviews might be:
What are the disadvantages of personal interviews? Take a few minutes to write what you think some disadvantages of personal interviews might be:
Turn to the next page for some suggested answers:

ADVANTAGES of interviews include the following:
▶ Interviews can yield valuable insights that may have been overlooked in a formal survey.
▶ They often can be done without the intense preparation that must go into a survey assessment, although interviewers must take time to prepare well.
▶ Feedback is immediate.
▶ Responses can be clarified in the moment.
DISADVANTAGES of interviews can include the following:
▶ The customer sample is often much smaller than that of a survey.
▶ More training may be needed to prepare interviewers for face-to-face techniques.
▶ Interviews are more intrusive for the respondent.
▶ Interviewers will have to watch their own voice inflection and body gestures to avoid sending auditory or physical cues about the significance of an issue, or about the preferred response.
Consider this variation: Telephone Interviews

Telephone interviews can be less intensive -- all of the same factors apply as with face-to-face interviews, except for the following:
· The parties involved have only auditory cues to react to questions.
· Respondents may tire of answering questions over the phone, especially if notes are being taken.
· Questions of confidentiality may arise and directly affect the respondent's answers.
What can "make or break" the personal interview?

The Interviews
◩ Can take a structured or non-structured form.
◩ Are qualitative in nature, so care must be taken to keep the interview on track.
◩ Are influenced by the interviewer's own preferences, comfort level with the methodology, and relationship with the interviewee.

The Questions
▩ Must be asked consistently from one respondent to the next: ask each respondent the same set of questions, although follow-up and clarification questions may differ.
▩ Should be a combination of closed- and open-ended types to allow quantitative and qualitative analysis.
▩ The use of a script can be an effective way to lend continuity to more formal interviews, both in format and in feedback.

The Responses
◆ Must be recorded accurately, consistently, and legibly (or understandably!).
◆ Should not be allowed to degenerate into a gripe session, or one in which wants and needs are not shared.
◆ Recording requires the respondent's permission.

The Analysis
◢ Must take into account that the human element can sometimes inject inaccurate perceptions about what information is wanted, why it is wanted, etc.
◢ More up-front time may be needed to set the groundwork for such an interview than for a more informal information exchange.
◢ There may be tabulation, cross-tabulation, or the option of no enumerative analysis at all -- simply a qualitative analysis of trends and patterns.
On the other hand, a routine visit with a faculty member can yield valuable information that can be brought back to the team, and that will eventually inform future decision-making. Remember:
Research has shown that individual interviews are the most effective means for getting feedback.
Focus Groups

What are they? Focus groups are qualitative research groups usually consisting of 6-10 people, each of whom has some knowledge of or experience with the topic, issue, or question at hand.

Why use them? The techniques used with focus groups are often successful in eliciting people's concerns and beliefs. This information can provide good insight for developers of future surveys, as well as contribute to overall evaluative research.

How do they work? The questions tend to be open-ended, which gives participants freedom to answer, but may require an experienced facilitator to keep people focused on the topic.

How is the data gathered? To record the session, a tape recorder is often used, as well as a neutral facilitator who doubles as a recorder, usually using a flip chart.

What does the analysis look like? After the sessions, notes are compiled and tapes are transcribed. Analysis can be labor intensive, with responses being coded and conclusions being drawn. The reliability and validity of the data cannot be measured in the same way that quantitative data can, but if done properly, the focus group can provide valuable data.

NEXT: Consider the potential advantages and disadvantages of focus groups, and write your thoughts here. Then turn to the next page for some possible responses:
ADVANTAGES of focus groups can include the following:
Þ They provide valuable feedback that can complement other methodologies or stand alone.
Þ When done properly, participants gain insight into the organization and others' perceptions as well.
Þ Feedback can contribute to the design of future surveys.

DISADVANTAGES of focus groups can include the following:
® Focus groups can be costly in time and money (outside facilitators are often necessary, and incentives are often provided to secure participants).
® The interactivity required can be complex, and will require experienced moderators to facilitate a successful outcome.
WHAT ABOUT ON-SITE OBSERVATION?

This technique requires one to observe work being done -- tasks in progress -- and, in the case of libraries, watching the information-seeking behavior of customers.

ADVANTAGES
◆ Data is empirical in nature and can be proved or disproved by continued observation or sampling.
◆ The observer is usually knowledgeable about the task or situation being observed, which lends credibility to qualitative aspects of the session.
◆ If quantitative values are tabulated, familiarity with processes aids the accurate recording of data.
◆ Detailed logs can be valuable for comparative and/or longitudinal studies.

DISADVANTAGES
◆ Can be a time-consuming methodology: the plan or timeframe must be agreed upon by all parties involved, whether the plan is to be obtrusive or non-obtrusive.
◆ In a non-obtrusive setting, customers may not know their behavior is being watched, but other service providers should be made aware.
◆ Detailed logs and notes must be kept, and exceptions to standard activities must be documented.
◆ This methodology is best used in combination with other types of evaluations.
◆ A balanced approach requires using internal and external personnel, which may prove unfeasible in terms of time and cost.
Step 4: Analyzing Data

Depending on the type of data you have, your analysis will change.

Quantitative Data (e.g., numerical or forced-choice questions)
▶ Does the data require a statistical package (such as SAS or SPSS), or would a spreadsheet suffice?
▶ Are there staff available on the team or elsewhere who can do data entry and work with the software?
▶ Does the team have an understanding of how to approach the results?

Qualitative Data (for example, free-style interviews or focus groups, or open-ended questions)
◩ What common threads can be gleaned from the responses? In qualitative analysis, the knowledge and judgment of the persons administering the assessment are crucial.
◩ A useful approach for beginning to analyze qualitative data is to review the transcripts or notes carefully, observing what themes or categories seem to repeat. Begin by writing out each theme, using a word processor, fitting each remark or comment under that theme. See below for an example. If a remark fits into more than one category, it can be duplicated.
◩ There may be some comments "left over." Analyze them to see if they are sub-themes fitting under one of the existing themes, or if new themes emerge. If not, and there are only a few remarks, you may have a "Miscellaneous" category.
In this example, faculty members working in a particular discipline have been interviewed for their perceptions about the library. Their comments have been organized to fit under themes that emerged during analysis of their remarks:
Collections -- Books
A: "I rarely find anything current in my area -- when did the library stop buying books in my area? We still teach several classes a year in this area and the books are really important for students' papers. The information they need is just not available on the web yet."
B: "I know you have the stuff I need because I find it when I go into the library and just browse the shelves. But I can't find it ahead of time on Sabio and this is frustrating, for example, when I'm writing a class bibliography."
C: "I've heard lots of students complain over the years about the books they need being checked out all the time. You should get multiple copies of these."
D: "Great collection -- I almost always find what I need."

Collections -- Journals
D: "There are just too many titles where the library has gaps. I have to use interlibrary loan a lot more than my colleagues."
C: "My students always say they can never find articles in the library. The ones they want are either not on the shelf or they've been ripped out of the cover."
A: "Compared to the book collection in my area, the journals are great. I hope we don't cut again!"
B: "The article indexes you provide online are so, so helpful!"

Library reference and other services
A: "The staff members over here are ALWAYS able to answer my questions, it doesn't seem to matter who I ask. I know they employ lots of students and they're all just great. No complaints here."
C: "For the last several years my students have complained that they can never find the librarian when they need him. But they've noticed a lot more computers in the area and they seem to find that helpful."
D: "Sometimes I don't feel the library responds very well to my requests, such as when I need a particular title or am having trouble finding something I need."

Note the negative tone of many of the comments. This is not unusual in interview situations.
Notice, too, that most of these comments will need to be probed further for specifics: Which periodicals have gaps? What are some of the titles missing in the collection that faculty member "A" needs, and how can we generalize their characteristics to fit them into a profile?
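Once remarks have been coded, grouping and counting them is mechanical, whether in a word processor or a short script. The sketch below illustrates the grouping step with invented theme names and paraphrased stand-in remarks, not the actual transcripts:

```python
from collections import defaultdict

# Each remark is tagged with a theme during review. The themes and
# remarks here are invented, paraphrased stand-ins for real transcripts.
# A remark that fits two themes would simply appear twice in the list.
coded_remarks = [
    ("Collections -- Books", "Nothing current in my subject area."),
    ("Collections -- Journals", "Too many gaps in the journal titles."),
    ("Collections -- Books", "Needed books are always checked out."),
    ("Reference services", "Students can never find the librarian."),
    ("Collections -- Journals", "Articles missing or ripped from covers."),
]

by_theme = defaultdict(list)
for theme, remark in coded_remarks:
    by_theme[theme].append(remark)

# Themes with the most remarks may deserve the most follow-up probing.
for theme in sorted(by_theme, key=lambda t: -len(by_theme[t])):
    print(f"{theme} ({len(by_theme[theme])} remarks)")
    for remark in by_theme[theme]:
        print(f"  - {remark}")
```

Leftover remarks that fit no theme would collect under a "Miscellaneous" key, exactly as the text suggests.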
Step 5: Implementing or Acting on the Findings Okay, you have all this data, now what? It is important to make timely decisions that relate to the needs you have identified. But which needs will you act on first? An Ease/Impact Chart can be useful in helping the group decide what to tackle first.
How To Do It

To use an Ease/Impact Chart, list out and number your possible actions -- the actions you might take to address each need identified by your customers. Acquire any additional information about each action that could help you make your decision, for example, cost information. Then on a blank flip chart, draw two lines crossing each other: a vertical axis labeled IMPACT and a horizontal axis labeled EASE.

- The vertical axis represents the Impact a particular action may have. The horizontal axis represents the Ease with which an action may be accomplished. These are relative terms.
- The bottom left quadrant represents actions that are LOW impact for customers (for example, an action would affect fewer customers, or would not be as important to customers), and LOW ease (meaning -- hard to do).
- The bottom right quadrant represents actions that are LOW impact for customers and HIGH ease (meaning -- easy to do).
- The upper left quadrant represents actions that would significantly impact customers (HIGH impact), or impact many customers, but these actions are difficult (LOW ease).
- Finally, the upper right quadrant represents actions that would significantly impact customers (HIGH impact) and are easy to do (HIGH ease).
☼ Plot the number of each corresponding action onto the chart in the quadrant that is most appropriate. If an action would affect many customers, but would seem relatively difficult to carry out, plot the number corresponding to that action in the upper left quadrant.
☼ Within the team, it is usually necessary to talk through each item to determine its correct placement on the chart.
☼ Generally, the actions to take first are those in the upper right quadrant: HIGH impact activities that are relatively EASY.

For purposes of illustrating how an Ease/Impact Chart works, let's suppose that a needs assessment group has just finished surveying a select group of its customers. They have asked customers questions related to reference services provided by the Main Library, including reference and current periodicals. In our simplified example, here's the short list of actions the needs assessment team might take in response to the needs identified by the students and faculty they interviewed and surveyed:

1. Purchase the Encyclopedia of Art for the Reference area.
2. Decrease the turnaround time for interlibrary loan requests.
3. Increase the depth of the Women's Studies journal collection.
4. Make Word and Excel available on more machines.
5. Replace broken chairs in the Reference area.
6. Customers receive more on-demand help when they are at the library.
After further investigation, the team relisted the activities, this time including information about who and how many customers would be affected by the action (IMPACT) and their knowledge about the costs and activities that would be involved in carrying out these particular activities (EASE):

1. Purchase the Encyclopedia of Art ($750.00) for the Reference area.
   EASE: Costs $750.00 to purchase, plus the cost of adding one catalog record and holdings to SABIO; could request through the one-time Information Resources Council process.
   IMPACT: Extremely important source; would affect most undergraduate art students; requested by 6 of 110 responding faculty, 6 of 10 responding Art faculty.

2. Decrease turnaround time for interlibrary loan requests.
   EASE: Interlibrary loan is a complex process outside the scope of the team's responsibility and authority; would need to work with them on this.
   IMPACT: 35 of 50 responding graduate students and 20 of 110 responding faculty indicated a need for improved turnaround time.

3. Increase depth of Women's Studies journal collection.
   EASE: Requires more analysis and information: are there specific titles that are missing from our collection? What uses would be made of these titles?
   IMPACT: Would benefit most Women's Studies majors and all Women's Studies faculty; requested by 2 of 110 faculty.

4. Make Word and Excel available on more machines in Reference.
   EASE: Would need to acquire 5 additional machines at $3,000 each, plus 5 additional copies of Excel and Word at $100.00 each; would need to coordinate with LIST/UST for installation.
   IMPACT: Requested by 33 of 180 responding undergraduates as a high priority.

5. Replace broken chairs in Reference area.
   EASE: Chairs cost $400 each; replacing six would cost $2,400.
   IMPACT: Mentioned by 7 respondents; have other users noticed and not complained?

6. Customers receive more on-demand help when they are at the library.
   EASE: Requires more analysis and information, for example: what specifically was lacking in the service provided? What would it require to bring the level of service up to expectations?
   IMPACT: Requested by 2 faculty. Have other users felt a need for this and not expressed it?
How would YOU rank each item in terms of its EASE and IMPACT? Place a tick mark on each scale, below, to indicate your ranking of each item's relative ease and impact:
Here is one possible way to rank these action items for their ease and impact.

1. Purchase the Encyclopedia of Art ($750.00) for the Reference area.
   EASE: HIGH. Costs $750.00 to purchase, plus the cost of adding one catalog record and holdings to SABIO; could request through the one-time Information Resources Council process.
   IMPACT: HIGH. Extremely important source; would affect most undergraduate art students; requested by 6 of 110 responding faculty, 6 of 10 responding Art faculty.

2. Decrease turnaround time for interlibrary loan requests.
   EASE: LOW. Interlibrary loan is a complex process outside the scope of the team's responsibility and authority; would need to work with them on this.
   IMPACT: HIGH. 35 of 50 responding graduate students and 20 of 110 responding faculty indicated a need for improved turnaround time.

3. Increase depth of Women's Studies journal collection.
   EASE: RELATIVELY HIGH. Requires some more analysis and information: are there specific titles that are missing from our collection? What uses would be made of these titles?
   IMPACT: MEDIUM. Would benefit most Women's Studies majors and all Women's Studies faculty; requested by 2 of 110 responding faculty (both of the 2 responding Women's Studies faculty).

4. Make Word and Excel available on more machines in Reference.
   EASE: LOW. Would need to acquire 5 additional machines at $3,000 each, plus 5 additional copies of Excel and Word at $100.00 each; would need to coordinate with LIST/UST for installation.
   IMPACT: RELATIVELY HIGH. Requested by 33 of 180 responding undergraduates as a high priority.

5. Replace broken chairs in Reference area.
   EASE: MEDIUM. Chairs cost $400 each; replacing six would cost $2,400.
   IMPACT: RELATIVELY LOW. Mentioned by 7 respondents; have other users noticed and not complained?

6. Customers receive more on-demand help when they are at the library.
   EASE: LOW. Requires more analysis and information, for example: what specifically was lacking in the service provided? What would it require to bring the level of service up to expectations?
   IMPACT: LOW. Requested by 2 faculty. Have other users felt a need for this and not expressed it?
SUMMARY: Generally, the actions to take first are those in the upper right quadrant: HIGH impact activities that are relatively EASY. Actions that are not easy but have a high impact should be pursued further, and the resources they would need explored!
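A ranking like the one above can be made repeatable by encoding the EASE and IMPACT ratings and sorting on them. Below is a minimal Python sketch; the scoring rule (impact first, then ease) and the abbreviated item names are illustrative assumptions, not part of the tutorial's method.

```python
# Sketch: sorting action items by impact, then ease, both mapped onto an
# illustrative numeric scale. Item names are abbreviated from the list above.
LEVEL = {"LOW": 0, "RELATIVELY LOW": 0, "MEDIUM": 1,
         "RELATIVELY HIGH": 2, "HIGH": 3}

# (name, ease, impact)
items = [
    ("Purchase Encyclopedia of Art", "HIGH", "HIGH"),
    ("Decrease ILL turnaround time", "LOW", "HIGH"),
    ("Deepen Women's Studies journals", "RELATIVELY HIGH", "MEDIUM"),
    ("Replace broken chairs", "MEDIUM", "RELATIVELY LOW"),
]

# Impact outranks ease, so high-impact/easy items float to the top.
ranked = sorted(items, key=lambda i: (LEVEL[i[2]], LEVEL[i[1]]), reverse=True)
```

With these ratings, the easy high-impact purchase sorts first and the low-impact chair replacement sorts last, matching the "upper right quadrant first" rule of thumb.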
AND ... Don't neglect to communicate the results and your plans to your customer group, particularly those who participated in your planning!
CASE STUDIES CASE 1: The University of Arizona Music Library's Needs Assessment Designed by: The Integrative Services Needs Assessment Group (ISNAG)
Special points of this Case Study 1) The use of a hypothesis
▶ A hypothesis is a proposed explanation for the occurrence of a particular phenomenon, one that must then be tested and either supported or refuted.
The hypothesis for this needs assessment project identifies a possible causal relationship between characteristics of the library user and the relative importance of specific types of resources to the user.
The original hypothesis posited that the music respondent's subject specialization, class rank, teaching areas and research expertise, and the non-music respondent's purpose for using the Music Library, influenced the level of importance the respondent attaches to specific types of resources; however, these factors were not analyzed during the project.
2) The relationship of the research questions to the hypothesis and the demographic information gathered about the targeted customer group. 3) The involvement of members from the customer group in developing the needs assessment project. Involving customers can provide the researcher with first-hand information and a base of customer support. Involvement also gives customers some stake in the success or outcome of the project. 4) The mix of methodologies used to gain information from customers. Some methodologies are more appropriate for some questions or for some customer groups. 5) The problems inherent in gathering survey-based data:
▶ Missing data
▶ Incomplete responses
▶ Incongruent responses
Study Questions for this Case Study At the end of this case study, you will be asked to return to this page to fill in your responses to the following questions: 1. Was the hypothesis proposed an appropriate and valuable one for this study? 2. What demographic information was each participant asked to provide? What other information about the targeted customer group might have been helpful in this survey? 3. Does the survey as designed address the research questions? 4. Which methodologies are most appropriate for this type of needs assessment?
Purpose of the Needs Assessment Project The purpose of the study was to examine the information needs of Music Library customers. The team's hypothesis identified a possible causal relationship between characteristics of the library user and the relative importance of specific types of resources and services. The user characteristics included: the music respondent’s subject specialization, class rank, teaching areas, research expertise and non-music respondents' purposes for using the Music Library. Another purpose of the project was for the Integrated Services Needs Assessment Group (ISNAG) to be able to test what they had learned about needs assessment and research methodologies with a subset of users. The group of librarians conducting the study chose the School of Music as the pilot group for the summer sessions of 1998. However, due to low summer enrollment, ISNAG rescheduled the pilot to the Fall semester of 1998.
The Research Questions ISNAG designed the study to investigate the following research questions. These questions would be answered by analyzing the data and correlating responses to one question with responses to another question: 1. Does class rank have any possible correlation to the level of importance music majors attach to certain kinds of music-related information resources, services, facilities or equipment? 2. What is the relationship between a user's characteristics, and the relative importance he or she places on resources, services, facilities or equipment? A. Does subject specialization within the field of music (e.g. music education, history, performance, theory, composition, conducting, etc.) have any correlation to level of importance? B. Do individuals with visually-oriented specializations (e.g., conducting, opera, musical theater) attach the greatest level of importance to visual resources such as videorecordings? C. Do respondents with an analytical focus (e.g., theory or composition) or a performance focus place the greatest level of importance on scores and recordings? D. Considering education’s relationship to the social sciences, particularly psychology, do music education specialists place the greatest emphasis on journals? E. Because of their concentration on comprehensive historical and descriptive studies, do respondents specializing in musicology or music history and literature place the greatest emphasis on books and monographs? 3. Do non-music majors taking music-related classes devoted primarily to avocational listening experiences attach the greatest level of importance to resources which support aural activities, e.g., compact discs, listening equipment? 4. 
What possible correlations exist between respondents’ suggestions for improvement ("Please describe what would most increase your overall level of satisfaction with the Music Library") and resources which they marked with both the highest level of importance and lowest level of satisfaction?
Methodology Basic demographics When the pilot project was conducted during the Fall, 1998 semester, the total population available numbered 734 individuals:
▶ 56 faculty members
▶ 375 undergraduate majors
▶ 143 graduate majors
▶ 160 non-music majors
Survey design and customer involvement In this case study, one of the customers was also available to provide valuable expertise: the group enlisted the assistance of Dr. Rob Cutietta, Associate Director of the School of Music. Dr. Cutietta, a specialist in written surveys who has published numerous articles related to research with music subjects, offered valuable advice on survey design. He also suggested that the survey be administered during the week of October 26th through October 30th, a personal enhancement week following a four-week "performance block" within the School of Music. Subsequent administrative support for the pilot was secured through meetings with Gary Cook, Director of the School of Music and Dance, and Dr. Jeff Sowell, Director of Student Advising, who helped identify appropriate classes for administering the written questionnaire. Additionally, music librarian Judy Marley spoke about the pilot at two different music faculty meetings. Originally, three different methodologies were planned, and three different questionnaires were designed (the actual forms and questionnaires can be found in the Appendix):
▶ A written survey (questionnaire) to be administered in different music classes and also sent to music faculty and students via the division's two e-mail distribution lists
▶ A personal interview form for use in conducting half-hour personal interviews
▶ A focus-group form for use with three different focus groups, planned as two eight-member faculty groups and one eight-member student group.
Gathering the customer input (data) During the administration of the written questionnaire, 12 different music classes with a total enrollment of 443 students (185 undergraduate majors, 98 graduate majors and 160 non-music majors) were visited. A total of 8 different faculty members took the written survey during class time. The written questionnaire was administered during the first 20 minutes of each class session by various members of the ISNAG project team. A cover letter was provided for each ISNAG member's use while discussing the project. Due to class absenteeism and incomplete forms, a total of 209 usable responses, 28.47% of the total population, were obtained. These 209 responses included 7 faculty members, 82 undergraduate majors, 57 graduate majors and 63 non-music majors. There were only two responses to the e-mail version of the questionnaire. These responses, both from faculty members, are included in the 7 usable faculty responses noted above. During the data-gathering phase, 18 personal interviews were also conducted, but the focus groups were cancelled due to very low enrollment.
Data analysis Recording the data Microsoft Excel was used to create a spreadsheet of the raw data. The spreadsheet records all individuals' responses to questions asking participants to rate both the level of importance they ascribe to the various types of materials, services, physical facilities and equipment found in the Music Library, as well as their satisfaction with each type of material, service, etc. These questions were posed through the written and e-mail questionnaires, and used a five-point Likert scale (5 = High importance or satisfaction, 1 = Low importance or satisfaction). Excel tables that collated, tabulated and summarized the data were also created, as were four charts visually representing composite responses to importance/satisfaction ratings. Here is a sample of the spreadsheet showing some of the "raw data." The sample shows the responses students in the Masters program gave about the importance of and their satisfaction with books, scores, and recordings. For example, 34 responses were received about the importance of books, but only 33 recorded their satisfaction. Of those responding, 64.71% rated books as "very important" (they chose "5" from the Likert scale), but only 24.24% rated their satisfaction a "5."
1. Materials (sample raw data: percentages of each response on the 5-point scale; rows may not sum to 100% where a rating was skipped)

                           %5      %4      %3      %2      %1    Total
Books (Importance)        64.71   20.59    2.94    2.94    2.94   94.12
Books (Satisfaction)      24.24   54.55   12.12    3.03    0.00   93.94
Scores (Importance)       91.18    0.00    2.94    0.00    2.94   97.06
Scores (Satisfaction)     18.75   34.38   25.00   15.63    0.00   93.75
Recordings (Importance)   79.41   14.71    0.00    0.00    2.94   97.06
Recordings (Satisfaction) 18.75   21.88   31.25   21.88    0.00   93.75

(The original spreadsheet also recorded the total number of responses and the average of all responses for each column.)
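The percentage breakdowns in the sample above can be computed directly from raw Likert responses. A minimal sketch in Python; the ratings list is invented for illustration, not the survey's actual raw data:

```python
# Sketch: computing the percentage of responses at each Likert rating,
# as the team did in Excel. The sample ratings below are invented.
def likert_percentages(responses, scale=(5, 4, 3, 2, 1)):
    """Return {rating: percentage of responses} for a list of 1-5 ratings."""
    total = len(responses)
    return {r: round(100 * responses.count(r) / total, 2) for r in scale}

sample = [5, 5, 5, 5, 5, 4, 4, 3, 2, 1]   # ten hypothetical ratings
pct = likert_percentages(sample)
# half the hypothetical respondents chose "5", so pct[5] is 50.0
```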
Representing the Data: Correlating Importance with Satisfaction Numerical data, including statistical data, can be represented in two ways: through tables, and through charts or graphs. Here is a table that summarizes the relative importance of material types as compared to users' satisfaction with that resource in the library: Music Library 1998 Customer Survey: Importance / Satisfaction with Information Resources
Resource               Importance   Satisfaction
Recordings                 4.7          3.6
Books                      4.2          3.8
Scores                     4.2          3.6
Electronic databases       3.8          3.5
Web sites                  3.7          3.6
Periodicals                3.6          3.6
Software                   3.4          3.3
Videotapes                 3.1          3.1
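One simple way to read this table is to compute the gap between average importance and average satisfaction for each resource: large gaps flag candidates for attention. A sketch using the averages from the table above (the gap-ranking step itself is our illustration, not a calculation the team reports performing):

```python
# Sketch: ranking resources by the gap between average importance and
# satisfaction, using the figures from the table above.
importance = {"Recordings": 4.7, "Books": 4.2, "Scores": 4.2,
              "Electronic databases": 3.8, "Web sites": 3.7,
              "Periodicals": 3.6, "Software": 3.4, "Videotapes": 3.1}
satisfaction = {"Recordings": 3.6, "Books": 3.8, "Scores": 3.6,
                "Electronic databases": 3.5, "Web sites": 3.6,
                "Periodicals": 3.6, "Software": 3.3, "Videotapes": 3.1}

gaps = sorted(((imp - satisfaction[name], name)
               for name, imp in importance.items()), reverse=True)
# Recordings tops the list: highly important, comparatively low satisfaction.
```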
Here is the same information represented through a bar chart:
How does the representation of the data affect your ability to understand it? Similar comparisons were made for services provided by the library and for facilities and equipment. These next two tables show the average responses for each service and type of equipment: Music Library 1998 Customer Survey: Importance / Satisfaction with Library Services
Service                 Importance   Satisfaction
Circulation                 4.2          3.9
Reserves                    4.0          3.7
Electronic assistance       3.8          3.6
Interlibrary loan           3.7          3.5
Music Library 1998 Customer Survey: Importance / Satisfaction with Equipment & Facilities
Equipment / Facility        Importance   Satisfaction
Listening & AV equipment        4.5          3.9
Computers                       4.5          3.3
Copiers                         4.4          3.5
Individual study areas          4.1          3.6
Group study areas               3.8          3.2
Printers                        3.8          3.2
More data analysis: Comparing different groups of users One question the project was designed to answer was whether or not different types of users rated the various material types, services, etc. differently. This chart compares average ratios for importance/satisfaction across user groups and demonstrates that all groups rated services as most important, followed by materials, then facilities and equipment. Note that Masters-level and faculty users tied in scoring services as their most important/least satisfied area, followed by doctoral users, then the bachelors group.

[Chart: Music Library 1998 Customer Survey - Comparison of Average Rates for Satisfaction/Importance Across User Groups, covering materials, services, and physical facilities & equipment]

In general, bachelors-level students demonstrated the smallest differences between importance and satisfaction with regard to materials, services, facilities and equipment.
Analyzing the qualitative data Participants were asked to describe what would most increase their overall level of satisfaction with the Music Library. The responses were recorded during the 18 personal interviews, and participants in the written and e-mail surveys also responded. The project team reviewed the 348 responses and determined that they could be grouped into eight categories, or themes:
(1) Better quantity and variety of recordings (n=49; 14.08%)
(2) More SABIO terminals (n=45; 12.93%)
(3) Improved listening and audiovisual facilities and equipment (n=33; 9.50%)
(4) Less e-mail use on SABIO terminals (n=31; 8.90%)
(5) More user instruction / SABIO training (n=28; 8.04%)
(6) Less noise (n=27; 7.76%)
(7) Better quantity and variety of scores (n=24; 6.90%)
(8) Better trained staff / more staff with music background (n=19; 5.45%)
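The percentage attached to each theme is simply that theme's count over the 348 total responses. A quick check in Python (two themes shown; rounding in the report may differ slightly for some of the other themes):

```python
# Sketch: recomputing theme percentages from the counts reported above,
# over the 348 total responses the project team reviewed.
total = 348
theme_counts = {
    "Better quantity and variety of recordings": 49,
    "More SABIO terminals": 45,
}
theme_pcts = {t: round(100 * n / total, 2) for t, n in theme_counts.items()}
# 49 of 348 responses is 14.08%, matching the figure reported for recordings
```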
Summary of main findings The project team summarized their main findings as follows. Customers identified recordings as being the most important type of material. Many customers also indicated that they need both a score and a recording to enable them to study music material. Waiting for computer stations was perceived as a problem. Customers also requested updated listening and audiovisual equipment. 1) Since recordings emerged as the most important type of material, library-wide efforts to improve the quantity, variety and accessibility of recordings should be undertaken. The retirement of the copy cataloger for compact discs in February of 1999 has left the system without any cataloging of recordings (copy or original) at the present time, so plans to investigate outsourcing companies need to begin. 2) Since many respondents indicated the need to have both a score and a recording for a particular composition, a condition not currently offered by the Music Library, approval plans that offer this service should be investigated. Currently, Theodore Front is the only vendor whose approval plans offer some ability to correlate the purchase of both a score and a recording for the same composition. In addition, Front's plans to begin offering "copy cataloging" passthroughs to OCLC would speed the processing of new scores and recordings. 3) Obtaining more SABIO terminals (the Music Library currently has only 5 stations) and designating one terminal for e-mail use would reduce some of the waiting and e-mail overload noted by many respondents. Also, obtaining updated listening and audiovisual equipment (e.g. a modern television to replace the current, older model, a DVD player, etc.) would significantly improve the listening/viewing area. 
4) Once the Music Librarian's Web site devoted to locating scores, musical recordings and musical videos in SABIO is completed, it will be used in training individuals from the Materials Access Team (MAT) who staff the Music Library's Circulation Desk. Also, this site will be publicized to the School of Music and used during customer instruction sessions offered by the Music Librarian. Hopefully, the site will lead to better trained staff and users, two themes discovered in customer responses to the improvement question.
Study Questions for this Case Study Take a few minutes to note your answers to the following questions: 1. Was the hypothesis proposed an appropriate and valuable one for this study?
2. What demographic information was each participant asked to provide? What other information about the targeted customer group might have been helpful in this survey?
3. Does the survey as designed and implemented address the research questions?
4. Which methodologies are most appropriate for this type of needs assessment?
CASE 2: The University of Arizona's Collection Maintenance and Circulation (CMC) Needs Assessment Project Designed by: Members of the CMC workteam
Special Points in this Case Study 1. The project team's early efforts to integrate resulting projects into the larger team's goals for the coming year 2. The piloting and marketing of the assessment to customers 3. The use of quantitative and qualitative data Study Questions for this Case Study 1. What actions did the project team take to integrate their needs assessment into their larger team's overall goals and priorities? 2. What difficulties did the team encounter in trying to get data from their customers? How could these problems have been avoided? 3. What difficulties did the team encounter in trying to analyze their data? How could these problems have been avoided?
Part I: Context A. Background Collection Maintenance and Circulation (CMC) is a work team of the larger Materials Access Team (MAT). Members work at all Main and branch library locations. CMC is charged with accurately shelving library materials to ensure their accessibility to users, and with providing accurate, friendly and efficient circulation processes. Most members also work at an Information Desk service site. The team recognized early on the importance of having customer input when planning services. MAT charged a new team -- the CMC Needs Assessment Project Team -- with the task of finding out what customers needed. The input would be used in MAT's strategic planning, and would be used to answer such questions as: Which services are most important to our customers? Which services are in need of improvement, from our customers' perspective? Which services need attention first -- which projects should we tackle this year? B. Purpose of the project Four members of MAT's CMC work team were chosen to serve as the project team. They were charged with: (1) Performing needs assessment on CMC's customers (2) Determining the feasibility of performing needs assessment on potential customers (3) Acting upon the findings and changing services as necessary
Parameters included addressing CMC services only and the expectation that the team's work and results would be recorded and handed off to a cross-functional team within MAT. The project team was also asked to look at how to integrate needs assessment into MAT's ongoing work. A potential difficulty was that the team lacked previous experience in needs assessment. See the complete charge to the project team, which they developed themselves.
Part II: Methodology For Gathering Data A. Choosing the instrument From the beginning, the team realized that reaching the broadest and largest customer base possible would give them the best information. Interviews and focus groups are very useful for gathering succinct and highly relevant information, but reach fewer people relative to other instruments. The team thus decided to use a survey as their main instrument, followed by focus groups to get more in-depth information. They chose to use a web survey in conjunction with a paper survey in order to capture the different ways that individuals like to respond. Staff would not need to be present for the surveys to be completed. The web and paper versions would need to be identical in content to facilitate inputting and analysis of the data. Both needed to be easy to read, short enough to be inviting, and focused on a few critical services. B. Planning the survey content To get ideas on how to design their assessment instrument, the project team contacted other teams in the Library that had completed formal needs assessment projects. The project team asked these questions:
A. Have you performed needs assessment on potential customers?
B. If yes, how did you do it?
C. What would you do differently?
D. How long did it take?
E. What was the sample size?
F. What was the instrument?
G. What were the costs?
H. How did you determine who potential customers would be?
After gathering and sharing this information, the team developed these questions to jumpstart the design of their own instrument: Do we target particular customers? Specific libraries or service sites? Which aspects should we focus on: collection maintenance, circulation, or both? What incentive measures will get good results? Should our results be weighted in the analysis (for example, should faculty responses be weighted more than student responses)? At this point in their deliberations, the team considered focusing their questions to customers using this framework: How important is [X service] to you? Are you satisfied with [X service]? What changes would you make to [X service]?
C. Designing the survey instrument The team began their design phase by listing out all the services their team currently provides, along with service-related issues:
▶ Loan periods
▶ Recall policy
▶ Fines policy
▶ Application fees
▶ Number of book limits
▶ ILL policies [community users]
▶ Circulation Desk hours of service
▶ Self-check
▶ Patron blocks
▶ CatCard policy
▶ Customer education
▶ Renewal policy
▶ Delivery return locations
▶ Hold policies
▶ Ability to locate material
▶ Online circulation functions - self-help
▶ General customer viewpoint
This list was long, and would have made the survey too long for customers. So after further discussion, the team prioritized the services they would ask about, based on which services they could reasonably expect to change:
◩ The standard amount of time given to borrow material
◩ The recall policy
◩ The policy on monetary fines
◩ Circulation desk hours
◩ Required use of CatCard
◩ Renewal policy
◩ Assistance in locating materials
◩ Where material may be returned
◩ Library signage to assist in finding materials

Note which services were dropped for purposes of the project.
Part II, Methodology For Gathering Data, Cont. D. Finalizing the design The team had identified the list of services it most needed input on, had decided to use a survey instrument, and had identified criteria for the survey: easy to read, short, and focused on critical services. The next step was to shape how the choices would be presented to customers. The team wanted to compare the importance of particular services to customers with how satisfied customers were with those services. For example, if users ranked a service as very important to them, yet their satisfaction level was very low, the service would need attention and improvement. Importance and satisfaction are both attitudes, the extremes of which can each be depicted on a continuum. Such a continuum is commonly represented in surveys by a Likert scale. Here are examples from the survey of how the questions used a Likert scale to gather information about each respondent's attitudes: How important are the Circulation Desk hours? (1 = Very unimportant, 6 = Very important)
How satisfied are you with the hours of the circulation desk? (1 = Very unsatisfied, 6 = Very satisfied)
If you could change the hours of the circulation desk, what changes would you make?
These are also examples of forced-choice questions, meaning the respondents must choose from a predefined set of possible answers. In case customers wanted to comment further, the team decided to also add a place for respondents to offer additional suggestions or input after each question. Summary: The final form of the questions would use this forced-choice format: each service or service issue would have one question related to the issue's importance to the respondent, one question about how satisfied the respondent was, and a place for additional comments. The team would also include information about why the survey was being given and whom to contact with questions about it. See the complete survey following this case study.
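This importance/satisfaction/comments triple can be modeled as a small data structure, which is handy when tabulating returned surveys. A sketch with illustrative field names; the team built paper and web forms, not code:

```python
# Sketch: one way to model the survey's forced-choice question format.
# Field names are illustrative assumptions.
from dataclasses import dataclass
from typing import Optional

@dataclass
class ServiceQuestion:
    service: str                        # e.g. "Circulation Desk hours"
    importance: Optional[int] = None    # 1 = Very unimportant .. 6 = Very important
    satisfaction: Optional[int] = None  # 1 = Very unsatisfied .. 6 = Very satisfied
    comments: str = ""                  # free-text suggestions after each question

q = ServiceQuestion("Circulation Desk hours")
q.importance, q.satisfaction = 5, 3
q.comments = "Open earlier on weekends."
```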
Part II, Methodology For Gathering Data, Cont. E. Marketing and distributing the survey To test the success of their survey design before distributing it to the public, the team piloted the web and paper versions with two different groups of their colleagues. Piloting the survey in this way ensures that the wording of the questions is clear and unambiguous, and it allows testing of the data analysis, ensuring that the data gathered can be analyzed and will be useful. Concurrently with finalizing the design, the team also developed the following strategies:
· Placing an advertisement in the Wildcat: the ad cost approximately $300 for 15 days, but did not generate much response
· Providing incentives for respondents: based on previous teams' experiences, the team chose a monetary incentive to help increase return rates and offered respondents a chance to win one of twenty $25.00 gift certificates. Gift certificates were chosen because they were easy to obtain and relatively inexpensive, yet still of a sufficient amount to be attractive to customers. Respondents filling in their name and contact information on the survey would automatically be entered and would be notified by a certain date
· Posting easily noticed signage for the collection boxes where surveys could be dropped off
The survey was available for a three-week period near the end of the Spring semester. Paper surveys were distributed at the circulation desk at the end of each transaction. For the paper survey, collection boxes were left near distribution points (i.e., the circulation desk) and were cleared out nightly. The web survey was advertised in the Wildcat ad and by placing a link to it on the front page of the Library's portal. The team established an email account just for returned surveys so they would receive prompt attention and not get "lost" in an individual's mailbox. Early on, the team had made a decision not to "push" the surveys onto people. This decision resulted in a low return rate, so they then decided to try approaching customers and asking them to fill one out. To facilitate this enhanced marketing approach, they developed a short script for use by their CMC and MAT team members so the approach would be consistent and comprehensive. The response rate improved, and over 100 responses were received during the three weeks. This was still less than they had hoped for, since Circulation statistics for the period covered by the survey showed 7,000 circulations, but these 100-plus random responses nonetheless generated useful information.
Part III. Analyzing The Quantitative Data A. Getting started Qualitative and Quantitative Data: A problem emerged that made this stage of the project difficult: team members were unsure how to mesh the qualitative input received with the quantitative data. All in all, 108 responses were obtained during the three-week survey period. The team was concerned about this and also about how to manage the data they did collect. After the team entered the comments into an Excel spreadsheet so that they could be printed, a consultant was called in to give suggestions on how to start the analysis. Quantitative Data: For the numeric responses (the quantitative data), the consultant suggested creating a scatter chart of aggregated data, grouped by user type. Aggregated data is a set of individual responses that have been grouped together. So rather than analyzing individual responses to the survey, the data was aggregated by user group, for example, undergraduate students, graduate students, faculty, etc. This allows useful comparisons, averages, and other summary calculations to be made. A scatter chart or scatter diagram shows relationships between variables; in this case, the variables are Satisfaction and Importance. In this project, a scatter diagram was used to graphically depict the data gathered from each question. B. Scatter diagrams
Survey respondents were offered a choice from 1 to 6, where 1 is low and 6 is high, to rate how satisfied and how important a service was to them. The results were compiled and then graphed on a scatter diagram where the Y axis represented satisfaction and the X axis signified importance. In this example diagram, four points have been plotted to illustrate how this method was used by the CMC team. The graph essentially has 4 quadrants, each one representing a different correlation between satisfaction and importance.
A) We can see that one respondent indicated that they felt this service had an importance rating of 6 and their satisfaction was at a 6. That response may be interpreted to mean that the customer feels this service is extremely valuable to them, and they are very satisfied with it. When the CMC team encountered responses like this, it was interpreted to mean that no remedial action regarding the service was needed. B) Another respondent rated importance at 1 and their satisfaction at 2. This might be interpreted to say that while this service was not viewed as vital to the customer, they were not getting what they wanted out of it. The team would have selected this service as one to review as time allowed, putting forth their efforts elsewhere first. C) A third respondent indicated an importance level of 5 and a satisfaction level of 1. This suggests that this particular service is of great value to the customer, but they are not at all pleased with how it is being conducted. Responses in this quadrant were focused on first by the CMC team, who recognized that working on these areas would provide the biggest impact for their efforts. D) The final respondent in this example rated the importance at 2 and their satisfaction at 5. This depicts a customer who is not particularly interested in this service, but is nonetheless content with it. This type of response would be given a low priority for the CMC team to work on. In summary, one can use a scatter diagram to find out what the majority of customers feel about the relative satisfaction and importance of any given service. Should the majority feel the service is to their satisfaction, action may not be necessary. If, however, a significant number of the survey base feel the service is important but not being performed to their satisfaction, one can focus in on that problem. C. 
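The quadrant reading described above can be expressed as a small classifier. The sketch below assumes 3.5 as the midpoint of the 1-6 scale; the team's actual cutoff is not stated in the case study.

```python
# Sketch: classifying an (importance, satisfaction) response into the four
# quadrants described above. The 3.5 midpoint is an assumption.
def quadrant(importance, satisfaction, midpoint=3.5):
    if importance > midpoint:
        return "act first" if satisfaction < midpoint else "no action needed"
    return "review as time allows" if satisfaction < midpoint else "low priority"

# The four example responses (A-D) from the diagram:
print(quadrant(6, 6))  # A: no action needed
print(quadrant(1, 2))  # B: review as time allows
print(quadrant(5, 1))  # C: act first
print(quadrant(2, 5))  # D: low priority
```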
Summarizing the aggregate data To summarize the aggregate data, the team created these two charts, Importance to Customers and Satisfaction of Customers. Unfortunately, by splitting the two variables in this way and looking at them independently, the relationship between them is lost:
[Chart: Importance to Customers (6 = Very important, 1 = Very unimportant), showing average ratings for each service, e.g. Circ desk hours, Stacks assistance, Monetary fines]

[Chart: Satisfaction of Customers (1-6 scale), showing average satisfaction ratings for the same services]
D. Correlating importance with satisfaction Another way to look at the data would have been to draw up a table showing the Importance and Satisfaction results for each question. In taking the data just from the lower right quadrant of the scatter diagram for each question (LOW satisfaction and HIGH importance), such a table would have looked like this:
LOW SATISFACTION / HIGH IMPORTANCE
#   Question            % Low satis. AND Hi importance
7   Stacks assistance           20%
1   Borrowing time              18%
9   Signage                     15%
4   Circ desk hours             12%
2   Recall policy               10%
8   Return location              9%
6   Renewal policy               8%
3   Monetary fines               8%
5   CatCard                      2%
This quantitative data, while preliminary, does indicate that further investigation is needed for three or four areas:
· Assistance in Stacks
· Borrowing time
· Signage
· Circulation Desk hours
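The percentages in the table above come from counting, for each question, the share of responses landing in that lower right quadrant. A minimal sketch, using invented response pairs rather than the survey's actual data:

```python
# Sketch: computing, for one question, the share of respondents in the
# LOW-satisfaction / HIGH-importance quadrant. The response pairs are invented;
# the survey's real data produced the percentages in the table above.
def low_sat_high_imp_share(pairs, midpoint=3.5):
    """pairs: (importance, satisfaction) tuples on the 1-6 scale."""
    hits = sum(1 for imp, sat in pairs if imp > midpoint and sat < midpoint)
    return round(100 * hits / len(pairs))

responses = [(6, 1), (5, 2), (6, 5), (2, 4), (4, 3)]
share = low_sat_high_imp_share(responses)   # 3 of 5 pairs qualify -> 60
```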
This kind of quantitative analysis can be verified through additional qualitative data, such as that generated through the comments written on the surveys by customers, or through focus groups. In the next section, we'll take a look at the qualitative data obtained by the project team.
Part IV. Analyzing The Qualitative Data A. The form of the data The survey generated scores of comments from customers. Here are some examples:
◆ "I would prefer that renewal not be solely via email as a book I checked out was recalled during a multi-week computer problem...."
◆ "I would make the policy recall policy known more -- promote it."
◆ "Why don't you put more automatic checkout and checkin machines like the ones at Main Library throughout other libraries? Is there a book drop which can be accessed from outside of buildings which can be used when the libraries are closed?"
◆ "Need more information on signs, sometimes it's confusing."
◆ "Another type of ID besides the CatCard should be sufficient to check out books and other things."
◆ "If you want everyone to have equal access to a fair playing field regardless of race, sex or economic background, you need to abolish your current lending policies. I'm surprised you haven't been sued yet."
One of the most daunting tasks for a needs assessment team is to take these seemingly random and contradictory comments and make sense of them. This task can be especially difficult when the comments seem unfair or even malicious. In this next part of the case study, we'll look at how the team worked with their qualitative data.
B. Organizing the data

The comments received were retyped into a document organized by the question categories. Here are some actual comments from the returned surveys, preceded by the Importance and Satisfaction ratings of the customers:

Question 2. Recall policy
1) I=6, S=4: 3 days instead of a week.
2) I=2, S=2: I would prefer it not be a solely via email as a book I checked out was recalled during a multi-week computer problem (and also avoid systems in the school from requiring a u.arizona email address, ie. getting access to rooms/labs though I guess that's not a library issue).
3) I=4, S=5: I'd get duplicate books so that the book is not snatched from the person who needs it most.
4) I=4, S=6: I'm an Honors student and I've had books recalled from me before. It works well the way it is now.
Etc.

Question 7. Assistance in stacks
1) I=6, S=1: Not many people at the desk help to locate books, they are just tooo busy.
2) I=2, S=6: I have noticed that some of the Government Documents call numbers are wrong on Sabio.
3) I=1, S=1: Most library employees seem to be far too busy with their left-wing pamphlet reading to help out.
4) I=6, S=3: Hire more help in evenings.
5) I=6, S=5: Make assistance more available.
6) I=2, S=4: None - easy to find.
Etc.

C. Analysis - condensing the qualitative data

The team read and reread the comments until they could discern certain patterns or themes. This realization allowed them to condense the suggestions even further to make them more manageable, without losing important information. Here are excerpts from the spreadsheet they used to collate the suggestions into more workable and understandable lists:

Question 2 (Recall)
· Decrease time
· Increase penalty

Question 3 (Fines)
· Offer prior notice
· Education
· Eliminate fines
· Increase penalty
· Decrease penalty

Question 4 (Circ Hours)
· Increase Music/Arch.
· Increase branches during finals
· Offer barcloner for self-checkout
· Increase (general)

Question 6 (Renewal)
· Online
· No renewals
· Permanent check-out
· Allow renewal of special items
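The condensing step amounts to assigning each comment a theme label and tallying the labels. The sketch below illustrates that idea only; the comment texts and labels are invented for demonstration and are not the team's data.

```python
# Sketch of theme condensing: each comment is paired with a
# team-assigned theme label, and tallying the labels produces
# short, workable lists like those above.
# (Comments and labels here are illustrative, not actual survey data.)
from collections import Counter

coded_comments = [
    ("3 days instead of a week.",        "Decrease time"),
    ("Recall should be faster.",         "Decrease time"),
    ("Fine people who ignore recalls.",  "Increase penalty"),
    ("Let me renew my books online.",    "Online"),
]

theme_counts = Counter(theme for _, theme in coded_comments)
for theme, n in theme_counts.most_common():
    print(f"{theme}: {n}")
# -> Decrease time: 2
#    Increase penalty: 1
#    Online: 1
```

A spreadsheet works just as well for this, which is what the team actually used; the point is that every raw comment maps to exactly one theme, so no suggestion is silently dropped.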
Each suggestion has implications -- will the suggestion be easy to do? Will implementing it affect other services? Will a change affect many customers, or only a few, or somewhere in between? Will it be expensive -- will the cost-benefit ratio be acceptable? These questions required one more step in analysis -- the use of Ease/Impact charts.
D. Analysis Using Ease/Impact charts

Each suggestion was rephrased as a potential action item. The team also brainstormed additional actions that might solve the problems noticed by customers. The team chose the Ease/Impact Chart to help them make decisions about which actions to implement. (See the discussion of Ease/Impact charts earlier in this tutorial.) A blank Ease/Impact chart looks like this:

[Figure: blank Ease/Impact chart]
The team evaluated all potential actions for their Ease and for their Impact. For example, if an action would positively impact a large number of constituents, its impact would be considered high. If a suggestion would be fairly easy to implement, not cost much, and not negatively impact other processes or services, its Ease would be high. Determining the Ease/Impact for actions requires both detailed knowledge of the work and some reasoned judgement. After developing an Ease/Impact chart for each question, the team summarized the Ease and Impact of possible actions in a list form (see below).
Ease/Impact list for Question 6 (Renewals):

Key: 1 = Low Ease or Impact; 6 = High Ease or Impact

Question 6 (Renewal)

Education
EASE: 5   IMPACT: 6

EASE: 5   IMPACT: 1

EASE: 5   IMPACT: 1

Allow renewal of special items
EASE: 5   IMPACT: 1
This list suggests that educating customers about current Renewal policies will both be easy ("5") and have a significant impact ("6"). Allowing renewal of special items, in contrast, is also relatively easy ("5") but will have a relatively low impact ("1").
E. Study questions for data analysis Take a few minutes to think about and note down your responses to these study questions: (1) What difficulties did the team encounter in trying to get data from their customers? ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ (2) How could these problems have been avoided? ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ (3) What difficulties did the team encounter in trying to analyze their data? How could these problems have been avoided? ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________ ________________________________________________________________________________
Part V. Implementing Changes

A. Decision-making

In the previous section, the team completed charting all possible actions onto an Ease/Impact chart. The next step was choosing which actions to implement. The team used the following criteria to drive their decision-making:

◆ Any issue that received a high rating on both impact and ease was selected as a project or research opportunity.
◆ Any issue that received a low rating on both impact and ease was de-selected.
◆ All other areas were considered on a case-by-case basis.
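The three criteria can be expressed as a simple triage rule. This sketch assumes the tutorial's 1-6 scale; the cut-off between "low" and "high" (4 here) is an assumption, since the tutorial does not state one.

```python
# Sketch of the team's selection criteria.
# Scale 1-6 per the tutorial's key; the low/high cut-off of 4
# is an assumed value, not stated in the case study.

def triage(ease, impact, high=4):
    """Classify one potential action by the three criteria above."""
    if ease >= high and impact >= high:
        return "select"        # project or research opportunity
    if ease < high and impact < high:
        return "de-select"
    return "case-by-case"

# Ratings from the Question 6 (Renewal) Ease/Impact list:
print(triage(5, 6))  # Education -> select
print(triage(5, 1))  # Allow renewal of special items -> case-by-case
```

Note that with these ratings, "Education" is automatically selected, while the easy-but-low-impact renewal items fall to case-by-case review, which matches the decisions described below.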
Actions might fall into one of three categories: Projects, for actions that would be implemented in the near future; Research, for actions that needed additional information before proceeding; and Hand-offs, for actions that were outside the team's scope but nonetheless needed attention.

Here are examples of the Projects they decided to pursue:

Project 1 - Signage
Project Statement: Publicize the idea that customers should not reshelve books, to minimize the number of mis-shelved items in the stacks.
Timeline: 3 weeks
Number of people: 1

Project 2 - Stack Assistance
Project Statement: Create a checklist for Shelving Coordinator tasks and communication between the coordinator shifts to ensure all shelving activities are performed.
Timeline: 3 weeks
Number of people: 1

Project 3 - Education
Project Statement: Educate customers regarding recalls, fines, renewals, stack assistance, return locations, self checkout, security and reshelving to raise the level of awareness about our policies.
Timeline: 3-4 months
Number of people: 3-5

Here are a couple of actions that demanded more research before deciding whether or not to implement them:

Research 1 - CatCard
Research Statement: Investigate the feasibility/legality of using any ID for circulation service (i.e., not just the CatCard).
Timeline: 1 month
Number of people: 1

Research 2 - Circulation Hours
Research Statement: Investigate increasing the Fine Arts Library's hours during finals.
Timeline: 2 weeks
Number of people: 1

An example of an issue they needed to hand off was the renewal of compact discs; this issue and the relevant comments were forwarded to the music librarian for further investigation and recommendations.
Conclusion

In this first attempt at assessing their customers' needs, the CMC team learned some things you might find useful.
Learning 1: Time

The project team sorely underestimated the amount of time their needs assessment would take. The three main phases of their work (design of the survey tool, distribution, and data analysis) all took at least twice as long as they had first thought. While each needs assessment endeavor will be different, if you have never done this sort of work before, make sure you leave enough flexibility in your planning to adequately address all the issues that will arise.
Learning 2: Method of Distribution

Not wanting to alienate their customers by forcing a survey on them, the project team decided to have surveys available but not hand one to each customer leaving the circulation desk. This resulted in an extremely low distribution rate and an even lower return rate. They were forced to alter their methodology mid-survey, offering a survey to each customer, which dramatically increased the number of surveys returned. They also found that if they explained why they were offering the survey (to help serve customers better), it was rare for a customer to be aggravated by the intrusion on their time.

The web distribution of the survey was an overall failure. The return rate from the web was so small, compared with the amount of time it took to create the online survey, that it was not worth the effort. A possible way to overcome this would be to take a random sample of your customers' e-mail addresses and send out an explanation and a link to the online form. The team, again not wanting to intrude too much on their customers, decided not to do this.

Learning 3: Timing

Because the team did not account for the amount of time it would take to develop the survey, they were not able to distribute it when they had wished. They were forced to hand out surveys during a period of relatively low use in the Library, which negatively affected their return rates. A better understanding of Learning 1 would have prevented this, but there is a side learning here about watching your customer activity over time. If you want to reach as many of your customers as possible, consider doing your needs assessment during peak activity. Just remember to take into account the increased workload on your staff during those peak times.
Learning 4: Scope

At the end of the assessment, the project team had identified roughly 20 projects that could be started, each one addressing a distinctly different issue raised by their customers. In hindsight, the project team made an error in bringing all of those projects back to the larger team with the expectation that they would be worked on. As one might expect, the larger team was overwhelmed by the amount of work to be done. None of the projects was outlandish, but their sheer number ended up working against the project team. A way to combat this would be to select one or two projects that address high-importance / low-satisfaction areas and have high visibility, and move forward with those.
This concludes the CMC Case Study. Please continue on to the next section to begin developing your own Needs Assessment Plan.
MAKE YOUR OWN PLAN
Remember that needs assessment consists of five basic steps:
1. Identifying the customer or customer group.
2. Ascertaining what the customer wants (planning for data collection).
3. Collecting the data.
4. Analyzing the data.
5. Implementing or acting on the findings.
In this section of the tutorial, we will concentrate on steps 1-3:
1. Identifying the customer or customer group.
2. Planning for data collection.
3. Choosing a method for collecting data.

Begin: First, think about the PURPOSE for conducting needs assessment. Capture that purpose here:
In each of the following parts, write your answers to the questions in the space provided.
Step 1 -- IDENTIFYING YOUR CUSTOMERS OR CUSTOMER GROUP a) Who are the people you serve -- who are your key customers or users?
b) Should you break down the customer base? Identify any subsets of this customer base and how many total customers there are in each group:
c) Which of these customers benefit most from the services you offer?
d) Are there prospective customers (i.e., those who currently are not using your services)? Who are they and how would you find them? Jot your ideas down here:
Step 2 -- PLANNING FOR DATA COLLECTION a) Which services do your customers use? Place a check mark next to those that will be included in your needs assessment project:
☐ ____________________
☐ ____________________
☐ ____________________
☐ ____________________
☐ ____________________
☐ ____________________

b) How do your customers use your services? Briefly describe how customers find out about and access your services:
c) How do your customers define success in terms of your services? What do your customers value?
d) What changes do your customers see coming in their environment? Jot down your ideas here:
Step 3 -- COLLECTING DATA TO IDENTIFY CUSTOMERS' NEEDS

There are many different ways to gather needs assessment data. In a previous section of the tutorial, you learned of the following methods:
· Surveys, including automated surveys
· Interviews
· Focus groups
· On-site observation
If you need a review of any of these methods, please consult the tutorial now. a) What is (are) the best instrument(s) to reach your customer base and why?
b) What are your first thoughts on what you want to find out from your customers? Use these thoughts when developing your data collection instrument or technique.
c) Who on your team can analyze your data? Who can you use as a resource?
d) Which stakeholders will you need to consult as you make decisions on your data?
e) How will you announce your decisions to your customers?
If you need more help, consult the Resources section of the tutorial.
Part VI. RESOURCES

Articles

Altman, Ellen and Peter Hernon. "Service Quality and Customer Satisfaction Do Matter." American Libraries (August 1998), p. 53-4.
Discusses customer satisfaction as an investment, presenting four steps to assess quality service to retain customers in today's competitive environment, and the relationship between the library and its clientele. Addresses long-term examinations of these expectations and the need to create a reputation that will be known to the library's community and funders. [UA call number: Z673 A5A6]

Andaleeb, Syed Saad and Patience L. Simmonds. "Explaining User Satisfaction with Academic Libraries: Strategic Implications." College and Research Libraries, 59:2 (March 1998), p. 156-167.
This study, conducted with three academic libraries in Pennsylvania, explores the shortcomings of SERVQUAL and proposes another model. Suggests that academic libraries seeking to improve user satisfaction concentrate on two major elements: resources and demeanor. [UA call number: Z671 C6]

Bender, Laura, et al. "A Science-Engineering Library's Needs Assessment Survey: Method and Learnings at the University of Arizona Library." Science and Technology Libraries, 17:1 (1997), p. 19-34.
A summary of one of the early needs assessment efforts by the University of Arizona Library. The project's intent was to develop a survey that would return statistically valid data about science and engineering users' information-seeking behavior and information needs. The article describes the methodology, the process of designing the survey, the role of qualitative data, the survey results, and the follow-up activities in response to the results. [UA call number: Z675 T337]

Brown, James Dean. "Do English and ESL Faculties Rate Writing Samples Differently?" TESOL Quarterly, 25:4 (Winter 1991), p. 587-602.
This article concentrates on two different perspectives on grading for students of ESL100 and ENG100. The central purpose of the study was to investigate the relative writing abilities of native speakers and ESL students at the end of their different first-year composition courses. The criteria, method, materials, process, analysis, and the reliability of the results are discussed. [UA call number: PE 1128 T2]

Clougherty, Leo, et al. "The University of Iowa Libraries' Undergraduate User Needs Assessment." College and Research Libraries, 59:6 (November 1998), p. 572-584.
The University of Iowa Library assessed the use of library facilities and services by undergraduates, and the satisfaction level of those users. Survey results are presented and analyzed, and the article concludes with recommendations for action. [UA call number: Z671 C6]

Ferguson, Chris D. "The Shape of Services to Come: Values-Based Reference Service for the Largely Digital Library." College & Research Libraries, v. 58 (May 1997), p. 252-265.
This article reviews values focused on customer service, including reference services, increased attention to evaluation, the effective use of a variety of staff specialties and levels of expertise, integrating technologies, holistic computing environments, and service. Plenty of food for thought when addressing customer needs. [UA call number: Z671 C6]

Kane, Laura. "Access vs. Ownership: Do We Have to Make a Choice?" College & Research Libraries, v. 58 (January 1997), p. 59-67.
This article addresses "access and ownership." Presents some charts regarding the use of serials in 1992-1993, discusses the conflict between ownership and access, and suggests a compromise to end the controversy by building a bridge between the traditional library and the library of the future. [UA call number: Z671 C6]

Kleiner, Janellyn-Pickering. "Libraries 2000: Transforming Libraries Using Document Delivery, Needs Assessment and Networked Resources." College & Research Libraries, v. 58 (July 1997), p. 355-374.
This article describes the periodicals assessment done at Louisiana State University and Agricultural and Mechanical College (LSU) to determine document delivery costs and the periodicals subscribed to by LSU. Contains charts of statistical data for two pilot needs assessments. [UA call number: Z671 C6]

Lawton, Bethany. "Library Instruction Needs Assessment: Designing Survey Instruments." Research Strategies, v. 7 (Summer 1989), p. 119-128.
Presents various survey instruments and provides guidelines for the directions that library instruction, materials acquisition, and activities will take. The questionnaires were directed to faculty, staff, and students. The population-specific surveys enhanced the usefulness of the survey results. [UA call number: Z675 U5 45]

LeClercq, Angie. "The Academic Library/High School Library Connection: Needs Assessment and Proposed Model." The Journal of Academic Librarianship, v. 12 (March 1986), p. 12-18.
This article presents some tables on the use high school students make of public and academic libraries. Funding was through a grant to the University of Tennessee, Knoxville library. Methodology, research assumptions, research findings, and the development of a model for networking programs are briefly described. [UA call number: Z671 J57]

Richmond, Elizabeth B. "Alternative User Survey and Group Process Methods: Nominal Group Technique Applied to U.S. Depository Libraries." Journal of Government Information, v. 23 (March-April 1996), p. 137-149. [UA call number: Z7164 G7 G613]

Stamatoplos, Anthony and Robert Mackoy. "Effects of Library Instruction on University Students' Satisfaction with the Library: A Longitudinal Study." College and Research Libraries, (July 1998), p. 323-34.
Discusses how students' satisfaction depends on expectations of services. This article studies the changes in student expectations and how they relate to information accessibility, staff helpfulness, computer usefulness and ease of use, and skill level for using libraries. The study suggests that libraries may be well served by measuring patron satisfaction and learning what variables drive satisfaction at particular libraries. [UA call number: Z671 C6]

Westbrook, Lynn. "Information Access Issues for Interdisciplinary Scholars: Results of a Delphi Study on Women's Studies Research." Journal of Academic Librarianship, v. 23 (May 1997), p. 211-216.
Reports on a Delphi study developing consensus about the information needs of members of the Women's Studies faculty at the University of Michigan. Discusses the implications of the findings for future research in collection development, instruction, and reference. Also presents the consensus on top-priority and lowest-priority issues. Needs assessment and user needs are the main subjects. [UA call number: Z671 J57]
Books

Alreck, Pamela L. and Robert B. Settle. The Survey Research Handbook. Homewood, Ill.: Irwin, 1985. [A "must read" before beginning any survey. UA call number: HF 5415.2 A33 1985]

Berg, Bruce L. Qualitative Research Methods for the Social Sciences, 3rd ed. Boston: Allyn & Bacon, 1998. [UA call number: H61 B52 1998]

Greenbaum, Thomas L. The Handbook for Focus Group Research. Thousand Oaks, Calif.: Sage, 1998. [UA call number: HF5415.2 G695 1998]

Lawton, Robin L. Creating a Customer-Centered Culture: Leadership in Quality, Innovation and Speed. Milwaukee, Wisc.: ASQC Press, 1993. [UA call number: HF5415.5 L39 1993]

Strauss, Anselm and Juliet Corbin. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory, 2nd ed. Thousand Oaks, Calif.: Sage, 1998. [UA call number: HA29 S823 1998]
On behalf of the ISNAG and NADM teams, we wish you the best of luck in your needs assessment endeavors.