As I mentioned in my last post, we’re doing a design refresh of our library website, with the goal of making it “beautiful.” As such, we’re not touching much of the organization. But of course we have to pay attention to not just how the information is categorized but also where it appears on the page. We learned that a few years back when we tried adding a “Spotlight” feature near our Library Hours (tl;dr: people stopped being able to see the Hours when other content shared the space). So we are firm believers that user testing and iterative design are vital in making sure we don’t make parts of our site invisible by moving elements around.
Based on the results of our user research earlier in the fall, we came up with a design drawn from the sites our users liked most that also worked within our current site structure. The layout was essentially the same, with three major changes:
We pulled “Quick Links” out of the menu and put it in a box on the front page
Hours moved from a box on the side to a banner under the search box
Our Help and Chat button also moved to this banner
We wanted to do user testing to make sure that users could:
find today’s hours
get to the full set of hours
figure out how to access help or chat.
We also asked them if there was anything they hated about the draft design, just to flag anything that could cause problems but that we weren’t specifically asking about.
Since we were doing this testing early in the process, we didn’t have a live site to show. Our Web Developer, the fabulous Kevin Bowrin, built the mockup in Drupal since he’s more comfortable in Drupal than in Photoshop, but it wasn’t on a public server. So we used a printed screenshot for this round of testing.
The first version of the design had a grey banner and small text, and it was clear after talking to a few users that visibility was a problem. We only talked to 4 people, but only 2 of them saw the Hours, and they were really squinting to make it out. Finding when the library is open should be really, really easy. We decided to increase the text size and remove the grey background.
This time, even fewer people saw the hours: 1 out of 6. Since people didn’t see today’s hours, we couldn’t even get to the part where we tested whether they knew how to access the full set of hours. We decided to see if adding an “All Hours →” link would help; perhaps by echoing the convention of the “View More →” links in other parts of the page, it would be clearer that this section was part of the content.
Again, quite quickly we saw that this section remained invisible. Only 1 person in 5 saw it. One user noticed it later on and said that he’d thought that part of the website was just a heading so he ignored it. Clearly, something was making people’s eyes just skip over this part of the website. We needed another approach.
Kevin and I talked about a few options. We decided to try making the section more visible by having Library Hours, Help and Chat, and Quick Links all there. Kevin tweeted at me after I’d left for the day: “Just dropped the latest iteration on your desk. I kinda hate it, but we’ll see what the patrons have to say!” I had a look the next morning. I also hated it. No point in even testing that one!
We decided to put Hours where the Quick Links box was, to see if that would be more visible. We moved chat down, trying to mimic the chat call-out button on the McMaster Library website. Quick Links were removed completely. We have some ideas, but they were never a vital part of the site so we can play with them later.
Success! Most of the people we talked to saw the Hours and almost all of them could get from there to the full set of hours. (I did this round of testing without a note-taker, thinking I could keep good enough track. “Good enough?” Yes. Actual numbers? No.) The downside was that most people didn’t notice the Help and Chat link (not pictured here). However, I think we’ll really need to test that when we can show the site on a screen that people can interact with. The “always visible” nature of that button is hard to replicate with a print-out. I feel like we’re in a good enough place that we can start building this as more than just a mock-up.
Oh, and no one we talked to hated anything about the design. A low bar perhaps, but I’m happy that we cleared it.
We did all of this in one week, over 4 afternoons. For version 3, Kevin just added text to the screenshot so we could get it in front of people faster. Quick iterating and testing is such a great process if you can make it work.
My University Librarian has asked for a refresh of the library website. He is primarily concerned with the visual design; although he thinks the site meets the practical needs of our users, he would like it to be “beautiful” as well. Eep! I’m not a visual designer. I was a little unsure how to even begin.
I decided to attack this the way we attack other problems: user research! Web Committee created a set of Guiding Principles a few years back (based on Suzanne Chapman’s document). Number one in that list is “Start with user needs & build in assessment” so even though I was having difficulty wrapping my head around a beautiful website as a user need, it made sense to move forward as if it were.
How does one assess a beautiful website? I looked at a whole bunch of library websites to see which stood out as particularly beautiful and then to discern what made them so. Let me tell you, “beautiful” is not a word that immediately leaps to mind when I look at library websites. But then I came across one site that made me give a little exclamation of disgust (no, I won’t tell you which one). It was busy, the colours clashed garishly, and it made me want to click away instantly—ugh! Well. We might not be able to design a site that people find beautiful, but surely we can design something that doesn’t make people feel disgusted.
I had an idea then to show users a few different websites and ask them how they felt about the sites. Beauty can mean different things to different people, but it does conjure a positive feeling. Coming up with feeling words can be difficult for people, so I thought it might be easier for me to come up with a list they could choose from (overwhelming, calm, inspiring, boring, etc.). Then I decided that it might be better to have users place the sites on a continuum rather than pick a single word for their feeling: is the page more calming or more stressful? Is it more clear or more confusing? I came up with 11 feelings described on a continuum, plus an overall 🙂 to 🙁.
There had been some talk of the library website perhaps needing to mirror other Carleton University websites a little more closely. However, design is not uniform across Carleton sites, so I wanted to show users a mix of those sites to get a sense of which designs were most pleasing. I also wanted to show a few different library sites to get a sense of which of those designs were most appealing to our users. I worked with Web Committee to come up with a list of 7 library sites and 5 Carleton sites.
There was no way I was going to ask someone to give us feedback on 12 different websites; I decided a selection of 3 was plenty for one person to work through. Since I was looking mostly for visceral reactions, I didn’t think we needed a lot of people to see each site. If each site was viewed 5 times (with our own library site as a baseline so we could measure improvement of the new design), we needed 30 participants: 12 comparison sites × 5 views each makes 60 views, and each participant had 2 comparison slots alongside our own site. That was three times what we often see for a single round of UX research, but still doable.
I planned a 10-minute process—longer than our usual processes where we test one or two things—and wanted to compensate students for this much of their time. That fell apart at the last minute and all I had was a box of Halloween mini-chocolates, so I revamped the process to remove a few pre- and post- questions and cut the number of continuums from 12 to 9 (8 feelings plus the overall positive/negative). That cut the time down to about 5 minutes for most people, and I was comfortable with a 5-minutes-for-chocolate deal. So in the end, these are the continuums we asked people to use to label the sites:
We set up in the lobby of the library and saw 31 people over four time slots (each was 60-90 minutes long). There were 31 participants instead of 30 because the last person came with a friend who also wanted to participate. Happily, the only person to have difficulty understanding what to do was one of these very last people we saw. He had such trouble that if he’d been the first person we’d seen, I likely would have reconsidered the whole exercise. But thankfully everyone else was quick to understand what we wanted.
Most people saw one Carleton site, one library site, and then our own Carleton library site. Because we had more library sites than Carleton sites, a few people saw two library sites then the Carleton library site. I had planned out in advance which participant would see which sites, making sure that each site would be seen the same number of times and not always in the same order. Participants looked at one site at a time on a tablet with a landscape orientation, so the sites looked similar to how they would look on a laptop. They filled out the continuum sheet for one site before looking at the next. They could refer back to the site as they completed the sheet. I had a note-taker on hand to keep track of the sites visited and to record any comments participants made about the sites (most people didn’t say much at all).
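As an aside, planning a rotation like this by hand is fiddly. If I were scripting it, a minimal sketch might look like the following; the site names are placeholders and the balancing rules are my reconstruction from the description above, not the actual plan we used.

```python
import random

random.seed(42)  # makes the plan reproducible

carleton_sites = [f"Carleton site {i}" for i in range(1, 6)]  # 5 sites (placeholder names)
library_sites = [f"Library site {i}" for i in range(1, 8)]    # 7 sites (placeholder names)
VIEWS_PER_SITE = 5
BASELINE = "Our own library site"

# Each comparison site needs to appear exactly VIEWS_PER_SITE times.
carleton_pool = carleton_sites * VIEWS_PER_SITE   # 25 viewing slots
library_pool = library_sites * VIEWS_PER_SITE     # 35 viewing slots
random.shuffle(carleton_pool)
random.shuffle(library_pool)

assignments = []
# First 25 participants: one Carleton site + one library site, in varying order.
for carleton in carleton_pool:
    pair = [carleton, library_pool.pop()]
    random.shuffle(pair)  # so no site is always seen first
    assignments.append(pair + [BASELINE])
# The 10 leftover library views: 5 participants see two library sites each.
while library_pool:
    assignments.append([library_pool.pop(), library_pool.pop(), BASELINE])

random.shuffle(assignments)  # participant order shouldn't follow site order
for number, sites in enumerate(assignments, 1):
    print(number, sites)
```

Shuffling within each pair keeps any one site from always appearing first, and shuffling the finished list keeps the participant schedule from following the site list.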
Partway through, I discovered a problem with the “Up-to-date / Old-fashioned” continuum. I was trying to get at whether the design felt old and stale or contemporary and up-to-date. But many people assumed we were referring to the information on the site being up-to-date. I thought that using “old-fashioned” rather than “outdated” would mitigate this, but no. So this was not a useful data point.
Usually with these kinds of processes, I have a sense of what we’re learning as we go. But with this one, I had very little idea until I started the analysis. So what did we find?
I had purposely not used a Likert-type scale with numbers or labels on any of the mid-points. This was not quantitative research and I didn’t want users to try to put a number on their feelings. So, when it came time for analysis, I didn’t want to turn the continuum ratings into numbers either. I colour-coded the responses, with dark green corresponding to one end of the continuum, red to the other and yellow for the middle. I used light green and orange for less strong feelings that were still clearly on one side or the other.
In determining what colour to code a mark, I looked at how the person had responded to all three sites. If all their marks were near the extremes, I used light green/orange for any mark tending toward the middle. If all their marks were clustered around the middle, I looked for their outer ranges and coded those as dark green/red (see examples in the image below). In this way, the coding reflected the relative feelings of each person rather than sticking to strict borders. Two marks in the same place on the continuum could be coded differently, depending on how that user had responded overall.
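For anyone who wants to picture the relative coding more concretely, here is a rough sketch of the same idea in code, assuming each mark is recorded as a position between 0 and 1 along the continuum. The thresholds are invented for illustration; on paper this was a judgment call rather than a formula.

```python
def code_marks(marks):
    """Colour-code one person's continuum marks relative to their own spread.

    `marks` is a list of positions from 0.0 (the "green" end) to 1.0 (the
    "red" end). Returns one colour per mark. The thresholds are made up
    for illustration only.
    """
    # How far from the midpoint does this person tend to mark?
    typical_distance = sum(abs(m - 0.5) for m in marks) / len(marks)
    if typical_distance == 0:
        return ["yellow"] * len(marks)

    colours = []
    for m in marks:
        distance = abs(m - 0.5)
        if distance < 0.25 * typical_distance:
            colour = "yellow"  # middle, relative to this person
        elif distance < typical_distance:
            colour = "light green" if m < 0.5 else "orange"  # leaning one way
        else:
            colour = "dark green" if m < 0.5 else "red"      # strong feeling
        colours.append(colour)
    return colours

# Someone who marks near the extremes vs. someone who clusters near the middle:
print(code_marks([0.05, 0.1, 0.95, 0.5]))
print(code_marks([0.4, 0.45, 0.6, 0.35]))
```

The same mark position can come out as light green for one person and dark green for another, which is the point: the coding reflects each person’s own range.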
After coding, I looked at the results for the 🙂 ↔ 🙁 continuum to get a sense of the general feeling about each site. I gave them all an overall assessment (bad, ugh, meh, or ok). No site got better than ok because none was rated in the green by everyone who saw it. Then I looked at how often each was coded green, yellow, and red across all the continuums. Unsurprisingly, those results corresponded to my bad/ugh/meh/ok rating; participants’ 🙂 / 🙁 ratings had been reflective of their overall feelings. Our site ended up on the high end of “meh.” However, several participants made sure to say their ratings of our site were likely high because of familiarity, so we are most likely firmly in “meh” territory.
Now that I’d looked at the overall picture, I wanted to look at each of the continuums. What was our current site already doing well? I was happy to see that our current site felt Useful and Organized to participants. “Organized” is good because it means that I feel confident about keeping the structure of the site while we change the visual design. What did we need to improve? Participants felt the site was Discouraging and Ugly. “Discouraging” is something I definitely feel motivated to fix! And “Ugly?” Well, it helps me feel better about this project to make the site beautiful. More beautiful at least.
After this, I looked at which sites did well on the aspects we needed to improve. For both the Carleton sites and the library sites, the ones felt to be most Inspiring and Beautiful were the same ones that were rated highly overall. These same sites were also felt to be the most Welcoming, Clear, and Calming. So these are the aspects that we’ll concentrate on most as we move through our design refresh.
Now, Web Committee will take a closer look at the two library sites and two Carleton sites that had the best feeling and see what specific aspects of those sites we’d like to borrow from. There’s no big time squeeze, as we’re aiming for a spring launch. Lots of time for many design-and-test iterations. I’ll report back as we move forward.
In 2015/16, I interviewed 10 undergraduates and 7 graduate students, asking them very broad questions about their research process – how they got started, what they did when they got stuck – as well as some specific things about our website and subject guides. I did follow-up interviews with 7 of the undergraduates and 6 of the grad students on an even more amorphous topic: as they were engaged in a research project, what were the moments that made them really excited? What brought them delight in their research? Other projects then intervened, but I’ve finally got around to analyzing what students told me about delightful research moments.
As an aside, let me say how wonderful it was to listen to students reflect on what brought them joy. So much nicer than subjecting them to a frustrating usability test! It was a glorious way to spend my time.
Overall, a few themes emerged:
Having a sense of progress
Many of the students talked about either making steady progress or having breakthroughs. Finding the right article, or search terms that then broke the topic open for them. Brainstorming with someone to solidify or advance their ideas. Realizing that they had something to write about; that they were actually going to be able to complete the assignment or project.
Working with, or getting feedback from, other people
Brainstorming was a common theme, as was having someone read a draft and provide feedback. Receiving and acting on feedback would either help shape their ideas or hone the expression of those ideas.
Feeling a connection
Connection came in many forms: having a personal connection to a topic, connecting a topic to real people, feeling a connection with a professor or TA while working on the project, wanting their research to connect with the wider world or seeing how that could happen.
It was in this last point – connection – that I started seeing differences between undergraduates and graduate students, and I find these points of difference more interesting than the overall themes.
Undergraduates spoke more about feeling a connection to their topic: either the importance of choosing a topic of interest to them, or the difference it made when their topic was one they were really interested in.
“Knowing that I was interested in that made it… it’s almost like regardless of how difficult it’s going to be, it’s not going to be a pain in my butt. Because I’m actually going to enjoy doing it even if it’s difficult.” (4th year student)
Graduate students spoke more about how their research would connect with the wider world.
“It is kind of also exciting to say, like, OK I think there’s a good chance that if I do this right and put it out, it will cause a discussion.” (grad student)
“I have to take these moments where I come back to where I started and think about who are the actual people involved in this and who, you know, how are these decisions actually affecting lives? And I think those are the moments where I’m like “Ah, yes, this is important. This matters to people.”” (grad student)
In talking about their research projects, undergrads used the word “interest” more while grad students used the word “important” more. And no, I didn’t make a word cloud (ick) but rather looked at word frequency as a way to check on my impressions.
A few themes seemed to be specific to undergraduate students. Although certainly no one used this word, the idea of “mastery” came up with several undergrads. They spoke about internalizing their topics, about getting better at organizing their time and their thoughts, improving their search strategies, getting better at writing, and asking for feedback more often.
“When it comes to doing an action you have to have a certain level of familiarity with the action, then eventually you say, like, this is what I do now… and feel proud of that. And then that helps you even perform better when you do that action the next time and it makes you that much more efficient and motivated.” (4th year student)
Getting good grades was a theme for undergraduates, always followed up with how getting a good grade made them want to continue to get good grades, so it made them want to work harder. Good grades also increased confidence, and many noted that this increased confidence led to better work habits, which then kept those good grades coming.
“Until that time I never got A in my, any subject. I used to get B, C. So that was so thrilling. And I was so, because I experienced after that, that when you get A, your confidence level just boosts.” (3rd year student)
“That moment I started to like that research because I came to figure out that it’s possible, like, I can do it as long as I work hard.” (4th year student)
Many of the undergraduates talked about how they found joy when writing began to come easily to them. Three students specifically talked about “flow” as they were writing and others described it in other words (and no, no one name-checked Mihaly Csikszentmihalyi). Flow came in different ways and meant different things to them. One student said she could tell that she was interested in a topic if the words were still flowing after writing two pages. Another said the words were “flowing freely” during a presentation because he was so familiar with the literature. Others said that because they had a topic of interest, the writing flowed out.
“The more I read, the more interested I got on it. It allowed me to type more [laughs], like… words flowed freely.” (3rd year student)
Flow was clearly a good sign; a sign of interest, a sign of mastery, an indicator of delight.
So what does this mean for libraries? Well, I’m not sure yet; I want to mull it over a bit longer. If you have thoughts, please share them!
This was a virtual conference from Rosenfeld Media: a full day of sessions all about user research. Have a look at the program to see what a great lineup of speakers there was. Here are the bits that stood out for me.
Erika Hall: Just Enough Research
First off, Erika won me over right away with her first slide:
I found she spoke more about the basic whys and hows of research, rather than how to do “just enough,” but she was so clear and engaging that I really enjoyed it anyway. Selected sound bites:
Keep asking research questions, but the answers will keep changing
Assumptions are risks
Research is fundamentally destabilizing to authority because it challenges the power dynamic; asking questions is threatening
Think about how your design decisions might make someone’s job easier. Or harder. (and not just your users, but your colleagues)
Focus groups are best used as a source of ideas to research, not research itself
3 steps to conducting an interview: set up, warm up, shut up
You want your research to prove you wrong as quickly as possible
Leah Buley: The Right Research Method For Any Problem (And Budget)
Leah nicely set out stages of research and methods and tools that work best for each stage. I didn’t take careful notes because there was a lot of detail (and I can go back and look at the slides when I need to), but here are the broad strokes:
What is happening around us?
Use methods to gain an understanding of the bigger picture and to frame where the opportunities are (futures research fits in here too – blerg)
What do people need?
Ethnographic methods fit in nicely here. Journey maps can point out possible concepts or solutions
What can we make that will help?
User research with prototypes / mockups. New to me was the 5-second test, where you show a screen to a user for 5 seconds, take it away and then ask questions about it. (I’m guessing this assumes that what people remember corresponds with what resonates with them – either good or bad.)
Does our solution actually work?
Traditional usability testing fits in here, as does analytics.
I kind of like how this question is separated from the last, so that you think about testing your concept and then testing your implementation of the concept. I can imagine it being difficult to write testing protocols that keep them separate though, especially as you start iterating the design.
What is the impact?
Analytics obviously come into play here, but again, it’s important to separate this question about impact from the previous one about the solution just working. Leah brought up Google’s HEART framework: Happiness, Engagement, Adoption, Retention, and Task Success. Each of these is then divided into Goals (what do we want?), Signals (what will tell us this?), and Metrics (how do we measure success?).
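To make the Goals / Signals / Metrics breakdown concrete, here’s how it might look for a couple of HEART categories applied to a library website. The specifics below are my own invented examples, not anything Leah presented.

```python
# A hypothetical HEART breakdown for a library website redesign.
# The goals, signals, and metrics are invented for illustration.
heart_example = {
    "Adoption": {
        "goal":   "Students start using the new subject guides",
        "signal": "Visits to subject guide pages",
        "metric": "% of students who view at least one guide per term",
    },
    "Task Success": {
        "goal":   "Visitors can find today's library hours",
        "signal": "Completion of the 'find the hours' task in usability tests",
        "metric": "Proportion of test participants who find the hours unprompted",
    },
}

for category, details in heart_example.items():
    print(category, details)
```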
Nate Bolt: How to Find and Recruit Amazing Participants for User Research
Recruiting participants is probably my least favourite part of user research, but I’m slowly coming around to the idea that it will always be thus. And that I’m incredibly lucky to be constantly surrounded by my target audience. Nate talked about different recruitment strategies, including just talking to the first person you see. For him, one of the downsides of that was that the person is unlikely to be in your target audience or care about your interface. Talking to the first person I see is how I do most of my recruiting. And it also works really well because they are very likely to be in my target audience and care about my interface. Yay!
One comment of Nate’s stood out most for me: If someone doesn’t like your research findings, they’ll most likely attack your participants before they’ll attack your methods. This is familiar to me: “But did you talk to any grad students?” “Were these all science students?” Nate recommended choosing your recruitment method based on how likely these kinds of objections are to sideline your research; if no one will take your results seriously unless your participants meet a certain profile, then make sure you recruit that profile.
Julie Stanford: Creating a Virtuous Cycle: The Research and Design Feedback Loop
Julie spoke about the pitfalls of research and design being out of balance on a project. She pointed out how a stronger emphasis on research than on design can lead to really bad interfaces (though this seemed to be more the case when you’re testing individual elements of a design rather than the whole). Fixing one thing can always end up breaking something else. Julie suggested two solutions:
Have the same person do both research and design
Follow a 6-step process
Now, I am the person doing both research and design (with help, of course), so I don’t really need the process. But I also know that I’m much stronger on the research side than on the design side, so it’s important to think about pitfalls. A few bits that resonated with me:
When evaluating research findings, give each issue a severity rating to keep it in perspective. Keep an eye out for smaller issues that together suggest a larger issue.
Always come up with multiple possible solutions to the problem, especially if one solution seems obvious. Go for both small and large fixes and throw in a few out-there ideas.
When evaluating possible solutions (or really, anytime), if your team gets in an argument loop, take a sketch break and discuss from there. Making the ideas more concrete can help focus the discussion.
Abby Covert: Making Sense of Research Findings
I adore Abby Covert. Her talk at UXCamp Ottawa in 2014 was a huge highlight of that conference for me. I bought her book immediately afterward and tried to lend it to everyone, saying “youhavetoreadthisitsamazing.” So, I was looking forward to this session.
And it was great. She took the approach that making sense of research findings was essentially the same as making sense of any other mess, and applied her IA process to find clarity. I took a ridiculous amount of notes, but will try to share just the highlights:
This seems really obvious, but I’m not sure I actually do it: Think about how your method will get you the answer you’re looking for. What do you want to know? What’s the best way to find that out?
Abby doesn’t find transcriptions all that useful. They take so much time to do, and then to go through. She finds it easier to take notes and grab the actual verbatims that are interesting. And she now does her notetaking immediately after every session (rather than stacking the sessions one after another). She does not take notes in the field.
Abby takes her notes according to the question that is being asked/answered, rather than just chronologically. Makes analysis easier.
When you’re doing quantitative research, write sample findings ahead of time to make sure that you are going to capture all the data necessary to create those findings. Her slide is likely clearer:
Think about the UX of your research results. Understand the audience for your results and create a good UX for them. A few things to consider:
What do they really need to know about your methodology?
What questions are they trying to answer?
What objections might they have to the findings? Or the research itself?
In closing, Abby summarized her four key points as:
Keep capture separate from interpretation
Plan the way you capture to support what you want to know
Understand your audience for research
Create a taxonomy that supports the way you want your findings to be used
I have quite a few notes on that last point that seemed to make sense at the time, but I think “create a good UX for the audience of your results” covers it sufficiently.
Cindy Alvarez: Infectious Research
Cindy’s theme was that research – like germs – is not inherently lovable; you can’t convince people to love research, so you need to infect them with it. Essentially, you need to find a few hosts and then help them be contagious in order to help your organization be more receptive to research. Kind of a gross analogy, really. But definitely a few gems for people finding it difficult to get any buy-in in their organization:
Create opportunities by finding out:
What problems do people already complain about?
What are the areas no one is touching?
Lower people’s resistance to research:
Find out who or what they trust (to find a way in)
Ask point-blank “What would convince you to change your decision?”
Think about how research could make their lives worse
“People are more receptive to new ideas when they think it was their idea.” <– there was a tiny bit of backlash on Twitter about this, but a lot of people recognized it as a true thing. I feel like I’m too dumb to lie to or manipulate people; being honest is just easier to keep track of. If I somehow successfully convinced someone that my idea was theirs, probably the next day I’d say something like “hey, thanks for agreeing with my idea!”
Help people spread a message by giving them a story to tell.
Always give lots of credit to other people. Helping a culture of research spread is not about your own ego.
It’s been interesting finishing up this post after reading Donna Lanclos’ blog post on the importance of open-ended inquiry, particularly related to UX and ethnography in libraries. This conference was aimed mostly at user researchers in business operations. Erika Hall said that you want your research to prove you wrong as quickly as possible; essentially, you want research to help you solve the right problem quickly so that you can make (more) money. All the presenters were focused on how to do good user research efficiently. Open-ended inquiry isn’t about efficiency. As someone doing user research in academic libraries, I don’t have these same pressures to be efficient with my research. What a privilege! So I now want to go back and think about these notes of mine with Donna’s voice in my head:
So open-ended work without a hard stop is increasingly scarce, and reserved for people and institutions who can engage in it as a luxury (e.g. Macarthur Genius Grant awardees). But this is to my mind precisely wrong. Open exploration should not be framed as a luxury, it should be fundamental.
… How do we get institutions to allow space for exploration regardless of results?
I presented about our Web Committee’s redesign project at Access 2016 in Fredericton, NB on October 5, 2016. We started doing user research for the project in October 2015 and launched the new guides in June 2016 so it took a while, but I’m really proud of the process we followed. Below is a reasonable facsimile of what I said at Access. (UPDATE: here’s the video of the session)
Our existing subject guides were built in 2011 as a custom content type in Drupal and they were based on the tabbed approach of LibGuides. Unlike LibGuides, tab labels were hard-coded; you didn’t have to use all of them but you could only choose from this specific set of tabs. And requests for more tabs kept coming. It felt a bit arbitrary to say no to tab 16 after agreeing to tab 15.
We knew the guides weren’t very mobile-friendly but they really were no longer desktop-friendly either. So we decided we needed a redesign.
Rather than figure out how to shoe-horn this existing content into a new design, we decided we’d take a step back and do some user research to see what the user needs were for subject guides. We do user testing fairly regularly, but this ended up being the biggest user research project we’ve done.
Student user research:
We did some guerrilla-style user research in the library lobby with 11 students: we showed them our existing guide and a model used at another library and asked a couple of quick questions to give us a sense of what we needed to explore further
I did 10 in-depth interviews with undergraduate students and 7 in-depth interviews with grad students. There were some questions related to subject guides, but also general questions about their research process: how they got started, what they did when they got stuck. When I talked to the grad students, I asked if they were TAs and, if they were, I asked some extra questions about their perspectives on their students’ research and needs around things like subject guides.
One of the big takeaways from the research with students is likely what you would expect: they want to be able to find what they need quickly. Below is all of the content from a single subject guide and the highlighted bits are what students are mostly looking for in a guide: databases, citation information, and contact information for a librarian or subject specialist. It’s a tiny amount in a sea of content.
I assumed that staff made guides like this for students; they put all that information in, even though there’s no way students are going to read it all. That assumption comes with a bit of an obnoxious eye roll: staff clearly don’t understand users like I understand users or they wouldn’t create all this content. Well, we did some user research with our staff, and turns out I didn’t really understand staff as a user group.
Staff user research:
We did a survey of staff to get a sense of how they use guides, what’s important to them, target audience, pain points – all at a high level
Then we did focus groups to probe some of these things more deeply
Biggest takeaway from the research with staff is that guides are most important for their teaching and for helping their colleagues on the reference desk when students have questions. Students themselves are not the primary target audience. I found this surprising.
We analyzed all of the user research, looked at our web analytics, and came up with a set of design criteria based on everything we’d learned. But we still had this issue that staff wanted all the things, preferably on one page, and students wanted quick access to a small number of resources. We were definitely tempted to focus exclusively on students, but about 14% of subject guide use comes from staff computers, so they’re a significant user group. We felt it was important to come up with a design that would also be useful for them. In Web Committee, we try to make things “intuitive for students and learn-able for staff.” Student-first but staff-friendly.
Since the guides seemed to have these two distinct user groups, we thought maybe we needed two versions of subject guides. And that’s what we did; we made a quick guide primarily for students, and a detailed guide primarily for staff.
We created mockups of two kinds of guides based on our design criteria. Then we did user tests of the mockups with students, iterating the designs a few times as we saw things that didn’t work. We ended up testing with a total of 17 students.
Once we felt confident that the guides worked well for students, we presented the designs to staff and again met with them in small groups to discuss. Reaction was quite positive. We had included a lot of direct quotations from students in our presentation and staff seemed to appreciate that we’d based our design decisions on what students had told us. No design changes came out of our consultations with staff; they had a lot of questions about how they would fit their content into the design, but they didn’t have any issues with the design itself. So we built the new guide content types in Drupal and created documentation with how-tos and best practices based on our research. We opened the new guides for editing on June 13, which was great because it gave staff most of the summer to work on their new guides.
The first of the two guides is the Quick Guide, aimed at students. I described it to staff as the guide that would help a student who has a paper due tomorrow and is starting after the reference desk has closed for the day.
Hard limit of 5 Key Resources
Can have fewer than 5, but you can’t have more (a rough sketch of how these limits work appears at the end of this section).
One of the students we talked to said: “When you have less information you focus more on something that you want to find; when you have a lot of information you start to panic: “Which one should I do? This one? Oh wait.” And then you start to forget what you’re looking for.” She’s describing basic information overload, but it’s nice to hear it in a student’s own words.
Some students still found this overwhelming, so we put a 160-character limit on annotations.
We recommend that databases feature prominently on this list, based on what students told us and our web analytics: databases are selected 3x more often than any other resource in subject guides.
We also recommend not linking to encyclopedias and dictionaries. Encyclopedias and dictionaries were very prominent on the tabbed Subject Guides, but they really aren’t big draws for students (student quotation from our user research: “If someone was to give this to me, I’d be like, yeah, I see encyclopedias, I see dictionaries… I’m not really interested in doing any of these, or looking through this, uh, I’m outta here.”)
Related Subject Guides and General Research Help Guides
Link to Detailed Guide if people want more information on the same subject. THERE DOES NOT HAVE TO BE A DETAILED GUIDE.
An added benefit of the 2-version approach is that staff can use existing tabbed guides as the “Detailed Guides” until they are removed in Sept. 2017. I think part of the reason we didn’t feel much pushback was that people didn’t have to redo all of their guides right away; there was this transition time.
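For what it’s worth, the Quick Guide limits themselves are simple to express. Here’s a rough sketch of the checks in plain Python rather than in our actual Drupal configuration; the function name and data shape are invented for illustration.

```python
MAX_KEY_RESOURCES = 5
MAX_ANNOTATION_LENGTH = 160  # characters

def validate_quick_guide(key_resources):
    """Check a Quick Guide's Key Resources against our limits.

    `key_resources` is a list of (title, annotation) pairs. Returns a list
    of human-readable problems; an empty list means the guide passes.
    This mirrors the rules described above, not our actual Drupal code.
    """
    problems = []
    if len(key_resources) > MAX_KEY_RESOURCES:
        problems.append(
            f"{len(key_resources)} Key Resources listed; "
            f"the hard limit is {MAX_KEY_RESOURCES}."
        )
    for title, annotation in key_resources:
        if len(annotation) > MAX_ANNOTATION_LENGTH:
            problems.append(
                f"Annotation for '{title}' is {len(annotation)} characters; "
                f"the limit is {MAX_ANNOTATION_LENGTH}."
            )
    return problems

# Example: one resource with an annotation that is far too long.
print(validate_quick_guide([("Example database", "x" * 200)]))
```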
From a design point of view, the Detailed Guide is simpler than the Quick Guide: accordions instead of tabs.
Students all saw all the accordions. Not all students saw the tabs (that’s a problem people have found in usability testing of LibGuides too).
Default of 5 accordions for the same reasons that Key Resources were limited to 5 – trying to avoid information overload – but because the target audience is staff and not students, they can ask for additional accordions. We wanted there to be a small barrier to filling up the page, so here’s someone adding the 5th accordion, and once they add that 5th section the “Add another item” button is disabled and they have to ask us to create additional accordions.
There’s now flexibility in both the labels and the content. Staff can put as much content as they want within the accordion – text, images, video, whatever – but we do ask them to be concise and keep in mind that students have limited time. I really like this student’s take and made sure to include this quotation in our presentation to staff as well as in our documentation:
“When I come across something… I’ll skim through it and if I don’t see anything there that’s immediately helpful to me, it’s a waste of my time and I need to go do something else that is actually going to be helpful to me.”