User Research for Everyone: Conference Notes

This was a virtual conference from Rosenfeld Media; a full day of sessions all about user research. Have a look at the program to see what a great lineup of speakers there was. Here are the bits that stood out for me.

Erika Hall: Just Enough Research

First off, Erika won me over right away with her first slide:

Slide text: Hello! You will need to imagine the emphatic gesturing.

I found she spoke more about the basic whys and hows of research, rather than how to do “just enough,” but she was so clear and engaging that I really enjoyed it anyway. Selected sound bites:

  • Keep asking research questions, but the answers will keep changing
  • Assumptions are risks
  • Research is fundamentally destabilizing to authority because it challenges the power dynamic; asking questions is threatening
  • Think about how your design decisions might make someone’s job easier. Or harder. (and not just your users, but your colleagues)
  • Focus groups are best used as a source of ideas to research, not research itself
  • 3 steps to conducting an interview: set up, warm up, shut up
  • You want your research to prove you wrong as quickly as possible

Leah Buley: The Right Research Method For Any Problem (And Budget)

Leah nicely set out stages of research and methods and tools that work best for each stage. I didn’t take careful notes because there was a lot of detail (and I can go back and look at the slides when I need to), but here are the broad strokes:

  • What is happening around us?
    • Use methods to gain an understanding of the bigger picture and to frame where the opportunities are (futures research fits in here too – blerg)
  • What do people need?
    • Ethnographic methods fit in nicely here. Journey maps can point out possible concepts or solutions
  • What can we make that will help?
    • User research with prototypes / mockups. New to me was the 5-second test, where you show a screen to a user for 5 seconds, take it away and then ask questions about it. (I’m guessing this assumes that what people remember corresponds with what resonates with them – either good or bad.)
  • Does our solution actually work?
    • Traditional usability testing fits in here, as does analytics.
    • I kind of like how this question is separated from the last, so that you think about testing your concept and then testing your implementation of the concept. I can imagine it being difficult to write testing protocols that keep them separate though, especially as you start iterating the design.
  • What is the impact?
    • Analytics obviously come into play here, but again, it’s important to separate this question about impact from the previous one about the solution just working. Leah brought up Google’s HEART framework: Happiness, Engagement, Adoption, Retention, and Task Success. Each of these is then divided into Goals (what do we want?), Signals (what will tell us this?), and Metrics (how do we measure success?).

Nate Bolt: How to Find and Recruit Amazing Participants for User Research

Recruiting participants is probably my least favourite part of user research, but I’m slowly coming around to the idea that it will always be thus. And that I’m incredibly lucky to be constantly surrounded by my target audience. Nate talked about different recruitment strategies, including just talking to the first person you see. For him, one of the downsides of that was that the person is unlikely to be in your target audience or care about your interface. Talking to the first person I see is how I do most of my recruiting. And it also works really well because they are very likely to be in my target audience and care about my interface. Yay!

One comment of Nate’s stood out most for me: If someone doesn’t like your research findings, they’ll most likely attack your participants before they’ll attack your methods. This is familiar to me: “But did you talk to any grad students?” “Were these all science students?” Nate recommended choosing your recruitment method based on how likely these kinds of objections are to sideline your research; if no one will take your results seriously unless your participants meet a certain profile, then make sure you recruit that profile.

Julie Stanford: Creating a Virtuous Cycle: The Research and Design Feedback Loop

Julie spoke about the pitfalls of research and design being out of balance on a project. She pointed out how a stronger emphasis on research than on design could lead to really bad interfaces (though this seemed to be more the case when you’re testing individual elements of a design rather than the whole). Fixing one thing can always end up breaking something else. Julie suggested two solutions:

  1. Have the same person do both research and design
  2. Follow a 6-step process

Now, I am the person doing both research and design (with help, of course), so I don’t really need the process. But I also know that I’m much stronger on the research side than on the design side, so it’s important to think about pitfalls. A few bits that resonated with me:

  • When evaluating research findings, give each issue a severity rating to keep it in perspective. Keep an eye out for smaller issues that together suggest a larger issue.
  • Always come up with multiple possible solutions to the problem, especially if one solution seems obvious. Go for both small and large fixes and throw in a few out-there ideas.
  • When evaluating possible solutions (or really, anytime), if your team gets in an argument loop, take a sketch break and discuss from there. Making the ideas more concrete can help focus the discussion.

Abby Covert: Making Sense of Research Findings

I adore Abby Covert. Her talk at UXCamp Ottawa in 2014 was a huge highlight of that conference for me. I bought her book immediately afterward and tried to lend it to everyone, saying “youhavetoreadthisitsamazing.” So, I was looking forward to this session.

And it was great. She took the approach that making sense of research findings was essentially the same as making sense of any other mess, and applied her IA process to find clarity. I took a ridiculous amount of notes, but will try to share just the highlights:

  • This seems really obvious, but I’m not sure I actually do it: Think about how your method will get you the answer you’re looking for. What do you want to know? What’s the best way to find that out?
  • Abby doesn’t find transcriptions all that useful. They take so much time to do, and then to go through. She finds it easier to take notes and grab the actual verbatims that are interesting. And she now does her notetaking immediately after every session (rather than stacking the sessions one after another). She does not take notes in the field.
  • Abby takes her notes according to the question that is being asked/answered, rather than just chronologically. Makes analysis easier.
  • When you’re doing quantitative research, write sample findings ahead of time to make sure that you are going to capture all the data necessary to create those findings. Her slide is likely clearer:
    [Slide from Abby Covert's talk]
  • Think about the UX of your research results. Understand the audience for your results and create a good UX for them. A few things to consider:
    • What do they really need to know about your methodology?
    • What questions are they trying to answer?
    • What objections might they have to the findings? Or the research itself?
  • In closing, Abby summarized her four key points as:
    1. Keep capture separate from interpretation
    2. Plan the way you capture to support what you want to know
    3. Understand your audience for research
    4. Create a taxonomy that supports the way you want your findings to be used

I have quite a few notes on that last point that seemed to make sense at the time, but I think “create a good UX for the audience of your results” covers it sufficiently.

Cindy Alvarez: Infectious Research

Cindy’s theme was that research – like germs – is not inherently lovable; you can’t convince people to love research, so you need to infect them with it. Essentially, you need to find a few hosts and then help them be contagious so that your organization becomes more receptive to research. Kind of a gross analogy, really. But definitely a few gems for people finding it difficult to get any buy-in in their organization:

  • Create opportunities by finding out:
    • What problems do people already complain about?
    • What are the areas no one is touching?
  • Lower people’s resistance to research:
    • Find out who or what they trust (to find a way in)
    • Ask point-blank “What would convince you to change your decision?”
    • Think about how research could make their lives worse
    • “People are more receptive to new ideas when they think it was their idea.” <– there was a tiny bit of backlash on Twitter about this, but a lot of people recognized it as a true thing. I feel like I’m too dumb to lie to or manipulate people; being honest is just easier to keep track of. If I somehow successfully convinced someone that my idea was theirs, probably the next day I’d say something like “hey, thanks for agreeing with my idea!”
  • Help people spread a message by giving them a story to tell.
  • Always give lots of credit to other people. Helping a culture of research spread is not about your own ego.

Final thoughts

It’s been interesting finishing up this post after reading Donna Lanclos’ blog post on the importance of open-ended inquiry, particularly related to UX and ethnography in libraries. This conference was aimed mostly at user researchers in business operations. Erika Hall said that you want your research to prove you wrong as quickly as possible; essentially, you want research to help you solve the right problem quickly so that you can make (more) money. All the presenters were focused on how to do good user research efficiently. Open-ended inquiry isn’t about efficiency. As someone doing user research in academic libraries, I don’t have these same pressures to be efficient with my research. What a privilege! So I now want to go back and think about these notes of mine with Donna’s voice in my head:

So open-ended work without a hard stop is increasingly scarce, and reserved for people and institutions who can engage in it as a luxury (e.g. Macarthur Genius Grant awardees).  But this is to my mind precisely wrong.  Open exploration should not be framed as a luxury, it should be fundamental.

… How do we get institutions to allow space for exploration regardless of results?

Redesigning our Subject Guides: Student-First and Staff-Friendly

I presented about our Web Committee’s redesign project at Access 2016 in Fredericton, NB on October 5, 2016. We started doing user research for the project in October 2015 and launched the new guides in June 2016 so it took a while, but I’m really proud of the process we followed. Below is a reasonable facsimile of what I said at Access. (UPDATE: here’s the video of the session)

Our existing subject guides were built in 2011 as a custom content type in Drupal and they were based on the tabbed approach of LibGuides. Unlike LibGuides, tab labels were hard-coded; you didn’t have to use all of them but you could only choose from this specific set of tabs. And requests for more tabs kept coming. It felt a bit arbitrary to say no to tab 16 after agreeing to tab 15.

[Screenshot: a desktop-unfriendly subject guide]

We knew the guides weren’t very mobile-friendly but they really were no longer desktop-friendly either. So we decided we needed a redesign.

Rather than figure out how to shoe-horn this existing content into a new design, we decided we’d take a step back and do some user research to see what the user needs were for subject guides. We do user testing fairly regularly, but this ended up being the biggest user research project we’ve done.

  • Student user research:
    • We did some guerrilla-style user research in the library lobby with 11 students: we showed them our existing guide and a model used at another library and asked a couple of quick questions to give us a sense of what we needed to explore further
    • I did 10 in-depth interviews with undergraduate students and 7 in-depth interviews with grad students. There were some questions related to subject guides, but also general questions about their research process: how they got started, what they do when they get stuck. When I talked to the grad students, I asked if they were TAs and if they were, I asked some extra questions about their perspectives on their students’ research and needs around things like subject guides.
  • One of the big takeaways from the research with students is likely what you would expect: they want to be able to find what they need quickly. Below is all of the content from a single subject guide and the highlighted bits are what students are mostly looking for in a guide: databases, citation information, and contact information for a librarian or subject specialist. It’s a tiny amount in a sea of content.
    [Screenshot: guide-overload]

I assumed that staff made guides like this for students; they put all that information in, even though there’s no way students are going to read it all. That assumption comes with a bit of an obnoxious eye roll: staff clearly don’t understand users like I understand users or they wouldn’t create all this content. Well, we did some user research with our staff, and it turns out I didn’t really understand staff as a user group.

  • Staff user research
    • We did a survey of staff to get a sense of how they use guides, what’s important to them, their target audience, and their pain points – all at a high level
    • Then we did focus groups to probe some of these things more deeply
    • Biggest takeaway from the research with staff is that guides are most important for their teaching and for helping their colleagues on the reference desk when students have questions. Students themselves are not the primary target audience. I found this surprising.

We analyzed all of the user research, looked at our web analytics, and came up with a set of design criteria based on everything we’d learned. But we still had this issue that staff wanted all the things, preferably on one page, and students wanted quick access to a small number of resources. We were definitely tempted to focus exclusively on students, but about 14% of subject guide use comes from staff computers, so they’re a significant user group. We felt it was important to come up with a design that would also be useful for them. In Web Committee, we try to make things “intuitive for students and learn-able for staff.” Student-first but staff-friendly.

Since the guides seemed to have these two distinct user groups, we thought maybe we need two versions of subject guides. And that’s what we did; we made a quick guide primarily for students, and a detailed guide primarily for staff.

We created mockups of two kinds of guides based on our design criteria. Then we did user tests of the mockups with students, iterating the designs a few times as we saw things that didn’t work. We ended up testing with a total of 17 students.

Once we felt confident that the guides worked well for students, we presented the designs to staff and again met with them in small groups to discuss. Reaction was quite positive. We had included a lot of direct quotations from students in our presentation and staff seemed to appreciate that we’d based our design decisions on what students had told us. No design changes came out of our consultations with staff; they had a lot of questions about how they would fit their content into the design, but they didn’t have any issues with the design itself. So we built the new guide content types in Drupal and created documentation with how-tos and best practices based on our research. We opened the new guides for editing on June 13, which was great because it gave staff most of the summer to work on their new guides.

Quick Guide

[Screenshot: Quick Guide]

The first of the two guides is the Quick Guide, aimed at students. I described it to staff as the guide that would help a student who has a paper due tomorrow and is starting after the reference desk has closed for the day.

  • Hard limit of 5 Key Resources
  • Can have fewer than 5, but you can’t have more.
  • One of the students we talked to said: “When you have less information you focus more on something that you want to find; when you have a lot of information you start to panic: ‘Which one should I do? This one? Oh wait.’ And then you start to forget what you’re looking for.” She’s describing basic information overload, but it’s nice to hear it in a student’s own words.
  • Some students still found this overwhelming, so we put a 160-character limit on annotations (both limits are sketched in the snippet after this list).
  • We recommend that databases feature prominently on this list, based on what students told us and our web analytics: Databases are selected 3x more than any other resource in subject guides
  • We also recommend not linking to encyclopedias and dictionaries. Encyclopedias and dictionaries were very prominent on the tabbed Subject Guides but they really aren’t big draws for students (student quotation from user research: “If someone was to give this to me, I’d be like, yeah, I see encyclopedias, I see dictionaries… I’m not really interested in doing any of these, or looking through this, uh, I’m outta here.”)
  • Related Subject Guides and General Research Help Guides
  • Link to Detailed Guide if people want more information on the same subject. THERE DOES NOT HAVE TO BE A DETAILED GUIDE.
  • An added benefit of the two-version approach is that staff can use their existing tabbed guides as the “Detailed Guides” until they are removed in September 2017. I think part of the reason we didn’t get much pushback was that people didn’t have to redo all of their guides right away; there was this transition time.
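Since I mentioned those two limits above, here’s a minimal sketch of how they could be enforced. This is not our actual Drupal code; the names (KeyResource, validateQuickGuide) and the TypeScript are just for illustration:

```typescript
// Hypothetical validation for a Quick Guide: a hard limit of 5 Key Resources,
// and a 160-character limit on each annotation.
interface KeyResource {
  title: string;
  url: string;
  annotation: string;
}

const MAX_KEY_RESOURCES = 5;
const MAX_ANNOTATION_LENGTH = 160;

function validateQuickGuide(resources: KeyResource[]): string[] {
  const errors: string[] = [];
  if (resources.length > MAX_KEY_RESOURCES) {
    errors.push(`A Quick Guide can have at most ${MAX_KEY_RESOURCES} Key Resources.`);
  }
  for (const r of resources) {
    if (r.annotation.length > MAX_ANNOTATION_LENGTH) {
      errors.push(`The annotation for "${r.title}" is over ${MAX_ANNOTATION_LENGTH} characters.`);
    }
  }
  return errors; // an empty array means the guide passes both limits
}
```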

Detailed Guide

[Screenshot: Detailed Guide]

  • From a design point of view, the Detailed Guide is simpler than the Quick Guide: accordions instead of tabs
    • Mobile-friendly
    • Students all saw all the accordions. Not all students saw the tabs (that’s a problem people have found in usability testing of LibGuides too)
  • Default of 5 accordions for the same reasons that Key Resources were limited to 5 – trying to avoid information overload – but because the target audience is staff and not students, they can ask for additional accordions. We wanted there to be a small barrier to filling up the page, so here’s someone adding the 5th accordion; once they add that 5th section the “Add another item” button is disabled and they have to ask us to create additional accordions (there’s a rough sketch of that gating after this list).
    [Screenshot: add-accordion]
  • There’s now flexibility in both the labels and the content. Staff can put as much content as they want within the accordion – text, images, video, whatever – but we do ask them to be concise and keep in mind that students have limited time. I really like this student’s take and made sure to include this quotation in our presentation to staff as well as in our documentation:
    • “When I come across something… I’ll skim through it and if I don’t see anything there that’s immediately helpful to me, it’s a waste of my time and I need to go do something else that is actually going to be helpful to me.”
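For what it’s worth, the gating on that “Add another item” button boils down to something like the sketch below. Again, this is an illustration rather than our real Drupal form code, and the function names are made up:

```typescript
// Hypothetical logic for the Detailed Guide editing form: the button for adding
// accordions is disabled once the default limit of 5 is reached, and any extra
// accordions have to be requested from us.
const DEFAULT_ACCORDION_LIMIT = 5;

function canAddAccordion(currentCount: number, extraGranted = 0): boolean {
  return currentCount < DEFAULT_ACCORDION_LIMIT + extraGranted;
}

function updateAddItemButton(button: HTMLButtonElement, currentCount: number): void {
  button.disabled = !canAddAccordion(currentCount);
  if (button.disabled) {
    button.title = "Ask the Web Committee if you need more than 5 accordions.";
  }
}
```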

And speaking of time, thank you for yours.