It seems to me that many user tests (mostly usability tests) have students doing very specific tasks that they may never do in real life. Of course user testing is artificial, and usability testing generally needs to be task-based, but is it useful to evaluate our sites with people who don’t care about the task they’re performing?
I thought one way to elicit information about tasks that meant something to students would be to ask them to explain the site to a peer. This way, maybe we could get a sense of what sections of the site they saw as particularly useful and which were not even on the radar.
Armed with chocolate, we hit the lobby of the library to ask students for about 5 minutes of their time. We asked them to pretend I was a first-year student and explain the library website to me — what was useful, what wasn’t, what I could safely ignore. The plan was then to ask what task they do most often on the library website and have them walk us through it, and then to ask what task they found most confusing and walk us through that.
I’m going to pause here and say that I pretested our script and it worked really well in the pretest.
We did this test in Weeks 4 & 5, with 10 participants total, and what we found was that the students really only used the website to search. They searched the catalogue, Summon and databases, but that was about it. Some signed into Ares for their course reserves. A few used subject guides, but exclusively as a source for which databases to search. Did they mention any actual content that a library staff member wrote? Just once: booking a group study room.
So, the question about the task they do most often? Well, they had already answered it: they search. Most confusing task? When search doesn’t work.
There were a few interesting things that did come up (*list below). But I had been hoping to find some big pain points, areas of friction that we could attack. I guess it’s good news that students generally expressed satisfaction. But maybe we just asked the wrong questions. “Explain” is probably the wrong word. It probably worked well in the pretest because I pretested with a student employee, who explains the website as part of his job. Ah, hindsight.
Now that I think about it more, I’m not surprised that search was all that came up. A huge percentage of the use of the library website is search. Asking random students what task they perform on the site is more than likely going to bring up search. So the next test: How well is search working? Could we do better?
*A few interesting things that did come up in these two tests:
- Many students told us things they had obviously learned through instruction, but sometimes they got things a little wrong (use the catalogue for journal articles and Summon for books; in a Boolean search you put AND in brackets; you have to use Boolean to combine words in Summon; you have to use the Summon advanced search to limit by date).
- Some students find eBooks very inconvenient to use (“It’s like a scanned, older-than-pdf kind of thing”). A few prefer print books to eBooks.
- Moving to the MyCarletonOne login made logging in less confusing.
- Adding the action button to the group study room page made that page less confusing.