Usability evaluation

IA & collaborative design – workshop

Tuesday, May 6th, 2008

Yet another workshop announcement…

On 7 & 8 August, I will be teaching a 2-day master class on information architecture and collaborative design, run via Ark Group. The thing that is slightly different about this workshop compared to my IA workshop is that, duh, it includes a lot of collaborative design.

I’m adding more material on user research, design games, usability testing and designing in teams – I don’t usually get to cover these in a one-day workshop. And 2 days allow for more hands-on, practical work than one, which is always good.

So if you know someone who may be interested, and can get to Sydney, please pass on the details: Information architecture and collaborative design workshop.

Why I don’t offer a usability testing service

Monday, January 28th, 2008

A couple of weeks ago I reworked the business part of my website – I had to move it to a new host and remove some content (and it really needed some polish). So I decided not only to redo the website, but to rethink what I’m doing with my business.

One of the things I needed to sort out was the types of service I offer – I want to focus narrowly enough on my area of expertise to attract the clients I suit, without looking like such a specialist that good people pass over me. So of course I decided to focus on design training, IA and interaction design – these are my core offering, they are what I’m good at, and what I want to be doing right now.

My old website had a page about usability testing (I’m finally getting to the point of the story). I automatically brought it over and added it to my list of services, because I thought it was just something that a business like mine should offer.

But when I came to write the content – to convince you why you should hire me to help you with usability testing – things started to unravel. In writing it, I realised that I didn’t actually want to offer a usability testing service.

I thought about it for a while and realised why I don’t want to do standalone usability testing:

  1. Usability testing is easy to learn and easy to conduct. Yes, really. I’d prefer to teach a team how to do it themselves.
  2. Because it is so easy, it really is silly paying my rate to do usability testing. That money would be better used teaching other people to do it.
  3. Usability testing really should be an integral part of a user-centred process, and happen informally (and sometimes formally) throughout a project. For most projects, getting an outsider to do this costs money, which means it isn’t done as often as it should be. Guess what – I’d prefer to teach someone to do it themselves.
  4. I hate providing recommendations without knowing the design context, the challenges, the constraints of a project. I have seen too many usability test results that offer dumb, shallow recommendations that aren’t actionable because of the real constraints in a project.
  5. I don’t mind running a solid test and providing detailed outcomes with no recommendations; but that’s not worth me spending my time on (I’m a designer and want to design), and not worth you spending the money on.

Lots of people are going to be upset with me about that, so I will acknowledge that there are some caveats:

  • Usability testing is easy, but also easy to really stuff up. But for most of the types of tests I get asked to do as a consultant (mid-cycle to pre-release basic validation testing), it is not life or death.
  • If you really need a detailed, research-style study into something, hiring a consultant can be a good investment. Here I’m talking about fairly shallow validation testing.
  • I do believe in the value of usability testing – I’d just prefer to do it on projects where I know the design constraints and issues and where I (or my small, close team) use it to help us tweak a design.

So I now don’t offer a standalone usability testing service – and don’t feel the loss at all. But I will teach others and will test on my own projects…and I’m comfortable with that.

Where are the cool usability testing blogs?

Friday, December 21st, 2007

I’m in the middle of writing a full day usability testing workshop (not for my business, but for a client). It’s been a few years since I wrote a workshop on usability testing, and around 5 years since I wrote my first.

One of the things I wanted to do was collect some new resources (my workshops are always full of follow-on reading). So I went hunting on the internet, figuring there must be some bloggers talking passionately and honestly about usability testing – the stories and the challenges.

But I can’t find them. I have a few old posts of my own on usability testing (some of which are good and I had forgotten about), and I found a couple of articles in online magazines. But I can’t find any bloggers who write good, meaty, practical stuff on usability testing.

Ideas anyone?

Being brave & usability testing

Thursday, August 9th, 2007

I did something very brave and very scary recently. No, I didn’t skydive, bungy jump or ski – I sent out chapters of my card sorting book to colleagues to review. And in doing so, I realised something very important about usability testing.

The reason it was so scary to get review feedback on my book was that I was sending out something quite personal, and the people I sent it to are people I respect. I didn’t know how good the book was, and I was putting myself in a situation where my peers could have thought, ‘I thought she was smart, but what’s this rubbish? Maybe she’s not as smart as I thought’.

But I knew that the book would become better with input from smart people. And I knew that I wasn’t making a token effort – I was genuinely interested in the feedback and would do something with it. So I took a deep breath and sent it out.

I got a lot of good feedback, and my colleagues were honest enough to tell me the things that didn’t work as well as those that did. The feedback was constructive, nicely balanced and didn’t make me feel silly. I feel good about myself, and I know what to do to make the book better.

How does this fit with usability testing? In the past year or so I’ve been on the receiving end of some usability tests of my designs and have had the chance to read some done for a client.

Universally, they failed to acknowledge how hard it is to put something up for critique, or to respect the expertise and hard work of the client team. A few reports included a short list of ‘things that worked well’ (which felt like a token) and a long list of things that didn’t. Most reports I read included nothing about the good aspects, and no comments acknowledging the challenge of the situation. None noted that good things are often invisible while bad things stand out. And large parts of the reports dwelt on tiny, trivial things wrapped up in the guise of ‘usability problems’. And the recommendations…well, I won’t go there today.

If usability folks want to get their contribution acknowledged and become more involved in projects, they must start to think harder about the human aspects of their work – not on the user’s side, but on the client’s. They have to get off their high horse and acknowledge that loads of hard work has been done and significant problems have been successfully resolved. They can’t continue to report failures without reporting successes. And they have to identify the difference between observations and genuine problems.

It takes a brave person to put up their work for critique. Respect their skills, tell them what is great, and be constructive about the things that aren’t.

Writing memorable scenarios for usability testing

Wednesday, April 27th, 2005

Scenarios in usability testing

One of the most important aspects of running a successful usability test is getting the scenarios right. Making a mess of scenarios will, more than anything else, result in a usability test that is worthless or highly biased.

Good usability test scenarios can be hard to write – they have to be realistic, contain enough detail to be complete, be jargon-free, and not bias the participant towards a particular action.

Most importantly – and I cannot stress this enough – they have to motivate the participant to work as they would in a normal situation. I have seen and read results from tests where I could tell that the participants were just following the script – picking a couple of words out of the scenario and looking for those in the interface, or worse, typing them straight into the search engine. These aren’t bad behaviours in themselves, but in a usability test they indicate to me that the participant has not connected with the scenario enough for it to represent their normal situation. It is like painting by numbers. To replicate a real-life situation, the participant has to make a connection with the scenario and be motivated to complete it.

Memory

I’ve been doing a lot of reading about memory lately. One interesting finding (amongst many) is that stories containing content that triggers an emotional response, or very vivid details, are remembered and recalled better than those that don’t. The details themselves are often not recalled well, but the essence of the situation is. This holds even for stories that are unrelated to our personal experiences.

Linking the two

I’ve been leveraging this aspect of memory in usability tests recently. Instead of minimising the amount of information in a scenario, I’ve been enriching scenarios with vivid detail and emotional aspects. I include real names, products and places, and describe the situation in a lot of detail – think ‘your sister is getting married overseas in six weeks and you’ve just noticed your passport expired in March’ rather than ‘find out how to renew a passport’. The scenarios may be long, but the richness of detail means that the test participant visualises and connects with the entire scenario. When they approach the task, they aren’t trying to recall the detail – they are feeling the situation. They remember the essence of the scenario and work through that, rather than hunting for keywords to put into a search box. This means they are working more realistically, and we can put more trust in the outcome of the test.

This method works best for usability tests that contain a small number of scenarios. In order for it to work effectively, the participant must be given enough time to read slowly through the scenario and think on it a little before starting.

Try it. I promise you more realistic test results.

90% of All Usability Testing is Useless

Wednesday, June 16th, 2004

An excellent article by Lane Becker called “90% of All Usability Testing is Useless”. I have been thinking exactly the same thing (and writing it here and in other places).

Been thinking of writing something on the pitfalls of usability testing big, informational sites. Keep an eye out…

Selecting scenarios for a usability test

Tuesday, June 15th, 2004

Last week I used a new method of selecting scenarios for a client’s usability test. The test was for a system that would help frontline staff with client questions. We had an initial list of 30 scenarios that I thought looked good, but because this was my first exposure to the users, I wasn’t sure. I would normally select 10 or so scenarios, but I didn’t want to, in case the ones I picked weren’t realistic or representative.

So we printed all the scenarios on index cards. At the beginning of each usability test session, I asked the participant to sort them into 4 piles according to how often they were asked that question – frequently, sometimes, rarely or never. I also asked participants whether there were questions that were asked frequently but were not covered. This gave me extra information about what clients ask about, and confirmed that the scenarios were realistic. I then chose 4 scenarios from the ‘frequent’ pile and 4 from the other piles, making sure I didn’t double up on topics.
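
For the curious, here is roughly what that selection step looks like as a small Python sketch. The piles, scenarios and topics are invented for illustration – on the day I simply did the choosing by hand with the cards:

    import random

    # Invented pile assignments from one participant's card sort; each
    # card carries a topic so we can avoid doubling up when selecting.
    piles = {
        "frequent":  [("reset a password", "accounts"),
                      ("check leave balances", "leave"),
                      ("get a copy of a payslip", "pay"),
                      ("update bank details", "pay"),
                      ("find the complaints process", "complaints")],
        "sometimes": [("apply for a parking permit", "facilities"),
                      ("correct a recorded name", "accounts")],
        "rarely":    [("request archived records", "records")],
    }

    def pick(cards, n, used_topics):
        """Pick up to n scenarios whose topics haven't been used yet."""
        chosen = []
        for scenario, topic in random.sample(cards, len(cards)):
            if topic not in used_topics and len(chosen) < n:
                chosen.append(scenario)
                used_topics.add(topic)
        return chosen

    used = set()
    selected = pick(piles["frequent"], 4, used)          # core questions first
    selected += pick(piles["sometimes"] + piles["rarely"], 4, used)
    print(selected)

Weighting the ‘frequent’ pile first is the point – the test should spend most of its time on the questions staff actually get asked.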

This worked well – the participants were working with real scenarios so I was confident that their reactions were close to a realistic situation. Choosing more from the ‘frequent’ pile ensured that the usability test covered core questions.

Normally I wouldn’t use a large variety of scenarios with a small number of participants, as I would not get enough coverage to identify repeated usability issues. However, it worked in this situation because the test covered a small, reasonably homogeneous set of information. The test showed that the participants’ actions and usability issues were similar across different scenarios.

I bombed the usability quiz

Sunday, June 6th, 2004

Who am I to criticise…according to the HFI usability and web site quiz, I should be calling them for help. Did you know that:

“Usability testing for a Web site can be performed optimally with

  • a. An initial list of potential functions
  • b. Human task flow diagrams”

Wow – I’d love to know how to usability test with an initial list of functions, and human task flow diagrams are my favourite technique.

“when writing for the Web one should … a. Avoid paragraphs”. What a great suggestion. My enter key is getting way too worn out anyway.

“To satisfy both novice and expert users, the best strategy for label and field alignment is…b. Left align both fields and labels”. What does this have to do with novices and experts? And what do we do for the perpetual intermediates?

During usability testing it is OK to “Keep the testing situation as ambiguous as possible” but not OK to “Start out by showing the participants how the software works”. OK, I wouldn’t often go into a lot of detail about the software, but there are times when I would demo part of it, particularly in an early-stage, exploratory test. But I would never, ever keep the testing situation ambiguous!

Go try the tests – see if you are worthy.

How many participants?

Sunday, June 6th, 2004

This month’s article from HFI is about how many participants to involve in a usability test. And the answer is [insert drum roll here] 12 per user segment. Yay!

No matter what the answer is (Spool, Nielsen), the more important issue is that the question is wrong. The discussion about how many participants to include in a usability test is based on the premise that the ultimate goal of running a test is to identify as many usability issues as possible.
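
For background, answers like these generally rest on a simple cumulative-discovery model: if every participant independently uncovers a given problem with probability p, then n participants uncover a proportion 1 - (1 - p)^n of the problems. Here is a quick sketch, using Nielsen’s published estimate of p ≈ 0.31 – the model and the numbers are theirs, not mine:

    # Cumulative-discovery model behind the usual "how many users" numbers.
    # It assumes every participant independently finds a given problem
    # with the same probability p - itself a questionable assumption.
    def proportion_found(n, p=0.31):
        return 1 - (1 - p) ** n

    for n in (1, 5, 12):
        print(f"{n:>2} participants: {proportion_found(n):.0%}")
    # With p = 0.31: 1 participant ~31%, 5 ~84%, 12 ~99%.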

But finding every problem is not the goal of a usability test. It’s not even a sensible or realistic goal. Even if we could identify every usability problem, by the time we had fixed them all we would have introduced new ones. Then we would find all of the problems again, fix them, introduce new ones, and so on, until only a very small number of usability problems remained.

In reality, the most important goal of a usability test is to identify the main usability problems – the ones that affect all users, or that are high impact or high risk. We redesign to fix those and test again; by starting with a smaller focus, we can get to a good product more quickly.

As I mentioned a few days ago, big-bang testing is not the right way to usability test anyway. Usability testing is most useful as part of an iterative, user-centred design cycle. If you usability test as part of the design process rather than as a scientific experiment, you will have happy customers instead of statistical significance…I know which I’d prefer.

Usability testing: bias doesn’t matter

Thursday, June 3rd, 2004

Had a thought on the way home today…it actually doesn’t matter if there is a bit of bias in usability tests. All the effort we go to to make sure the test represents the real world – choosing the perfect set of participants, making the scenarios real and beautifully worded, introducing no bias – is all a bit of a waste of time really.

More important than running one perfect test is running multiple tests (and trying not to make them too imperfect). The only time to eliminate as much bias as possible is when there will be only one usability test – and a single test isn’t the right way to test anyway.

Usability testing is an inherent part of an iterative user-centred design process: research, design, test, design, test and so on. In this model, a bit of bias doesn’t matter. It is still important to choose the right type of test, to make sure the participants are in the user group, and to check that the scenarios aren’t leading. But a leading question here or there won’t hurt. The test won’t unravel on you. You have plenty more chances to explore design and usability issues.

Usability testing as a stand-alone process is wrong. The single test by a usability guru is a waste of money. Put the effort into research and iterative design instead…

How much to intervene in usability testing

Thursday, May 6th, 2004

A good article by Clifford Anderson in the STC Usability SIG newsletter about how much interaction is appropriate during a usability test.

I’m quoted in a couple of places in it. The funniest quote, for readers who know me personally:

As Donna Maurer puts it, “One of the hardest things to do is to learn when to keep quiet.”

UPA 2004

Saturday, March 13th, 2004

The UPA 2004 conference is open for registrations. This looks amazing!

Usability workshop

Monday, January 19th, 2004

I’ve been quiet here for a few days because I’ve been working hard writing material for the Intro to usability evaluation workshop that I’m running in March.

You should see this material. If I may say so (and I may, because it’s my blog), it’s pretty amazing – I’ve managed to write a good basic usability testing book.

Hmmm…now there’s a thought ;)

Five ways to identify intranet usability issues

Tuesday, January 13th, 2004

My latest article for work has been published:

“Five ways to identify intranet usability issues”.

Go read it – it’s good ;)

And here’s a link to my previous article, which is also good:

“Escaping the organisation chart on your intranet”

A new metric – restarts

Friday, November 7th, 2003

I discovered an interesting metric to collect for a usability evaluation of a site – restarts – the number of times the participant ‘starts again’ when trying to find information.

Usually I collect task success, time to completion, clicks (compared to the minimum), number of searches, and whether search led to success. The ‘restarts’ metric surfaced during the evaluation I’m working on at the moment – in a couple of cases, people tried 5-6 independent paths of attack to find what they needed. Interesting…
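
If you want to collect restarts too, a per-task record might look something like this minimal Python sketch – the field names are mine, invented for illustration, not taken from any particular logging tool:

    from dataclasses import dataclass

    @dataclass
    class TaskRecord:
        """One participant's attempt at one task (illustrative fields)."""
        task: str
        success: bool = False
        seconds: float = 0.0            # time to completion or abandonment
        clicks: int = 0                 # clicks actually taken
        minimum_clicks: int = 0         # shortest known path, for comparison
        searches: int = 0               # search queries issued
        search_succeeded: bool = False  # did a search lead to success?
        restarts: int = 0               # times the participant 'started again'

    record = TaskRecord(task="find the leave policy", minimum_clicks=3)
    record.clicks = 9
    record.restarts = 2   # two fresh attempts from the home page
    record.success = True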