I’ve been experimenting with a clever usability testing technique that uses human intelligence to collect quantitative data about a user’s experience. Traditionally, a development project gathers this data by conducting a study: recruiting participants to sit down in a room and perform a set of tasks. We observe the participants as they work and pay each a gratuity of $75 or so, but reserving a lab for hundreds of people gets expensive quickly. This is a neat alternative to an in-depth study.
Here’s how it works:
- Sign up for an Amazon Mechanical Turk account
- Embed tracking code into your web pages (I recommend Crazy Egg) so you can collect more precise data and present it as overlaid heat maps
- Instruct several “users” to go through a series of scenarios and watch the data come in
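The steps above can be sketched in code. This is a minimal, hypothetical helper for the Mechanical Turk piece: it builds the parameters for a HIT whose ExternalQuestion points workers at the page under test (which would carry your Crazy Egg tracking snippet). The URL, title, and defaults are placeholders of my own, not part of any real study.

```python
def build_hit_params(scenario_url, reward_per_scenario="0.05", assignments=100):
    """Build keyword arguments for MTurk's create_hit call.

    The ExternalQuestion XML sends each worker to scenario_url,
    the page carrying the tracking snippet.
    """
    question = (
        '<ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/'
        'AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">'
        f"<ExternalURL>{scenario_url}</ExternalURL>"
        "<FrameHeight>600</FrameHeight>"
        "</ExternalQuestion>"
    )
    return {
        "Title": "Complete a short task on our website",
        "Description": "Follow the scenario instructions on the page.",
        "Reward": reward_per_scenario,        # dollars, passed as a string
        "MaxAssignments": assignments,        # number of distinct workers
        "AssignmentDurationInSeconds": 600,
        "LifetimeInSeconds": 86400,
        "Question": question,
    }
```

A real run would then hand these parameters to the MTurk API, e.g. `boto3.client("mturk").create_hit(**build_hit_params("https://example.com/scenario-1"))`, assuming AWS credentials are set up.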
This technique is a nice way to analyze how users react to certain calls-to-action, navigation items, and placement of graphics and text. I’ve been using it to determine component placement via A/B testing, but it could just as easily be used to test how long it takes users to reach their destination, if they can figure it out at all. Just rotate two different designs across all of your participants and see which one performs better.
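As a quick sketch of the rotation idea (the function names and outcome format are my own, not part of any MTurk tooling): alternate participants between the two designs, then compare how often each group completed the scenario.

```python
from itertools import cycle

def assign_designs(participant_ids):
    """Rotate participants between designs A and B, one design each."""
    designs = cycle(["A", "B"])
    return {pid: next(designs) for pid in participant_ids}

def success_rate(outcomes):
    """Fraction of a group that completed the scenario.

    outcomes is a list of booleans: did the participant reach the goal?
    """
    return sum(outcomes) / len(outcomes)

# Rotate six participants: 0, 2, 4 see design A; 1, 3, 5 see design B.
assignment = assign_designs(range(6))
```

You would then feed each group’s completion results into `success_rate` and compare the two numbers, keeping in mind that small samples swing wildly.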
Did I mention this is basically free? It costs around $0.05 per scenario, per user. Testing 100 users could cost you around $5. Compared to a traditional test, you can save thousands of dollars just by adding this one technique to your lifecycle.
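The arithmetic is trivial, but as a sketch (the function name and defaults are mine, matching the rates above):

```python
def study_cost(users, scenarios_per_user=1, cost_per_scenario=0.05):
    """Total payout at $0.05 per scenario, per user."""
    return users * scenarios_per_user * cost_per_scenario

# 100 users, one scenario each: about $5.
print(study_cost(100))
```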
While I’d love to say that “the numbers don’t lie,” they can be very deceiving. Even though you can use this technique to track trends in human behavior, bad data doesn’t necessarily mean bad design. As effective as this can be, I want to stress that it is not a replacement for traditional user testing and should only be used to verify previously conducted research.
There are two reasons why you should conduct this test alongside other styles of usability testing. The first is that it lacks qualitative feedback. When you collect only quantitative data, you miss some very important pieces of information, like “wow” factors, user quotes, and moments of frustration. The second is that you’re not actually testing authentic users, so the data may not be representative of your website’s audience. It does, however, do a great job of measuring user behavior.
I think it’s a very big mistake to neglect usability testing. This is a great way to incorporate real data into your development efforts.