They just lost an eager customer
December 12, 2012
I’d read about Three Buoys, the new fish restaurant that recently opened a few blocks from my office. It sounded like a wonderful place for simple fish sandwiches, and for more sophisticated seafood preparations as well. So, I thought this morning, why not try it for lunch today?
I was warmly welcomed, but then presented with a dense page of typewritten text that must have contained at least 100 different menu items. All were priced well above what I wanted to spend for lunch, and most were not seafood.
Yes — I could have scanned the menu, checking out the seafood items. There probably were some that would have interested me, and at a price not too much above what I expected to spend on lunch. But the very fact that there were so many items on the menu was proof positive to me that this place couldn’t be doing anything very well.
My “proof positive” may have been completely wrong. I might have missed a taste thrill for lunch today. But that’s not the point. What’s interesting is that I entered the restaurant wanting to buy something, and I left feeling upset that this mission was so hard. They might have lost not just my one purchase today, but a loyal customer for years to come.
What might have helped turn me into a real customer?
- A menu sorted by kind of item: perhaps fried seafood, broiled seafood, soups, salads, meat, poultry, etc.
- A menu that was just shorter.
- A server who offered to help me pick out something from the long menu at this new place.
- A page of lunch specials — perhaps only slightly cheaper than the full menu, but much more approachable.
Also, as I left, somebody might have asked, “We’re sorry you’re leaving . . . What were you looking for that you didn’t see on the menu?”
I want this little place to succeed, and may go back to offer my feedback — for what it’s worth. But if they are not querying their customers (or would-be customers), there’s not a lot of hope that they will get it right.
Assessing web site usability
October 4, 2012
There are lots of tests to ensure that web sites have readable type, clearly delineated links, reasonable numbers of elements per page, etc. Web sites can be assessed according to various standards of accessibility, such as for people with motor or visual handicaps. All the information gathered from such tests can be useful – but doesn’t by itself answer the important question, “Does the web site work?”
A web site works when users feel comfortable navigating it, find themselves engaged in the experience, and are able to find the information or understanding that they want. It works when users are spared those moments of fear during which they are not sure how to proceed, afraid that they will lose their place in some way. It works when the users’ experience is enjoyable, and doesn’t end when the most immediate goal is reached.
But – perhaps most important – a web site “works” when the user is engaged in the virtual conversation that the site owner has tried to create. This might be “Let us help you find the software you need to get your printer working”, or “. . . find the car you need, and can afford”, or “. . . sign up for the education program that will help you meet your life goals.” Of course, a specific client’s goal may be quite different from these examples.
What connects all of these conversations is that they have to do with more than just information – although information is important. They are about a user experience that promotes engagement, cements a relationship with the vendor or provider, instills confidence, and, often, results in continued sales. A colleague of mine once said, “If you want to use social media, you need to be social”, and I find this dictum a very helpful guiding principle for all web development and evaluation.
Imagine a web page for a car manufacturer that offers three choices:
• Daisy models
• Tulip models
• Amaryllis models
While these names may be perfectly clear to those very familiar with this carmaker’s line, they would probably intimidate many users. “How do I know where to begin?”, such users would ask themselves, and would then feel that they are just guessing among the three.
Now consider an improvement on this:
• Our basic line – the Daisy series
• Adding features and elegance – our Tulip series
• The car you’ve dreamed of owning – the fine Amaryllis series
This removes the ambiguity for users not familiar with the car models. In that sense it’s probably “correct”. But what kind of relationship does it establish with the user? What’s the conversation? It’s simply, “We have these cars. You can learn about them here.” That’s not the conversation that will create eager buyers, or will sell many cars.
So, let’s imagine a stronger approach, designed to really engage the user:
• Configure your Daisy model – a basic car, for any budget.
• Configure your Tulip – offering you more comfort, style, and class.
• Configure your Amaryllis – and be so proud of the car you’ll be driving.
Here we have a strong invitation to the website user to really try out one of these cars, start looking at colors, options, etc. The language here may not be exactly right, but I expect most of us would still find this third option the most likely one to win friends and initiate sales. It invites a relationship that must, of course, be continued in the rest of the web site interaction.
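To make the contrast concrete, here is a minimal sketch in TypeScript of how that conversational copy might live alongside the navigation data. This is only an illustration, not any carmaker’s actual code: the series names come from the example above, and the URLs and the renderNav helper are invented.

```typescript
// A minimal sketch: each navigation entry pairs its destination with the
// conversational copy that invites the user in. The series names come from
// the example above; the URLs and renderNav helper are hypothetical.
interface NavChoice {
  series: string;      // internal model-line name
  invitation: string;  // the user-facing conversation starter
  href: string;        // hypothetical configurator URL
}

const choices: NavChoice[] = [
  {
    series: "Daisy",
    invitation: "Configure your Daisy model: a basic car, for any budget.",
    href: "/configure/daisy",
  },
  {
    series: "Tulip",
    invitation: "Configure your Tulip, offering you more comfort, style, and class.",
    href: "/configure/tulip",
  },
  {
    series: "Amaryllis",
    invitation: "Configure your Amaryllis, and be proud of the car you'll be driving.",
    href: "/configure/amaryllis",
  },
];

// Render the choices as plain HTML links. Because each invitation lives in
// the data, the copy can be reviewed and revised without touching markup.
function renderNav(items: NavChoice[]): string {
  return items
    .map((c) => `<a href="${c.href}">${c.invitation}</a>`)
    .join("\n");
}

console.log(renderNav(choices));
```

The design point is simply that the engaging language is treated as data in its own right, so it can be reviewed, tested, and improved apart from the markup around it.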
In the examples above we can see at least three aspects of web site usability:
• Users can proceed with clarity and confidence (not made to feel foolish).
• Users learn relevant information about the product or service.
• Users are drawn into a conversation, engaging with the vendor.
How can we assess these in a systematic way? As a skilled practitioner, I can certainly review a web site, and offer much constructive feedback. Indeed, much of my role is in offering such expert critique or suggestion.
But such one-person theoretical review has strong limitations. The real test is how the web site works when actually used by typical users. (I may resemble the “typical” printer user or car buyer, but I’m certainly not the typical prospect for a vocational college.) My method is simple to understand, but logistically can be quite complex.
1. Clearly identify the persona to be used in testing. (This should have happened during web site design, but often it does not.)
2. Define a test script, which the subjects will be asked to perform. (This may be finding some information, assessing several institutions, learning a skill, etc.)
3. Determine a performance test that will be used after the session to see what the subject has learned, their inclination to proceed with the content, and, if there is a sales objective, their inclination to consider a purchase.
4. Find the test subjects, using the criteria identified in (1) above. Typically subjects will be paid for their time.
5. Conduct the test, simply watching each subject, with no intervention. Sometimes we will video the test as well.
6. Conduct the test again, but asking the subjects to annotate their behavior: at each step, say what they are doing, why, and what kind of response they are seeking.
Note that we are never correcting or guiding the subjects – with one exception: If they appear to be lost, we may inquire what they are seeking. We will not answer their question, but will record in detail the dilemma the user reported.
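For those who want to capture these observations in a structured form, here is one possible shape for a session record, again sketched in TypeScript. This is only an illustration: the method itself prescribes a process, not a data format, and every name below is invented.

```typescript
// One possible shape for a test-session record. These field names are my
// own invention; the method above prescribes a process, not a data format.
interface Observation {
  step: string;          // what the subject was doing at this moment
  statedGoal?: string;   // their annotation: the response they were seeking
  dilemma?: string;      // recorded verbatim if the subject appeared lost
  timestamp: Date;
}

interface TestSession {
  persona: string;             // from step 1: the persona used for recruiting
  script: string;              // from step 2: the task the subject performed
  annotatedPass: boolean;      // false for the silent pass (step 5),
                               // true for the think-aloud pass (step 6)
  observations: Observation[];
  performanceResult?: string;  // outcome of the post-test check (step 3)
}

// Example record from a hypothetical session:
const session: TestSession = {
  persona: "prospective vocational-college student",
  script: "Find the admission requirements for the nursing program.",
  annotatedPass: false,
  observations: [
    {
      step: "Opened the Programs menu, then hesitated on the landing page",
      dilemma: "Could not tell whether 'Admissions' covered program requirements",
      timestamp: new Date(),
    },
  ],
};
```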
On occasion, we’re called upon to review not just a web site in isolation, but its performance relative to the sites of competing vendors. This might involve simply repeating the test on several sites, or we may devise particular performance tests that measure how subjects rate the various vendors based on the web site experiences.
What I’ve described here may seem quite different from the more analytical evaluation processes often used by other usability consultants. I prefer this holistic approach, in which web sites are evaluated primarily by their performance rather than by an enumeration of characteristics.
Only after going through the testing process might I want to review the statistical data offered by such tools as Google Analytics. These tools are particularly helpful for identifying how users arrive at the site and where on the web site they tend to go. But the tools offer little guidance about the user experience, motivation, relative ease or frustration, etc.
In summary, I recommend, and practice, a holistic evaluation of web sites, in which behavioral goals are clearly identified, and in which silent observers watch users during real interactions with the web site, or interact with them only to understand the user’s experience more completely. Web sites work when they create and engage users in a productive conversation.
Postscript: Usability review is not design review. I’m a very visual person, and appreciate fine typography, uncluttered layout, elegant design. I’d like to believe that these are an important part of web site success. But data suggests that they may not be as important as I would like. In any case, the tests that I’m describing here evaluate how users behave when working with the site, and not how the site appears to its designers or critics.
The everyday tasks of leadership
September 6, 2012
I was so impressed with this article by George Ambler that I’m posting a link to it here: The everyday tasks of leadership.
Ambler talks about three major leadership tasks:
- Setting direction (mission, vision, values)
- Building commitment (trust, accountability, cooperation)
- Creating alignment (common ground, shared responsibility)
Are there others that he left out? Do all the significant leadership tasks fit into these three themes? Please share your comments on this blog, and let’s make this a fruitful discussion.