Clients without deep interface design experience look to UX professionals as experts, assuming that we can simply look at an interface and understand what’s wrong with it. Yes - it’s true that we often have more experience with the design process. But it’s tremendously hard to understand the business, technology, and design constraints that a particular UI was developed under. Without that context, objectively judging whether a design is effective is a tremendous challenge. Yet it is a common expectation of a heuristic review.
The ask often takes this form:
The worst heuristic reviews are requested as a sort of tie-breaker for long-running turf battles. Design critique is an easily accessible pastime for any sighted person, and even more so for product owners. A request like this can be a symptom of larger problems rooted in communication and goal alignment, not in accessible copy and form label alignment.
In this sort of research project, an expert review can help uncover sticky areas of a site that have sometimes been overlooked for years. It’s massively valuable for quickly orienting to the content and structure of a web property, and for defining the design values you’re trying to align to; at its best, it starts as an exercise in designing the process itself.
Like any type of audit - security, language, content, code re-use, accessibility, usability, and the list goes on - you typically end up with a report of many pages, many colors, and lots of technical terms that tend to promote glassy-eyed gazing. It’s another huge challenge to make a review actionable, and to sustain the determination to execute on the feedback.
An average review is a graded survey against a number of clearly defined “rules of thumb” like these. UX professionals spend time familiarizing themselves with the site, and then supply a score for each design heuristic. The list of heuristics easily runs past 200 items, and the prompts for reviewers look like this:
- Fields in data entry screens contain default values when appropriate and show the structure of the data and the field length
- Website accommodates adjustable font sizes without compromising the layout
- There is sufficient space between targets to prevent the user from hitting multiple or incorrect targets
For each guideline, we assign a priority and a score for how well the site executes against it, which gives us a metric to track. In the end, we’re really just looking for strengths to celebrate and problems to address.
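To make the scoring concrete, here’s a minimal sketch of that priority-and-score structure. The data shape and names are invented for illustration - they don’t come from any real review toolkit.

```python
from dataclasses import dataclass


# Hypothetical record for one scored heuristic; fields are illustrative.
@dataclass
class HeuristicScore:
    guideline: str  # the rule of thumb being graded
    score: int      # 1 (poor) to 5 (excellent)
    priority: str   # "high", "medium", or "low"


def summarize(scores):
    """Average the scores within each priority bucket, so strengths
    and problem areas stand out at a glance."""
    buckets = {}
    for s in scores:
        buckets.setdefault(s.priority, []).append(s.score)
    return {priority: sum(v) / len(v) for priority, v in buckets.items()}


review = [
    HeuristicScore("Default values shown in data entry fields", 4, "medium"),
    HeuristicScore("Layout survives adjustable font sizes", 2, "high"),
    HeuristicScore("Sufficient spacing between tap targets", 3, "high"),
]
print(summarize(review))  # {'medium': 4.0, 'high': 2.5}
```

Even a toy summary like this turns a 200-item survey into a handful of numbers a team can actually discuss.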
A better review makes an effort to collect data points from different reviewers, at regular intervals, and then supplies a targeted set of recommendations. You won’t get much return on rewriting the entire interface just to make the text scalable, but there are approaches that can work.
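Combining scores from several reviewers across review rounds can be as simple as averaging per period. A sketch, with an assumed data shape (period → reviewer → score for one heuristic):

```python
from statistics import mean

# Illustrative data: two review rounds, two reviewers, one heuristic.
rounds = {
    "2023-Q1": {"reviewer_a": 2, "reviewer_b": 3},
    "2023-Q3": {"reviewer_a": 3, "reviewer_b": 4},
}

# Average across reviewers to get a per-period trend worth tracking.
trend = {period: mean(scores.values()) for period, scores in rounds.items()}
print(trend)  # {'2023-Q1': 2.5, '2023-Q3': 3.5}
```

The point isn’t the arithmetic; it’s that repeated, multi-reviewer scoring gives you a trend line instead of one person’s opinion.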
The best reviews aren’t single events, but part of a designed process where the heuristics are tied to the specific needs of the target audience, or to personas for the site. We used one review to supply topics for future research, plus a backlog of simple development tickets that could be addressed in an iterative process.
I’ve done a number of these, and I’ve learned that sometimes the proof is not in the pudding. The review itself, like any artifact, isn’t where the real value is found. The real value is in the discussion, in the visualization of all the dimensions of an experience, and in the opportunity to build a common understanding of what is important about an online experience.