User Experience Index

Wade Shearer
9 min read · Apr 26, 2020


I believe that the two greatest challenges in product discovery are understanding the needs, desires, challenges, and limitations of customers; and quantifying qualitative learnings so that they can be tracked and measured. This work is costly and fraught with subjectivity.

Product Management, Design, and Development teams are constantly fighting personal bias. Consciously or not, everyone hears what they want to hear, and human memory is flawed. Even the most expert practitioners, who are able to achieve superhuman levels of abstraction, are limited by what they have previously experienced and can recall. Learning is a process that builds upon itself, so one’s ability to comprehend and interpret will always be tinted in some way, positively or negatively.

And if that wasn’t frustrating enough, the act of observing and measuring changes the results. Reasons vary from the subject feeling important for being singled out, to a desire to appease the researcher, to the researcher’s presence simply altering the environment.¹ It’s also been proven that as we increase the precision in measuring one quantity, we are forced to lose precision in measuring another.² So, the closer we get, the less accurate and less clear the results will be.

Ethnography is still the most effective form of research, though, so you shouldn’t let the challenges discourage you. The key is to understand the limitations of each methodology and how to combat them, to be aware of the individual strengths and weaknesses of your team, and to practice.

Just observing your customers isn’t enough, though. You need to get inside their heads and understand their emotions and motivations. This is why I am such a strong advocate for contextual inquiry. It is expensive, in terms of both time and resources, but the most valuable and rewarding things in life always are. I strongly believe that regular customer exposure is the prime attribute of successful product teams.

Until the technology exists to read each other’s minds, though, how can you know what a customer is thinking?

Ask them!

That may have given you pause. Surely by now everyone knows that you’re not supposed to ask customers, right? But really? Not anything? I believe that the strong proselytization against asking customers what they want has discouraged teams from talking to customers at all.

Here are some questions customers are often asked:

  • How likely are you to recommend this product or service to a friend?
  • Do you think that you would use this product frequently?
  • Do you think that you would need the support of a technical person to be able to use this product?
  • Do you imagine that most people would learn to use this product quickly?
  • Would you likely use this product in the future?

The underlying problem with these questions is that you’re asking people to predict the future or to state what other people would do—something humans aren’t very good at. “How likely are you…?” “Do you think…?” “Would you likely…?” “Do you imagine…?” How vague, ethereal, and non-committal. How about a simple yes or no about the here and now? Did you like it or not? Did you get value out of it or not? Will you pay for this? Right now?

Let’s single NPS out for a moment. It’s supposed to be about loyalty and growth. The problem is that the survey doesn’t ask about personal investment or sacrifice. In fact, it doesn’t even ask about loyalty. Asking someone what they’ll do in the future is asking them to attempt to predict the future. That’s just optimism, and most often it will be colored by pressure to be nice. If you really want to determine how loyal customers are, you should ask them how often they have actually recommended your product.

Jeff Gothelf, product consultant, author, and speaker, sums it up well:

Asking people to predict the future—what the NPS conversation is all about—is highly risky. In the future we always make the best decisions, we never make mistakes and we don’t make people feel bad. However, measuring, getting stories about or better yet, observing first-hand, recent behaviour in a particular product or service is far more telling about how well you’re meeting customer needs and whether it’s enough to retain those customers and attract new ones.

Ask them

So, what should we ask customers, and when? There are three settings in which I recommend talking to customers: interviews, contextual inquiry sessions, and post-test questionnaires for usability or preference studies.

  1. Interviews. These are best done before you have a design, so they can inform job stories. While all research is best done in person, in the user’s native environment, interviews are more easily done remotely than contextual inquiry sessions, so you don’t have any excuses. The format is a guided interview, but the user should do most of the talking.
  2. Contextual inquiry sessions. You should be silently observing for most of a contextual inquiry session, but it is appropriate and effective to supplement your observation with prompts for descriptions of the tools they are using, roadblocks, compensating behaviors, and what they are thinking.
  3. Usability or preference post-test questionnaires. Do not ask the questions or share them with the user beforehand, or you will taint the test, causing them to approach the tasks you give them differently or, at a minimum, to be distracted or to experience unnecessary anxiety.

A thorough review of the strengths and best practices of each of these is beyond the scope of this article, but it’s paramount that they each be conducted one-on-one (with the exception of a note taker). Avoid focus groups like COVID-19. The goal of each of these techniques is to gain insights into the activities or work that the customer performs (their “job to be done”), the challenges they have, what works well and what doesn’t, what is important, and what ideas for improvement they may have. Note that we’re not asking them to solve the problem, but everyone is a designer at heart and has a solution in mind, and the pain that inspired that solution is product discovery gold.

I love the way that Nate Walkingshaw, Chief Experience Officer at Pluralsight, describes the process of digging into the mind of a customer:

Interviewing someone in order to walk a mile in their shoes takes practice. It is part science and part inter-personal skills. Today I am giving you [function, usability, technical, environment, interaction, and support] for the discussion guide: areas that are important to explore with the person you are interviewing. Ultimately, the intent is to leverage these in a free flowing conversation, so you can pick up as many of these items as you can along the way. I allow the user to wander deep into their psychology about the context we are exploring, if it pertains to what I am trying to solve. It is important to remember every question can build upon itself, which at times can take the interview to new and strange places often extending your 60–90 minute window.

I have witnessed far too many precious usability sessions wasted because the product manager or designer was too focused on the test itself. Remember, what the user is thinking and feeling is more valuable than whether they successfully completed the test. Let the adage, “you have two ears and two eyes, but only one mouth; use them proportionately,” guide you not just through the session, but even after it has concluded. Not asking questions leaves some of the most valuable insights undiscovered.

There are many popular questionnaires for test-level studies:

  • SUS: System Usability Scale (10 questions)
  • SUPR-Q: Standardized User Experience Percentile Rank Questionnaire (13 questions)
  • CSUQ: Computer System Usability Questionnaire (19 questions)
  • QUIS: Questionnaire For User Interaction Satisfaction (24 questions)
  • SUMI: Software Usability Measurement Inventory (50 questions)

Interpreting and measuring

You can’t act on what you don’t know, and you can’t track what you don’t measure. This is why effective knowledge transfer and the quantification of qualitative data are crucial for product teams. You must have a standardized way to collect, synthesize, measure, and share your learnings.

The key to successfully utilizing your new knowledge is interpreting learnings and making decisions on how to proceed. This requires consensus. It’s difficult to hit a KPI with a team that isn’t in agreement about the goal or the data it represents. At Workfront, we set out to determine the best method for doing this. In addition to ease of use, we also wanted to measure users’ impressions of overall value and delight. It was critical that we have baseline values and the ability to measure progress toward success.

We administered a formalized questionnaire to each test participant at the end of the test session, and randomly to current users within the product. We used it both during the prototyping and usability-testing stage and with existing products. The available answers were structured so that the scales all point in the same positive direction, allowing us to calculate an overall numeric score, or index.

It’s important to note that this is signal, not answers. The index allowed us to pinpoint where a solution was falling short, especially in comparison to baselines that had already been established. It provided trend lines that we could use as guardrails. It also helped us identify, quantify, and communicate the state of the user experience to stakeholders, and to get clarity about our positioning and competitive advantages.

Why did we create something new? The existing solutions were all too long, redundant, complicated, or expensive, or they asked the user to predict the future. We wanted something short that would be easy to administer, so that it would be done consistently. It needed to be brief enough that users would be willing to complete it. And finally, it needed to be easily quantifiable.

We crafted three simple questions, each with a five-point scale as answers:³

  1. Thinking about the feature/product you just used, how well did it meet your needs? (value)
  2. How easy was it for you to complete the tasks we asked you to do? (ease of use)
  3. How enjoyable was it to use this feature/product? (delight)

We simply called this the User Experience Index. We had no intent to brand it or officially publish it as a method. No official research was conducted on the method itself (kind of meta, if you think about it) or to compare its effectiveness against other methods. We were continually evaluating it and open to it evolving, but for us, as of this writing, it is working. The primary reason we found success with it is simple: we used it.

Below is the complete survey. The UXI score is the average of the answers to the three questions, so a feature or product that provides optimal value, usability, and delight would have a score of 5.
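
To make the arithmetic concrete, here is a minimal sketch in Python, assuming each answer is recorded as an integer from 1 to 5 (the function name is mine, not part of the method):

```python
# A minimal sketch of the UXI calculation, assuming each answer is
# recorded as an integer from 1 (worst) to 5 (best). The function name
# is illustrative, not part of the method itself.

def uxi_score(value: int, ease: int, delight: int) -> float:
    """Average the three five-point answers into a single index."""
    answers = (value, ease, delight)
    if any(a < 1 or a > 5 for a in answers):
        raise ValueError("each answer must be between 1 and 5")
    return sum(answers) / len(answers)

# "Met most of my needs" (4), "Very easy" (4), "Extremely enjoyable" (5)
print(round(uxi_score(4, 4, 5), 2))  # 4.33
```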

Here are a few final reminders and notes:

  • It’s important to be consistent, in both administration of the questionnaire and measuring the results. Be careful to ensure that you’re comparing apples to apples.
  • Only administer this survey after a prototype test or after actual use of your product. It’s invalid to simply show people some screens and ask them how much they like them.
  • If people haven’t actually used your product to perform realistic tasks, they can’t predict how satisfied they would be when using it.
  • This is test-level measurement, not task-level. A task-level questionnaire is administered after the completion of a task (whether or not the goal was met) and attempts to gauge and quantify how difficult or easy that particular task was in a user test. Test-level satisfaction measurement, on the other hand, is administered at the end of a session and measures the user’s overall impression of the usability of the experience.

Thinking about the feature/product you just used, how well did it meet your needs?

  1. Met none of my needs
  2. Met some of my needs
  3. Met many of my needs
  4. Met most of my needs
  5. Met all of my needs

How easy was it for you to complete the tasks we asked you to do?

  1. Not at all easy
  2. Slightly easy
  3. Moderately easy
  4. Very easy
  5. Extremely easy

How enjoyable was it to use this feature/product?

  1. Not at all enjoyable
  2. Slightly enjoyable
  3. Moderately enjoyable
  4. Very enjoyable
  5. Extremely enjoyable
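
To illustrate how responses like these roll up into a trackable index, here is a hypothetical sketch; the sample data and variable names are assumptions for the example, not part of the original method:

```python
# A hypothetical sketch of aggregating completed surveys into a
# session-level UXI that can be tracked as a trend line. Each answer
# label maps onto its position in the five-point scales above.

from statistics import mean

# Each tuple is one participant's encoded answers: (value, ease, delight).
session_responses = [
    (4, 4, 5),  # met most needs, very easy, extremely enjoyable
    (3, 5, 4),
    (5, 4, 4),
]

participant_scores = [mean(r) for r in session_responses]
session_uxi = mean(participant_scores)
print(f"Session UXI: {session_uxi:.2f}")  # Session UXI: 4.22
```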

[1] Hawthorne Effect and The Observer Effect

[2] Heisenberg’s Uncertainty Principle

[3] Inspired by the Forrester CX Index. I would like to thank Ingrid Cruz for first introducing it to me.


Wade Shearer

Vice President of User Experience at Workfront, Cofounder of Front. www.wadeshearer.com