How UX research prevents expensive errors in products
UX research lets the team test basic product hypotheses at the earliest stage, even before developers join the process. As a result, the team can adjust the UX approach at the “foundation pit” stage, when fixing an error does not require overhauling the entire building. We will explain how we conduct UX research and what it requires.

The True Engineering UX laboratory brings together PMs, designers, analysts, and marketing specialists. Its members develop the company’s UX expertise and apply it to interfaces that address clients’ business needs.
The aim is to better understand how users perceive the interface, what difficulties they run into, and why. For product owners, this is a chance to make sure clients understand what their product is for. For designers, it is a chance to get swift user feedback: they can watch live as real, unprepared people interact with their layouts. And there is no need to involve front-end developers.
When it comes to complex services, the team has to work out which scenario will be intuitive for clients. Research participants are immersed in a context and asked to solve a problem, and then we observe their actions. Users’ logic may differ greatly from the developers’ assumptions, and this is normal. We need to understand why a person acts one way or another, and what motivates or confuses them.
An imprecisely worded task may lead users in the wrong direction, or the information available may be insufficient to solve it. Alternatively, a part of the interface may turn out to be too unorthodox for the target audience. Of course, the customers’ logic may also coincide completely with the intended flow. Only research can determine this for sure.
It is important to realize that there are no “bad” or “wrong” answers in UX testing. In a certain sense, the worse a person performs in a test, the better for us: it means we are catching the most severe issues, so real users will never face them. This doesn’t mean, however, that we give respondents deliberately broken prototypes.
Research methods
In the early development stages, it’s best to use qualitative research, which answers the “how” and “why” questions. How does the user follow the scenario? Why do they open this particular section to find the information they need? What would be a convenient way for them to reach the settings? Why would another way be inconvenient?
There are various qualitative research techniques: in-depth interviews, focus groups, and others. For our purposes, the most suitable method is moderated usability testing:
- We create an interactive prototype using mock-ups and ask the respondent to complete several tasks directly related to the interface.
- We watch the user, listen to them, and ask questions.
- We ask them to evaluate the steps taken and explain their assessment.
At any moment, the moderator can ask the respondent a question or clarify something, and can also talk with them after the test. The more accurate and well-timed the questions, the more effective and valuable the test results. Usability tests can be run without a moderator, but the results would be less detailed.
The method lets the team quickly obtain first feedback from potential or real customers. And if the product is already live, it shows what can be improved.
Now let’s look at how such research is structured.
Formulating an objective and hypotheses
The objective is the key question we want to answer. It can be general, for example, whether users can handle insurance registration via the app, with yes or no as possible answers. There can also be more precise objectives: for example, that registration takes no more than 15 minutes while users still notice certain special conditions and other factors. The objective is then split into hypotheses, which we test during the research.
Examples of hypotheses: the client understands what this button does, knows how to select the conditions they need on the screen, and notices how various factors affect the price. Together, the answers to these small questions make up the answer to the main question defined as the research objective.
Defining research metrics
- Task success. The user completed the task with virtually no problems (100%); problems arose, but the goal was reached (50%); the task wasn’t completed (0%).
- Subjective ease rating per stage. The user rates the interface from 1 to 5 and explains their reasoning. This is usually where we get the most valuable information: respondents highlight the most successful UX solutions and share what was convenient and what wasn’t.
- Issue frequency. Determined from the test results: we record which issues arose and how often.
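Once each test session is recorded in a structured form, these three metrics are straightforward to aggregate. Below is a minimal sketch; the data layout, task name, and field names are our illustrative assumptions, not a prescribed format:

```python
from collections import Counter
from statistics import mean

# One record per respondent per task; all names here are illustrative.
# "success" follows the scale above: 1.0, 0.5, or 0.0.
sessions = [
    {"task": "select conditions", "success": 1.0, "ease": 5, "issues": []},
    {"task": "select conditions", "success": 0.5, "ease": 3,
     "issues": ["price factors unclear"]},
    {"task": "select conditions", "success": 0.0, "ease": 2,
     "issues": ["price factors unclear", "button hard to find"]},
]

def summarize(records):
    """Aggregate task success, average ease rating, and issue frequency."""
    return {
        "task_success_pct": 100 * mean(r["success"] for r in records),
        "mean_ease": round(mean(r["ease"] for r in records), 1),
        "issue_frequency": Counter(i for r in records for i in r["issues"]),
    }

print(summarize(sessions))
# -> task success 50.0%, mean ease 3.3, "price factors unclear" seen twice
```

The `Counter` makes the most frequent issues immediately visible, which is exactly what the third metric asks for.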
Selecting the audience
When selecting respondents, the following details matter: gender, age, city of residence, income level, experience with IT services, and the devices they use. Naturally, to keep the experiment clean, we don’t involve people from the development team or the IT sector. We also try hard to reach the target audience; for example, there is little sense in testing premium services on students.
Curiously, 5–8 respondents are enough for qualitative research. UX experts have found that as the number of participants grows, the issues identified by the test start to repeat, and the researcher gets no new information. In quantitative studies, by contrast, the number of respondents should reach tens, if not hundreds.
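This diminishing-returns effect is often described with the well-known Nielsen–Landauer model: the share of usability problems found by n participants is roughly 1 − (1 − p)ⁿ, where p is the probability that a single participant encounters a given problem (about 0.31 in Nielsen’s classic data). A quick illustration of why 5–8 participants cover most issues:

```python
# Nielsen-Landauer model of problem discovery in usability testing:
# found(n) = 1 - (1 - p)**n, where p is the probability that a single
# participant hits a given problem (~0.31 in Nielsen's classic data).
def problems_found(n, p=0.31):
    return 1 - (1 - p) ** n

for n in (1, 5, 8, 15):
    print(f"{n:>2} participants -> {problems_found(n):.1%} of problems found")
```

With the default p, five participants already uncover roughly 84% of problems and eight about 95%, so additional respondents mostly rediscover known issues.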
Creating a context
The key to doing research correctly is the context in which we immerse the users. We don’t simply hand respondents an app prototype; we paint a picture so that they can experience what future users would. For example: “You’re heading to work and getting into your car. You have to use the road service you downloaded yesterday. What are your actions?”
The context affects the product interaction experience. It’s important that the context we create is as close as possible to real clients’ stories, since usage scenarios may vary depending on the situation.
Preparing a script
The research scenario is the same for all participants. The moderator explains how the test will be conducted and describes the product and the context.
The scenario includes tasks for the respondent to complete and questions for the moderator to ask at each stage. It’s crucial that the task wording doesn’t point the user directly to specific functions; they have to find the solution themselves.
So, we don’t tell the respondent: “You need to disable the service in the settings. How are you going to do it?” Instead, the task is presented like this: “You’ve experimented with the service and realized it’s no longer relevant for you. What are your actions?” The user then has to work out on their own that this is done in the settings, go there, and find the required option. If they fail, this is a signal for the designers.
Gathering and analyzing the results
When all the interviews are over, we collect the data from all respondents into a single database. To understand whether the research was successful, we answer the following questions:
- Are the hypotheses confirmed or refuted? The answers are grouped by hypothesis; then we make sure we’ve collected enough data on each point.
- Which parts of the scenario were the most convenient for users, and which caused the most problems? At this stage we don’t yet speculate about why this happened or what to do about it; only the facts matter.
- What difficulties did respondents mention during the test? We compile a list of everything that was unclear, all the counter-intuitive steps and dead ends, along with the subjective ease ratings, their reasoning, and other comments.
Finally, we break the data down by scenario stages and tasks to see where difficulties arose and what they were. For each step, we calculate the percentage of people who completed it along with an overall ease rating. The completion percentage and the ease rating may not correlate: in one of our projects, respondents easily completed a task but admitted that under real conditions they would surely have forgotten about this function.
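The per-step decomposition can be sketched the same way: group the raw results by scenario stage and report completion and ease side by side, since the two may diverge. The stage names and data shape below are invented for illustration:

```python
from collections import defaultdict
from statistics import mean

# (stage, completed, ease 1-5) per respondent; stages and data are invented.
results = [
    ("open settings", True, 4),
    ("open settings", True, 5),
    ("disable service", False, 2),
    ("disable service", True, 3),
]

# Group every (completed, ease) observation under its scenario stage.
by_stage = defaultdict(list)
for stage, completed, ease in results:
    by_stage[stage].append((completed, ease))

report = {
    stage: {
        "completion_pct": 100 * sum(c for c, _ in rows) / len(rows),
        "mean_ease": mean(e for _, e in rows),
    }
    for stage, rows in by_stage.items()
}

for stage, stats in report.items():
    # Completion and ease are printed together because they may not correlate.
    print(f"{stage}: {stats['completion_pct']:.0f}% completed, "
          f"ease {stats['mean_ease']:.1f}/5")
```

A stage with high completion but a low ease rating (or the reverse) is exactly the kind of divergence worth flagging for the designers.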
With this information, we get a comprehensive picture of how the scenario plays out. We can now draw up suggestions for improvement, and we understand what issues the product has and how to prioritize them.
In addition, we can assess whether reworking a complex scenario will pay off at this stage or whether it can be postponed. In one of our projects, UX research on a radically new product showed that users could not figure out one of its two basic operating modes. We suggested removing that mode from the MVP to focus on a single scenario for the time being and refining the second one in the next project stage.
UX research start checklist
To test a new product’s UX, the team needs the following data:
- Product description. What specifically are we going to test?
- Research target. What will the results affect? What decisions will the team make using this data?
- Scenario. What steps does the user take to reach their target? One product may involve any number of scenarios.
- Hypotheses and questions. What are the team’s doubts? What should be clarified in the scenarios? What assumptions are there regarding the interface issues? Which essential questions remain unanswered?
- User description. Which users matter for this research (role, organizational features, industry, service activity)? What tasks do they handle via the service? Are there any loyal users? What are their contact details?
- Timeline. When are the research results needed?
A product team will have no problem answering these questions. After that, it’s a matter of technique for the UX experts: they’ll select the appropriate methods, recruit a respondent group, conduct the research, and help interpret the results correctly.