Some time ago, at a conference, someone asked me: if you could only use one UX research methodology, which one would it be? At the time, the question seemed to make little sense: why should we limit ourselves to a single technique? That was not how I thought research should be approached, nor how I’d been taught to understand it.
The more I thought about it, though, the more I realised what the question was really about. It’s what we have come to call fast research: a new way of approaching research that is better suited to the timelines and needs of consulting projects. In this era of slow food and mindfulness, many of us (clients and agencies alike) are still on the hamster wheel. The faster we run, the more plans we pack into the agenda, the faster we want to consume and, above all, the quicker we want to see results. Research is not exempt from this trend.
Here at Good Rebels, our clients range from large multinational brands to medium-sized companies, family businesses, and even start-ups. Increasingly, all of them are fully aware of the ROI generated by research, and at Good Rebels we have done our best to prove it to them every day.
Even so, in product launch or redesign processes where the client seeks to include research, the nature of some projects doesn’t allow for an extensive research phase, either because the timings are tight or the budget is limited.
For all these reasons, there are some projects where suggesting a comprehensive “slow research” and applying research methodologies in an orthodox way is not ideal. Conversely, we see that the way research is understood and implemented in consulting is becoming more and more intertwined with the concept of fast research, which is closely linked to hybrid methodologies. These methodologies allow us to combine two or more different research methods into a single research session, thus reducing costs and time.
Let’s start at the beginning: what exactly are hybrid methodologies?
To have a more academic definition of what a hybrid methodology consists of, we first must clarify what generative and evaluative research are:
- Generative research helps us get to know different users in-depth and understand what they experience in their day-to-day lives. It allows us to see them as people, beyond their interaction with a product or service. The most representative tool of this methodology is in-depth interviews.
- Evaluative research consists of assessing how a product or service performs when put in front of a user. It’s not just about functionality but also findability, efficiency, and emotions. Some examples are A/B testing, tree testing, and clickstream analysis. Others, which carry a higher dose of researcher bias, are heuristic evaluation, benchmarking, and competitive analysis.
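As a quick illustration of the quantitative side of evaluative research, a two-proportion z-test is one common way to judge whether the difference between two variants in an A/B test is real or just noise. This is a minimal sketch with entirely hypothetical conversion numbers, not data from any project mentioned here:

```python
import math

def ab_test_z(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """Two-proportion z-test for an A/B experiment.

    conv_* = number of conversions, n_* = number of visitors per variant.
    Returns the z statistic; |z| > 1.96 is roughly significant at 95%.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    # Pooled conversion rate under the null hypothesis (no difference)
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical numbers: variant A converts 120/2000, variant B 165/2000
z = ab_test_z(120, 2000, 165, 2000)
print(f"z = {z:.2f}, significant: {abs(z) > 1.96}")
```

With these made-up figures the test comes out significant, which is the kind of evidence an evaluative phase can feed back into design decisions.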
These techniques are often performed separately as they usually respond to different phases, discovery and testing, generally separated by… time! Time is needed to process the information, share it with stakeholders, and evaluate the next steps. But the truth is that there are ways to combine the timings and make the “magic” happen: hybrid research brings together generative and evaluative research, helping us understand our users and, at the same time, how a product or service performs. However, because it covers both spaces, it doesn’t go as deep as two separate pieces of research would.
Some tools we could consider hybrid are surveys, concept testing, focus groups, and card sorting. But when it comes to saving time and money, the hybrid approach par excellence is combining interviews with evaluative methods such as usability testing.
Now, does this mean we are leaving user characterisation aside? Not at all. There are ways to outline and understand users and their needs that don’t involve interviews and are much easier and cheaper: surveys, co-creation sessions, ethnographic research on social media, mining of user databases, and so on. That way, we save interviews for the testing phase, the perfect opportunity for a two-for-one deal.
However, the approach will always depend on the objectives of the research. If the aim is to understand the buying motives in-depth and the problem the digital product is trying to solve, it seems more advisable to schedule the interviews early in the plan.
Fast vs slow research: what are the benefits of using hybrid methodologies?
In UX consulting projects, there is often no predefined research objective, and the research team is in charge of finding one based on the aim of the project. For example, the project’s purpose may be to improve the ROI of an investment app, but there are no previous studies, and the client has not told us what our research should focus on.
If we put ourselves in the shoes of the researcher in charge of this project, what would the research plan look like if we approached it in a more or less orthodox and academic way?
- First, we would start by studying our client’s website and app, understanding its products and functionalities, benchmarking with the main competitors, and performing some desk research on sector trends.
- Next, we would develop buyer personas and customer journeys through surveys, database mining, in-depth interviews, analysis of relevant KPIs in the funnel, monitoring the consumer journey on the website, and heat mapping.
- Then, we would develop a prototype of the current website for usability testing, define HMW (“How Might We”) challenges, and devise solutions, which we would test on a second prototype. We would refine and iterate until the final validation of the prototype.
- Lastly, once the improvements to the website had been developed, we would monitor the relevant KPIs on a dashboard.
Now, let’s put ourselves in the shoes of a client who urgently needs to improve their ROI in a consultancy environment and asks for agility and speed in the research process. We will then have to apply fast research, with the added difficulty of having no predefined problem or research objective (the most complex and common starting point):
- First, we would define the general user flow and simplify the preliminary research: benchmarking only against most relevant competitors and replacing desk research with social listening and sector-specific queries.
- Next, we would also narrow down the user characterisation process. Instead of using all the techniques described in slow research, we would make a selection of the most relevant methodologies and tools.
- Once we had defined the jobs to be done and the HMW challenges, we would then devise preliminary solutions that we would test through guerrilla testing. Only then would we develop a prototype for the usability test, which we would iterate until the final validation and development of the improvements on the website.
| Slow research | Fast research |
| --- | --- |
| Study of the website, app, products, and functionalities. | Definition of the general user flow and detection of main brand problems through social listening. |
| Benchmarking of investment websites and apps. | Benchmarking of the most relevant or most similar investment websites and apps. |
| Desk research on sector trends. | Specific queries on hypotheses or doubts about the sector in Statista. |
| User characterisation surveys (by email or via the private area). | Selection of the most effective methodology and tools for user characterisation with proto-personas. |
| Exploitation of user databases (hyper-real quantitative approach to user characterisation data and product use). | |
| Key KPIs in the purchase funnel (Google Analytics, dashboards…). | |
| User journey monitoring and heat maps. | |
| In-depth interviews with real users (detection of pain points, reasons and triggers for purchase, most used functionalities, etc.). | Hybrid methodology: short interviews with users to obtain key data plus usability tests of the real product. |
| Elaboration of buyer personas and their customer journeys. | Elaboration of jobs to be done. |
| Prototype (A) of the real website (to avoid the bias of comparing a real website against a prototype). | Use of the existing website for testing, already included in the hybrid methodology. |
| Usability testing of typical buyer persona profiles through prototype (A). | |
| Definition of HMW challenges and solution ideation. | Definition of HMW challenges and solution ideation. |
| Prototyping (B) with website improvements. | Ideation and design of a lo-fi prototype. |
| Usability testing of profiles matching the buyer persona through prototype (B). | Usability testing of profiles matching the buyer persona through the prototype. |
| Refinement and iteration of the usability tests and final validation of the prototype. | Refinement and iteration of these usability tests and final validation of the prototype. |
| Development of the validated improvements on the website. | Development of the validated improvements on the website. |
| Monitoring of relevant KPIs in a dashboard. | Recommendations on the relevant KPIs to monitor. |
In the case of fast research, we can see that numerous shortcuts are applied: favouring hybrid methodologies, prioritising parts of the plan, choosing the most effective methods, preferring guerrilla tests over other formats, detecting the key hypotheses, and trying to resolve them. If our client does not need a numerical or analytical check, we can also omit KPI monitoring or simply give a few recommendations.
In short, fast research allows us to obtain relatively quick results in contexts where resources are limited, or we have little to no information.
However, this does not imply that fast research is easier or less professional than academic research. It requires an equally sharp nose to detect, as quickly as possible, which hypothesis or hypotheses need to be validated, to define the research plan in a lean and agile way, and to communicate intensively with the client, especially at the beginning. Fast research also entails greater involvement from the research team: it forces us to carry out guerrilla tests or act as mystery shoppers.
Of course, it will be much easier if we have the right tools to support us, such as Brandwatch or Statista. At Good Rebels, our multidisciplinary team also allows us to incorporate the figure of a data analyst to analyse certain aspects of the client’s databases and perform queries in Google Analytics or other dashboards, providing a full 360º service.
And what about the question we posed at the beginning of the article? We have a somewhat rebellious answer: if we had to choose just one UX research technique, we would go with hybrid methodologies. While no two projects are the same, hybrid methods are flexible enough to let us design a tailor-made research plan that fits the project timeline, and they allow us to understand our users while analysing how the product or service meets their needs and expectations. An agile, flexible and multidisciplinary approach is one more way of putting people (users and clients) at the centre of everything we do.