<aside> 📌 Case overview


Year: 2021

Role: Head of UX Lab, led all primary UXR activities

Product: Koshelek is a digital wallet app for storing loyalty and discount cards. Catalog features let users issue loyalty and bank cards, find and use coupons and discounts, and participate in promotions

Goal: Identify ways to improve Koshelek’s Catalog core metrics

Method(s): Data analytics, User Interviews, Usability tests, Surveys

Framework(s): JTBD, CJM

Result: Research informed product development and design decisions, which improved Catalog adoption, retention, and the average number of viewed and used offers, and reduced churn

</aside>

Context

<aside> 📌 TL;DR

The Catalog team of the Koshelek app was struggling to generate product development hypotheses and make product decisions.

</aside>

Koshelek is a mobile application that helps more than 13 million people buy and save money on purchases.

The Catalog team had just outlined a pyramid of metrics (they described this process in this article) and expected it to be a sufficient tool for product development.

But even with the product metrics pyramid, the team was struggling to make product development decisions: the dashboards showed the numbers, but how to influence them remained unclear.

In one of our regular meetings, the team highlighted this problem, and I knew that research would shed some light on it.

Problem & solution

<aside> 📌 TL;DR

I found that the struggles stemmed from a poor understanding of the reasons behind the user behavior that drives product metrics. I chose a mixed-methods approach to uncover the answers we were looking for.

</aside>

I arranged a kickoff meeting with the team to better understand the problem. It turned out that the team's main struggle with generating hypotheses for product improvement and making product development decisions came down to a lack of understanding of the reasons behind the user behavior that drives the outlined product metrics.

The team didn't understand what drives users to visit and use the Catalog, what needs users are trying to satisfy, what value they gain from the product and what prevents them from gaining it, what they lack, why some users never convert into Catalog users, and why existing Catalog users churn. Answering these questions became the goal of the research project.

I decided that the best approach would be to combine qualitative and quantitative research methods: first understand what we already knew (i.e., do our homework), then conduct interviews and usability tests with users, and finally validate the insights with surveys and analytics.

I expected this project to leave us with enough data to serve as solid ground for product development decisions that would let us improve the product.

I agreed on this research design with the team, and the project got started 🚂

Work process

<aside> 📌

TL;DR

It was a large mixed-methods research project. We started with secondary data (behavioral data, support tickets, reviews, and analysis of previous research), which let us design and conduct a well-thought-through qualitative study (interviews and usability tests). At the end, we validated the insights we had found, and even evaluated some of our solutions, via quantitative studies (surveys).

</aside>

So, the research project included three steps: homework, interviews and usability tests, and quantitative validation of the insights.

  1. First, we did our homework: we collected and analyzed the data we already had on our topics of interest. In my experience, this dramatically improves the quality of active research: it's fast and cheap, it helps the team dive deep into the problem space, and it can even answer some of the stated questions outright.
  2. Then we carried out a series of interviews and usability tests with the outlined user groups to develop a deep understanding of users, their needs, their context, and their overall experience with the Catalog, and to find out the whats and whys behind the behavior that impacts our metrics.
  3. Finally, we conducted surveys and data analysis to validate the insights we had gathered, giving us a foundation for prioritizing them.