What's the difference between A/B tests and user tests? Which one gives you qualitative data, and which gives quantitative data? Are user tests even worth your time?
We get questions about user tests and A/B tests a lot, so we're here to help clear them up.
This video breaks down the questions and explains how you can use both user testing and A/B testing to build a better product — all in just 2 minutes.
Something a lot of people ask me is, what is the difference between A/B tests and getting the qualitative data that we get from running user tests (as a part of Design Sprints)? Is there real value in just running a test with 5 people? Qualitative data — what kind of value does it have?
The answer, in my opinion, is that you need both.
With quantitative data, which you get from analytics software (Google Analytics, Mixpanel, or any other tool that gives you insights), you figure out where the problem is: which part of the user journey is broken, or which part people are unhappy with.
For example, you get to know that 30% of people are dropping off at this particular point in your onboarding journey.
That's a very useful stat: it tells you where the problem is.
But what it does not tell you is why that problem is happening. Why are people bouncing, or why are they dropping off during onboarding?
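The "where is the problem?" step above usually comes down to comparing counts between funnel stages. As a minimal sketch, here's how you might compute per-step drop-off rates from funnel counts; the step names and numbers are hypothetical, standing in for figures you'd export from Google Analytics or Mixpanel:

```python
# Hypothetical onboarding funnel counts (names and numbers are made up
# for illustration; in practice they come from your analytics tool).
funnel = [
    ("visited_signup", 1000),
    ("created_account", 800),
    ("finished_onboarding", 560),
]

# For each adjacent pair of steps, compute the fraction of users lost.
drop_offs = {}
for (step, users), (next_step, next_users) in zip(funnel, funnel[1:]):
    drop_offs[next_step] = 1 - next_users / users

for step, rate in drop_offs.items():
    print(f"{step}: {rate:.0%} of users drop off")
```

With these example numbers, the last step loses 30% of users, which tells you where to point your user tests, but not why those users leave.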
That's something you can figure out very easily with a qualitative method like user testing. When you ask users about this with open questions during a user test and observe how they interact with your product, deep insights come out.
We have seen our clients get genuinely amazed at the kind of information that comes out of these kinds of tests, which is why you need both quantitative and qualitative data to build better products.
Here's how they fit together: you start with quantitative data to understand where the problem is, then use qualitative data to go deeper and figure out the root cause, and then repeat the cycle.
Make the product live quickly, start getting more data from your analytics software to see where the next problem is, and keep repeating the loop to make the product continuously better.
That's how I think you can use both qualitative and quantitative data, and each has its own place in adding value to the product.
We're constantly making videos on product design and sharing our experiences of running these tests and running Design Sprints for clients.
If you have any questions about product design, user testing, or anything else you'd like us to talk about, let us know in the comments.