
The variables and correlations within the CRO process

CRO (Conversion Rate Optimization) starts with data, so it’s important to gather both quantitative and qualitative data. Quantitative data is everything we collect from the website itself, for example with Google Analytics. Qualitative data comes from user session recordings, surveys, and reviews.

This part is optional but can be very rewarding. Some companies employ a full-time data scientist, who answers questions from the business using all the data that’s been collected, typically drawing on a data lake or data warehouse.

CRO starts when a company realizes it can improve its website. Conversion specialists (people who specialize in CRO) examine the data and write hypotheses on how to improve the website. To write a CRO hypothesis, the conversion specialist needs knowledge of user experience (UX), data, and psychology. The hypothesis is written as follows:

If we apply [THIS CHANGE - UX], then [THESE METRICS CHANGE - DATA] for [THIS GROUP OF USERS - DATA], because of [THIS BEHAVIORAL REASON - PSYCHOLOGY].

An example of the hypothesis in use: If we change the color of the call-to-action button, then conversions for new users on mobile devices will increase, because of visual cueing.

Do you see it? We combine data and science so that nothing is left to chance; we don’t test at random! Besides conversions, metrics could include bounce rate, time on site, revenue, click-through rate, and page views. After writing the hypothesis, optionally designing the variant, and building it in the tool used for the test, it’s time to start. The MDE (Minimum Detectable Effect) determines how long a test should run and what sample size you need.
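The article doesn’t name a specific calculator, but the link between MDE and sample size can be sketched with the standard normal-approximation formula for comparing two conversion rates. This is a minimal illustration, not a prescribed tool; the baseline rate, relative lift, significance level, and power used below are example assumptions.

```python
import math
from statistics import NormalDist

def sample_size_per_variant(baseline_rate, mde_relative, alpha=0.05, power=0.8):
    """Approximate visitors needed per variant for a two-proportion test.

    baseline_rate: current conversion rate, e.g. 0.05 for 5%
    mde_relative:  minimum detectable effect as a relative lift, e.g. 0.10 for +10%
    alpha:         significance level (two-sided)
    power:         probability of detecting the effect if it's real
    """
    p1 = baseline_rate
    p2 = baseline_rate * (1 + mde_relative)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical z for two-sided alpha
    z_beta = NormalDist().inv_cdf(power)           # z for desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    n = (z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2
    return math.ceil(n)

# Example: 5% baseline, hoping to detect a 10% relative lift (5.0% -> 5.5%)
print(sample_size_per_variant(0.05, 0.10))
```

Note how the required sample size grows as the MDE shrinks: detecting a small lift takes far more traffic than detecting a large one, which is exactly why a fixed test duration of a few weeks limits what smaller sites can measure.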

Time to activate the test!

There are different ways to send traffic to your test to get the data you need. Most of the time a combination of SEA (search engine advertising), SEO (search engine optimization), and direct/returning traffic is required. CRO needs lots of data, so do everything you can to gather it. This is also why CRO can be hard for smaller companies: if you don’t have enough traffic, you can’t reach significant results within about 4 weeks, the maximum time a test should run.

The data is coming in, you’ve reached the end date, and you have enough data to work with. Now it’s time to analyze the data and evaluate. The first thing to check is whether the test is significant; always get a ‘second opinion’ here (run the numbers through another tool and see if the outcome matches). Also do a deep dive into the test data to look for anything you didn’t expect. This extra data can be handy when you want to iterate on the test, or it might simply inspire new tests.
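The significance check itself can be done with a two-proportion z-test, which is what many A/B testing tools use under the hood. The sketch below is one way to get that ‘second opinion’ yourself; the visitor and conversion counts are made-up example numbers.

```python
import math
from statistics import NormalDist

def two_proportion_z_test(conversions_a, visitors_a, conversions_b, visitors_b):
    """Two-sided two-proportion z-test. Returns (z statistic, p-value)."""
    p_a = conversions_a / visitors_a
    p_b = conversions_b / visitors_b
    # Pooled rate under the null hypothesis that both variants convert equally
    p_pool = (conversions_a + conversions_b) / (visitors_a + visitors_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Example: control converts 500/10000, variant 600/10000
z, p = two_proportion_z_test(500, 10000, 600, 10000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A p-value below your chosen threshold (0.10, 0.05, or 0.01, matching the 90%, 95%, or 99% confidence levels mentioned below) suggests the difference is unlikely to be random noise.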

If everything looks good and the significance level you decided on beforehand is reached (most of the time 90%, 95%, or 99%), implement the change. The next question is whether you can optimize the results even further after implementing it on the website. Keep gathering data and see whether users are still struggling with a specific part of the site.

If you didn’t have enough data to reach significant results, figure out why and run the test again. Did the test fail? Congratulations, the website has been doing well so far! It might still be smart to rerun the test with some changes.

Basically, the cycle starts over and over again, and what doesn’t work now might work a year later. It isn’t strange to test the same thing multiple times over a few years. Companies that test have a big advantage over companies that don’t: you constantly learn about your target audience and keep improving your website, making it easier for users to buy or book something, for example (improving the user experience).