A CRO Audit Report should be your first step in conversion optimization. It will inform your HiPPOs (highest paid person's opinion), and detect, prioritize and test critical points of the buyer's journey for your most important user segments. It will help you avoid costly mistakes and become a more customer-centric company by delivering better value to your clients. That's why you need an independent, unbiased point of view before fixing/testing elements of the conversion puzzle or discussing conversion optimization with specialized providers: tech and analytics vendors, AB-testing software providers, heatmapping tools, UX designers, copywriting services, PPC agencies, etc. All of them are important, but each has its own agenda in mind, and you need a customized approach based on your current state, i.e. the insights derived from the CRO Audit Report. Not to mention "free CRO audits", "top conversion hacks" or "CRO best practices" taken out of context and in most cases very generic and/or misleading. As mentioned, the whole idea of a CRO Audit and conversion optimization, followed by a continuous experimentation programme (AB-testing), is to learn what works (no "winners" vs. "losers"), i.e. how to attract and retain the best customers by delivering them better value than your competitors.
How to start with CRO? Any Conversion Rate Optimization Checklist?
Simplified CRO checklist:
- device/browser/operating system (OS) compatibility: large conversion rate discrepancies to be checked by the UX, Design and Quality Assurance (QA) teams;
- page speed: does it load and run quickly (e.g. Core Web Vitals, on-page SEO);
- accessibility: can visitors/customers actually read it;
- usability, i.e. user experience (UX) (e.g. internal search engine, design, navigation, product/category pages, checkout);
- persuasiveness, i.e. the marketing stuff (e.g. value proposition, copywriting).
Most companies focus on the marketing stuff and on tweaking small front-end aspects while neglecting other, more important elements. The best-designed site with a perfect value proposition will be ignored if it loads in 10 seconds; no one will wait that long (especially new visitors spoiled by big tech companies like Google, Amazon, Facebook). P.S. It goes without saying that tracking of all micro and macro KPIs via analytics tools (GA4, heatmaps) should be in place. P.P.S. Maybe even more important is to align tactical and/or internal KPIs (to mitigate "siloing") with the company's "North Star" metric (e.g. monthly revenue vs. number of active users).
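The first checklist item, spotting conversion rate discrepancies across devices/browsers, can be sketched in a few lines of code. This is a minimal illustration with made-up segment names, numbers and a 50% relative-discrepancy threshold, not output from any real analytics tool:

```python
# Hypothetical sketch: flag device/browser segments whose conversion
# rate falls far below the site-wide average, so UX/QA can investigate.
# Segment names, traffic numbers and the threshold are illustrative.

def flag_discrepancies(segments, threshold=0.5):
    """Return (overall_rate, flagged) where flagged lists segments whose
    conversion rate is more than `threshold` (relative) below overall."""
    total_sessions = sum(s["sessions"] for s in segments)
    total_conversions = sum(s["conversions"] for s in segments)
    overall = total_conversions / total_sessions
    flagged = []
    for s in segments:
        rate = s["conversions"] / s["sessions"]
        if rate < overall * (1 - threshold):
            flagged.append((s["name"], round(rate, 4)))
    return overall, flagged

data = [
    {"name": "desktop/chrome", "sessions": 50_000, "conversions": 1_500},
    {"name": "mobile/safari",  "sessions": 40_000, "conversions": 1_000},
    {"name": "mobile/firefox", "sessions": 10_000, "conversions":    50},
]
overall, flagged = flag_discrepancies(data)
# mobile/firefox converts at 0.5% vs. ~2.55% overall -> flagged for QA
```

A segment that converts at a fraction of the site average is usually a compatibility or rendering bug, which is exactly why the checklist puts this item ahead of the marketing stuff.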
What is expected conversion rate uplift upon a CRO Audit?
It might be 30x or 10%; it all comes down to:
- the user segment/audience whose experience you're improving (based on behavioural patterns, i.e. improvement potential detected by quantitative/qualitative/heuristic/technical analysis): new vs. returning visitors, mobile vs. desktop, Firefox vs. Chrome, cart abandoners, etc.;
- current website state, business type, size, maturity;
- internal resources, processes and budget (people/time/money): you'll need different skill sets (programming, design, quality assurance, ...) to fix/test different bottlenecks/opportunities;
- type of issue: fixing an underperforming browser version, your page speed or Core Web Vitals metrics should be easier than improving your off-page SEO by defining and creating content for the most relevant keywords/topics, or building an internal or external linking strategy;
- buy-in from your C-suite and other departments who have their own KPIs (siloing).
What about CRO best practices, conversion optimization hacks or stealing ideas from competitors?
As human beings we are very biased (and biases usually work against us) based on our demographics, geography, background, role, past experiences, personality, etc. So any conversion optimization tip not aligned with customer behaviour is a tricky business. What works for your competitor is tied to a specific segment of their audience, their internal organization, leadership, processes, skills, etc. The best ideas will always come from your customers and website visitors (two different worlds). So you have to use all available tools and techniques (quantitative, qualitative and heuristic research) to track behavioural patterns. The collected insights should then be weighed against best practices, competitors' approaches, etc. This way you'll be able to create testing hypotheses, prioritize them and validate them through continuous experimentation (AB-testing) whenever it makes sense.
Are Conversion Rate Optimization (CRO), Customer Experience Optimization (CXO) and Experimentation the same?
Conversion rate optimization implies improving the customer experience (UX). CRO activities delivered through continuous experimentation (AB-testing) should be aligned with short- and long-term KPIs and the "North Star" metric (the value the company delivers to its end users). An uplift in conversion rate, revenue growth or higher margin is meaningless if lifetime value is eroding or the churn rate is skyrocketing. Conversion optimization is wrongly interpreted as a tactical tool and used only for small changes (front-end, design, copy). Unfortunately, bigger changes to websites (e.g. a new feature) are very often made by a development team whose KPI is to deliver the requested feature (product-led growth), not the results from that feature (customer-centric approach). As you probably know, in practice only 10%-15% of all experiments on average produce a winning result (from the customer's perspective), so roughly 85%-90% of new features are wasted money that could be saved by conversion optimization, i.e. an experimentation culture built into your company. So don't bother with industry jargon and fancy naming; they're constantly changing (e.g. 20 years ago when I started, the cool term was "response rate"). Focus instead on your customers, as the fundamental principles of human behaviour won't change.
02. Experimentation - AB-Testing
Most important benefits of continuous online experimentation, i.e. AB-testing?
There are many more benefits than improving your short-term (in most cases financial) goals:
- informed decision making;
- risk mitigation;
- competitive advantage, i.e. continuous innovation through data-driven research and development (R&D);
- improved conversion rate and financial KPIs like revenue, profit and lifetime value (LTV).
"Think experiments not learning" by Itamar Gilad brilliantly explains why experiments are just a means to an end: learning.
What is the success rate of online experimentation (AB-testing) programmes?
The industry average success rate of AB-testing programmes is about 10%-15%, meaning that only 1 to 1.5 out of 10 testing ideas will produce a winner; the rest will fail or be flat. Companies like Airbnb, Booking.com and Amazon, with mature experimentation programmes, run thousands of experiments per year (thanks to the traffic volume on their sites). They have built an "experimentation culture" into their business model and reach around a 30% success rate (depending on the source). So it is very important to have the right people in your C-suite and to accept that the goal of experimentation is learning! Be mindful of 300% uplift claims in AB testing. Always ask for at least 3 factors: 1) test duration, 2) sample size and 3) initial conversion rate; use this AB test calculator to play with your numbers.
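The three factors above (test duration, sample size, initial conversion rate) are tied together by standard power analysis: the smaller the uplift you want to detect, the more visitors you need, and therefore the longer the test runs. A minimal sketch using the common two-proportion normal-approximation formula; the z-values correspond to 95% confidence and 80% power, and the example numbers are illustrative:

```python
from math import ceil

def sample_size_per_variant(baseline, relative_uplift,
                            z_alpha=1.96, z_beta=0.8416):
    """Approximate visitors needed per variant (normal approximation,
    95% confidence / 80% power by default) to detect the given
    relative uplift over the baseline conversion rate."""
    p1 = baseline
    p2 = baseline * (1 + relative_uplift)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p2 - p1) ** 2)

# e.g. a 3% baseline conversion rate, hoping to detect a 10% relative
# uplift (3.0% -> 3.3%), needs tens of thousands of visitors per variant:
n = sample_size_per_variant(0.03, 0.10)
```

This is also why 300% uplift claims deserve scrutiny: huge uplifts measured on tiny samples over short durations are mostly noise.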
What's the best prioritization framework for online experiments?
There's a ton of impact/effort prioritization frameworks out there: PIE, ICE, RICE, PXL... The PXL framework created by CXL tries to be a bit more objective, as it includes the different research methods that support a potential hypothesis. The problem with impact/effort prioritization frameworks is that they are based on our estimates. As we're very biased, we usually underestimate effort (the "planning fallacy" phenomenon described by Daniel Kahneman and Amos Tversky) and overestimate impact/potential (optimism, random events, etc.). So whatever framework you pick, it is very important to support these estimates with different research methods (qualitative, quantitative, heuristics, best practices, competitor research...), i.e. do a CRO Audit. This way you'll be able to properly hypothesize and prioritize testing ideas that will deliver the best results: play and learn. P.S. You can use the Confidence Meter (by Itamar Gilad) to help you with idea prioritization.
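As a concrete illustration of how these frameworks work, here is a minimal ICE scoring sketch (Impact × Confidence × Ease, each rated 1-10). The hypotheses and scores are invented for the example; in practice the Confidence rating should come from the research methods mentioned above, not gut feel:

```python
# Illustrative ICE prioritization: score each testing hypothesis and
# sort the backlog. All names and ratings below are made up.

def ice_score(impact, confidence, ease):
    """ICE = Impact * Confidence * Ease, each on a 1-10 scale."""
    return impact * confidence * ease

backlog = [
    ("Shorten checkout form", {"impact": 8, "confidence": 7, "ease": 5}),
    ("Rewrite hero headline", {"impact": 5, "confidence": 3, "ease": 9}),
    ("Fix mobile Safari bug", {"impact": 7, "confidence": 9, "ease": 6}),
]
ranked = sorted(backlog, key=lambda item: ice_score(**item[1]),
                reverse=True)
# "Fix mobile Safari bug" ranks first (7 * 9 * 6 = 378), because its
# high Confidence is backed by evidence, not optimism.
```

The math is trivial on purpose; the framework's value (and its weakness) lies entirely in how honestly the three inputs are estimated.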
Experimentation programme implementation
It must go top-down. Sometimes, paradoxically, that is the problem. The C-suite is expected (and well paid) to know what to do, i.e. to lead the company. Apart from HiPPOs (highest paid person's opinion), very often noise becomes a (weak) signal used for decision making: "siloing", biases, egos and politics, and now there's a new "experimentation" element on top. Having the right people and mindset is crucial. Experimentation is not another box in the marketing or sales org chart (or a marketing activity); it is a means to an end: learning. Usually the best effect comes when experimentation is treated as a center of excellence (CoE), staffed with existing team members, i.e. experimentation ambassadors (a great mix of product domain knowledge and experimentation skills). So ensure your C-suite supports you, and start with something (do a CRO Audit) that will bring fast results, so you can push it further. Don't bother with tooling if you're in the "crawl" stage of experimentation; free tools like Google Optimize will be just fine!
Take Your First Step Right with Conversion Optimization!