Usability and testing roadmap V.1
This week, I started several of the pending courses in CXL's Growth Marketing program. Seeing that I have only completed 30% so far, I feel a strong urge to advance in each of the programs I have pending. However, when I start watching the videos, I realize it is not just a matter of time, but of truly learning the material so that I can master each of the topics covered.
Fortunately, in this time of global job uncertainty, I am keeping mine as the leader of the digital team. I therefore feel the need to learn as much as possible from this program, bring the knowledge to my team, and apply it in our daily work. I have decided to study 5 hours a day from now until I finish the course, which gives me a total of 150 hours of study. Honestly, I don't know whether I will be able to complete the course in that time, but I will do my best, because the content and the level of learning are worth it.
This week I advanced with Peep's course on the CXL framework. I also learned about usability, the different usability tests, and how usability differs from user experience. I took up statistics as well, and the need for its application in A/B tests. I haven't finished the probability courses yet; they took longer than I would like, given their complexity. Still, I want to share some points I think are relevant. First of all, let us dive into usability and what Jakob Nielsen defines as its five pillars:
· Learnability: How easy is it for users to accomplish basic tasks the first time they encounter the design?
· Efficiency: Once users have learned the design; how quickly can they perform tasks?
· Memorability: When users return to the design after a period of not using it, how easily can they reestablish proficiency?
· Errors: How many errors do users make, how severe are these errors, and how easily can they recover from the errors?
· Satisfaction: How pleasant is it to use the design?
Some tasks for me: open my website in one window and the usability checklist in another. Write every issue I find in a spreadsheet. Then rank each issue by ease of implementation from 1 to 3:
- 1 = no-brainer, super easy to implement.
- 2 = easy, but can be done within 1–2 hours.
- 3 = lots of development and/or designer hours needed.
Implement every issue-solution according to the prioritization defined.
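The steps above can be sketched in Python; the issue names and scores below are made-up examples, not findings from my site:

```python
# Rank usability issues by ease of implementation (1 = no-brainer,
# 3 = heavy development/design work) and tackle them in that order.
issues = [
    {"issue": "CTA button has low contrast", "ease": 1},   # hypothetical
    {"issue": "Checkout form is too long", "ease": 3},     # hypothetical
    {"issue": "No breadcrumb navigation", "ease": 2},      # hypothetical
]

# Sort so the easiest wins come first.
backlog = sorted(issues, key=lambda item: item["ease"])

for item in backlog:
    print(f'[ease {item["ease"]}] {item["issue"]}')
```

A real backlog would live in the spreadsheet itself, but sorting by the ease score is the whole prioritization rule.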
Survey Design Theory
· Qualitative survey approach: collect all the qualitative answers and group them into clusters according to keywords or insights found. Then run a quantitative analysis on top of that qualitative data.
· Bouncing betas: when research is run on a very small audience, not every customer will fit into the research, so some answers will be 0.
· Errors in surveys: mixing behavioral questions with attitudinal questions; questions that don't communicate clearly; surveys that are too long (5 to 10 minutes maximum). With the central tendency error, fatigue increases and people answer "neither agree nor disagree", so you can't go further in the analysis.
· Selective perception: when customers already agree with you, they tend to agree automatically.
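The qualitative-then-quantitative approach above can be sketched with simple keyword clustering; the clusters, keywords, and answers here are illustrative assumptions, not course material:

```python
from collections import Counter

# Map each cluster to the keywords that assign an answer to it.
clusters = {
    "pricing": ["price", "expensive", "cost"],
    "shipping": ["delivery", "shipping", "late"],
    "usability": ["confusing", "navigate", "find"],
}

# Hypothetical open-ended survey answers.
answers = [
    "The price is too expensive for me",
    "Shipping was late twice",
    "I could not find the size chart",
    "Cost is fine but delivery is slow",
]

# Count how many answers fall into each cluster (an answer can
# match more than one cluster).
counts = Counter()
for answer in answers:
    text = answer.lower()
    for cluster, keywords in clusters.items():
        if any(keyword in text for keyword in keywords):
            counts[cluster] += 1

print(counts.most_common())
```

Once the answers are clustered, the counts per cluster are the quantitative layer you analyze.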
Survey Customers via Email
It is important to send out a post-purchase survey as soon as possible after your customers buy your service/product.
Ask 8 to 10 questions at most, to avoid customer fatigue.
Usability Testing Vs A/B Testing
The difference between the two is that usability testing shows which issues are causing friction or preventing users from accomplishing a goal, whereas A/B testing shows the probability that option A is better than option B, with statistical significance. A usability test needs only a handful of users; an A/B test needs a specific number of visitors, depending on your traffic, to validate a hypothesis.
The way to test our websites is to run a usability test first to find the problems, then form a hypothesis, run an A/B test, and get a result on that hypothesis.
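As a rough sketch of how the required visitor count depends on traffic, the standard normal approximation for a two-proportion test can be computed with the standard library; the baseline rate and detectable lift below are assumptions for illustration, not figures from the course:

```python
import math
from statistics import NormalDist

def sample_size_per_variation(p, delta, alpha=0.05, power=0.8):
    """Approximate visitors needed per variation to detect an
    absolute lift `delta` over a baseline conversion rate `p`."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = NormalDist().inv_cdf(power)
    p_bar = p + delta / 2  # average rate under the alternative
    n = 2 * (z_alpha + z_beta) ** 2 * p_bar * (1 - p_bar) / delta ** 2
    return math.ceil(n)

# Example: 3% baseline conversion, detect an absolute +1% lift.
n = sample_size_per_variation(0.03, 0.01)
print(f"{n} visitors per variation")
```

This is why low-traffic sites struggle with A/B testing: detecting a small lift on a small baseline rate takes thousands of visitors per variation.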
It is useful to identify:
- Where people click and where they don’t
- How far they scroll on any given page
For a heat map analysis, it's possible to use algorithmic tools, but it's necessary to take into consideration that these instant heat maps are created by machine-learning prediction algorithms, so there is no attribution to real end users.
Google Analytics Health Check
If I offered a service fee for a Google Analytics health diagnosis, it should build trust with customers, and people want fast delivery on their work, so it could work great.
The first thing is to check whether the company's needs are tracked in GA. Ask several important questions:
- “Does it collect what we need?”
- “Can we trust this data?”
- “Where are the holes?”
- “Is there anything that can be fixed?”
- “Is anything broken?”
- “What reports should be avoided?”
It is well established and recommended to add a goal whenever an error pops up on a form or a checkout page. This way, with the "reverse goal path" report, you can check how many goals are completed with these errors, fix them, and then watch the total number of errors decrease.
A/B Testing Mastery Course
Speaker: Peep Laja
Types of experiments:
- Lift elements: just delete elements on the page that don't add value to your users and are negatively impacting your website.
- Optimization: lean deployment is the best way to A/B test individual elements.
The ROAR Model
If you don't have at least 1,000 goal completions per month, you can't create A/B tests oriented toward goal conversion optimization.
Which KPI to Pick
If you are a mature company, select a KPI in order of importance from top to bottom:
- Potential Lifetime Value
- Revenue per user
- Transactions (at least this, if you want to focus on a more business approach)
What can be optimized?
- Customer behavior study: start by looking at what your customers want, their frictions, etc.
- Get the most important insights into your customer journey
Track your website changes with dedicated tools. We can also track changes on any competitor's page to see if there are major changes to their site, so we can test them ourselves, provided we share the same audience.
Behavioral metrics for website
- % Light interactions in a website
- % High interactions in a website
- % Low intention to purchase
- % High intention to purchase
What to report when we have these numbers?
- Amount of users in every cluster
- Time it takes users to move from one cluster to another.
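A minimal sketch of that report in Python, with made-up cluster labels and figures purely for illustration:

```python
# For each behavioral cluster, report how many users it holds and
# the average time (in days) users took to enter it from their
# previous cluster. All data below is hypothetical.
users = [
    {"id": 1, "cluster": "high intention", "days_to_enter": 4},
    {"id": 2, "cluster": "low intention", "days_to_enter": 10},
    {"id": 3, "cluster": "high intention", "days_to_enter": 6},
]

sizes = {}
entry_days = {}
for user in users:
    cluster = user["cluster"]
    sizes[cluster] = sizes.get(cluster, 0) + 1
    entry_days.setdefault(cluster, []).append(user["days_to_enter"])

for cluster, count in sizes.items():
    avg_days = sum(entry_days[cluster]) / count
    print(f"{cluster}: {count} users, avg {avg_days:.1f} days to enter")
```

In practice these rows would come from your analytics export rather than a hard-coded list.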
Also, it is important to talk with customer service or listen to a call to understand what customers want and need from our product.
Create modules asking for feedback online. Use your current users as much as possible; they could be the ones who already interact with your service or product.
What types of tests can we run to evaluate our assumptions?
- Five-second test (measure users' first impression)
- Question test (get user feedback)
- Click test (visualize where users click)
- Preference test (find out what users prefer)
- Navigation test (find out how users navigate your site)
It's important to run an A/B test in Google Optimize with the variants served as similarly as possible. So, it's recommended to create a set of pages: "Original", "Default", and "Variant". The original receives 0% of the total traffic, and the default and variant receive 50% each. This way, we make sure the control and test versions are delivered under the same conditions, so results are more accurate.
Why do we have to take a complete week for a test?
Weekday behavior affects results compared to weekends, as do evening effects compared to business hours.
Why 1, 2, 3 to 4 weeks?
Sample dilution (or not), and test pace/velocity versus business cycles.
You have to take into account how long a visitor takes to convert on your website, so you can capture the complete effect of the experiment over a customer's full business cycle.
An SRM (Sample Ratio Mismatch) occurs when we design a test with an even split (50%/50%) but observe something like 50.2% in one variation instead of 50%; that can signal a bug in the test setup. There is a formula that enables you to detect the mismatch.
Statistics Fundamentals of testing
Statistics is how marketers can tell whether an A/B test result is real or due to chance, according to the data, and, more importantly, how any hypothesis is validated statistically.
- Population: all potential users from a group that we want to measure.
- Parameters: variable of interest that can be measured.
- Sample parameter: a value measured on a representative sample of the group.
- Population parameter: the parameter of interest we want to measure, across the whole population.
· Mean: μ
· Standard deviation: σ
Mean: the average (a measure of central tendency).
Variance: a measure of how spread out the data points are.
Variability: the standard deviation shows how much variation there is in the data.
Standard deviation: how spread out the data is around the mean.
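These definitions map directly onto Python's standard `statistics` module; the daily conversion counts below are made-up numbers just to show the calculations:

```python
from statistics import mean, pstdev, pvariance

# Hypothetical conversions per day over one work week.
daily_conversions = [12, 15, 9, 14, 10]

mu = mean(daily_conversions)        # central tendency
var = pvariance(daily_conversions)  # population variance (spread, squared units)
sigma = pstdev(daily_conversions)   # population std dev (spread around the mean)

print(f"mean = {mu}, variance = {var}, std dev = {sigma:.2f}")
```

Note that variance is in squared units, while the standard deviation is back in the original units, which is why the latter is easier to read against the mean.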
I will continue with my probability-for-marketers courses, hoping to keep moving forward with the program. See you in my next post.