Week IV — Play to win!
I start with a blunt phrase from Peep Laja:
“You have to decide if you play for playing or play for winning.”
This phrase strongly motivated me to keep learning and applying everything from the CXL growth marketing course. When I ask myself Peep Laja's question, my answer is that I play to win, so I keep advancing in my studies and share with you, the reader, what I learned this week.
I began my training by listening to Pauline Marol and Josephine Fouchers, who show the importance of creating a prioritization framework for the testing ideas a team wants to execute. The need for a prioritization process comes from having to share, across all the areas of the company, the tasks that the digital marketing managers will execute.
The proposal starts by defining variables related to the "Time" and "Impact" of the test. Regarding time, the framework takes into account:
- Reach: The global scope of the test
- Lift: Confidence that the test will (or will not) be successful
- Fit: Whether the test fits the needs and objectives of the company
The second point, "Lift", must be supported by as much experience as possible (where the market and the competition are moving) and by as much quantitative and qualitative data as is available on the hypothesis.
Personally, I find this variable quite subjective: a person may have had a successful past experience with the test they want to run, but the company's circumstances are surely different now, which undermines the objectivity of their confidence in the test.
Likewise, a hypothesis may come from a stakeholder with greater decision-making power than the rest of the team, so their intuition defines the test's priority a priori. When the team allocates time to build and evaluate that test simply because of where it originated, the sense of priority relative to all the other tests in the queue is lost.
Secondly, the impact variables are taken into account:
- Creativity: How complex or abstract is the idea within the reality of the company? Has it been tested before?
- Development: How much time and effort does it require from the UX or development teams to build and launch this test?
- Coordination: Is it necessary to coordinate with multiple areas of the company to carry out this test? If so, what priority would it have for them?
These impact variables measure not only the individual complexity of developing the test but also the effort required from other teams to support the idea.
Again, since the framework is an open document that other areas can consult, those areas can understand the impact the test has on the company's objectives and development, and get involved with it more easily.
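To make the framework concrete, here is a minimal sketch of how the scoring could work in code. The course does not prescribe a formula, so the 1–5 scale, the equal weighting, and the benefit-to-effort ratio below are all my own assumptions for illustration:

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    """A candidate experiment scored on the framework's variables (1-5 each).

    The scale and the scoring formula are illustrative assumptions,
    not the official CXL framework.
    """
    name: str
    # "Time" variables: higher is better
    reach: int       # global scope of the test
    lift: int        # confidence the test will succeed
    fit: int         # alignment with company needs and objectives
    # "Impact" variables: higher means more effort, so they count against
    creativity: int    # how abstract/untested the idea is
    development: int   # UX/dev time and effort required
    coordination: int  # how many other teams must be involved

    def score(self) -> float:
        # One simple choice: benefit-to-effort ratio, higher is better.
        benefit = self.reach + self.lift + self.fit
        effort = self.creativity + self.development + self.coordination
        return benefit / effort

def prioritize(ideas: list[TestIdea]) -> list[TestIdea]:
    """Return the ideas ordered from highest to lowest score."""
    return sorted(ideas, key=TestIdea.score, reverse=True)

# Hypothetical examples to show the ordering:
ideas = [
    TestIdea("Simplify checkout form", reach=5, lift=4, fit=5,
             creativity=2, development=3, coordination=2),
    TestIdea("Redesign homepage hero", reach=4, lift=2, fit=3,
             creativity=4, development=5, coordination=4),
]
for idea in prioritize(ideas):
    print(f"{idea.name}: {idea.score():.2f}")
```

Because the sheet is just scores in a shared document, any team can read why one test outranks another, which is exactly the transparency the framework is meant to provide.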
Lastly, each hypothesis that will later be scored in the framework should follow the form:
“If we do XXXX in our website/product”, “Then our XXXX will increase”, “Because our customers will XXXX”.
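The template can be captured as a tiny helper so that every hypothesis in the backlog has the same three parts. The function name and the example values are my own, purely for illustration:

```python
def hypothesis(change: str, metric: str, reason: str) -> str:
    """Render a test hypothesis in the 'If / Then / Because' structure."""
    return (f"If we {change} in our website/product, "
            f"then our {metric} will increase, "
            f"because our customers will {reason}.")

# Hypothetical example:
print(hypothesis(
    change="add customer reviews to product pages",
    metric="add-to-cart rate",
    reason="trust peer feedback more than marketing copy",
))
```

Forcing every idea through the same three slots makes it obvious when a proposal is missing its "because", which is usually the weakest part.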
Finally, having this process not only lets us prioritize the many tasks that drive conversion optimization but also creates a transparent horizon for the whole company, where everyone works to increase the results of the company's main objective.
Then I started the course taught by Peep Laja, "Research and Testing". Peep begins by asking us not to test trivial hypotheses, or variables whose results will only appear in the long term.
It reminds me of the phrase: in the long term, we will not live… Conversion optimization is solved through short-term work and constant iteration, where we evaluate the impact, the cost in people and time, and the empirical evidence behind each test, to decide where to put more effort.
After that, Peep recommends that the best way to build a successful testing program is through an implementation strategy. It should not start by simply copying best practices from theory, or from the companies that lead the sector.
To begin, it is necessary to find the reasons why the company is not achieving its results, and then develop a set of hypotheses to solve that problem.
The process of optimizing a digital asset starts by detecting the specific conversion problems the company has. For this, there are mainly three points:
1. Identify which are the company's main problems and where they occur
2. Formulate hypotheses that address these problems
3. Create a prioritized list of tests to solve them
Peep suggests not starting by reviewing the data from top to bottom, because a business today has so much data that we can lose focus on what really matters. In the end, the only data that matters is the data that speaks to the business objectives, the data with a real and direct impact on achieving them.
The analysis flow must start with a technical review to ensure everything is working correctly. Here it is possible to find simple problems, by device or browser, that are holding back the company's results.
Second comes grading the tests you have: the aim is to clarify the relevance, clarity, motivation, and friction of each hypothesis. After that, a digital analytics review should be done to understand which audience the test impacts.
Finally, I continue with my optimization and testing prioritization class, which I find fascinating. The suggestion is that before you start running, you learn to be methodical and have a clear process, and only then solve the problem. In the coming days I will write down what I have learned during the week.