The Staffing Challenge
As marketing pioneer John Wanamaker famously declared, “Half the money I spend on advertising is wasted; the trouble is I don’t know which half.” The good news is that it is relatively easy to prove the value of an optimization program compared to other marketing initiatives. Testing is an objective way to measure the effectiveness of your marketing campaigns. However, to set up an effective optimization program, you have to overcome several obstacles.
In my past stints setting up optimization programs at different companies, one of the biggest challenges was hiring the talent required to carry out the program. Because most optimization programs start out with one or two people tasked with finding opportunities to improve website conversion, I had to beg, borrow, and steal for resources, usually relying on free testing tools from the web.
When the program is small, meaning two to three tests per year, it is feasible for one person to carry out multiple functions. But when the program starts to scale, you will need a team with varied talents. Finding skilled team members to execute the program, however, is incredibly difficult, primarily because of the breadth of skills a proper optimization program requires.
Many optimization initiatives originate in the marketing organization, yet few marketers understand the complexity behind seemingly easy asks. Finding, managing, and setting a long-term career path for such an employee is not easy; in fact, it was one of the biggest challenges in managing the program.
Many organizations attempt to think outside the box by “borrowing” resources in an effort to save costs. This approach has its challenges. In my experience with shared or borrowed resources, you lack control, which leads to compromises in the design, development time, and quality of the campaign. Ultimately, someone else is writing the performance review of your borrowed resource, not you!
I suggest that you determine the core competencies of your program and allocate your best resources to them. Also, consider outsourcing missing or less efficient functions. By allowing a firm to find and manage the skill sets you need, you and your team can focus on what is really important. This improves employee morale, retention, and overall program effectiveness. Working with a firm also gives you access to a team of talent that offers insights and best practices across industries. You are essentially hiring a team for the cost of two or three full-time employees. This approach worked very well for several of my large-scale optimization programs. The value to me was that my team was not burdened with low-value tasks, could focus on designing the next winning test, and leveraged learnings from peers, which led to improved website, email, and digital advertising conversions.
The Measurement Process
In my previous blog post “Why Testing is Easy to Start but Difficult to do Well; Part 1: The staffing challenge,” I discussed the pitfalls of trying to shortcut adequately staffing a testing and optimization program. But staffing is not the only challenge. Companies are often faced with sloppy processes or a lack of measurement. You may say, “The tools I use have a pretty sophisticated-looking statistical reporting function.” True. However, a tool is only as good as the way you use it, and several factors impact a tool’s overall effectiveness. Ineffective measurement kills optimization programs.
Consider sample size. When is a test good enough to stop? Many marketers stop and celebrate as soon as they observe their recipe beating the control with confidence. That is one way to do it, but perhaps not the most scientific one.
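A more scientific approach is to estimate the required sample size before launching and commit to it. Below is a rough back-of-the-envelope sketch using the standard normal-approximation formula for comparing two conversion rates; the baseline rate, lift, confidence, and power values are made-up illustration numbers, not a recommendation.

```python
import math

# Hypothetical example: 5% baseline conversion, and we want to detect a
# 20% relative lift (5.0% -> 6.0%) at 95% confidence with 80% power.
p1, p2 = 0.05, 0.06
z_alpha = 1.96    # z-value for a two-sided 95% confidence level
z_beta = 0.8416   # z-value for 80% statistical power

# Normal-approximation sample size per arm for a two-proportion test
n_per_arm = math.ceil(
    (z_alpha + z_beta) ** 2
    * (p1 * (1 - p1) + p2 * (1 - p2))
    / (p1 - p2) ** 2
)
print(f"Visitors needed per variation: {n_per_arm:,}")
```

With these numbers the test needs roughly 8,000+ visitors per variation before the result should be called, which makes it obvious why a two-day readout on modest traffic is premature.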
But here is an example of how this approach can backfire. Joe Marketer ran a test for two days and saw that the results were already statistically significant. He was so excited that he sent a company-wide email announcing the win. On day three, however, the result was still significant, but no longer in favor of his announced winner. He declared victory before fully understanding the outcome, and now he has lost credibility within the company. Does this story sound familiar?
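Joe’s mistake is often called “peeking”: if you check for significance repeatedly and stop the moment the tool flashes a win, your real false-positive rate is far higher than the nominal 5%. The simulation sketch below makes the point with made-up traffic numbers by running A/A tests, where both arms have the identical true conversion rate, and peeking once a day.

```python
import math
import random

random.seed(42)

def z_test_p(conv_a, n_a, conv_b, n_b):
    """Two-sided p-value for a difference in two proportions (normal approx.)."""
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    if se == 0:
        return 1.0
    z = (conv_a / n_a - conv_b / n_b) / se
    # p-value from the standard normal CDF via math.erf
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

P = 0.05       # identical true conversion rate in both arms (an A/A test)
DAILY = 500    # visitors per arm per day (made-up traffic level)
DAYS = 14      # test length, with one "peek" at the results per day
TRIALS = 500   # number of simulated A/A tests

false_wins = 0
for _ in range(TRIALS):
    conv_a = conv_b = n = 0
    for day in range(DAYS):
        n += DAILY
        conv_a += sum(random.random() < P for _ in range(DAILY))
        conv_b += sum(random.random() < P for _ in range(DAILY))
        if z_test_p(conv_a, n, conv_b, n) < 0.05:
            false_wins += 1   # we'd have "called" a winner here, like Joe
            break

print(f"False-positive rate with daily peeking: {false_wins / TRIALS:.0%}")
```

Even though there is no real difference between the arms, a meaningful share of these simulated tests shows a “significant win” at some peek, which is exactly how two-day victory emails happen.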
In addition, few organizations take the time to define, before launching, what a successful test will look like. This is usually not a problem, until it is. Most test results are multi-dimensional: one set of metrics may indicate the test is a winner while another indicates it is a loser. A predefined winning KPI eliminates the question of whether a test won.
Not enough effort is spent on analysis, which is one reason some organizations fail to see the benefit of a test win after implementation. Simply calling a test a winner because the tool says it is winning with confidence is not sufficient; when you dig deeper, you may be surprised. Your testing tool may be calculating accurately, yet failing to filter out bots or QA traffic, or the conversion may not have happened on the page you altered. The bottom line: trust, but verify. Don’t take the tool’s result literally; dig a little deeper to understand why the test won. You won’t be disappointed.
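The “dig deeper” step can be as simple as recomputing conversion rates after excluding suspect sessions. This is a toy sketch with invented session records and field names, showing how a handful of QA sessions that always “convert” can manufacture a win that disappears once the data is cleaned.

```python
# Toy session log; in practice this would come from your analytics export.
sessions = [
    # Variant A: 4 real visitors, 1 conversion
    *[{"variant": "A", "converted": c, "bot_or_qa": False}
      for c in (True, False, False, False)],
    # Variant B: 4 real visitors, 1 conversion ...
    *[{"variant": "B", "converted": c, "bot_or_qa": False}
      for c in (True, False, False, False)],
    # ... plus 2 QA sessions that "converted" while verifying the page
    {"variant": "B", "converted": True, "bot_or_qa": True},
    {"variant": "B", "converted": True, "bot_or_qa": True},
]

def rate(rows, variant):
    """Conversion rate for one variant within the given rows."""
    hits = [s for s in rows if s["variant"] == variant]
    return sum(s["converted"] for s in hits) / len(hits)

clean = [s for s in sessions if not s["bot_or_qa"]]
print(f"Raw:   A {rate(sessions, 'A'):.0%}, B {rate(sessions, 'B'):.0%}")
print(f"Clean: A {rate(clean, 'A'):.0%}, B {rate(clean, 'B'):.0%}")
```

On the raw data, B appears to double A’s conversion rate; after filtering the QA traffic, the two variants are identical. The tool’s arithmetic was right, but the input was wrong.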
The biggest mistake I find organizations make with their optimization program is that as soon as they see benefits, they start to use it as a revenue-generation tool rather than a learning and improvement tool. That may sound fine, but if you do this, you are destroying the soul of the program. Be warned. “But why?” you ask. “Isn’t this adhering to the mantra, if you can’t measure it, you can’t manage it?” No; let me explain why.
As soon as you set a revenue goal, the team’s incentive shifts from learning to finding winners. Innovation decreases because the team bets on sure wins rather than on understanding why a test does or does not win. Things that don’t need to be tested will now be tested just to prove wins and juice the goal. Be careful when assigning team revenue goals. Instead, measure the program on test innovation, quality, and volume. These are harder to measure, but they are the right way to measure the team.
If you aren’t doing this, start now. Once you convince your boss of the value, ask for help. Trying to scale organically is like brewing beer at home: you can do it as a hobby, but to make it a business, you need help, preferably from people who have done it successfully before.