Whether the objective is to increase sales or reduce costs, and whether the channel is online or offline, we are often faced with more ideas than we have time or budget to test. Since each marketing campaign is an opportunity to learn and improve results, it’s important that we are not overwhelmed by the concept of continuous testing.
Over the years, I have used a simple rubric to prioritize testing that can be applied to any marketing organization (at least, I haven’t yet found one where it can’t) where there is an abundance of testing opportunities. Following is a summary of the process:
- Collect a comprehensive list of all testing ideas from the entire marketing team. Do not exclude any ideas and welcome input from key stakeholders in other departments such as sales, operations, finance, etc.
- Gather a team to represent each discipline from the list of ideas. Have the representative explain the concept and rationale for their recommended tests to the entire team.
- Collaborate, debate, vote, and ultimately score each test in the following areas (for a maximum potential score of 9):
- Impact Potential (1=Minor, 2=Medium, 3=Major)
- Time to Implement (1=Long, 2=Medium, 3=Short)
- Cost to Implement (1=Major, 2=Medium, 3=Minor)
- Total the scores and sort the results from highest to lowest to determine your priorities.
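For those who like to see the mechanics spelled out, the scoring and sorting steps above can be sketched in a few lines of code. This is a minimal illustration, and the test names and scores below are hypothetical placeholders, not real campaign data:

```python
# Each test gets a 1-3 score on the three dimensions from the rubric:
# impact (1=minor, 3=major), time (1=long, 3=short), cost (1=major, 3=minor).
# Totals therefore range from 3 to 9.
tests = [
    {"name": "New subject line", "impact": 3, "time": 3, "cost": 3},
    {"name": "Landing page redesign", "impact": 3, "time": 1, "cost": 1},
    {"name": "Checkout button color", "impact": 1, "time": 3, "cost": 3},
]

def total_score(test):
    """Sum the three dimension scores (maximum possible total is 9)."""
    return test["impact"] + test["time"] + test["cost"]

# Sort from highest to lowest total to produce the priority order.
prioritized = sorted(tests, key=total_score, reverse=True)
for rank, test in enumerate(prioritized, start=1):
    print(f"{rank}. {test['name']} (score {total_score(test)})")
```

Run against the sample data, the quick, cheap, high-impact subject line test lands at the top of the list, exactly as the rubric intends.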
Ultimately, the testing ideas with major impact potential, short time to implement, and minor cost to implement become the highest priorities (unless, of course, someone from the C-level team determines otherwise). The scoring system recommended above can, of course, be weighted and/or altered to fit your organization’s goals.
Has anyone ever used this or a similar method to prioritize testing? If so, please feel free to share your experience and recommendations.