Proven Digital Marketing Testing & Optimization Strategies from 3 Great Brands



Here’s a CliffsNotes summary of a session I just sat in on at Oracle OpenWorld 2014 – great content thanks to a panel of digital marketers from three great brands: Epson, Shutterfly, and Apollo Group (which owns the University of Phoenix brand). The goal: discuss some of the challenges and key learnings from digital optimization efforts in 2013 and 2014. According to research from Gleanster’s 2014 Marketing Automation survey, Top Performing organizations are six times more likely than Everyone Else to rank testing, optimization, and analytics among their top five marketing priorities for 2014. Appropriately, the theme for the presentation: run, test, discuss.

How do you move away from promotion-heavy marketing execution, which almost always drives initial traction from a target audience followed by a rapid drop-off in engagement? The answer the panel gave: you personalize, optimize, and individualize the digital experience. Ideally, personalization should build an incremental dialogue with prospects that enhances a relationship, not just an engagement. This is easier said than done. In fact, the hodge-podge of best practices suggesting we should move to personalization is just that – a suggestion. But how do you do it? That was the key question posed to the panel. It was interesting to learn that despite the nuances of the different industries represented (internet, hardware, education), there were quite a few similarities in their answers.

What are the business drivers and challenges that forced your organization to re-think how you do digital?

  • Landscapes and channels change rapidly, forcing marketing to adapt or die. Generally you know where your organization is falling short by benchmarking competitors and similar industries.
  • Declines in customer conversion triggered more investigation for Shutterfly. It’s easy to look at real-time data in a silo, but if you back up and look over longer periods of time you might actually see trends in context.
  • Conversion isn’t the only issue – it may be a byproduct of other issues. Those could be within marketing’s control or outside of their control. Measuring conversion provides a threshold for investigating a wide variety of issues to test and optimize; the product, services, customer desire, customer behavior, environmental forces, etc.
  • Many marketers are still constrained by legacy email platforms that require extensive IT resources (even for email templates) to support and maintain.  Marketing needs more control and ownership over campaign execution and measurement without depending on IT.
  • Marketers should question how they are measuring ROI. It’s easy for the organization to value metrics that are no longer relevant just because they were relevant 10 or 20 years ago. Emails that are promotion-heavy may meet performance metrics and targets, but do they actually drive the right outcome? For example, if a campaign drives an initial open that later drops off heavily, maybe measuring opens isn’t the right metric.
  • Not all clicks are measured equally. Are you actually tracking which links are being clicked in emails? This can pose a huge problem for marketers, because canned reporting may not differentiate by link quality. Did recipients click the link you wanted them to convert on, or did they click a menu link? Lack of understanding about what people are clicking on and why is still a huge challenge for digital marketers.
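One common way to differentiate link quality is to tag every URL in a template so reports can tell a call-to-action click from a navigation click. A minimal sketch using standard UTM query parameters (the campaign and link names below are illustrative, not from the session):

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def tag_link(url, campaign, link_id):
    """Append UTM parameters, including a per-link utm_content value,
    so analytics can distinguish WHICH link in the email was clicked."""
    parts = urlparse(url)
    query = dict(parse_qsl(parts.query))
    query.update({
        "utm_source": "email",
        "utm_medium": "email",
        "utm_campaign": campaign,
        "utm_content": link_id,  # identifies the specific link
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# Tag the conversion link and a menu link differently:
cta = tag_link("https://example.com/offer", "spring_promo", "hero_cta")
nav = tag_link("https://example.com/about", "spring_promo", "nav_about")
```

With distinct `utm_content` values per link, even canned reporting can separate conversions on the primary call to action from incidental menu clicks.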

Can you provide some insight about the architecture of your email marketing program?

  • The panelists discussed the evolution of data and analytics.  What started as an exercise in optimization quickly blossomed into the realization that the right data wasn’t actually available or additional data could provide even better engagement.  That led to the use of progressive profiling, where data from prospects and customers is incrementally captured over time through multiple touch points.  Progressive profiling can help marketing uncover new segmentation and targeting criteria that may not already exist. One example is product-specific data on buyers – you know what they bought, but do you know why?
  • Very rarely does a decline in a metric lead to a single, simple answer. If customers become less engaged you need to question a variety of hypotheses. Are we suffering from increased competition in the inbox? Do email opens on mobile devices have an impact? Are you over-emailing customers? Are there creative executions that aren’t working? Is it the send time? From a program standpoint, it means the testing and optimization methodology must be very flexible and scalable.  You want to measure metrics that can universally be applied across other campaigns or channels.
  • Mobile optimization was a core focus for all panelists. Massive increases in volume of email opens via mobile led to the realization that email templates MUST be responsive.  Bottom line, move to responsive… yesterday.
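The progressive profiling idea above can be sketched in a few lines: capture only a couple of fields per touchpoint, and never overwrite data you already have. This is an illustrative sketch, not the panelists' implementation; the field names are assumptions:

```python
# Fields we ultimately want on every profile (hypothetical list)
REQUIRED = ["email", "company", "role", "purchase_reason"]

def progressive_merge(profile, touchpoint_data):
    """Merge newly captured fields into an existing profile,
    keeping any value we already know rather than overwriting it."""
    merged = dict(profile)
    for field, value in touchpoint_data.items():
        if value and not merged.get(field):
            merged[field] = value
    return merged

def next_questions(profile, limit=2):
    """Ask for at most `limit` missing fields at the next touchpoint,
    so no single form overwhelms the prospect."""
    missing = [f for f in REQUIRED if not profile.get(f)]
    return missing[:limit]

profile = {"email": "a@example.com"}
profile = progressive_merge(profile, {"company": "Acme"})
# next_questions(profile) now asks for role and purchase_reason
```

The `purchase_reason` field is the "you know what they bought, but do you know why?" data point from the panel – something a first-touch form would never capture, but a post-purchase touchpoint can.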

How did you set up the programs so you could iterate and test continuously?

  • It’s a challenge with trigger campaigns because there’s so much to measure from a correlation standpoint. One thing that helped was phased testing. Link each phase to a desired outcome that can be measured, and then use small ad-hoc tests to establish if it’s even possible to run a test in a campaign.  Is the data even available?
  • How do you identify and test hypotheses? Don’t take a linear, structured approach – test 1, then test 2, then test 3. Test multiple hypotheses in parallel to speed things along.
  • Create a data-mart for campaign data. Use end-user-friendly BI platforms like Tableau and MicroStrategy. Give end users easy access to campaign data.

How do you drive change internally when culture and legacy tools tend to impede innovation?

  • This is a huge challenge because often transaction-based emails are managed by IT, and it’s difficult to take ownership of these in marketing. Take advantage of changes in the organization that may shift the balance of power.
  • The business case for change is key for every organization represented on the panel. If you can show why and how the change can impact the organization positively, it’s a lot easier to get stakeholders involved.

Republished with author's permission from original post.

Ian Michiels
Ian Michiels is a Principal & CEO at Gleanster Research, a globally known IT market research firm covering marketing, sales, voice of the customer, and BI. Michiels is a seasoned analyst, consultant, and speaker responsible for over 350 published analyst reports. He maintains ongoing relationships with hundreds of software executives each year and surveys tens of thousands of industry professionals to keep a finger on the pulse of the market. Michiels has also worked with some of the world's biggest brands including Nike, Sears Holdings, Wells Fargo, Franklin Templeton, and Caesars.


