Analytics Club – Talking Optimization Post #9 by Gary Angel


Kelly,

It’s been hugely fun to do this series (and the one on Personalization with Jim – who still owes me a post!) – so thanks. I don’t know about you, but the back-and-forth sure seems to make it easier to write this much content. As I mentioned offline, I’m hoping we can put all this together into a Whitepaper for Q4 as a little something to remember our initial Analytics Club foray by.

Before we wrap up, though, there are a few final issues I wanted to get your thoughts on.

I’ll start by picking up a thread from your last post regarding how optimization programs measure themselves. “Wins” as a metric is one of those seemingly obvious KPIs that, on closer examination, is clearly misguided if not unintelligible. In a world where you’re paying a “best-practices” consultant to suggest tweaks to your site, wins might be a reasonable measure of whether your consultant knows what they are talking about. But as a measure of a testing program, it fails to capture impact, and it fails completely in situations where programmatic testing à la my video example is on the table.

I see where measuring “learnings” is a natural reaction to that. In theory, a metric like “learn” rate would cover programmatic testing. That being said, it doesn’t feel like a great metric either. As with wins, it’s missing a measure of impact and, frankly, it feels too soft. I’ll grant that a testing department should measure learn rate, since it largely reflects whether an experiment was well-conceived and well-designed, but it’s not a metric that you can use to justify your existence.

I’m guessing that optimization (no surprise) is like most other real-world problems – too complex to capture in any single KPI. But if you were running a testing team would you focus on measuring success via impact (baselining conversion, satisfaction, functional efficiency, etc. and then measuring improvement) or measuring success via operational metrics (win rate, learn rate, etc.)? And if you were doing baselining, how would you fit programmatic measurement into that scheme?
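To make the contrast concrete, here’s a minimal sketch (hypothetical test names and numbers, not from any real client) of how a win-rate view and a baseline-impact view of the same small portfolio of tests can tell different stories:

```python
# Hypothetical portfolio of tests: each entry is
# (name, baseline conversion rate, variant conversion rate, annual visitors exposed)
tests = [
    ("Homepage hero copy",     0.030, 0.032, 500_000),
    ("Checkout button color",  0.050, 0.049, 200_000),
    ("Pricing page layout",    0.020, 0.026, 300_000),
]

# Operational view: how often did we "win"?
wins = sum(1 for _, base, variant, _ in tests if variant > base)
win_rate = wins / len(tests)

# Impact view: incremental conversions versus the pre-test baseline.
incremental_conversions = sum(
    (variant - base) * visitors for _, base, variant, visitors in tests
)

print(f"Win rate: {win_rate:.0%}")
print(f"Incremental conversions vs. baseline: {incremental_conversions:,.0f}")
```

Two of three tests “won,” but nearly all of the impact comes from one of them – which is exactly why a win-rate number alone tells you so little about whether the program is paying for itself.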

 

My second question is around the role of the creative agency in testing and how the enterprise can optimize that relationship. Most of our clients do rely on agencies for their creative. And heaven knows we have no dog in this hunt since we don’t/can’t/won’t touch creative. But they sometimes use different agencies for the big design work (a new website) and for testing. No doubt that’s to get better responsiveness and maybe lower cost for testing. Still, it seems problematic to me. I suspect you’d agree that the best testing programs dramatically reduce or even eliminate the need for those big waterfall re-designs. So what type of creative agency should an enterprise look for to support an optimization program, and what type of relationship?

Going to a model where testing spans the gamut from small tweaks to large experiences seems to put a lot of stress on the traditional agency relationship and I’m curious what you’ve seen work well.

 

I’m also going to ask you, in your final piece, to provide your own summary of some of the key themes and take-aways. Here, as a starting point, is my attempt at a broad summary along with a few points I particularly wanted to call out.

  1. There are types of testing that don’t require analytics (best-practices optimizations, content strategies), but even these will generally be better if driven and informed by analytics.
  2. Testing and Analytics have all sorts of overlaps and interdependencies including creative briefs, segmentation strategies, testing prioritization, and programmatic testing to support variation.
  3. The current state of most testing and most optimization consulting is abysmal – with experts peddling best practices that lack analytics rigor, real segmentation, and necessary customization. This type of expert-driven testing can produce a round or two of mildly effective improvements but then tends to run out of steam.
  4. Too much focus is put on image and design content – too little on text content, which is the real messaging. Lack of text content is an inexcusable limiting factor on many optimization programs and testing strategies.
  5. Creative briefs and segmentation are critical ingredients of a properly ordered testing strategy and are almost always under-baked and under-served by testing and analytics teams in most enterprises.
  6. A good creative brief involves the deep integration of segmentation and VoC into analytics and provides the creative team with a robust picture of the target audience and what drives their decision-making.
  7. Programmatic testing is a key component of a good optimization program and it can fundamentally change the way analytics and optimization teams interact with the rest of the organization. By providing controlled variation, programmatic testing can solve a range of problems that simply aren’t addressable by analytics alone.
  8. To support analytic-driven and programmatic testing, it’s essential that optimization programs have their own budget – and that optimization teams be tightly integrated into the analytics team.

Whew! That’s a pretty long list, but it hardly captures the richness of the discussion. Great job.

Well, that’s about it from my perspective. I’m going to leave the last word on this series to you – to bring us home and wrap up what you think the big themes are from our discussion. Maybe (don’t feel compelled to go along if you’d like to take it in another direction) you can list five things that companies should do right now to have a better optimization program.

Oh – and by the way – since X Change is now upon us (booking fast too!) and you and I are both deeply involved in planning, what Huddles would you particularly recommend for optimization folks?

See you one last time – same Bat Channel, same Bat time!


Republished with author's permission from original post.

Gary Angel
Gary is the CEO of Digital Mortar. DM is the leading platform for in-store customer journey analytics. It provides near real-time reporting and analysis of how stores performed including full in-store funnel analysis, segmented customer journey analysis, staff evaluation and optimization, and compliance reporting. Prior to founding Digital Mortar, Gary led Ernst & Young's Digital Analytics practice. His previous company, Semphonic, was acquired by EY in 2013.
