{"id":113200,"date":"2014-08-24T19:13:19","date_gmt":"2014-08-25T02:13:19","guid":{"rendered":"http:\/\/semphonic.blogs.com\/semangel\/2014\/08\/analytics-club-talking-optimization-post-7-by-gary-angel.html"},"modified":"2014-08-24T19:13:45","modified_gmt":"2014-08-25T02:13:45","slug":"analytics-club-talking-optimization-post-7-by-gary-angel","status":"publish","type":"post","link":"https:\/\/customerthink.com\/analytics-club-talking-optimization-post-7-by-gary-angel\/","title":{"rendered":"Analytics Club – Talking Optimization Post #7 by Gary Angel"},"content":{"rendered":"

Terrific post. I think your five steps capture a large and important part of how enterprises ought to think about the integration of testing and analytics. Everything, from the deep integration of segmentation to the use of customer satisfaction impact, is, I think, dead-on.

In fact, I think we've done such a good job covering this that I want to delve into a slightly different topic: namely, the way that a testing program can be used by an analytics team (and an organization) to answer fundamental questions that are impossible to cull out of any analytics method, no matter how sophisticated.

Here's an example of a case that came up recently in conversation with a client. This client has a fairly large number of websites that each support a separate brand. The sites are structured in a fairly similar fashion but are independent, have separate content, and are built by quite a variety of agencies and creative teams. In the last few years, one of the questions the client has been asking concerns the value of short-form video on these sites and the best strategy for integrating that video. Good question, right?

But if you let each brand and creative team decide how they're going to integrate video, you're quite likely to end up with a situation where many strategies are untested and a few are over-tested. With no coherent plan to test video integration, it is HIGHLY likely that you'll lack the data to actually answer the best-practice question.

Here's the dirty little secret about analytics: it requires variation. It doesn't matter how powerful your statistical techniques are; you can't analyze what isn't there. Big data, machine learning, advanced algorithms – none of it matters a whit unless the data has enough variation to answer the questions.
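To make that concrete, here is a minimal sketch using made-up data (the column names and numbers are illustrative assumptions, not from any real site). When a single site has adopted one strategy everywhere, say auto-play on every video, there is no contrast in the data for any analysis to exploit, no matter what tool you throw at it.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 1000

# Hypothetical single-site data: every video on this site auto-plays,
# so the one variable we want to evaluate never varies.
site = pd.DataFrame({
    "autoplay": True,                                  # constant across the site
    "video_length_sec": rng.choice([30, 60, 120], n),  # this one does vary
    "completed_video": rng.random(n) < 0.2,
})

# Length varies, so we can at least compare completion rates by length...
print(site.groupby("video_length_sec")["completed_video"].mean())

# ...but the auto-play comparison collapses to a single group. There is no
# non-auto-play traffic to compare against, so no model, however clever,
# can say whether auto-play helps or hurts.
print(site.groupby("autoplay")["completed_video"].mean())
```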

So let's think about video. Right off the bat, it seems to me that I'd like to be able to tell my creative teams:

1. What types of video are most interesting and impactful, by customer segment
2. Whether video should be implemented on its own page or integrated
3. Whether video should be on the home page and, if so, in what area
4. Whether video should be auto-play or not
5. Whether video should be sub-titled
6. What's the optimum length of a video, by type and segment
7. What's the best strategy for integrating calls-to-action into a video

I'm sure there are other important questions. None of these questions can be answered by a single set of videos and a single navigation strategy. With a single strategy, the best – the absolute best – that analytics can do is tell you which audiences that single strategy is more or less effective for. Useful, but simply not enough to create best practices.

But suppose you have five sites with different implementations – can you answer the best-practice questions? Well, it seems to me you can take a stab at it, but there are some severe limitations. First, you can only analyze the actual variations that have been implemented. If none of those sites uses auto-play, you can't analyze whether auto-play is effective. Second, you have to hope that you can achieve a high degree of comparability between the sites. After all, you can't test the impact of different video integration strategies unless you can compare placements, engagement, and impact on an apples-to-apples basis. That's hard. Really hard. Sometimes flat-out impossible. The whole point of a careful test is that you've created a true control group. That control group makes it possible – even easy – to measure the impact of variation. Without a true control, you're only taking your best guess.
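As an aside on what a true control buys you, here is a minimal sketch of the arithmetic, with purely hypothetical numbers. With a randomized control group, measuring the impact of one variation reduces to comparing two conversion rates and checking whether the difference is bigger than chance.

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results from one controlled test cell: "control" is the
# current video integration, "treatment" is the variation being tested.
control_visitors, control_conversions = 5000, 400        # 8.0% conversion
treatment_visitors, treatment_conversions = 5000, 455    # 9.1% conversion

z_stat, p_value = proportions_ztest(
    count=[treatment_conversions, control_conversions],
    nobs=[treatment_visitors, control_visitors],
)

lift = (treatment_conversions / treatment_visitors
        - control_conversions / control_visitors)
print(f"absolute lift: {lift:.1%}, p-value: {p_value:.3f}")
```

Cross-site comparisons, by contrast, confound the variation you care about with everything else that differs between the sites, which is exactly the apples-to-oranges problem above.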

If you really want to develop a good set of best practices around something like video, you need to develop a comprehensive test plan: a test plan that includes different types of content for different audiences, different lengths, different integration techniques, and different locations. Then you have to test. And analyze. And test.

So in my video example, a good test will include developing different types of video for each customer segment and use-case, trying two or three creative approaches to each video (remember the creative brief!), modifying each type into two different lengths, testing video in various site placements (including early-to-entry, late-to-entry, and key pages), testing auto-play and various other video configurations, and testing different call-to-action strategies across different permutations of length and type.

Whew!
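To see why that "Whew!" is earned, here is a small sketch that enumerates a hypothetical full-factorial version of the plan above. The specific levels are placeholder assumptions, not recommendations; the point is how quickly the cells multiply.

```python
from itertools import product

# Hypothetical test dimensions loosely following the plan above;
# every level here is a placeholder, not a recommendation.
dimensions = {
    "segment": ["new visitor", "returning", "loyal customer"],
    "video_type": ["product demo", "how-to", "brand story"],
    "length_sec": [30, 90],
    "placement": ["home page", "early-to-entry page", "key page"],
    "autoplay": [True, False],
    "cta": ["end card", "mid-roll overlay", "none"],
}

cells = list(product(*dimensions.values()))
print(f"full-factorial plan: {len(cells)} test cells")  # 3*3*2*3*2*3 = 324

# A real program would prune this with a fractional design or by testing
# dimensions sequentially, but even a pruned plan is far more content and
# traffic than any single brand would fund on its own.
for cell in cells[:3]:
    print(dict(zip(dimensions, cell)))
```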

This is certainly daunting – especially when it comes to an expensive content form like video (these same tests are ridiculously easy for text and are almost all applicable to article-form content). That, to me, highlights an important organizational truth. Some enterprises choose to treat testing as a pay-to-play service. For some things, that works fine. But guess what: it also means that this kind of comprehensive, programmatic testing to establish a cross-brand or cross-business-unit best practice will NEVER GET DONE. No single brand is ever going to invest in the necessary content or the necessary experiments to create the variation needed to answer the analysis questions. I'm not against making testing and analytics departments get some of their funding from their stakeholders on a pay-to-play basis. That's a good discipline. But if they don't have independent budget to attack this class of problem, it doesn't seem to me that anyone will ever address these bigger questions.

What do you think? What's your view on the best way to budget a testing organization?

This whole question, it seems to me, is the flip side of the close relationship between analytics and testing. Testing plans should be developed at least in part to answer analytics questions that require more variation. When you create these kinds of tests, you're learning with every variation. Wins and losses aren't what's important, because every test yields analytic knowledge that then feeds back into broader strategies.

To me, this kind of testing ought to be a significant part of any good enterprise testing program – and it's best driven by the analytics team, because they're the ones with the best sense of the questions they can't answer without more variation. What do you think? Are testing departments and testing consultancies out there doing a good job of this kind of testing? I sure don't see many cases where testing programs (even where they're led by expensive professional testing consultants) have definitively answered this type of open-ended best-practice question.

In one sense, all testing is really just a way to create variation for analytics. But a testing program that's designed to establish best practices around some kind of content or tool fills a unique niche, separate from testing that's designed to optimize a single path or problem. How much of this kind of testing should organizations be doing? How can they decide which variables to test to create a comprehensive plan? What's the right way to balance this kind of focused testing program with analytics testing driven by specific problems and issues? And finally, how, organizationally, can you work with agencies or creative teams when this kind of focused testing program is what you're trying to build?

That's a heap of questions, so I look forward eagerly to the next installment of Kellyvision!

Gary
