Fact: Rich people drive more BMWs than poor people. But only a fool would conclude from this that driving a BMW will make you rich. Unfortunately, this is exactly the kind of logic many companies use to make business decisions regarding their web sites. Using advanced segmentation tools now so readily available, they chase what I call “epiphenomenal effects” and draw from them, at best, silly and, at worst, costly conclusions.
The basic idea behind segmentation is simple: define a particular kind of visit or visitor, then examine the traffic and page-consumption patterns of that segment, comparing segments against each other or correlating them with particular site features. The result is often a finding like "a visitor who uses feature X is Y percent more likely to do Z on my site." Powerful analytical conclusions can be drawn from such findings, and site optimization based on segmentation can have a dramatic impact on online business success.
Segmentation has, thus, been one of the most popular recent trends in web analytics. All the newest versions of analytical tools available (HBX, WebTrends, SiteCatalyst) emphasize the speed, ease and flexibility of their segmentation capabilities. Back when segmentation requests took days to process, and cost extra money, businesses put care and thought into designing the segment and determining the appropriate questions to be asked of the data. Now that segmentation can be done in seconds, results and findings all too often are misleading and analytically flawed.
For example, a successful online travel company’s primary success metric is online trip bookings. So the company created a segment of converted visitors. Finding that this segment had a significantly higher number of page-views of the company’s “registration” page, the company redirected all its pay-per-click traffic to the site registration page. This did increase registrations but, unfortunately, had little effect on bookings. The company abandoned the strategy after a few months.
This is because the initial finding (buyers were more likely to preregister) is epiphenomenal: secondary to the underlying cause (buyers are more engaged, and more engaged visitors are more likely to register). While the finding is true, it is not actionable, at least not in this fashion. But discovering that event X (registration) is "screened off" from outcome Y (bookings), meaning that once you account for their common cause, X tells you nothing more about Y, is not always simple or trivial.
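The screening-off idea can be made concrete with a toy simulation. This is a sketch only: the "engagement" model and all probabilities below are invented for illustration, not drawn from the travel company's data. When engagement is the common cause of both registration and booking, registration predicts booking overall, but the association vanishes once visitors are grouped by engagement level.

```python
import random

random.seed(0)

# Hypothetical model: "engagement" drives both registration and booking;
# registration itself has NO causal effect on booking.
visitors = []
for _ in range(100_000):
    engaged = random.random() < 0.3
    p_register = 0.5 if engaged else 0.05   # made-up probabilities
    p_book = 0.2 if engaged else 0.01
    visitors.append((engaged,
                     random.random() < p_register,
                     random.random() < p_book))

def book_rate(rows):
    return sum(booked for _, _, booked in rows) / len(rows)

reg = [v for v in visitors if v[1]]
noreg = [v for v in visitors if not v[1]]

# Unconditionally, registration looks strongly predictive of booking...
print(f"booking rate, registered:     {book_rate(reg):.3f}")
print(f"booking rate, not registered: {book_rate(noreg):.3f}")

# ...but within each engagement level, the difference vanishes:
# registration is "screened off" from booking by engagement.
for level in (True, False):
    sub = [v for v in visitors if v[0] == level]
    r_reg = book_rate([v for v in sub if v[1]])
    r_noreg = book_rate([v for v in sub if not v[1]])
    print(f"engaged={level}: registered {r_reg:.3f} vs not {r_noreg:.3f}")
```

Run as-is, the unconditional booking rates differ several-fold, while the within-group rates are nearly identical; that is exactly the signature of an epiphenomenal effect.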
Epiphenomena can also arise less directly and obviously. Take another example: a publishing web site introduced a new interactive tool, and the organization wanted to know whether it was a good driver of traffic. The company ran a visitor-based segment analysis and found that visitors who used the tool did indeed have far higher page consumption (a key performance indicator for any publisher). Until then, the tool had been buried deep within the site, so management decided to take up valuable real estate on the home page with a link to the tool. After three months of sub-one-percent click-through rates from the home page and no perceptible impact on engagement, the link was removed.
What happened? On the one hand, this should have been actionable, because there was no obvious circular logic: the KPI seemed independent of the segment criteria. However, the observed effect was still epiphenomenal, for the following reason: one had to be somewhat engaged with the site to use the tool in the first place, and engaged visitors, by definition, have higher page consumption than average.
By asking for visitors who used that tool, the company was, in effect, asking for visitors who were more engaged and excluded unengaged or single-access visitors (so common nowadays). The more requirements are placed on the visitor to be included in the segment, the more “engaged” the visitor is and the more likely a visitor is to exhibit a particular success event. And the answer isn’t just to get rid of the “unengaged” visitors (and how would you define that, anyway?); someone unengaged today might be engaged tomorrow.
Determining whether an effect is epiphenomenal or not can, thus, be tricky if there is no obviously circular logic. One method we use to determine the importance of an observed effect in segmentation is to compare metrics at different segmentation-scales (page view, visit, visitor).
Looking again at my example of the web site and the tool: a page-view-based segment might show that the tool is used 2.3 times per visit. Expanding this to a visit-based segment could tell you that visits including the tool averaged 35 page views, putting those 2.3 tool views in context and suggesting that engagement with the tool might be epiphenomenal. Similarly, a visit-based segment might show 2.2 tool-using visits per visitor, while a visitor-based segment could reveal that these visitors made 20 visits to the site on average. The same publisher used this methodology to great effect and has been able to identify real drivers of visitor loyalty.
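The scale-comparison method might be sketched as follows. Everything here is hypothetical: the page-view log, the visitor and visit IDs, and the page named "tool" are all made up to illustrate computing the same metric at the page-view, visit, and visitor scales.

```python
from collections import defaultdict

# Hypothetical page-view log: (visitor_id, visit_id, page).
log = [
    ("v1", "s1", "home"), ("v1", "s1", "tool"), ("v1", "s1", "article"),
    ("v1", "s1", "tool"), ("v1", "s2", "article"), ("v1", "s2", "home"),
    ("v2", "s3", "home"), ("v2", "s3", "tool"), ("v2", "s3", "article"),
    ("v2", "s3", "article"), ("v2", "s4", "tool"), ("v2", "s4", "home"),
    ("v3", "s5", "home"),  # v3 never uses the tool
]

views_per_visit = defaultdict(int)
tool_views_per_visit = defaultdict(int)
visits_per_visitor = defaultdict(set)

for visitor, visit, page in log:
    views_per_visit[visit] += 1
    if page == "tool":
        tool_views_per_visit[visit] += 1
    visits_per_visitor[visitor].add(visit)

tool_visits = [v for v in views_per_visit if tool_views_per_visit[v] > 0]
tool_visitors = {vis for vis, visits in visits_per_visitor.items()
                 if visits & set(tool_visits)}

# Page-view scale: how often is the tool used within a tool-using visit?
pv_scale = sum(tool_views_per_visit[v] for v in tool_visits) / len(tool_visits)
# Visit scale: how large are those visits overall?
visit_scale = sum(views_per_visit[v] for v in tool_visits) / len(tool_visits)
# Visitor scale: how many visits do tool users make in total?
visitor_scale = (sum(len(visits_per_visitor[vis]) for vis in tool_visitors)
                 / len(tool_visitors))

print(f"tool views per tool-using visit: {pv_scale:.1f}")
print(f"page views per tool-using visit: {visit_scale:.1f}")
print(f"visits per tool-using visitor:   {visitor_scale:.1f}")
```

If the tool accounts for only a sliver of the page views in visits that include it, and tool users turn out to be heavy repeat visitors regardless, the tool's apparent effect on page consumption is likely riding on engagement rather than driving it.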
So the next time you’re zipping around in Omniture Discover, creating segment after segment and filter after filter, take a deep breath, think the logic through and make sure you’re not driving that BMW.