Attribution has been the talk of the town for years, but despite its prominence, conversations about it rarely mention a more important aspect of measurement: incrementality.
Essentially, incrementality describes how much lift any form of advertising or marketing provides to whatever metric you’re measuring. Whether it’s conversion rate, revenue, cost per acquisition, or any number of other metrics, you should know where your marketing drives improvement and by how much.
Before deciding how much of your budget to attribute to each platform, you have to know that your methods are actually driving those increases. Once you know that your marketing is boosting your bottom line, how do you optimize performance for the most appropriate media across multiple channels?
A 2017 survey by the Data & Marketing Association and Winterberry Group found that nearly two-thirds of U.S. marketers had made attribution a higher priority over the past year, and that trend will likely continue. If your company is among those spending hundreds of thousands of dollars to figure out its attribution formula, are you doing it well? Is the return worth the investment? Have you done an incremental analysis of each channel or tactic to determine whether your marketing works before investing in the attribution of those marketing touchpoints?
It’s not that attribution doesn’t deserve the attention it gets. In fact, it’s a highly effective technique when implemented properly. The problem is that a lot more goes into making attribution successful than you might have initially thought. For example, did you make sure your data is clean enough to provide accurate insights?
If your analytics data tells you most or all of your website traffic is direct-only traffic, you might believe people are coming straight to your site without engaging with any advertising or marketing. Your attribution solution, then, would be to reduce or eliminate one or more digital ads and focus more on driving direct traffic, perhaps via TV or out-of-home advertising. But that stat could also mean your landing pages are redirecting visitors in a way that strips the digital ad’s tracking parameters, disguising each session or user as a direct visit.
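To make that failure mode concrete, here is a minimal Python sketch of how a redirect that rebuilds a URL from its path alone drops the ad’s UTM tags, so analytics sees the visit as direct. The function names (`naive_redirect`, `safe_redirect`) and the example URL are illustrative, not from any particular platform:

```python
from urllib.parse import urlparse, parse_qs, urlunparse

# Standard UTM tracking parameters appended to ad click-through URLs.
UTM_KEYS = {"utm_source", "utm_medium", "utm_campaign", "utm_term", "utm_content"}

def has_tracking_params(url: str) -> bool:
    """Return True if the URL still carries any UTM tracking parameters."""
    params = parse_qs(urlparse(url).query)
    return any(k in params for k in UTM_KEYS)

def naive_redirect(url: str, new_path: str) -> str:
    """A redirect that rebuilds the URL from its path alone,
    silently dropping the query string (and the UTM tags with it)."""
    parts = urlparse(url)
    return urlunparse((parts.scheme, parts.netloc, new_path, "", "", ""))

def safe_redirect(url: str, new_path: str) -> str:
    """A redirect that carries the original query string forward,
    so analytics can still attribute the session to the ad."""
    parts = urlparse(url)
    return urlunparse((parts.scheme, parts.netloc, new_path, "", parts.query, ""))

tagged = "https://example.com/landing?utm_source=adnetwork&utm_medium=cpc"
print(has_tracking_params(naive_redirect(tagged, "/home")))  # False: looks direct
print(has_tracking_params(safe_redirect(tagged, "/home")))   # True: still attributed
```

Auditing a sample of landing-page redirects with a check like this is one quick way to tell whether “direct” traffic is real or an artifact of stripped parameters.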
Cleaning your data—that is, making sure each campaign’s data is easily discernible—allows you to accurately measure, in increments, how each campaign on each channel contributes to your overall campaign’s success and, hopefully, your bottom line. However, you still have to decide how best to measure that incrementality.
That might include placebo tests, where you measure the activity an ad generates against the activity a nonpromotional PSA generates. If the ad generates more website visits, purchases, or revenue than the PSA, then maybe it deserves the credit you’ve attributed to it. Maybe it deserves more. If there’s no difference, or if the ad generates less of your chosen metric, then it’s probably not worth the credit.
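A placebo comparison like this boils down to comparing conversion rates between the ad-exposed and PSA-exposed groups. Below is a hedged sketch using only the standard library; the group sizes and conversion counts are hypothetical, and the pooled two-proportion z-test is one common (normal-approximation) way to judge whether the lift is more than noise:

```python
from math import sqrt, erf

def lift_report(ad_visitors, ad_conversions, psa_visitors, psa_conversions):
    """Compare conversion rates between an ad-exposed group and a
    PSA (placebo) group; report relative lift and a two-sided p-value."""
    p_ad = ad_conversions / ad_visitors
    p_psa = psa_conversions / psa_visitors
    lift = (p_ad - p_psa) / p_psa  # relative lift over the placebo
    # Pooled two-proportion z-test (normal approximation).
    p_pool = (ad_conversions + psa_conversions) / (ad_visitors + psa_visitors)
    se = sqrt(p_pool * (1 - p_pool) * (1 / ad_visitors + 1 / psa_visitors))
    z = (p_ad - p_psa) / se
    p_value = 1 - erf(abs(z) / sqrt(2))  # two-sided tail probability
    return {"ad_rate": p_ad, "psa_rate": p_psa, "lift": lift, "p_value": p_value}

# Hypothetical numbers: 50,000 impressions per group.
report = lift_report(50_000, 600, 50_000, 500)
print(f"lift = {report['lift']:.1%}, p = {report['p_value']:.3f}")
```

With these made-up numbers the ad group converts at 1.2% versus 1.0% for the PSA group, a 20% relative lift that the test flags as statistically significant; in practice you would pick sample sizes before running the test, not after.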
Taking it a step further, clean data will allow you to separate out the visitors who browsed your site, added items to their carts, and then left, regardless of what channel got them there. Should you retarget those nonpurchasers or not? Are they going to come back and purchase regardless of your efforts? An incrementality study can help you decide.
By splitting these visitors into two subgroups, a test and a control, you can serve one a round of retargeted ads featuring the products they viewed or added to their carts. This is the test group. The control group never sees a retargeting ad. Comparing their return-and-purchase rates will help you decide whether retargeting these visitors is something you should continue investing in, invest more in, or cut out entirely.
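The split described above can be sketched in a few lines of Python. This is a minimal illustration, not a production assignment system: the visitor IDs are fake, and hashing the ID into a bucket is just one common way to keep a visitor in the same group across sessions without storing state.

```python
import hashlib

def assign_group(visitor_id: str, test_share: float = 0.5) -> str:
    """Deterministically bucket a visitor into 'test' (sees retargeting ads)
    or 'control' (holdout, sees none) by hashing their ID, so the same
    visitor always lands in the same group."""
    bucket = int(hashlib.sha256(visitor_id.encode()).hexdigest(), 16) % 10_000
    return "test" if bucket < test_share * 10_000 else "control"

def incremental_lift(test_purchases, test_size, control_purchases, control_size):
    """Relative lift of the retargeted group's purchase rate over the holdout's."""
    p_test = test_purchases / test_size
    p_control = control_purchases / control_size
    return (p_test - p_control) / p_control

# Hypothetical cart abandoners split into the two groups:
abandoners = [f"visitor-{i}" for i in range(1000)]
groups = {v: assign_group(v) for v in abandoners}

# Made-up outcome: 90 of 1,000 retargeted visitors purchased vs. 75 of 1,000
# in the holdout, i.e. retargeting added roughly 20% relative lift.
print(incremental_lift(90, 1_000, 75, 1_000))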
The challenge is that not all incrementality studies are created equal. Therefore, there is no holy grail of formulas to ensure success every time. You have to understand the many different things you can measure and then find the most appropriate metrics for your business, brand, and products. The time, budget, and expected outcomes will always vary, and even when a hypothesis doesn’t pan out at first, it might under different circumstances.
The Case for Incrementality
The biggest challenges in this arena arise in companies that do not have a testing mindset and are unwilling to invest in incrementality and attribution as ongoing processes. Doing something once or twice a year won’t generate enough information to determine user behavior or the influence of emerging consumer trends and technology.
Remember the cart abandonment retargeting strategy I mentioned above? If one test suggested that retargeting was largely ineffective, you can either test again or abandon the strategy. If you test again, you might find that retargeted social media ads work much better than retargeted display ads. Further testing will help you tweak both strategies for optimal results by only retargeting consumers who are likely to respond.
That said, consistently bad results should cement your decision to cut out certain strategies. Repeat testing is essential, but after poor performance across multiple tests, you can’t keep doing the same thing and expect different results. That would be as wasteful as initially staking your marketing campaigns on blind attribution techniques.
With the insights gained from clean data and incrementality studies, marketing teams in every industry can benefit from attribution. It shouldn’t be as cut-and-dried as “attribution never works” or “attribution is my everything, and nothing else matters.” The middle ground is the most effective route. Find out which channels and media actually deserve extra credit, and then attribute that credit to them.
Andrew Richardson is head of analytics at Elite SEM.