“You Can’t Always Get What You Want”: Why your Adobe Analytics (SiteCatalyst) segments return the wrong data and how to get what you need.
The Rolling Stones were right: “You can’t always get what you want.”
But if you have Adobe Analytics (SiteCatalyst) and understand segments, “you just might find, you can get what you need.”
It’s been my longstanding recommendation that it is critical to validate the dataset for any segment you create before using or publishing the data. This advice comes from years of seeing people proudly report “actionable insights” based on incorrect assumptions, derived from incorrect and misleading data. The root cause has often been data from segments.
To understand segments, you have to understand “containers,” persistence, and how the include and exclude functionality works.
In the Definitions section of the Segment Builder, there is a pull-down menu (labeled “Show”) that allows you to select one of the available “containers” (Hit, Visit or Visitor).
The Hit container, formerly the Page View container, indicates that you want data for only those server calls (hits/pages) that meet the conditions you have defined in the segment.
The Hit container is straightforward until persistence comes into play. Adobe’s default setting for eVar (conversion) variables is to persist data through the end of the visit. In example 1 below, eVar1 is set to the value “C.” Because eVars persist and Page 1 set eVar1 to “C,” Page 2 and Page 3 would both return data for a segment that includes pages where eVar1 equals “C.” Pages 2 and 3 carry the value “C” in eVar1 even though that value is not explicitly set on those pages.
This same segment would not return data for Page 2 and Page 3 in visits where they were not preceded by a page that set eVar1 to “C.”
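The persistence behavior above can be sketched in code. This is not Adobe’s actual segmentation engine, just a minimal illustration of the rule: a Hit-container segment matches persisted eVar values, not only explicitly set ones.

```javascript
// A visit: Page 1 explicitly sets eVar1 = "C"; Pages 2 and 3 do not.
const visit = [
  { page: "Page 1", eVar1: "C" },
  { page: "Page 2", eVar1: null },
  { page: "Page 3", eVar1: null },
];

// Apply visit-long persistence: each hit inherits the last value set.
function applyPersistence(hits) {
  let current = null;
  return hits.map((hit) => {
    if (hit.eVar1 !== null) current = hit.eVar1; // a new value overwrites
    return { page: hit.page, eVar1: current };   // otherwise it persists
  });
}

// Hit container: return only hits where the (persisted) condition is true.
function hitSegment(hits, value) {
  return applyPersistence(hits).filter((h) => h.eVar1 === value);
}

console.log(hitSegment(visit, "C").map((h) => h.page));
// All three pages match, because "C" persists onto Pages 2 and 3.
```

Run the same function against a visit that never sets eVar1 and, as the next paragraph notes, nothing is returned.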
The Visit Container:
Selecting the Visit container tells the query to return all data from visits where the condition is met in any server call (page view, click action, etc.) within a visit.
Segments can get confusing when you combine a Visit container in an “include” segment and use the “does not equal” condition.
Let’s imagine you want to return data for all visits where the home page is not seen.
If you wrote a segment for visits where Page Name does not equal “Home,” you would almost inevitably end up with data you didn’t want.
Each hit within the visit is evaluated individually. If any of the hits meets the segment’s conditions, all data for all hits within the visit is returned (even for the hits that did not meet the conditions).
If a user visits the Home page and an Article page in the same visit, the segment would evaluate the Home page and the condition would return “false.” The condition is not met because Page Name would equal “Home” on the home page. It would then evaluate the article page. The article page would meet the segment’s criteria (true), and all data for that visit would be returned.
To get the desired result you would have to exclude visits where Page Name equals “Home.”
Although this logic isn’t intuitive, the following segments will return different results:
- Include: Visits where Page Name does not equal Home
- Exclude: Visits where Page Name equals Home
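The two definitions above can be sketched as code. This mimics the evaluation order described in the text, not Adobe’s internals: “include” keeps the whole visit if any hit is true, while “exclude” drops the whole visit if any hit matches.

```javascript
const visit = [{ pageName: "Home" }, { pageName: "Article" }];

// Include: if ANY hit satisfies the condition, the whole visit is returned.
function includeVisit(hits, condition) {
  return hits.some(condition) ? hits : [];
}

// Exclude: if ANY hit matches the condition, the whole visit is thrown away.
function excludeVisit(hits, condition) {
  return hits.some(condition) ? [] : hits;
}

// Include "Page Name does not equal Home": the Article hit is true,
// so the entire visit (Home page included) comes back.
const included = includeVisit(visit, (h) => h.pageName !== "Home");

// Exclude "Page Name equals Home": the Home hit matches, so nothing comes back.
const excluded = excludeVisit(visit, (h) => h.pageName === "Home");

console.log(included.length, excluded.length); // 2 0
```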
The Visitor Container:
The Visitor container works much like the Visit container, but it is visitor-centric and thus pulls cross-visit data. In other words, if the visitor had three visits in a selected timeframe, all data for all of that visitor’s activity would be returned even if only one “hit” in one of the visits met the segment criteria.
If you are a rock guitarist and have a Marshall stack (that goes to 11), you will be able to play loud, but that doesn’t mean you will sound good. ;-)
Segments are the analytics equivalent of the Marshall stack. It’s powerful, but only good if you take the time to learn how to make music.
Check out my music at https://www.reverbnation.com/matthewcoen
Anyone who’s ever had to test analytics tags can definitely relate to Lionel Richie singing “I know it sounds funny but I just can’t stand the pain.”
Quality assurance for analytics tagging can be tough and time consuming. It is particularly difficult to isolate test data in production, or in any environment where multiple users are on the site.
Here’s a quick method for testing that, once set up, will leave you time to pull out your record player and spin your favorite Commodores album.
- Identify a set of pages and actions to be tested.
- Make sure you include all the actions associated with your KPIs (Key Performance Indicators).
- Determine the “flow” of your testing.
- Home Page > Content Page A > Action >Etc.
- Define a test “Campaign Tracking Code” (a code appended to a URL to identify the traffic source).
- Add the Campaign Tracking Code to the URL of the first page of your test
- Either manually or through an automated tool, if you have one, execute the pages and actions of your test.
- Be sure to execute the steps in order
- NOTE: Do not take any other actions from that browser on your site for at least 35 minutes (assuming your analytics tool considers a visit over after 30 minutes of inactivity)
- Create a segment that includes all the pages and actions that occurred in the visit where the Campaign Tracking Code you defined in step 3 was present.
- In Adobe Reports and Analytics (Omniture, SiteCatalyst) be sure to use the visit container.
- Using the segment you just created, run reports for each page and action in your test.
- If you have access to a tool like Adobe Report Builder use it.
- Once you have validated that your test is working and your site is correctly tagged, save the results to compare against in future tests (regression, etc.)
- If you used a tool like Report Builder you can automate the validation phase of the test as well.
i. Save the correct results set
ii. Automate the pulling of the new test results
iii. Add Excel functions to compare your benchmark with the current test results.
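The comparison step can also be scripted outside of Excel. Here is a minimal sketch, assuming your benchmark and current results are exported as simple metric-to-count objects; the metric names are hypothetical.

```javascript
// Benchmark results from the validated test run, and a new run to check.
const benchmark = { "Home Page Views": 1, "Contact Us Submits": 1 };
const current   = { "Home Page Views": 1, "Contact Us Submits": 0 };

// Return the metrics whose current value differs from the benchmark.
function diffResults(expected, actual) {
  return Object.keys(expected).filter(
    (metric) => expected[metric] !== actual[metric]
  );
}

console.log(diffResults(benchmark, current)); // ["Contact Us Submits"]
```

An empty result means the regression test passed; anything listed is a tag that needs a look.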
Boom! Spend a little time on the “Nightshift” making it solid like a “Brick House” and tag testing will be “Easy Like Sunday Morning.”
Want to hear some of my music?
We all like to think our websites and online marketing rock, but when a significant portion of your sales occur offline (especially if you sell big ticket items or services) how do you prove it?
This is a method I developed (years ago) that links offline sales back to online activity. I’ve been told it rocks, but you can be the judge.
Set a persistent cookie
- If you don’t already have a cookie that uniquely identifies users, create one. NOTE: Your cookies and web analytics tools should never capture PII (personally identifiable information). For this process the point is moot, as you don’t have any PII anyway.
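A minimal sketch of the persistent visitor-ID cookie, assuming you are rolling your own. The cookie name and one-year expiry are assumptions, and the ID is random, so no PII is involved.

```javascript
// Generate a random, non-identifying visitor ID -- no PII.
function makeVisitorId() {
  return "v-" + Math.random().toString(36).slice(2, 12);
}

// Given the current cookie value (or null on a first visit), return the
// ID to keep and whether this browser has been here before.
function resolveVisitor(existingId) {
  if (existingId) return { id: existingId, returning: true };
  return { id: makeVisitorId(), returning: false };
}

// In the browser you would persist it roughly like this:
// const { id } = resolveVisitor(readCookie("visitor_id"));
// document.cookie = "visitor_id=" + id +
//   "; max-age=" + 60 * 60 * 24 * 365 + "; path=/";
```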
Get an email address at the point of sale
- At the point of sale (close of deal) get the buyer’s email address and add it to your CRM system. If you sell big-ticket items or services, or have a rewards program you probably already do this.
Send purchasers an email with a link back to your site that includes a Campaign Tracking Code
- Example: http://www.mysite.com?cid=olp
- Identify a compelling reason for them to click the link (special offer, warranty registration, loyalty program bonus, owner specific content, etc.)
Count repeat visitors coming in with the offline purchase Campaign Tracking Code
- When a visitor comes to your site with the offline purchase Campaign Tracking Code in the URL, check their cookie. If it is not their first visit, they went to the site prior to purchase. Thus it’s reasonable to say the site influenced the sale.
- Set an event variable (SiteCatalyst) or a custom variable (Google Analytics) to count “Offline Sales”.
- Set an eVar (SiteCatalyst) or the second parameter of the custom variable (Google Analytics), passing in something like “Influenced” or “Not-Influenced”
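The landing-page logic above can be sketched like this. It assumes `cid=olp` is the offline-purchase tracking code from the example, and that event10 and eVar10 are the slots reserved for “Offline Sales”; the variable numbers are hypothetical and will differ in your report suite.

```javascript
// Decide what to set when a visitor lands with the email's tracking code.
function offlineSaleVars(queryString, isReturningVisitor) {
  if (!queryString.includes("cid=olp")) return null; // not our email link
  return {
    events: "event10", // count an "Offline Sale"
    eVar10: isReturningVisitor ? "Influenced" : "Not-Influenced",
  };
}

console.log(offlineSaleVars("?cid=olp", true));
// { events: "event10", eVar10: "Influenced" }
```

A returning cookie means the buyer saw the site before the purchase, so the sale is flagged as influenced.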
Putting it all together
The final step involves a little math. You won’t get everyone to open the post-sale email or click on the link, but that’s OK. You just need a good sample and the percentage of “Influenced” sales.
Once your sample is large enough, you will be able to show the extent to which your online marketing is driving your offline sales, and that ROCKS!
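The “little math,” sketched with made-up numbers: from the sample of buyers who clicked the email link, compute the share whose cookie showed a pre-purchase visit, then project that rate onto total offline sales.

```javascript
// Share of the clicking sample that was influenced by the site.
function influencedRate(influenced, notInfluenced) {
  return influenced / (influenced + notInfluenced);
}

const rate = influencedRate(120, 80);       // 60% of the sample
const projected = Math.round(rate * 5000);  // applied to 5,000 offline sales

console.log(rate, projected); // 0.6 3000
```

The projection assumes clickers are representative of all buyers, which is worth sanity-checking before you present the number.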
You can check out my music here:
In the recording studio, isolation can be very important. Isolation techniques keep the guitar amp (volume set to 11) from bleeding into the drum microphones and vice versa. Good isolation gives you greater control and more sound shaping options during the mixing process.
In analytics, isolation or segmentation, is critical as well. This is especially true while testing the integrity of your analytics implementation.
This quick tip will help you standardize and simplify your analytics Quality Assurance, by isolating the activity of those people testing data integrity from those responsible for other testing in the same environment (development, QA, production, etc.)
The data input process:
- Document a clear set of repeatable actions and specify the order of the actions
- Go to XYZ page through a link with a predefined QA tracking code appended (?cid=qa_regression_test)
- Search for “Antidisestablishmentarianism”
- Complete the Contact Us form
- Have the testers execute the documented process
- Note the number of testers, the times at which the procedures were executed, etc.
- Validate the expected data is returned in your analytics tool/s
The Tip (using SiteCatalyst nomenclature, though this can be done in any tool with segmentation capabilities):
- Create a segment for “visits” where your specified tracking code (qa_regression_test) was passed in.
- Note: Always validate your data set when first creating a segment.
When you apply your newly created segment, you will have isolated the data passed in by your testers and eliminated the “noise” from anyone else who may have been working in that environment. This technique is particularly valuable when testing in a production environment.
Want to hear some of my music? Go to: http://www.reverbnation.com/matthewcoen
If you want your analytic systems to rock, like guitars, you need to know how to “tune” them. For that reason, I’ll stray from addressing the more philosophical aspects of analytics and get technical.
Many companies use multiple promotions (internal banner ads) on their sites and want to understand their performance.
You can use things like internal tracking codes and path analysis to do this, but these methods can be time consuming, error prone and often don’t work well when you are dynamically serving promos.
When dynamically serving promos, displaying multiple promos on one page or when tactics like carousels are used, the number of impressions a promo gets will significantly affect the number of clicks and successes.
What we need is an automatable, internal “Click Rate” report or, as the media world calls it, a Click Through Rate report. “Click Through” is misleading, as the fact that a user clicked doesn’t mean they made it through. We’ll save that topic for another time.
To create a Click Rate report, we’ll use a list variable.
There is a lot of confusion related to variable types in Adobe Reporting and Analytics (Omniture, SiteCatalyst).
When it comes to Reports and Analytics, it can be difficult enough to understand the intricacies of s.props and eVars, let alone specialized variables like the list variable. The fact that there is more than one type of list variable further complicates things.
What is a list variable? It’s a variable that allows you to pass in multiple delimited values (separated by a comma for instance) and run reports on each value separately.
In the early days of SiteCatalyst there was only one variable that would accept a list (s.products). Today I’m going to focus on s.list1, s.list2 and s.list3.
List variables persist like eVars (conversion variables) but with one major difference, how they persist.
If you set an eVar on a page and then set it again on a subsequent page, the subsequent page will “overwrite” the value set on the first page. When an event fires, the last value set in the eVar will get credit for it. This is not the case with the s.list variables. Each value set in a list variable persists until its persistence expires.
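A toy illustration of the difference just described: an eVar credits only the last value set, while a list variable keeps every value it has been passed until each value’s own persistence expires. This is just the crediting rule, not Adobe’s engine.

```javascript
// eVar-style crediting: the last value set wins.
function lastValueCredit(valuesSet) {
  return [valuesSet[valuesSet.length - 1]];
}

// List-variable-style crediting: every value set keeps credit.
function listValueCredit(valuesSet) {
  return [...new Set(valuesSet)];
}

const set = ["Promo_A", "Promo_B", "Promo_C"];
console.log(lastValueCredit(set)); // ["Promo_C"]
console.log(listValueCredit(set)); // ["Promo_A", "Promo_B", "Promo_C"]
```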
Less talk… more rock.
To set up an internal click rate report, do the following:
Set up the variables
- Have support enable a list variable (s.list1, s.list2 or s.list3).
- You only have three so use them sparingly
- NOTE: If you want to change the list variable’s name to something like “Promos”, you’ll have to do it in admin where the menu is customized.
- Have support set the persistence expiration to “on page”.
- This essentially means that the variable does not persist
- Have support set up the delimiter you want to use for that List Var (I like to use a comma)
- Set up an event variable for “Impressions”
- Set up an event variable for “Clicks”
- OPTIONAL: Set up a “Clicked Promo” eVar
Tag your site
- On the page containing the Promos, pass all the promo names to the list variable and set the “Impression” event variable
- s.list1="Promo_Name_1,Promo_Name_2,Promo_Name_3"
- When a promo is clicked, pass the clicked promo name into the list variable and set the “Click” event variable
- s.list1="Promo_Name_2"
- OPTIONAL: s.eVar17="Promo_Name_2"
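Pulling the tagging steps together in AppMeasurement-style code. The event numbers (event15 for Impressions, event16 for Clicks) and eVar17 for the “Clicked Promo” are assumptions; use whatever slots support enabled for you.

```javascript
var s = s || {}; // AppMeasurement object, stubbed here for illustration

// On the page that renders the promos:
s.list1 = "Promo_Name_1,Promo_Name_2,Promo_Name_3";
s.events = "event15"; // one impression counted for each promo in the list

// When a promo is clicked:
function trackPromoClick(promoName) {
  s.list1 = promoName;   // only the clicked promo goes in the list var
  s.events = "event16";  // the "Click" event
  s.eVar17 = promoName;  // OPTIONAL persisting "Clicked Promo" eVar
}

trackPromoClick("Promo_Name_2");
```

Remember the list variable was set to expire “on page,” so the impression values don’t bleed into the click hit.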
Set up the reporting.
- Create a calculated metric called “Click Rate” (or whatever makes sense to you)
- Click Rate =Clicks/Impressions
- Run the ListVar1 (Promos) report
- Select your “Click Rate” calculated metric as your metric.
If you set up a “Clicked Promo” eVar you will also be able to calculate conversion to other events as the eVar will persist (through the visit unless otherwise specified)
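The calculated metric, sketched as code: click rate per promo from the impression and click counts the list variable report returns. The report numbers here are invented for illustration.

```javascript
// Click Rate = Clicks / Impressions, guarding against divide-by-zero.
function clickRate(impressions, clicks) {
  return impressions === 0 ? 0 : clicks / impressions;
}

const report = [
  { promo: "Promo_Name_1", impressions: 1000, clicks: 50 },
  { promo: "Promo_Name_2", impressions: 1000, clicks: 120 },
];

report.forEach((row) => {
  console.log(row.promo, clickRate(row.impressions, row.clicks));
});
// Promo_Name_2 wins with a 12% click rate.
```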
The last and most important step is to use the data to optimize your promos.
A basic 12 bar blues tune is in 4/4 and has 3 chords. This project is more like a jazz tune in 9/8 with 3 key changes in each phrase. Don’t hesitate to hit me with questions if you have any.
Rock On – Matt Coen
Close your eyes and imagine that you’re on stage in front of 50,000 screaming fans. You are getting ready to rock their faces off. Your Marshall stack is set on 11, you applied your eyeliner and half a bottle of Aquanet to your hair (at least that’s how we did it in the 80s). You fire up the accordion and start belting out a thrilling version of the Beer Barrel Polka.
What? That isn’t the way you expected your “rock ‘n roll fantasy” to play out?
If you want to rock an audience, it’s often helpful to play a rock song (Weird Al notwithstanding).
If you want your analytics to rock, the data you produce and the analysis you do has to be focused on the desired business outcome. When your measurement and analytics efforts fail, it’s not always because the data is incorrect or the analysis is poor, but sometimes because you are providing the right answer to the wrong question.
In his book “The 7 Habits of Highly Effective People,” Stephen Covey talks about “beginning with the end in mind.” This concept is critical in selecting the right metrics to measure. Incorrectly defining the problem is the analytics equivalent of opening a rock show singing “Roll Out the Barrel.” You might do it flawlessly, but the outcome may not be thousands of screaming fans (though it might involve a different type of screaming).
At this point you may be asking yourself “self… how do I figure out what the right question is?”
This is where things can get a little tricky. If you ask the typical stakeholders what metrics they want, they’re probably going to do one of two things:
- Say they don’t know and ask you to tell them
- Ask for the metrics they’re familiar with
In this situation it is easy to put the blame on the business owner for not knowing what they need. However, I tend to think the problem isn’t that they gave you the wrong answer; it’s that you asked them the wrong question.
A better question to ask might be “what are you trying to understand?” or “what are you trying to accomplish?” As analytic professionals, we should be able to help them find appropriate metrics to answer their questions once the questions have been correctly defined.
There are of course certain questions we simply can’t answer with the tools that we have. If this is the case, we need to explain the situation and look for proxy measures that can give us enough directional information to work with (this may also be an excuse to lobby for those cool new analytics toys we’ve been wanting.)
If you want your analytics to rock, make sure you’re providing the right answer to the right question.
The Huge Insight Report:
In most analytics systems the “Huge Insights Report” is found next to the “It Worked” report and the “Why Our Results Didn’t Really Suck” report. These reports are most often needed when project objectives were not clearly defined, KPIs and targets were not set and best practices were not followed. These reports are invaluable because there’s a meeting with the VP, CEO, Chairman of the Board and someone from an undisclosed “Three Letter” government agency in 90 minutes and we need to prove (or at least imply) that the project was successful.
Ah… if only the “Huge Insights Report” actually existed. Since it doesn’t, we have to hire the right people, know our tools and spend time searching for “Huge Insights”. While we’re at it, we can forget about gathering the “low hanging fruit”. If there really was any, someone else would have eaten it by now.
What Would Woody Do?
I recently watched a documentary about the famed OSU-UofM football rivalry. The film recounted Woody Hayes falling asleep from exhaustion after watching game film all day.
The typical football game lasts about three hours (including halftime), so why on earth would the COO of the football team (the coach), his assistants and players spend days watching film? The answer is simple: to learn, get better and win. Woody Hayes (love him or hate him) knew what it took to win.
How much time and energy does your organization spend “watching film”, in other words, learning from your analytics tool? Do you have smart senior people in your organization focused on optimizing your marketing dollars?
Until analytics vendors figure out the “Huge Insights Report”, if you want to beat your competition, you might want to ask “What would Woody do?”