Obtaining valuable, actionable insights requires accurate data. Without accurate data, your insights will be mediocre at best and damaging at worst.
But the process of collecting accurate data can be complex. Websites change frequently, creating a moving target for analytics professionals trying to track web behavior.
And as companies grow their digital strategy, the volume of analytics tags tracking user behavior increases. Your website could contain thousands of tags, making managing an analytics implementation even more challenging.
However, the complexity involved in accurate data collection doesn’t have to keep you from obtaining reliable insights. This complexity becomes manageable when you apply test automation to your analytics implementation.
Read on to learn how.
What Is Test Automation for Analytics Data?
In any discipline, the objective of test automation is to build a library of automated tests to run against your program/implementation whenever you make an update, verifying that everything is still in working order.
The same concept applies when verifying the accuracy of your analytics implementation. Specifically, test automation for analytics data means running automated tests against your analytics implementation after a change to check for tracking errors such as missing tags, missing variable data, and incorrectly formatted variable data.
These tests help you maintain the integrity of your implementation by locating tagging errors and validating your data so that collection stays accurate.
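To make this concrete, here is a minimal sketch in TypeScript of what one such automated check might look like. Everything in it is a hypothetical stand-in: the AnalyticsRequest shape, the analytics.example.com vendor domain, and the pageName variable are placeholders for whatever your implementation actually uses.

```typescript
// A captured analytics request: the beacon URL plus its query parameters.
interface AnalyticsRequest {
  url: string;
  params: Record<string, string>;
}

// Hypothetical check: the page must fire at least one beacon to our analytics
// vendor, and every such beacon must carry a non-empty pageName variable.
function validatePageView(requests: AnalyticsRequest[]): string[] {
  const errors: string[] = [];
  const hits = requests.filter((r) => r.url.includes("analytics.example.com"));

  if (hits.length === 0) {
    errors.push("Missing tag: no analytics request fired on page load.");
  }
  for (const hit of hits) {
    if (!hit.params["pageName"]) {
      errors.push(`Missing variable: pageName not set on ${hit.url}`);
    }
  }
  return errors;
}

// Example: one beacon fired, but without the pageName variable set.
console.log(
  validatePageView([
    { url: "https://analytics.example.com/collect", params: { events: "pv" } },
  ])
);
```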
The benefits of applying test automation include:
- A more efficient way to ensure quality data
- Increased accuracy of data
- Greater confidence in analytics data for decision-making
So how exactly can you apply test automation to analytics? You will need to have access to or build a solution that can carry out the following functions:
1. Scan your digital properties for analytics technologies
2. Apply tests (or rules) against those technologies
Scan your digital properties for analytics technologies
Your implementation is spread out across your website or app, so you will need a solution that is able to crawl these resources and identify/test data collection technologies (i.e. tags).
There are two types of scans you will want to conduct on your site, each focused on different resources of your digital property.
- Scans of individual pages to identify analytics technologies
- Scans of user paths to ensure proper functionality and data tracking
Scanning Pages. In order to conduct tests at the page level, you will need a solution that can scan your website page by page. These page-level scans will give you a clear understanding of the current state of your analytics implementation, including all the tags present. The results from these scans will allow you to accomplish the following:
- Ensure proper data collection through correct tag placement and functionality.
- Validate variable presence and formatting to ensure the data is usable for analysis.
- Locate and remove tags that are non-essential deadweight.
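As a rough illustration of what a page-level scan involves, here is a minimal sketch using Playwright (one of several headless-browser tools that can do this). The vendor URL patterns and the page list are assumptions you would replace with your own.

```typescript
import { chromium } from "playwright"; // assumed dependency: npm install playwright

// Vendor beacon patterns to look for; adjust these to your own stack.
const VENDOR_PATTERNS = [/google-analytics\.com/, /omtrdc\.net/];

// Visit each URL, capture its network requests, and count vendor beacons.
async function scanPages(urls: string[]): Promise<void> {
  const browser = await chromium.launch();
  for (const url of urls) {
    const page = await browser.newPage();
    const beacons: string[] = [];
    page.on("request", (req) => {
      if (VENDOR_PATTERNS.some((p) => p.test(req.url()))) {
        beacons.push(req.url());
      }
    });
    await page.goto(url, { waitUntil: "networkidle" });
    console.log(`${url}: ${beacons.length} analytics request(s) detected`);
    await page.close();
  }
  await browser.close();
}

scanPages(["https://www.example.com/", "https://www.example.com/products"]).catch(console.error);
```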
ObservePoint’s Web Audits are one example of a solution that can scan batches of pages across your site and verify analytics tech is installed properly.
Feature Highlight: Web Audits
Companies like Hewlett Packard Enterprise, NBCUniversal, and Carnival Cruises use ObservePoint Web Audits to scan their site and discover what technologies are gathering data. Each audit scans a given number of pages, cataloging the discovered technologies and aggregating those into an easy-to-consume report.
Scanning User Paths. Using automation to scan the different user paths on your site will allow you to maintain the analytics tracking of your site’s most important customer experiences. These customer experiences should be a critical part of your analytics strategy, because they are likely the portions of your website driving the most traffic and/or conversions, and analytics can help you improve both of those metrics.
Automatically scanning your website’s user paths will allow you to accomplish the following:
- Ensure critical customer experiences are available and functioning properly.
- Ensure your event-based analytics tracking for those experiences is collecting accurate data.
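Below is a minimal sketch of what an automated user-path scan might look like, again using Playwright. The login selectors, URLs, and the event=login beacon pattern are all hypothetical placeholders for your own site's details.

```typescript
import { chromium } from "playwright";

// Hypothetical journey test: complete a login flow and verify the analytics
// event beacon fires before the journey is considered healthy.
async function testLoginJourney(): Promise<void> {
  const browser = await chromium.launch();
  const page = await browser.newPage();

  await page.goto("https://www.example.com/login");
  await page.fill("#email", "test@example.com");
  await page.fill("#password", "not-a-real-password");

  // Expect the login-event beacon within 10s of submitting the form.
  const beacon = page.waitForRequest(
    (req) =>
      req.url().includes("analytics.example.com") &&
      req.url().includes("event=login"),
    { timeout: 10_000 }
  );
  await page.click("button[type=submit]");
  await beacon; // throws if the beacon never fires, failing the journey

  await browser.close();
}

testLoginJourney().catch((err) => {
  console.error("Journey failed:", err.message);
  process.exit(1);
});
```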
ObservePoint’s Web Journeys are an example of a solution that can scan user paths on your site and verify analytics technology is installed properly.
Feature Highlight: Web Journeys
The ObservePoint Web Journeys feature makes it possible for companies like Johnson & Johnson, Overstock.com, and Suncorp to test their most important web experiences. Web Journeys replicate a site’s user journeys, such as shopping carts or user logins, from start to finish, and send alerts if anything prevents the path from completing or if the analytics are not tracking the activity.
Apply tests (or rules) against those technologies
Once you have a solution that can scan batches or sequences of pages, you need to stack on a solution that can apply tests against what those scans find.
The core of test automation is the ability to create and apply tests against your implementation on a broad scale. In the context of analytics, these tests (called Rules in the ObservePoint product) validate that your analytics implementation is meeting your expectations.
Your analytics strategy is unique to your business. As a result, you must create rules that verify your implementation (specifically the tags and variables) meets the requirements of your strategy. Let’s consider tags and variables separately.
Tags. The term “tags” can refer to two different things:
1. The snippet of code you install on your site that fires analytics requests
2. The analytics requests themselves
When discussing test automation, the most accurate interpretation is that the tags are the requests, because a single snippet of code can fire multiple requests, and when testing you will be validating the requests, not the code snippet itself.
You know when and where your tags should be firing, and with an automated testing solution you can set up custom testing rules that will give a pass or fail depending on whether or not the tag meets your expectations. The results of these tests will help you continually ensure your tags are firing properly in the correct locations and giving you accurate data as a result.
As an example, the most basic test you might carry out would be to verify that your primary analytics vendor fires whenever a page loads. But you can test with as much granularity as necessary—for example, you could test that your analytics vendor fires when visitors submit a form on all pages in a specific category.
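As an illustration, here is one hypothetical way to model such rules in TypeScript, covering both the basic page-load rule and a more granular form-submit rule like those just described. The URL patterns and rule shape are assumptions, not a specific product's API.

```typescript
// A hypothetical rule: which pages it applies to, and a pass/fail predicate
// over the analytics beacons captured on those pages.
interface TagRule {
  name: string;
  appliesTo: (pageUrl: string) => boolean;
  passes: (beaconUrls: string[]) => boolean;
}

const rules: TagRule[] = [
  {
    // Basic rule: the primary vendor must fire on every page load.
    name: "primary vendor fires on page load",
    appliesTo: () => true,
    passes: (beacons) => beacons.some((b) => b.includes("analytics.example.com")),
  },
  {
    // Granular rule: form-submit events must be tracked on /support/* pages.
    name: "form submit tracked on support pages",
    appliesTo: (url) => url.includes("/support/"),
    passes: (beacons) => beacons.some((b) => b.includes("event=form_submit")),
  },
];

// Evaluate every applicable rule for one scanned page and report pass/fail.
function evaluatePage(pageUrl: string, beacons: string[]): void {
  for (const rule of rules.filter((r) => r.appliesTo(pageUrl))) {
    console.log(`${rule.name}: ${rule.passes(beacons) ? "PASS" : "FAIL"} (${pageUrl})`);
  }
}

evaluatePage("https://www.example.com/support/contact", []);
```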
Variables. Analytics variables allow you to capture the important metrics and dimensions you use for analysis. As such, you will want to test that your analytics implementation is consistently and correctly assigning the right values to variables.
Your analytics implementation can pull variable values from myriad locations, such as the data layer, the DOM, or the window object. Regardless of where these data points are pulled from, you will want to verify that your analytics solution captures them correctly.
Usually, variable testing will involve regular expression (RegEx) matching to verify that your variables match a specific pattern. Below are some examples:
- To verify variable cd173 is always set and matches a legitimate SKU number, you could use the following RegEx: ^[A-Za-z0-9]{10,20}$
- To verify prop79 is always set and contains the visitor type (such as new_visitor, returning, or logged_in), you could use a RegEx that matches only those values and rejects any others: ^(new_visitor|returning|logged_in)$
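Putting those two RegEx examples into code, a variable-validation step might look something like the following sketch. The variable names and patterns mirror the examples above; the params object stands in for the query parameters of a captured analytics request.

```typescript
// Hypothetical variable rules, keyed by variable name.
const VARIABLE_RULES: Record<string, RegExp> = {
  cd173: /^[A-Za-z0-9]{10,20}$/, // must look like a SKU
  prop79: /^(new_visitor|returning|logged_in)$/, // must be a known visitor type
};

// Check one captured beacon's parameters against every variable rule.
function validateVariables(params: Record<string, string>): string[] {
  const failures: string[] = [];
  for (const [name, pattern] of Object.entries(VARIABLE_RULES)) {
    const value = params[name];
    if (value === undefined) {
      failures.push(`${name} is not set`);
    } else if (!pattern.test(value)) {
      failures.push(`${name}="${value}" does not match ${pattern}`);
    }
  }
  return failures;
}

// Example: cd173 is a valid SKU, but prop79 carries an unexpected value.
console.log(validateVariables({ cd173: "AB12CD34EF", prop79: "unknown" }));
```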
Your test automation solution should be able to report whenever a variable test fails to meet your expectations.
Now that you know some of the basic requirements of an automated testing solution, you will want to consider where your testing efforts will pay the greatest dividends.
Resource Allocation
Once you’re on board with the idea of automated analytics testing, you may feel that you need to test and evaluate every nook and cranny of your site.
That isn’t the case.
While an effective automated testing solution would allow you to run tests on your entire site, an all-inclusive testing strategy is not optimal.
Websites are large, and running comprehensive tests would take excessive time and resources to execute. Additionally, running all-inclusive tests would return vast amounts of data to sift through.
Instead, you should take a more targeted approach that makes test automation work specifically for your website’s goals. Your website likely follows the 80/20 rule, in that roughly 80% of your website’s revenue comes from about 20% of your website’s functionality. This 20% of your website is where you will want to focus your testing efforts.
Focus Testing on Your Most Important Pages and Paths
Once you’ve located the web assets that bring in the most revenue, you will want to prioritize ongoing testing and monitoring on these high-revenue assets with an automated testing solution. By prioritizing your testing, you will be able to obtain the greatest possible return on testing and avoid wasting time and resources.
So how exactly can you efficiently prioritize your efforts? By:
1. Prioritizing your scans
2. Prioritizing your tests (rules)
Prioritize your scans
You don’t have time to sift through loads of low-priority data. As a result you will want to minimize the number of scans while maximizing their impact.
Consider audits. You should not have more than one or two audits per domain, and your daily audits should generally not consist of more than 100 pages. These audits should give you enough information to gauge the health of your implementation.
For journeys, you will need to determine the specific needs of your business and your threshold for risk. As a rough guideline, prioritize running daily testing journeys on your top 5-10 critical customer experiences, though you could easily have more if your site is especially large. Any non-critical journeys should run infrequently or as you deem necessary.
Keeping your testing strategy within these recommendations will help you avoid experiencing data overload and allow you to maintain the functionality of the most important portions of your website.
Prioritize your tests (rules)
Your most important web assets likely contain the most robust portion of your implementation. You probably have the most vendors and variables working on those assets. While you could test any of these assets, you don’t want to test all of them. You need to prioritize.
One way you could approach prioritizing tests would be to first prioritize by vendor.
Say you have two analytics solutions installed on your site: a paid solution and a free solution. You would want to prioritize testing of your paid solution because of the expected value there.
After prioritizing vendors, you would then need to identify the most important instances of data collection to test.
For example, should you test the metrics on your primary conversion path or on your Contact Us path? This decision will be based on the relative importance of each experience.
Once these tests and their associated expectations are set, how will you notify the appropriate stakeholders when errors appear? You will need to set up the appropriate alerting mechanisms.
Note: ObservePoint’s automated testing solution gives you complete freedom to cater your analytics tests to the pages and paths on your site that are driving the most revenue.
Set Up Alerting Mechanisms
The whole purpose of using automation to test your analytics implementation is to help you efficiently locate and fix errors when they crop up, which allows you to consistently maintain accurate data collection.
Consequently, you should set up a system to notify you whenever one of your tests fails. These notifications will alert you immediately whenever your analytics implementation isn’t performing as expected, creating a feedback loop between your implementation and the stakeholders responsible for that implementation.
Notifications come in many formats. They can be as basic as an email outlining the issue, or as integrated as a system that submits a Jira ticket to the person who oversees the technology that failed. The more integrated with your team’s workflows, the better.
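For instance, a bare-bones Slack notification might look like the sketch below. It assumes you have created a Slack incoming webhook; the webhook URL shown is a placeholder, and Node 18+ provides the global fetch used here.

```typescript
// Placeholder: replace with your own Slack incoming webhook URL.
const SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ";

// Post a test failure to the channel behind the webhook.
async function notifyFailure(testName: string, detail: string): Promise<void> {
  await fetch(SLACK_WEBHOOK_URL, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      text: `:rotating_light: Analytics test failed: ${testName}\n${detail}`,
    }),
  });
}

notifyFailure("primary vendor fires on page load", "No beacon on /checkout").catch(console.error);
```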
Once you’ve set up your reporting and notification procedures, you will want to optimize the timing of your tests.
Note: ObservePoint has integrations that allow you and your team to receive custom notifications through various applications, such as Slack, Jira, and others.
Perform Timely Tests
As stated before, the goal of test automation is to test an implementation after a change to verify that nothing went sour. The timing of your tests is just as important as the tests themselves.
Conducting tests after a change (or release) is often known as release validation. There are a few different ways you can approach release validation, ranging from good to best.
Good: Running Tests
A good way to test your implementation is to manually initiate your tests before and after updates. The biggest pitfall of this approach is that it requires someone to kick off each test. When you have multiple teams all making changes to a site, it may not always be clear when an update was made, leaving gaps in your testing.
Better: Scheduling Tests
A better way to test your implementation is to schedule your analytics tests to match your release schedule. Simply schedule your tests to run at specific times, and as long as your releases stay on a regular schedule, you shouldn’t have gaps in your testing.
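A scheduled run can be as simple as a cron job. Here is a minimal sketch using the node-cron package (an assumed dependency), with runAllTests standing in for the scans and rules described earlier.

```typescript
import cron from "node-cron"; // assumed dependency: npm install node-cron

// Stand-in for the scans and rules described above.
async function runAllTests(): Promise<void> {
  console.log("Running analytics test suite...");
}

// Run the full suite every day at 06:00, shortly after a nightly release.
// The gap between ad-hoc releases and a fixed schedule like this one is
// exactly the weakness discussed below.
cron.schedule("0 6 * * *", () => {
  runAllTests().catch((err) => console.error("Scheduled run failed:", err));
});
```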
The biggest issue with this approach is that releases aren’t always on a regular schedule. Multiple teams, sometimes spread across different time zones, handle different aspects of a site and make updates on their own schedules. If one of these teams were to perform site updates and the next test wasn’t scheduled to run for hours, an undetected error could cause you to lose data and revenue in the meantime.
Best: Triggering Tests
The best way to test your implementation is to trigger tests automatically whenever someone pushes a new release to your site. You can trigger tests when someone publishes an update in your tag management system (TMS), when a change is deployed to your website (such as through a CI/CD tool like Jenkins), or both.
By triggering tests, you can make sure that you always have timely tests, regardless of team, technology, or time zone.
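One way to wire this up is a small webhook endpoint that your TMS or CI/CD pipeline calls after every release. The sketch below uses Express (an assumed dependency); the /release-hook route and payload shape are hypothetical.

```typescript
import express from "express"; // assumed dependency: npm install express

const app = express();
app.use(express.json());

// Stand-in for the scans and rules described above.
async function runAllTests(): Promise<void> {
  console.log("Running analytics test suite...");
}

// Hypothetical endpoint your TMS or CI/CD tool (e.g., a Jenkins post-build
// step) calls whenever a release goes out, so tests run immediately.
app.post("/release-hook", (req, res) => {
  console.log(`Release detected: ${req.body?.version ?? "unknown"}`);
  runAllTests().catch((err) => console.error("Triggered run failed:", err));
  res.sendStatus(202); // accepted; tests run asynchronously
});

app.listen(3000, () => console.log("Listening for release webhooks on :3000"));
```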
Note: Ideally, you would carry out any of the above approaches in a staging environment before pushing to production. But if you’re unable to access a staging environment, you should run tests whenever a new update goes live.
Automated Analytics Testing Will Be Your New Best Friend
Through the use of test automation, you can efficiently validate your analytics implementation on an ongoing basis, which will result in more accurate data now and into the future. Maintaining this accurate data with testing automation will enable you to confidently make data-based decisions that positively impact your customers and your bottom line.
You need to be able to trust your data, and applying test automation to your analytics practices will help you do just that. Learn more about how ObservePoint can help you with your analytics testing.