It’s a rainy Monday morning, but despite the grim weather, the office is chatty and energetic. You’re sat at your desk, minding your own business, when your analytics partner, Ken, rolls his seat up to yours. Something is very wrong. He’s sweating and trembling as he stares you dead in the eye and says, “We’ve got an anomaly.” Silence. You can hear a pin drop. You feel the eyes of other teams on you. Not the copywriters; they’re already cowering beneath their desks. “It’s okay,” you say, taking a sip of coffee—no cream, no sugar—“I know what to do.”
Pretty good, right? When I’m not desperately trying to break into the self-published crime novella market, I spend a lot of my time dealing with these wacky things we call ‘anomalies’. After reading this article, you too can keep a cool head when faced with one of these pesky critters.
If you want to resolve anomalous inconsistencies, you’ll have to start by identifying what caused them in the first place. Detective hats on please, we’re off in search of clues.
First things first
Keep your eyes peeled for drops or spikes in your trended series graph, an outlier in your data points, or a pre-set alert notifying you that a metric has reached a certain threshold.
Once you know there’s an anomaly and you’ve got your prime suspect cuffed in the backseat, it’s time for the interrogation. For starters, what type of anomaly is it? DataScience.com [i] has kindly separated the different types of anomalies you may encounter into three categories:
- Point Anomalies: If one object can be observed against other objects as an anomaly, it is a point anomaly. Think of outliers in a scatter plot.
- Contextual Anomalies: If the object is anomalous within a defined context. For instance, you may expect to see consistent dips at the end of every month; if one month you notice a larger dip than usual, you may consider it anomalous.
- Collective Anomalies: If a set of linked objects, taken together, is anomalous when observed against other objects. Individual objects can’t be anomalous in this case, only the collection. For example, data being unexpectedly copied from a remote machine to a local host is a collective anomaly that might be flagged as a potential cyber-attack.
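To make the point-anomaly idea concrete, here is a minimal Python sketch using a simple z-score check. The visit counts and the 2.5-standard-deviation threshold are purely illustrative, not taken from any particular tool:

```python
def point_anomalies(values, threshold=2.5):
    """Return indices of values more than `threshold` population
    standard deviations away from the mean of the series."""
    n = len(values)
    mean = sum(values) / n
    variance = sum((v - mean) ** 2 for v in values) / n
    std = variance ** 0.5
    if std == 0:
        return []  # a flat series has no point anomalies
    return [i for i, v in enumerate(values) if abs(v - mean) / std > threshold]

# Hypothetical daily visit counts: one day is wildly out of line.
daily_visits = [120, 118, 125, 122, 119, 121, 950, 117, 123, 120]
print(point_anomalies(daily_visits))  # [6] -- the 950-visit day
```

Note the low threshold: with only ten points, a single outlier can pull the mean and standard deviation up so much that the usual 3-sigma rule never fires, so a smaller cutoff suits tiny samples.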
Now that we understand what anomalies are and can spot them in a line-up thanks to our data visualisation software (with help from some in-built anomaly detection tools), we can work out what causes them. Below, we’ve listed some of the most common scenarios that lead to anomalous data in the web analytics world:
- Extremely high bounce rate
- Extremely low session duration
- Sudden rise in visits
- Drop in visits
- Major increase in transactions
Good. Now let’s take a look at the three steps needed to properly dive into the data.
Following the trail
Step 1: Time period
Check the time period of your data, or when the anomaly was first detected. A lot of anomaly detection tools base their algorithms on historical time series data: the tool detects any current data points that do not conform with previous data and counts them as anomalies. Google Analytics uses this method for its Anomaly Detection tool.
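That historical-baseline approach can be sketched in a few lines of Python. This is a hypothetical illustration of the general technique, not how Google Analytics actually implements it: each new point is compared against the mean and standard deviation of a trailing window of past points.

```python
def rolling_anomalies(series, window=7, threshold=3.0):
    """Flag indices whose value deviates from the trailing-window
    mean by more than `threshold` standard deviations."""
    flagged = []
    for i in range(window, len(series)):
        past = series[i - window:i]          # historical baseline
        mean = sum(past) / window
        std = (sum((v - mean) ** 2 for v in past) / window) ** 0.5
        if std > 0 and abs(series[i] - mean) > threshold * std:
            flagged.append(i)
    return flagged

# Made-up daily visits: a sudden spike on day 8.
visits = [100, 102, 98, 101, 99, 103, 100, 250, 101, 99]
print(rolling_anomalies(visits, window=7))  # [7]
```

Notice that the spike itself then enters the baseline and inflates the window's standard deviation, which is one reason real tools use more robust forecasting models than a plain rolling mean.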
Seasonality is the usual reason why e-commerce sites see a large difference in visitors during certain months. For example, during the week of Christmas people tend to be more concerned with how long their turkey takes to cook or what to gift their partner than with your newest blog post on the winners of Love Island. These types of searches will cause a surge in visits for recipe blogs or transactions on gift sites, but a fall in other categories that are less of a priority at that time of year. Another common cause for fluctuations in your site’s traffic may be a change in who is able to access it. For instance, if a new localised version of the site is launched, you may see an increase in the overall number of visits to your domain.
A surge in visits can also occur if a recently launched marketing campaign has been successful. For example, if you had bid for some paid search adverts which then garnered more publicity for the site and helped it climb the search engine rankings. These kinds of fluctuations are known as contextual anomalies and are another key reason why checking the time period of your site’s data is very useful. After the unusual results have been flagged as anomalies in your dashboard, you can compare the data to your business’s recent marketing activities and easily explain the change in traffic.
The reason for changes in a site’s data can be incredibly varied depending on the industry it’s operating within, so it is very important to have a complete understanding of your (or your clients’) business when analysing anomalies. After all, humans are far better equipped to understand the nuances of the marketing decisions that led to the unusual data than a machine that simply records it.
Step 2: Check correlating metrics/dimensions
For non-time series data, you may detect an anomaly if it falls outside your normal bucket of data points. When it comes to spam/bot traffic, we can usually spot the abnormal behaviour if we notice very high bounce rates and a very low session duration (~1 second). This suggests that a “user” has landed on your site for only 1 second before exiting. To confirm that this is spam data, check the location of the traffic and the hostname data. This will give you an idea of whether a lot of traffic is coming through one server (as this is usually how spam works).
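As a rough illustration of that spam check, here is a Python sketch that groups sessions by hostname and flags hosts where every session bounced within about a second. The session fields, thresholds, and hostnames are all made up for the example:

```python
def suspect_hostnames(sessions, max_duration=1.0, min_sessions=3):
    """Return hostnames where every recorded session bounced with a
    duration at or below `max_duration` seconds (a spam-like pattern)."""
    by_host = {}
    for s in sessions:
        by_host.setdefault(s["hostname"], []).append(s)
    return [
        host for host, group in by_host.items()
        if len(group) >= min_sessions
        and all(s["bounced"] and s["duration"] <= max_duration for s in group)
    ]

# Hypothetical session log: one normal visitor, one bot-like host.
sessions = [
    {"hostname": "example.com", "duration": 45.0, "bounced": False},
    {"hostname": "bot-farm.example", "duration": 0.8, "bounced": True},
    {"hostname": "bot-farm.example", "duration": 0.9, "bounced": True},
    {"hostname": "bot-farm.example", "duration": 1.0, "bounced": True},
]
print(suspect_hostnames(sessions))  # ['bot-farm.example']
```

The `min_sessions` floor stops a single short visit from condemning an innocent hostname; only a consistent pattern across many sessions looks like spam.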
Step 3: Make use of your in-tool capabilities
Next, we’re going to talk about Contribution Analysis—a part of Adobe Analytics that uses Adobe Sensei (machine learning) capabilities—which can be used to make identifying the contributors of anomalies a faster and simpler process.
“Contribution Analysis is an intensive machine learning process designed to uncover contributors to an observed anomaly in Adobe Analytics. The intent is to assist the user in finding areas of focus or opportunities for additional analysis much more quickly than would otherwise be possible.” (Adobe Analytics, 2019 [ii])
This machine learning analysis is only available in Analysis Workspace (previously in Reports & Analytics) and takes up a lot of computing power. For this reason, Adobe limits how many times you can run a Contribution Analysis, depending on which account you have with Adobe.
After running one of these, you will be presented with a dashboard of dimensions and metrics that Adobe has produced for you, including a Contribution Score for each variable (a metric showing how much it has contributed to the anomaly). Although Contribution Analysis is a very good way of breaking down where your anomaly has come from, it doesn’t explain the cause of it. It can help find correlations between certain variables, but finding the cause is still a manual process. Also, it is good to remember that this analysis is machine generated and is therefore best used alongside your own knowledge (you will understand the website and its customers far more intimately than the machine can).
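Contribution Analysis itself is proprietary to Adobe Sensei, so we can't reproduce it here, but the underlying idea (comparing how much each dimension value's share of traffic changed between a baseline period and the anomaly period) can be sketched roughly. This simplified score is my own illustration with made-up numbers, not Adobe's actual Contribution Score:

```python
def contribution_scores(baseline, anomaly):
    """Rank dimension values by the growth in their share of total
    traffic between two periods (each a dict of value -> count)."""
    base_total = sum(baseline.values()) or 1
    anom_total = sum(anomaly.values()) or 1
    scores = {}
    for value in set(baseline) | set(anomaly):
        base_share = baseline.get(value, 0) / base_total
        anom_share = anomaly.get(value, 0) / anom_total
        scores[value] = anom_share - base_share  # positive = grew during anomaly
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical channel breakdown before and during a traffic spike.
baseline = {"organic": 500, "paid": 300, "referral": 200}
anomaly = {"organic": 520, "paid": 900, "referral": 180}
print(contribution_scores(baseline, anomaly))  # 'paid' ranks first
```

Even this toy version shows the limitation noted above: it points at the paid channel as the biggest contributor, but it cannot tell you *why* paid traffic surged; that still takes a human who knows the recent campaigns.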
Textbook detective work
Dealing with data anomalies is still a largely manual process, as they can be caused by multiple contributors that cannot necessarily be identified by relying on databases alone. The complexity of each individual case makes it difficult for machine learning to 1) detect AND 2) analyse the reasons why we see a spike in our timeline. However, hopefully these steps will provide you with some useful tips for actually dealing with anomalies once they have been detected.
There you have it. Case closed. Three steps closer to solving your data woes!