I recently wrote about performing additional calculations on Google Analytics data in order to derive more precise information. One of the problems most of us have is being forced to use the metrics and reports designed for us by the web analytics vendors. In most cases this is fine because what they give us is, generally speaking, useful. However, there are many occasions when this is problematic.
Firstly, a metric can be useful in some cases and misleading in others. Angie Brown, a member of the WAA standards committee, has an excellent blog post [http://rich-page.com/win-at-web-analytics/revealed-when-bounce-rate-sucks/] showing the problems with bounce rate in this regard.
A more systemic problem is that some of the things we have wanted to measure could not be measured with web technology as it existed in the 1990s, so we designed web metrics around those limitations. Technology has moved on, but the metrics haven't. In most cases we were dealing with limits in browser technology. Anyone who has tried to do web development over the last 15 years will have been driven crazy by browser compatibility issues. I am not aware of any browser which implements the standards for HTML, CSS and JavaScript perfectly, even today. They all fail in some way, and they all fail differently. Often a work-around for one browser will make things worse for another. The net result is that much of what is theoretically possible in web design is simply too labor-intensive (and expensive) to design for. In this respect Google's browser systems are state-of-the-art, and show what can be done, but the rest of us don't have the time or budget their developers do.
The programming language browsers run is JavaScript, and most web analytics systems use JavaScript to gather their data: a script runs inside the browser and sends data back to the server. Closing the web page stops that script. It is theoretically possible to have the script send information as the page is closed, using a handler for the browser's unload event. However, how (and whether) unload handlers fire was, and still is, an area of huge variation between browsers; a handler may even behave differently depending on how the page is closed. This is why few analytics systems directly measure the time spent on a page; instead they infer it from the time of the request for the next page. It is also why a bounce was defined as a 1-page visit, rather than a visit of short duration: time spent on a single page simply could not be measured reliably. And since visit duration could only be calculated when there was more than one page, and a bounce was defined as being only 1 page, a bounce could not be counted as a type of visit.
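To make the problem concrete, here is a minimal sketch of what an unload-based tag would look like. This is illustrative only: the `/collect` endpoint is made up, and I use the modern `pagehide` event plus `navigator.sendBeacon`, which are more reliable descendants of the old onunload approach the paragraph above describes. Even today, none of this is guaranteed to run on every page close.

```javascript
// Pure helper: elapsed seconds between two millisecond timestamps.
function timeOnPageSeconds(loadedAtMs, closedAtMs) {
  return Math.max(0, (closedAtMs - loadedAtMs) / 1000);
}

// Browser wiring (not guaranteed to fire on every close, which is
// exactly why vendors infer page time from the next page request).
if (typeof window !== 'undefined') {
  var loadedAt = Date.now();
  window.addEventListener('pagehide', function () {
    var seconds = timeOnPageSeconds(loadedAt, Date.now());
    if (navigator.sendBeacon) {
      // '/collect' is a hypothetical endpoint for this sketch.
      navigator.sendBeacon('/collect', JSON.stringify({ t: seconds }));
    }
  });
}
```

The unreliability lives entirely in the wiring half: the arithmetic is trivial, but there is no cross-browser guarantee the handler ever runs.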
In other cases we cannot measure important phenomena because they are not represented directly by the things we can measure – movements of the mouse and clicks. To get around this we have to infer them from other measurements. Engagement is a good example. Engagement is a psychological state, not an empirical behavior, so we can't record it directly. While we could infer it from other metrics, systems rarely do because there is no agreed definition of what constitutes engagement. This is because the user behavior which would constitute engagement legitimately differs from site to site. It is therefore down to each site to decide what constitutes engagement and then determine how to measure that. Even engagement surveys, such as that from cScape [www.cscape.com], which covers engagement in over 1,000 sites, have to allow each participant to define engagement their own way. A key part of cScape's survey is examining just how companies define engagement. If you're not sure yourself, it's a good starting place for ideas.
Engagement is critical. Before someone can be exposed to a site's sales pitch they must engage with that site. Before they engage they assess a site to decide if they're going to stay. I call this the "scanning stage," and retaining visitors during scanning is the first priority in every visit. During this stage they are not really reading, they are just skimming over things quickly. What works for selling products or services does not work for retaining visitors during scanning.
I recently wrote about additional calculations you could do around Average Duration in Google Analytics to separate out bounces from visits, as an attempt to get a clearer differentiation between scanning and engaged visitors. Some people responded by stating I was factually incorrect and that Google Analytics uses the unload event to measure the time people actually spend on each page. While Google Analytics could do this, it does not. Elisabeth Diana, from Google's Global Communications & Public Affairs, explained to me why Google chose to merge bounces with visits:
“We really tried to listen to what the majority of our users were saying, because they are the ones who live and breathe the tool. The GA team had heard different opinions about average time-on-site and visit calculations from our users, and while there were some different schools of thought on the issue, most GA users who provided feedback wanted bounces included in the visits and time-on-site metrics.”
In my article I explained how to work around this manually, but Elisabeth pointed out a better way:
“If users prefer the calculations you outline in your article, we now have a way to address their preferences – Advanced Segmentation. Creating a segment that includes page depth greater than 1 will yield site-wide averages and visit numbers that are in alignment with your ideas.”
If you haven’t played with segmentation and custom reports in Google Analytics yet, I suggest you have a try. You can get some great insights with it. I have taken the power offered by segmentation to further refine my examination of the visit.
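To show what the page-depth segment actually changes, here is an illustrative recalculation of average time-on-site with bounces (1-page visits) excluded, mirroring what a "page depth greater than 1" advanced segment gives you. The visit records and their field names are invented for the example.

```javascript
// Average visit duration, optionally excluding bounces (1-page visits).
function averageDuration(visits, excludeBounces) {
  var kept = excludeBounces
    ? visits.filter(function (v) { return v.pages > 1; })
    : visits;
  if (kept.length === 0) return 0;
  var total = kept.reduce(function (sum, v) { return sum + v.seconds; }, 0);
  return total / kept.length;
}

// Hypothetical sample data. A bounce contributes zero duration,
// because time on the only page seen cannot be measured.
var visits = [
  { pages: 1, seconds: 0 },
  { pages: 5, seconds: 300 },
  { pages: 3, seconds: 180 }
];
```

With bounces included the average is (0 + 300 + 180) / 3 = 160 seconds; with them segmented out it is (300 + 180) / 2 = 240 seconds. The zero-duration bounces drag the blended number down, which is exactly the distortion the segment removes.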
The process of getting someone to engage has typically been regarded as being about landing pages. The view has been that someone assesses the landing page in order to decide whether to engage with the site or not. If they like the landing page, they engage; if not, they bounce. The problem I have always had with this is that it is too one-dimensional. If someone views 10 pages on my site, but does it inside 30 seconds, I can't think of them as having engaged with the content. But neither have they bounced. The landing page did its job, but they still never engaged.
Segmentation provides the ability to handle the person who skims a bunch of pages quickly and then leaves. We can create a segment for "scanning visitors." What constitutes moving through a site too quickly to really engage depends on the site, so I suggest you experiment. Use segmentation to create a test for visitors who read more than one page in less than x-many seconds, and try different times to see what happens. How quick is too quick is up to you.
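The test above can be sketched as a simple classifier. This is one possible implementation of the idea, not Google Analytics' own logic; the visit fields and the 30-second threshold in the usage example are assumptions you should tune to your own site.

```javascript
// Classify a visit as 'bounce', 'scanning', or 'engaged'.
// A scanning visit saw more than one page but spent fewer than
// scanThresholdSeconds on the site overall.
function classifyVisit(visit, scanThresholdSeconds) {
  if (visit.pages <= 1) return 'bounce';
  if (visit.seconds < scanThresholdSeconds) return 'scanning';
  return 'engaged';
}
```

With a 30-second threshold, a 1-page visit is a bounce, a 10-page visit in 25 seconds is scanning, and a 4-page visit over two minutes is engaged. Varying the threshold and watching how the scanning share moves is the experiment I am suggesting.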
My own research on my clients' sites revealed a surprising pattern. The percentage of scanning visits was roughly the same as the percentage who bounced. For example, if a site had a 25% bounce rate, another 25% would have skimmed and then left. I suspect this may be true for many sites. What it suggests is that many people do not decide whether to engage with a site by simply looking at the landing page – they go into the site and skim a few pages quickly. Examining the correlation between these visits and their referring search phrases suggests the ones who bounce from the landing page are the people who were misdirected. These people can instantly see this is not the type of site they were looking for. People who have come to the site from a relevant search phrase tend to be the ones who skim a few pages before leaving. In other words, these people are saying "OK, this site is obviously relevant to my search, but is it the right site for me? Do I want to hang around and give it some of my time?" I guess this is similar to picking up a magazine on the newsstand and skimming a few pages before you decide to buy. The cover tells you it is potentially of interest, but only looking inside will tell you if there's anything you actually want to spend time with.
You could argue about whether skimming a few pages quickly constitutes engagement or not, and I think it depends on why you are looking. My purpose is usually to determine how the site is converting visitors into customers (or leads). For that purpose I need metrics which measure each of the processes a visitor goes through. First they look at their landing page and decide whether to leave or stay. It is therefore the task of landing pages to convince them to stay and I can use the existing bounce rate metric to measure landing page effectiveness. In some cases it is obvious to the visitor from the landing page whether they should stay or leave. But it looks like a significant portion can’t decide from simply looking at the landing page. These people then skim a few pages quickly in order to get a better feel for the site. We have been treating these people as if they were engaging to the same degree as committed visitors. This means we have not been building components into our content pages to hook them and get them to engage. Remember – for these people the decision to stay or leave is not being made on the landing pages, it is being made on the pages you also use to sell your products.
This adds a new phase to the conversion funnel. Between bounce and engagement there is an intermediate zone. It may be there are as many people in that zone as bounce, so this is a significant audience. Getting these people to engage is as important as holding onto new arrivals on the landing page. Ask yourself how effective your main content pages would be if someone was going to assess them in only a few seconds. Do they sell your site at a glance or do they work only when you actually stop to read the content?
The starting point is to play with segmentation and determine if scanning visitors are a significant portion of your visitors. Your landing pages may be so effective it’s not an issue. Either way it is clear that the world is more complicated than a simple distinction between bounces and visits.