In this article, I’m inviting you to geek out with me about Areas of Interest (AOI), because whether your eye tracking data is collected using screen-based systems,
wearable eye trackers, VR or webcams, you need to understand the consequences of the AOIs you create for the insights you generate.
Bear in mind, this conversation is important even if you aren’t the one doing the eye tracking analysis; the interpretation of results from agencies and industry standard advert visibility metrics is highly dependent on how those AOIs were created. If you really want to understand just how much attention your products, content or ads are getting, then keep reading!
Today, most eye tracking tools don’t automatically recognize objects such as an ad on a webpage or a bottle of shampoo on a supermarket shelf. That functionality is coming, but for now we typically need to draw a boundary around an object, or region of space, to answer questions like “how much time was spent fixating the ad?” and “how many shampoo bottles were looked at before ours?”. These bounded regions are called Areas of Interest (AOIs).
Intuitively, it might seem that the best way to create this bounding object is to trace around the edge of the ad frame or bottle, and in my experience that’s what most people do. But it’s a little more complex than that: we need to factor in the accuracy of both human vision and the eye tracker.
Humans have a static visual field of around 120°-140°, although the quality of vision is pretty poor at the very edges. Peripheral vision is good for detecting change and motion, but not so much for object recognition and reading. For those kinds of tasks, we’re more interested in two regions called the Fovea and Para-Fovea, which are approximately 2° and 10° in diameter at the center of your visual field. This physiology is the whole reason eye tracking works; we need to keep moving our eyes to reposition the Fovea to use our high-resolution and color vision.
But here’s the thing. Even 2° of visual angle can include multiple objects, especially when they are viewed at a distance. On the screen I am looking at right now, the Fovea’s diameter will easily cover 1-1.5 words over 2-3 lines of text. We tend to base everything in eye tracking on the center of the Fovea, but this means those fine crosshairs do not really represent everything that can be seen by the viewer. It also means that AOIs with pixel-specific boundaries risk either over- or underestimating the number of fixations on an AOI, depending on how the boundary is drawn. While we’re on the subject, tracing around all the fine intricacies of an object to create the AOI is mostly unnecessary; a simple geometric shape like a rectangle or circle is usually good enough.
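If you want to sanity-check how much real estate 2° of visual angle actually covers, the geometry is simple: the linear size subtended by an angle at a given viewing distance is 2·d·tan(θ/2). Here is a minimal sketch (the 60 cm viewing distance is an illustrative assumption, not a value from the study):

```python
import math

def visual_angle_to_size(angle_deg: float, distance_cm: float) -> float:
    """Linear size (cm) on the viewed surface subtended by a visual angle
    at a given viewing distance, assuming viewing along the screen normal."""
    return 2 * distance_cm * math.tan(math.radians(angle_deg) / 2)

# At an assumed 60 cm screen distance, the ~2° fovea spans roughly 2.1 cm —
# easily a word or two of normal-sized text.
fovea_cm = visual_angle_to_size(2.0, 60.0)
```

The same function works in reverse for shelf studies: at a 2 m viewing distance, that same 2° covers about 7 cm, which is wider than many product facings.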
So how do you decide on the size of your AOI? Well, as with all things vision related, it depends on the distance it is viewed from, the context it is viewed in and, perhaps most importantly, your research design. If you have well-spaced AOIs with neutral regions in-between, you’re probably better off making your AOIs larger than the object by 2°. If, as in most real-world studies, you don’t have this luxury, then I would recommend drawing AOIs that are 2° smaller than the object so that you are only reporting fixations that were definitely on the object!
Deciding on how to draw AOIs is not a new challenge. But the reason it’s on my mind is that I’m in the process of analyzing my first full webcam eye tracking study. Everything I’ve said so far is true for research-grade infra-red eye trackers, which can typically achieve accuracy levels between 0.2° and 0.5°. With webcam eye tracking, however, you are typically looking at accuracy levels of 1° to 2°, or worse. In my current study I have one participant (who I’ve excluded) with an accuracy of nearly 10°! For them, I am hard pressed to say whether they were even looking at the screen, let alone the images I was showing them!
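That exclusion decision is easy to make systematic: measure each participant’s validation error, then drop anyone whose error exceeds what your AOI spacing can tolerate. A minimal sketch, where the data structure (participant id mapped to mean accuracy error in degrees) and the 2° cut-off are illustrative assumptions, not values prescribed by any particular tool:

```python
def usable_participants(accuracy_by_id: dict, max_error_deg: float = 2.0) -> list:
    """Keep only participants whose validation accuracy error (in degrees of
    visual angle) is within the tolerance your AOI design can absorb."""
    return [pid for pid, err in sorted(accuracy_by_id.items())
            if err <= max_error_deg]

# Hypothetical validation results; "p03" mirrors the ~10° case described above.
validation = {"p01": 0.8, "p02": 1.6, "p03": 9.7}
keep = usable_participants(validation)  # p03 is excluded
```

The key point is that the threshold is a design decision: if your AOIs sit closer together than the tracker’s error, no amount of post-hoc filtering will rescue the data.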
This is why it’s essential to think about how the data will be analyzed before you collect it. The design of your study is critical here and needs to factor in the accuracy of the tracker you are using. But this then raises a difficult question for some industry standard measures which continue to rely on pixel level accuracy for both the viewers and the trackers.
Consider the example below. Three different fixation patterns are shown, with the circle size representing the gaze duration. Each pattern represents a completely different interpretation based on what we now know about AOIs, and yet most ad visibility metrics would report the area of interest shown on the right as having received the same amount of attention — but the accuracy of webcam tracking might be less than the diameter of the smaller circles! In other words, it pays to dive into the way these metrics are calculated before you take the insights too seriously.
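To see how much the boundary choice alone can swing a metric, consider a toy hit-test: the same fixations scored against a pixel-tight AOI and against the same AOI grown by a margin. All coordinates, durations, and the 40 px ≈ 1° conversion here are invented for illustration:

```python
def in_rect(px: float, py: float, rect: tuple) -> bool:
    """Point-in-rectangle test; rect is (x, y, width, height)."""
    x, y, w, h = rect
    return x <= px <= x + w and y <= py <= y + h

def dwell_ms(fixations, rect) -> float:
    """Total duration of fixations landing inside an AOI rectangle.
    fixations: iterable of (x_px, y_px, duration_ms) tuples."""
    return sum(d for fx, fy, d in fixations if in_rect(fx, fy, rect))

tight = (100, 100, 200, 200)
padded = (60, 60, 280, 280)   # same AOI grown by ~1° per side (assuming 40 px ≈ 1°)

fixations = [
    (110, 150, 300),  # clearly inside
    (95, 150, 250),   # 5 px outside the tight boundary — well within tracker error
    (400, 400, 500),  # clearly elsewhere
]

# The edge fixation is missed by the tight AOI but counted by the padded one,
# changing the reported dwell time from 300 ms to 550 ms.
```

A fixation sitting a few pixels outside a tight boundary is indistinguishable from one inside it when the tracker’s error is larger than that gap — which is exactly why the metric’s AOI convention matters as much as the data.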
So, the take-home of this article is quite simple. Understand the accuracy of your participants and eye trackers first! Then factor this into your experimental design by planning your AOIs before you collect a single data point. And if you’re a recipient of eye tracking insights, get ready to ask the difficult questions to make sure the attention being attributed to your product or design is deserved!