Human beings: we’re not as capable as we’d like to think. Often, ineffective design isn’t so much bad in itself; it’s design that gives our species too much credit.
Here’s a famous example of our visual deficiencies. Have a look at the video below.
Did you spot the bear the first time you watched that? I certainly didn’t. We’re so focussed on counting the passes that we’re oblivious even to a sizeable, not to mention thoroughly unrealistic, dancing bear. If someone had described the video to you beforehand and asked whether you would have seen the bear, you probably would have said that yes, of course you would. And understandably so. I mean, how could you not spot a dancing bear?
Here’s another well known example, this time via the wonder who is Derren Brown, and based on an experiment undertaken as part of a study entitled Failure to detect changes to people during a real-world interaction (Simons, D.J., Levin, D.T., 1998).
Again, if someone put such a scenario to you, you would probably say that you, or indeed anyone, would surely spot that the tourist asking for directions had changed. It’s just ridiculous to suggest anything else. And yet, there you have it: a lot of people didn’t notice.
These examples illustrate two related concepts. The dancing bear is an example of inattentional blindness: a failure to notice an unexpected object or event (the dancing bear) within your field of vision, because your attention is consumed by another task (counting the number of passes).
The tourist experiment is an example of change blindness, which is a failure to detect a change (the tourist swapping for another person) due to a visual disruption (the door). Often, the disruption could be a momentary distraction, or a loss of attention (a blink, a loud noise, etc.). The crucial point here is that change blindness requires a comparison to memory; something has changed whilst you were not looking at it. Therefore, when you go back to look at the changed thing (the tourist), you have to remember how that thing looked before the disruption, in order to realise that it has now changed. (The moonwalking bear, in contrast, is happening in plain sight. It’s just that your attention is consumed.)
Let’s talk about your very own retina display. (By which I mean, your actual retina.) As many people know, inside the retina there are rods, and there are cones. The rods are used primarily in low light (such as at night, outdoors), and are not sensitive to colour. But in daylight, or artificial light, as far as your vision is concerned it’s mostly about the cones.
How many cones are there, exactly, and how are they distributed across the retina? Prepare yourselves, because here comes a diagram.
The numbers in red indicate the density of cone receptors. We can see that within the fovea, at the centre of the retina, the number of receptors drops off dramatically as we move toward the outer edges. Then, once we’re beyond the edges of the fovea, the drop is even steeper, relatively speaking.
All of this is to say that your vision is profoundly better at the centre of the retina (the fovea), where there are more receptors. There are six million cone cells in each eye, but they are far more densely packed within the fovea. The fovea is 1% of the retina, but your brain’s visual cortex devotes more or less 50% of itself solely to processing your foveal vision.
But it gets worse. Whereas the cones in your fovea map 1:1 to the neurons that begin the processing of visual data, outside of the fovea multiple receptors connect to a single neuron. So that’s more data (the receptors) being funnelled down the same bandwidth (a single neuron). There’s going to be some data loss. In computing terms, this is lossy compression; we essentially have JPEGs for vision.
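That funnelling can be sketched in a few lines. This is a toy illustration, not a retinal model: the `pool` function and the receptor counts are invented for the example, but it shows how averaging several signals into one neuron’s output discards detail in exactly the way lossy compression does.

```typescript
// Toy sketch: several receptors share one neuron, so their signals are
// averaged together before the brain sees them. Fine detail is lost.
function pool(signals: number[], receptorsPerNeuron: number): number[] {
  const output: number[] = [];
  for (let i = 0; i < signals.length; i += receptorsPerNeuron) {
    const group = signals.slice(i, i + receptorsPerNeuron);
    output.push(group.reduce((a, b) => a + b, 0) / group.length);
  }
  return output;
}

// Foveal vision: 1 receptor per neuron, so every value survives intact.
const foveal = pool([10, 0, 10, 0], 1); // [10, 0, 10, 0]

// Peripheral vision: 4 receptors per neuron. The sharp alternation
// (an "edge") is averaged away into a single flat value.
const peripheral = pool([10, 0, 10, 0], 4); // [5]
```

The detail isn’t merely blurred; once averaged, it’s unrecoverable, which is why something can change on your periphery without your visual system ever registering it.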
So then: we’re likely to miss seemingly obvious things when we’re concentrating on something else (inattentional blindness). Our short-term memories aren’t the best, so momentary distractions mean we can miss even the most obvious of changes (change blindness). To make matters worse, our vision is far more effective in the tiny 1% of our retina called the fovea than it is on the periphery.
No wonder we miss so much.
You’re probably wondering where the UI part comes in. Well, I went looking for a study to illustrate how these deficiencies manifest themselves in human-computer interaction, and I found this: The Case of the Missed Icon: Change Blindness on Mobile Devices (Davies, T., Beeharee, A., 2012).
The study is particularly interesting because it focusses on mobile. These issues have long been demonstrated on desktop computers, but not much work has been done in the mobile domain. To quote the authors themselves:
“In a mobile context, research into change blindness is limited. One could argue that the display size of a current standard smartphone does not allow for attention towards changing items to be lost. This assumption is based on an expected greater coverage of the smaller device using foveal rather than peripheral vision.”
The study conducted two experiments with 17 male and 12 female participants between the ages of 18 and 24.
In the first experiment, the participants were presented with a menu of icons arranged in a grid, in the style of an iPhone. A number of visual disruptions were then invoked at random intervals: a) no disruption; b) a flicker; c) a change in orientation; and d) a push notification appearing on screen. Simultaneous to the disruption, one of the icons in the menu was changed. Did the participants notice?
As you can see from the results, the disruptions caused fewer changes to be detected. This is an example of change blindness. The study also points out that the position of the changed icon in the menu made little difference to whether or not the change was detected.
The number of icons did affect the detection rate, however. More icons in the menu decreased the rate of detection. Cognitive load: it’s a real thing, people!
In the second experiment, participants played a driving game. Their primary task was to control the speed and direction of the car whilst avoiding collisions with oncoming traffic, and collecting stars to gain points. However, they also had a secondary task: to adhere to the changing speed limit.
A change in speed limit was displayed to the participant in two ways. Either an icon with the new speed limit would appear for 3 seconds, and then disappear (called ‘direct insertion’), or there would be an icon indicating the current speed limit visible at all times, which would update itself whenever there was a change in speed limit (called ‘gradual change’). Both types of icon could be presented at either the top or the bottom of the screen.
The average response time, for those notifications which were noticed at all, was 3.877 seconds. Not what I would call speedy, especially given that the participants are playing a game, in which reaction times are a factor. (If time were not a factor, the consequences of taking more time to notice a change would be less severe.)
This second graph demonstrates how many notifications were noticed. In all, 34.5% of notifications went completely unnoticed. Why? It’s the dancing bear; the participants were so focussed on the primary tasks that they missed an otherwise noticeable change right in front of their eyes.
The study goes on to suggest things a user interface designer can do to mitigate these problems, such as reducing simultaneous on-screen activity; amplifying changes (with movement, colour, or, if you really must, shaking); reducing the complexity (and thus the cognitive load) of the UI; and positioning changes close to or within the user’s foveal vision.
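That last suggestion, positioning changes within the user’s foveal vision, can be sketched as a simple check. Everything here is hypothetical (the `needsAmplification` helper, the focus point, the pixel radius are all invented for illustration); it merely captures the idea of amplifying only those changes that fall outside the region the user is likely looking at.

```typescript
interface Point { x: number; y: number; }

// Sketch: decide whether a change at a given position lands near the
// user's likely point of focus. If not, flag it for amplification
// (motion, colour, and so on). In a real app you'd derive the focus
// point from the location of the user's primary task.
function needsAmplification(change: Point, focus: Point, fovealRadiusPx: number): boolean {
  const dx = change.x - focus.x;
  const dy = change.y - focus.y;
  return Math.hypot(dx, dy) > fovealRadiusPx;
}

// The player is steering a car mid-screen; a speed-limit icon appears top-left.
const focus = { x: 512, y: 600 };
needsAmplification({ x: 40, y: 40 }, focus, 150);   // true: amplify it
needsAmplification({ x: 520, y: 560 }, focus, 150); // false: already in view
```

It’s a crude heuristic, but it maps directly onto the driving-game result above: the speed-limit notifications that went unnoticed were precisely the ones appearing far from where the player’s attention was anchored.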
There are many design principles in the field of human-computer interaction (HCI) which are implicitly understood by good designers, and the ones identified above are no different. We already understand that there’s the potential for users to miss on-screen changes.
However, we tend to brush these concerns aside when we’re pushed to do so by other factors. We know that these principles are sound, and we often adhere to them, but we don’t know why they’re sound, so we allow ourselves to break them. I think that the reason we’re able to do this is partly because these concepts always seem a bit hypothetical without knowledge of the evidence.
The study cited here is but one small piece of a mountain of such evidence, particularly when you include desktop computing as well as mobile. So the next time you find yourself thinking, ‘But surely the user will see that’, remember: we’re not as capable as we’d like to think.
If you’re after some good reading material on this and other HCI / UX matters, I thoroughly recommend Jeff Johnson’s Designing With the Mind in Mind. The clarity of his explanations, particularly of the retina, was extremely helpful in writing this entry.