The art of mime has been around in some form for millennia, although when it comes to contemporary depictions in popular culture, mimes seem to be almost universally hated. But they still have something to teach us. Scientists at Johns Hopkins University have brought mime into the laboratory for a series of experiments exploring how the human brain fills in perceptual gaps. When a performer mimes an action on an unseen object, we form a kind of visual representation of that object in our mind, even though there is no physical object there. The implication of its physical presence is sufficient, according to a recent paper published in the journal Psychological Science.
“Most of the time, we know which objects are around us because we can just see them directly,” said co-author Chaz Firestone of JHU’s Perception & Mind Laboratory. “But what we explored here was how the mind automatically builds representations of objects that we can’t see at all but that we know must be there because of how they are affecting the world. That’s basically what mimes do. They can make us feel like we’re aware of some object just by seeming to interact with it.”
Firestone’s research to date has focused on a couple of key questions in cognitive psychology. First, how do people come to possess basic intuitions about the physics of the objects around them? If we see a precariously stacked pile of dishes, for instance, we worry that it might topple over and break the dishes.
The other question is, how do people perceive objects even if those objects (or parts of those objects) are not, technically, casting light onto our retinas, which our brains then translate into the visual image that we “see”? Firestone gave Ars this example: “If you were to view your neighbor behind a slatted fence, you get a fully coherent impression of the whole person, even though you’re only seeing pieces of the person through the slats of the fence.”
There are also common illusions where people perceive lines and other details that are not part of the physical image reaching their eyes. They’re similar to the “Kanizsa triangle” and “Kanizsa square” illusions created by the late Italian psychologist and artist Gaetano Kanizsa, who was interested in illusory (subjective) contours that visually evoke the sense of an edge in the brain. (A recent study found that, like humans, cats are susceptible to the Kanizsa square illusion, suggesting that they perceive subjective contours much like humans.)
The mime project combines those two questions, and Firestone recruited one of his undergraduates, co-author Patrick C. Little (now a graduate student at New York University), for the experiments. They are based in part on the well-known Stroop effect: write the word “red” in blue ink, for example, and then ask subjects to tell you the color of the ink. They will be slower to respond because they must reconcile the mismatched text (“red”) with the actual blue color of the ink. According to Firestone, people also can’t help recognizing an object being mimed, even when there is no physical object present—another example of how the brain fills in the gaps in our perception.
Firestone and Little conducted three versions of their online experiment, involving 360 participants. In the first, subjects watched video clips showing a person miming a collision with an invisible wall or a step up onto an invisible box. Firestone himself features in the videos, although his performance doesn’t have the narrative elements—using gesture and body language to tell a story—that are the hallmarks of true mime. “I am literally running into a real wall, and then we’re eliminating the wall, so that all [subjects] see is what the wall is doing to me,” he said.
After the action, a black line would appear in either a horizontal or vertical orientation, in the same spot on the screen where the invisible wall or box would have been. That means the line either matched or did not match the orientation of the mimed surface. Subjects were instructed beforehand to ignore the miming and simply indicate the orientation of the black lines when they appeared.
Firestone and Little found that their subjects responded much faster when the black line’s orientation matched the orientation of the mimed surface. That’s an indication that those mimed surfaces were actively represented in the subjects’ minds. “People are responding faster to a vertical line because that’s the orientation of the wall that they’re inferring,” said Firestone.
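The logic of this congruency analysis—comparing reaction times on matched versus mismatched trials—can be sketched in a few lines of Python. The numbers below are invented purely for illustration (an assumed 450 ms baseline and a 40 ms mismatch cost); they are not the study’s actual data:

```python
import random

random.seed(0)

# Simulate hypothetical reaction times (in ms) for a Stroop-style task:
# trials where the probe line matches the mimed surface's orientation are
# assumed to be faster than mismatched trials. All parameters are invented.
def simulate_trial(congruent: bool) -> float:
    base = 450.0                              # assumed baseline RT in ms
    penalty = 0.0 if congruent else 40.0      # assumed cost of a mismatch
    return base + penalty + random.gauss(0, 25)  # per-trial noise

matched = [simulate_trial(True) for _ in range(200)]
mismatched = [simulate_trial(False) for _ in range(200)]

mean_matched = sum(matched) / len(matched)
mean_mismatched = sum(mismatched) / len(mismatched)

print(f"mean RT, matched line:    {mean_matched:.1f} ms")
print(f"mean RT, mismatched line: {mean_mismatched:.1f} ms")
print(f"congruency effect:        {mean_mismatched - mean_matched:.1f} ms")
```

In the real experiments, a reliably positive congruency effect—slower responses when the line’s orientation clashed with the inferred surface—is what indicated that subjects were representing the invisible wall.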
But what if the subjects were responding to the vertical position of the actor? In order to keep the experimental focus on the inferred wall, the team conducted a second version of the experiment. Subjects watched videos in which the actor had been replaced by a round, rigid disc bouncing off an invisible wall—rigid in that it doesn’t deform when it bounces, like a tennis ball would. Unlike the human actor, the ball never changes its shape or vertical orientation, so this version of the experiment removed that potential confounding factor.
Firestone and Little’s third iteration played with the variable of time. It repeated what was done in the second version, except it changed the time when the black lines appeared. In the second experiment, the lines appeared a few hundred milliseconds after the disc bounced off the invisible wall. That arguably could have given subjects enough time to anticipate what “should” be present in the video causing the disc to behave that way. Eliminating the delay removes the possibility of such anticipation—another potential confounding factor.
All three versions of the experiment produced similar results. “Very quickly people realize that the mime is misleading them, and that there is no actual connection between what the person does and the type of line that appears,” said Little. “They think, ‘I should ignore this thing because it’s getting in my way,’ but they can’t. That’s the key. It seems like our minds can’t help but represent the surface that the mime is interacting with—even when we don’t want to. This suggests that miming might be different from other kinds of acting. If the mime is skilled enough, understanding what’s going on doesn’t require any effort at all. You just get it automatically.”
As for practical applications, the work might be relevant to designing more effective AI systems related to vision. “If you’re trying to build a self-driving car that can see the world and steer around objects, you want to give it all the best tools and tricks,” Firestone said. “This study suggests that, if you want a machine’s vision to be as sophisticated as ours, it’s not enough for it to identify objects that it can see directly. It also needs the ability to infer the existence of objects that aren’t even visible at all.”