Andy Weir’s latest, Project Hail Mary, is a good book that you’ll almost certainly enjoy if you enjoyed Weir’s freshman novel The Martian. It’s another tale of solving problems with science, as a lone human named Ryland Grace and a lone alien named Rocky must save our stellar neighborhood from a star-eating parasite called “Astrophage.” PHM is a buddy movie in space in a way that The Martian didn’t get to be, and the interaction between Grace and Rocky is the biggest reason to read the book. The pair makes a hell of a problem-solving team, jazz hands and fist bumps and all.
But the relative ease with which Grace and Rocky understand each other got me thinking about the real-world issues that might arise when two beings from vastly different evolutionary backgrounds try to communicate. PHM’s otherwise solid commitment to science leans a bit here on what we might call the “anthropic principle of science fiction,” after the better-known general anthropic principle. To wit: Rocky and Grace can communicate well with each other because it serves the story, and if they couldn’t, the book would be shorter and less interesting.
I get it—that’s how storytelling works. I don’t want to sound like a bitter basement-dwelling critic throwing shade at a bestselling science fiction author. But PHM is like The Martian in that it’s about solving problems realistically. From my nerd basement throne, it feels like the softer sciences of linguistics and anthropology (or perhaps xenolinguistics and xenoanthropology) don’t get the same stage time as their more STEM-y counterparts like physics and relativity.
Indeed, Grace quickly builds a workable level of rapport with his alien counterpart:
I pull the jumpsuit on. I’ve decided today is the day. After a week of honing our language skills, Rocky and I are ready to start having real conversations. I can even understand him without having to look at the translation about a third of the time now.
The acquisition of a wholly alien language is treated like a math problem—a series of steps that, if completed in the right order, guarantees comprehension. Our two intrepid interstellar explorers find each other in the void and start cooperating. They link their ships and figure out the pure mechanics of communication. Both use sound, though Rocky communicates with chords and whistles. Grace has a laptop with a copy of Excel and some Fourier transform software, while Rocky has an eidetic memory, so the physical layer of communication is easily handled. They quickly work out equivalent words for things around them, like “wall” and “star” and “Astrophage.” They also knock out “yes” and “no” through some quick pantomime, followed by “good” (or at least “I appear to approve of this”) and “bad” (or at least “I appear to disapprove of this”).
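The “physical layer” part, at least, is plausible. As a minimal illustration (my own toy sketch, not anything from the book), here’s roughly what Grace’s Fourier transform software would be doing with one of Rocky’s chords: an FFT decomposes a waveform into its component frequencies, and the strongest peaks are the notes. The sample rate, tone frequencies, and signal below are all made up for the demo.

```python
import numpy as np

# Synthesize a stand-in "Eridian chord" from three arbitrary pure tones.
# (The frequencies are invented; real analysis would start from a recording.)
sample_rate = 44100                      # samples per second
duration = 1.0                           # seconds
t = np.linspace(0, duration, int(sample_rate * duration), endpoint=False)

tones = [440.0, 554.0, 659.0]            # made-up chord frequencies, in Hz
signal = sum(np.sin(2 * np.pi * f * t) for f in tones)

# The FFT turns the time-domain waveform into a frequency spectrum.
spectrum = np.abs(np.fft.rfft(signal))
freqs = np.fft.rfftfreq(len(signal), d=1 / sample_rate)

# The three strongest spectral peaks recover the chord's notes.
notes = sorted(freqs[np.argsort(spectrum)[-3:]])
print(notes)  # → [440.0, 554.0, 659.0]
```

A real signal would need windowing and proper peak detection (the toy recovers the tones exactly only because a one-second window gives 1 Hz frequency resolution, so each tone lands on its own bin), but this is the basic move: turn sound into a frequency fingerprint you can match against a growing dictionary.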
After some more language learning, we come to this particular passage—the passage that pushed me over the edge into writing this piece:
Proper nouns are a headache. If you’re learning German from a guy named Hans, you just call him Hans. But I literally can’t make the noises Rocky makes and vice-versa. So when one of us tells the other about a name, the other one has to pick or invent a word to represent that name in their own language. Rocky’s actual name is a sequence of notes—he told it to me once but it has no meaning in his language, so I stuck with “Rocky.”
But my name is actually an English word. So Rocky just calls me the Eridian word for “grace.”
How, exactly, does one build the necessary cognitive scaffolding—in a period of time measured in weeks—to explain “grace” to an alien that may or may not have the emotional wiring to even conceptualize the word? And if the alien does have an equivalent word, how do you know with any amount of certainty that the word means the same thing? “Grace,” after all, is a squishy concept involving morality and value judgments. A huge array of other concepts have to be settled with equivalencies before you can even begin to understand whether or not, when the alien says “grace,” it means the same thing to each speaker.
All of which made me wonder whether the language learning portrayed in PHM was, well, realistic.
Head games with foreigners
Sci-fi does offer us many other visions of alien communication. I’m no linguist, but I do read a lot—and one of my favorite authors is the inimitable C.J. Cherryh, arguably one of the last living grand masters of science fiction. Cherryh’s specialty genre might best be described as “anthropological SF,” owing to her academic grounding in archaeology and mythology. She has a knack for writing alien characters that strike a fine balance between being interesting and being truly, believably alien—not just in form, but in motivation and emotion. And Cherryh’s work, to pick on her as an example, posits a hell of a lot more difficulty in communicating with aliens.
Though her body of work stretches back to the 1970s, Cherryh’s knack for getting the alien/human interface right is shown off to great effect in her Foreigner series of novels, which features a lone human translator living among an emotionally incompatible race of aliens called atevi. Atevi are superficially much like humans—a bit taller, different skin color, but they’re bilaterally symmetric humanoids with two arms and two legs and a head. They look more or less like us, and that’s how the problems start.
Humans show up at the atevi home world accidentally, since it’s the only habitable refuge a failing and lost human colony ship can reach with the ship’s remaining supplies. Although the atevi are barely past the steam age, the two peoples have a peaceful and productive first contact. Things go really well for a few years—humans start to integrate into atevi society and freely share their technology.
Then, suddenly, war breaks out. Neither side really understands why. Shortly before the small human population is annihilated, though, a ceasefire is reached. The two sides take a step back to try to figure out why they started fighting, and working the issues out for the reader takes most of the first book in the series.
The top-level reason, as it turns out, is that even though both races thought they were communicating with each other, the semantic equivalencies they built were completely misaligned—and one race’s idea of “friendliness” was another race’s idea of “pure lunatic insanity.” Cherryh dwells on the idea that language is at least partially a product of phylogeny. When you and an alien use a word, your individual understanding of that word hinges on a whole host of factors that you share with others of your species, but you and an alien may not have such context in common at all. Both races, human and atevi, were acting logically from their own point of view—and both races, from the other’s perspective, were responding to logical acts with apparent psychotic craziness.
Think of how much of human language and understanding is built on inherent, foundational concepts—our biology and base perceptions are part of the fundamental structures of language. If I as a human am attempting to somehow talk with another human with whom I have no language in common, we can still build certain assumptions into our attempts at communication. Even if I’m speaking to a member of an isolated or uncontacted culture, we both will have similar underlying biological drives. If we can figure out each other’s word for “love,” for example, I don’t have to explain what love is. We both just know. Dig far enough down and we’ll always find some semantic bedrock on which to build a conversation.
Actual understanding isn’t just a process of establishing equivalencies—it’s a much more complex web of ferreting out the underlying concepts behind the words and checking how (or even if!) those concepts map to their counterpart concepts on the other side. Sometimes—often, in fact, with Cherryh’s atevi—no useful correlation is possible. Atevi biology and evolution have produced fundamentally different emotional drives than human evolution has. Atevi don’t feel love or friendship; instead, they have their own emotional response based around hierarchical grouping. It’s just as powerful and fulfilling as love and friendship—and it serves the same useful function of creating and enforcing cooperation and societal cohesion—but actually explaining what it feels like in human terms is impossible. (There’s so much more to Cherryh’s atevi, by the way. If you’re in the mood for a cracking good read, I suggest you grab the first book in the series and dig in!)
So which take on extraterrestrial language acquisition hews closest to reality?
I say potato, you say 🎶🎶🎶
Again, I don’t know. I’m just a guy who writes about farts on the Internet. I needed to call in the big guns—actual smart people with actual degrees in the talky sciences.
Dr. Betty Birner is a professor of linguistics and cognitive science at Northern Illinois University. Dr. Birner’s specialty is the field of pragmatics—which she summarized for me as the difference between the words someone says and the intention behind those words. Pragmatics includes the study of how we as speakers of a language use inferences about intent—inferences sometimes built on inherent assumptions about context, which themselves can stem from biological underpinnings—to overcome language’s ambiguities.
The question I put to her was this: going by our current understanding of how and why human languages operate, do we think it would be practical—or even possible—for two divergently evolved sentient beings from different worlds to learn each other’s languages well enough in a short amount of time (perhaps as little as a week) to usefully converse about abstract concepts and to be reasonably assured that both beings actually understand those abstracts?
I asked Dr. Birner if she could help me understand the commonalities that show up between human languages and what separates language (which requires structure) from communication (which is something most animals manage to do without language). It’s a hard subject to nail down, but Dr. Birner cited the work of Dr. Charles F. Hockett and pointed out that there are several broadly accepted criteria for what makes a language a language. One of those criteria is the concept of syntax.
“Human languages all have syntax,” she explained. “You have distinct pieces that you can put together in different orders to get different effects. Even sign languages have this. No animal communicative system has it.”
Beyond syntax, another feature of language is the concept of displacement. “I can talk about things,” she said. “Distance, and time, and place. Your dog can’t do that. Your dog can scratch at the door to communicate that she wants to go outside, but she’s not going to be able to say she wanted to go outside yesterday.”
Dr. Birner’s own specialty field of pragmatics has its own take on what makes language, language: a thing called the cooperative principle. “The basic notion is that when we communicate we are cooperative in some very fundamental ways,” she said. “We say the right amount. We say things that are relevant. We say things we at least believe to be true. So, we have all these assumptions also about the other person being cooperative, and if we didn’t believe they were trying to be cooperative in all of these ways, communication just couldn’t work.”
I pointed out that as a lay person, the most interesting part of this to me is how we selectively break the cooperative principle all the time—for humor, or for sarcasm, or whatever. Breaking the cooperative principle imparts its own messages, and Dr. Birner agreed.
“Absolutely—that’s part of the cool thing about the cooperative principle,” she laughed. “We violate it! So there’s a maxim of the cooperative principle that says, ‘Say only what you believe to be true.’ Well, we violate that all the time in metaphor. ‘You are the light of my life.’ Well, no, you’re not a bunch of photons. Clearly this is false. But you infer away about what I actually meant. So, yeah, how do we know that anything that’s true of human language is true of an alien language?”
You gotta Noam when to hold ’em
Dr. Birner also (perhaps inevitably) brought up Noam Chomsky, the world-famous linguist. He’s responsible for, among other things, the idea that some form of universal grammar exists in humans. We can speculate about whether or not Chomsky’s theories are true for humans, but can we safely extend those theories to cover hypothetical sentient alien life that evolved in a completely different environment?
“He [Chomsky] has a notion,” she said, “that there is an innate biological instinct for language in human beings—that language is instinctive to me in the same way that spinning a web is instinctive to a spider. He almost single-handedly killed behaviorism back in the fifties, because in behaviorism the notion was the child is born as a blank slate, and Chomsky said that it would be absolutely impossible to acquire something as complex as human language if you are really a blank slate.”
This was starting to sound a little familiar from previous Wikipedia trips. “There was, like, a monkey experiment in here at some point, wasn’t there?” I asked.
“Yeah, and people have done that,” she responded. “They’ve raised chimps in their homes as though they were their own children, and the chimps do not acquire language—yet, a child will soak it up effortlessly. So, Chomsky has this notion that there is an innate ‘universal grammar’ that tells a human infant what is and is not a possible human language. You can see where this fits in with the notion of an alien language, because presumably an alien wouldn’t have the same kind of universal grammar.”
“You would assume—I guess you would assume—I don’t know, we can make the rules be whatever we want,” I replied. “But you would assume that for an alien to evolve into roughly analogous sentience with a person, there would be something equivalent to that.”
I was feeling a little over my head, but I plowed on. “Would you assume that?”
“I would assume things like symbolism—the symbolic nature of language,” Dr. Birner replied. “There were a lot of people who assumed that that was one of the great cognitive leaps that made human language possible—the notion that we can represent one thing as something else, and that’s what language is.” We represent things in the world with words, and once you know that words represent concrete things, it’s a short jump to realizing that words can represent abstract things, too.
Symbolic representation isn’t necessarily straightforward, either. “A philosopher named Quine had this notion,” she said. “If you’re out in the field with somebody, and they point and say ‘Gavagai!’ and you look and you see a rabbit running through the field, you assume that gavagai in their language means rabbit. But how do you know it doesn’t mean ‘brown,’ or ‘tail,’ or ‘leg,’ or ‘fur’—”
“Or, ‘Look at that!'” I said.
“Yeah, all of this other stuff is present, but we have this whole object notion,” she said. “We have a notion of what constitutes a distinct object, and we assume that the word corresponds to that whole object as a default. Would the alien have that notion?”
“So,” she continued, “I think you’re asking exactly the right question when you’re asking not just how could we ask about an abstract notion. There’s one level, which is, ‘How can I possibly ask an alien, what’s your word for friendship?’ But do they even have a concept of friendship, and how could you ask about that?”