The Story of a Self

The seductive dangers of self-mythologizing

It takes time for a self, with all its flaws and peculiarities, to bend itself out of the universe. It begins with us recognizing our image in the mirror. Our caregivers tell us stories about the past and the present, what’s happening around us, and what we had to do with it. We begin to contribute to these little stories about ourselves. We realise we’re goal-directed—we want things and we try to get them. We grasp that we’re surrounded by other minds that are also goal-directed. We understand ourselves to be a certain category of human—a girl, a boy, working-class—of whom others have specific expectations. We have power and have done things. These pockets of story memory slowly begin to connect and cohere. They form plots that become imbued with character and theme. Finally, in adolescence, writes the psychologist Professor Dan McAdams, we endeavour to understand our life as a “grand narrative, reconstructing the past and imagining the future in such a way as to provide it with some semblance of purpose, unity and meaning.”

Having undergone its adolescent narrative-making process, the brain has essentially worked out who we are, what matters, and how we should behave in order to get what we want. Since birth, it’s been in a state of heightened plasticity that has enabled it to build its models. But now it becomes less plastic and harder to change. Most of the peculiarities and mistakes that make us who we are have become incorporated into its models. Our flaws and peculiarities have become who we are. Our minds have been made up.

Then the brain enters a state that’s valuable to understand for anyone interested in human conflict and drama. From being model-builders, we become model-defenders. Now that the flawed self with its flawed model of the world has been constructed, the brain starts to protect it. When we encounter evidence that it might be wrong, because other people aren’t perceiving the world as we do, we can find it deeply disturbing. Rather than changing its models by acknowledging the perspectives of these people, our brains seek to deny them.

This is how the neurobiologist Professor Bruce Wexler describes it: “Once [the brain’s] internal structures are established they turn the relationship between the internal and external around. Instead of the internal structures being shaped by the environment, the individual now acts to preserve established structures in the face of environmental challenges, and finds changes in structure difficult and painful.” We respond to such challenges with distorted thinking, argument and aggression. As Wexler writes, “We ignore, forget or attempt to actively discredit information that is inconsistent with these structures.”

The brain defends our flawed model of the world with an armoury of crafty biases. When we come across any new fact or opinion, we immediately judge it. If it’s consistent with our model of reality our brain gives a subconscious feeling of yes. If it’s not, it gives a subconscious feeling of no. These emotional responses happen before we go through any process of conscious reasoning. They exert a powerful influence over us. When deciding whether to believe something or not, we don’t usually make an even-handed search for evidence. Instead, we hunt for any reason to confirm what our models have instantaneously decided for us. As soon as we find any half-decent evidence to back up our “hunch” we think, “Yep, that makes sense.” And then we stop thinking. This is sometimes known as the “makes sense stopping rule.”

Not only do our neural-reward systems spike pleasurably when we deceive ourselves like this, we kid ourselves that this one-sided hunt for confirmatory information was noble and thorough. This process is extremely cunning. It’s not simply that we ignore or forget evidence that goes against what our models tell us (although we do that too). We find dubious ways of rejecting the authority of opposing experts, give arbitrary weight to some parts of their testimony and not others, and lock onto the tiniest genuine flaws in their argument and use them to dismiss it entirely. Intelligence isn’t effective at dissolving these cognitive mirages of rightness. Smart people are mostly better at finding ways to “prove” they’re right and tend to be no better at detecting their wrongness.

It might seem odd that humans have evolved to be so irrational. One compelling theory has it that, because we evolved in groups, we’re designed to argue things out lawyer-style until the optimal way forward emerges. Truth, then, is a group activity and free speech an essential component. This would validate the screenwriter Russell T. Davies’s observation that good dialogue is “two monologues clashing. It’s true in life, never mind drama. Everyone is always, always thinking about themselves.”

Because our models make up our actual experience of reality, it’s little wonder that any evidence which suggests they are wrong is profoundly unsettling. “Things are experienced as pleasurable because they are familiar,” writes Wexler, “while the loss of the familiar produces stress, unhappiness and dysfunction.” We’re so used to our aggressive model-defending responses—they’re such an ordinary part of being alive—we become inured to their strangeness. Why do we dislike people we disagree with? Why do we feel emotionally repulsed by them?

The rational response, when encountering someone with alien ideas, would be to either attempt to understand them or shrug. And yet we become distressed. Our threatened neural models generate waves of sometimes overwhelming negative feelings. Incredibly, the brain treats threats to our neural models in much the same way as it defends our bodies from a physical attack, putting us into a tense and stressful fight-or-flight state. The person with merely differing views becomes a dangerous antagonist, a force that’s actively attempting to harm us. The neuroscientist Professor Sarah Gimbel watched what happened when people in brain scanners were presented with evidence their strongly held political beliefs were wrong. “The response in the brain that we see is very similar to what would happen if, say, you were walking through the forest and came across a bear,” she has said.

So we fight back. We might do so by trying to convince our opponent of their wrongness and our rightness. When we fail, as we usually do, we can be thrown into torment. We chew the conflict over and over, as our panicked mind lists more and more reasons why they’re dumb, dishonest or morally corrupt. Indeed, language provides a stinking rainbow of words for people whose mental models conflict with ours: idiot, cretin, imbecile, pillock, berk, arsehole, airhead, sucker, putz, barnshoot, crisp-packet, clown, dick, divot, wazzock, fuckwit, fucknut, titbox, cock-end, cunt. After an encounter with such a person, we often seek out allies to help talk us down from the disturbance. We can spend hours discussing our neural enemies, listing all the ways they’re awful, and it feels disgusting and delicious and is such a relief.

We organise much of our lives around reassuring ourselves about the accuracy of the hallucinated model world inside our skulls. We take pleasure in art, media and story that coheres with our models, and we feel irritated and alienated by that which doesn’t. We applaud cultural leaders who argue for our rightness and, on encountering their opposite, feel defiled, disturbed, outraged and vengeful, perhaps wishing failure and humiliation on them. We surround ourselves with “like-minded” people. Much of our most pleasurable social time is spent “bonding” over the ways we agree we’re right, especially on contentious issues. When we meet people who have unusually similar models to us, we can talk to them nonstop. It’s so blissful, reassuring ourselves like this, that time itself seems to vanish. We crave their company and put photos of them—arms across shoulders, smiles in beams—on our fridges and social-media feeds. They become friends for life. If the circumstances are right, we fall in love.

It’s important to note, of course, that we don’t defend all our beliefs like this. If someone approached me and argued that the Power Rangers could beat the Transformers in a fight, or that every bipartite polyhedral graph with three edges per vertex has a Hamiltonian cycle, it would have little effect on me. The beliefs we’ll fight to defend are the ones around which we’ve formed our identity, values and theory of control. An attack on these ideas is an attack on the very structure of reality as we experience it. It’s these kinds of beliefs, and these kinds of attacks, that drive some of our greatest stories.

Much of the conflict we see in life and story involves exactly these model-defending behaviours. It involves people with conflicting perceptions of the world who fight to convince each other of their rightness, to make it so their opponent’s neural model of the world matches theirs. If these conflicts can be deep and bitter and never-ending, it’s partly because of the power of naive realism. Because our hallucination of reality seems self-evident, the only conclusion we can come to is that our antagonist, by claiming to see it differently, is insane, lying or evil. And that’s exactly what they think of us.

But it’s also through these kinds of conflicts that a protagonist learns and changes. As they struggle through the events of the plot, they’ll usually encounter a series of obstacles and breakthroughs. These obstacles and breakthroughs often come in the form of secondary characters, each of whom experiences the world differently to them in ways that are specific and necessary to the story. They’ll try to force the protagonist to see the world as they do. By grappling with these characters, the protagonist’s neural model will be changed, even if subtly. They’ll be led astray by antagonists, who’ll represent perhaps darker and more extreme versions of their flaw. Likewise, they’ll learn valuable lessons from allies, who are often the embodiment of new ways of being that our hero must adopt.

But before this dramatic journey of change has begun, our protagonist’s neural model will probably still be convincing to them, even if it is, perhaps, beginning to creak at its edges—there might be signs that their ability to control the world is failing, which they frantically ignore; there might be portentous problems and conflicts which rise and waft about them. Then, something happens . . .

Good stories have a kind of ignition point. It’s that wonderful moment in which we find ourselves sitting up in the narrative, suddenly attentive, our emotions switched on, curiosity and tension sparked. An ignition point is the first event in a cause-and-effect sequence that will ultimately force the protagonist to question their deepest beliefs. Such an event will often send tremors to the core of their flawed theory of control. Because it goes to the heart of their particular flaw, it’ll cause them to behave in an unexpected way. They’ll overreact or do something otherwise odd. This is our subconscious signal that the fantastic spark between character and plot has taken place. The story has begun.

Typically, as their theory of control is increasingly tested and found wanting, the character will lose control over the events of the story. The drama they trigger compels the protagonist to make a decision: are they going to fix their flaw or not? Who are they going to be?

From The Science of Storytelling: Why Stories Make Us Human and How To Tell Them Better by Will Storr, published March 2020 by Abrams Press. Copyright © 2020 Will Storr. All rights reserved.

About the Author

Will Storr

Will Storr is an award-winning journalist and novelist whose work has appeared in the Guardian, Sunday Times, the New Yorker, and the New York Times. His books include Selfie: How the West Became Self-Obsessed and The Unpersuadables: Adventures with the Enemies of Science (The Overlook Press/Abrams Press).
