As Stubborn as a Brain
Anatomy of the human brain. From ‘The anatomy of the brain, explained in a series of engravings’ by Sir Charles Bell, 1802 / Wellcome Collection


The Fallibility of Human Reasoning
Miłada Jędrysik

When the human mind starts to believe in something, it’s hard to get it to stop; facts and logical argumentation don’t do much. Add to this our selective memory, full of untrue recollections, and we have a psychological sketch of Homo sapiens, the so-called ‘wise man’.

It was supposed to be a breakthrough in medicine. The leadership of the Karolinska Institute, Sweden’s most important medical centre, was certainly pleased to have recruited Dr Paolo Macchiarini. The Swiss-born Italian had developed an innovative method of implanting an artificial trachea. He was handsome and smooth-tongued, with a golden touch.

It’s just that he was also a fraud. As Carl Elliott wrote in the New York Review of Books, the dazzlingly tanned surgeon is the anti-hero of one of the greatest scandals in the history of contemporary medicine. His patients died one after another, often in agony, when their bodies rejected the plastic organs. One of the harshest critics of Macchiarini’s ‘method’, Professor Pierre Delaere of the Catholic University of Leuven, told Elliott: “If I had the option of a synthetic trachea or a firing squad, I’d choose the last option because it would be the least painful form of execution.”

Meanwhile, Macchiarini carried on cutting and implanting, and the Karolinska Institute—the same institution that awards the Nobel Prize in Physiology or Medicine—remained silent. Worse still, it disciplined critics of the handsome surgeon’s methods.


Just how seriously and effectively Macchiarini spun his fairy tale is best shown by the story of his former fiancée, Benita Alexander. Rings appeared, and a wedding was planned that would go down in history: Pope Francis himself would bless the union in Castel Gandolfo, Andrea Bocelli would sing, and the guests would include Putin, the Obamas and Russell Crowe. The house of cards collapsed when, two months before the ceremony, a well-meaning friend sent Alexander a link to the news that on the planned wedding day, the Pope would be on a pilgrimage to South America.

How could an NBC TV producer, a person who was certainly competent and world-wise, believe in this kind of rubbish? Love is blind, you’ll say. But in that case, what about the leadership of the Karolinska Institute?

It’s not just love that’s blind. We Homo sapiens have a truly serious problem: our heads.


I like to listen to discussions between philosophers and scientists, even though they broadcast on different frequencies. The former place the general over the specific; they speak of humanity, its place in the universe, of good and evil. The latter shun the general, believing that the devil is in the details. They put forward hypotheses and look for confirmation, but generally have to admit at every step that they know rather less than more. After all, it’s hard for somebody who works on quantum mechanics or the human brain to believe that they have already mastered all there is to know. Both fields are only at the beginning of the path to understanding the mysteries of matter and ‘mind’. I write ‘mind’ in inverted commas because… well, hang on a second.

I like to listen to discussions between philosophers and scientists because both the one and the other want to understand (now without the inverted commas). They aim to bring two opposite ends of a dispute close to each other, giving us hope that someday the two ends will finally meet, or at least move closer. Here the ball is in the scientists’ court, because they’re the ones who have to supply incontrovertible evidence that things are how they are (aside from ethical questions, where they can only provide material for deliberations).

Our “Przekrój” colleague Tomasz Stawiszyński runs a fascinating discussion series in Warsaw’s Teatr Powszechny, to which he invites philosophers, scientists and experts. He’s a philosopher himself, so his clashes with representatives of the ‘hard’ sciences are particularly delectable. When he’s visited by Dr Paweł Boguszewski, a neurophysiologist from the Nencki Institute of Experimental Biology, to talk about the capabilities of the human mind, you can see how differently the disciplines look at humanity. The scientist has no problem with the declaration that “we are biological robots”. The philosopher is outraged (maybe a little bit to spice up the discussion, but maybe not).

“But how can that be? What then of our exceptionalism with regard to the animal world? And what about consciousness, which after all is something more; it can’t just be the product of processes going on in our brains?”

“And why not?” Boguszewski asks.

“Then what about free will?”

“In fact, free will isn’t so free at all. Gazzaniga and Libet’s research has already shown that we make decisions unconsciously.”

That’s how the conversation goes.

In a famous experiment in the 1980s, Benjamin Libet discovered that awareness of a decision lags behind the decision itself. In other words, consciousness is a recording of processes that have already taken place. Developing Libet’s research in 2008, John-Dylan Haynes’s team could even predict, by observing neural activity, whether a participant asked to press one of two buttons would choose the left or the right—before the participants themselves knew.

In turn, Roger Sperry (and later Michael S. Gazzaniga) carried out research on patients who, due to accidents or surgical procedures (used, for example, in cases of severe epilepsy), lacked the connection between the left and right hemispheres of the brain. An article in Nature describes the difficult lives of such people: “Standing in the supermarket aisle, Vicki would look at an item on the shelf and know that she wanted to place it in her trolley—but she couldn’t. ‘I’d reach with my right for the thing I wanted, but the left would come in and they’d kind of fight,’ she says. ‘Almost like repelling magnets.’” That’s how one of the patients described the first months after her operation.

The results of such research on free will may arouse reservations about the methodology, but nothing so serious as to definitively refute the results. The greatest number of opposing voices—as it’s easy to imagine—are in religious circles (including among scientists who don’t rule out the existence of a higher being, because, of course, not all scholars believe that life is only ‘a form of existence of protein’).


That’s barely half of the bad news: it’s not only free will that’s failing us, but also reason. We’re not entirely homines sapientes, ‘thinking people’. Research by psychologists has already made it into the mainstream, primarily that of Daniel Kahneman and Amos Tversky, who demonstrated that the human mind is subject to all kinds of illusions—cognitive errors. The list of such errors is long. For example, when we accuse our partner with a “you always” or a “you never”, we draw on only those events from our memories that aroused strong emotions in us. We tend not to remember the “but you sometimes”. That’s why in Sweden, when resolving disputes over who bears the bulk of the household duties, a game is played in which each participant receives magnets to stick onto the refrigerator whenever they complete a given task. In this way, subjective impressions are checked against hard data.

Already 400 years ago, Francis Bacon summed up in his Novum Organum what contemporary psychology has only now classified and researched: “The human understanding when it has once adopted an opinion (either as being the received opinion or as being agreeable to itself) draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects and despises, or else by some distinction sets aside and rejects, in order that by this great and pernicious predetermination the authority of its former conclusions may remain inviolate.”

Our memory is also, in fact, useless. In her book The Memory Illusion, Julia Shaw, a forensic psychologist and lecturer at London South Bank University, presented the results of research that in recent years has brought psychologists, neurologists and neurobiologists significantly closer to uncovering its mysteries. The conviction that memory is infallible turns out to be false. There is, in reality, no support for the belief that we remember our own birth; in fact, about half of what we remember is the creation of our imagination, which collects scraps of stories from others and fills in the gaps in the sequence of events. In her research, Shaw works on, among other things, inducing false memories—persuading people of untrue events from their past. It turns out to be astoundingly easy.

But the bad news still hasn’t finished. It turns out that, in fact, people don’t think logically at all. In 1966, Peter Wason devised his famous test. It involves four cards on a table. Each card has a letter on one side and a number on the other. The visible faces of the cards show E, K, 2 and 7. Which card, or cards, do you need to turn over to test the truth of the proposition that if a card has an E on one side, it has a 2 on the other?

Fewer than 10% of people solve this question correctly.* But it turns out that if we add to the question a simple little ‘does not’ (“…if a card has an E on one side, it does not have a 2 on the other?”), correct answers dominate. Then the correct answer is E and 2, and we choose it because they were both mentioned in the question. We are guided by our intuition, not mathematical logic.


Why are people such faulty products? From the point of view of the theory of evolution, it seems to make no sense. After all, if you look around, we’ve clearly come the furthest of all forms of life. But if we accept that the mission of every species is to pass on its genes, and thus that the strategy is to live long enough to bear and raise children, then perhaps an explanation can be found. Because in the end, despite various strange beliefs and strings of mishaps, human civilization is developing rather well. We can even speak of a certain progress, since I’m writing this sitting on a comfortable couch in a dry and well-lit space, rather than scratching with a stick in a dark, damp cave.

For example, why do we always seek some kind of cause for the events around us? Because a cause can suggest whether an event is potentially dangerous to us. In 1950, the psychologist John Garcia decided to research whether conditioned reactions—like those of Pavlov’s dog—have an evolutionary basis. It turned out that they do. Rats associate sweet water to which a nausea-inducing agent has been added with the abdominal pains that follow. Yet when you give them sweet water to drink and then deliver an electric shock, they don’t start to fear sweet water. This is because associating a taste with subsequent illness is an evolved mechanism: evolution has carved out ruts in our thinking.

And because we’re unable to comprehend the causes of every event in this infinitely complex and still undiscovered world, we often choose answers that aren’t too difficult and that, conveniently, can’t be disproven. If lightning strikes, the gods are angry; if we’re afflicted by allergies, it must be the gluten. We take mental shortcuts.

In 2017, two French scholars, the anthropologist Dan Sperber and the cognitive scientist Hugo Mercier, published The Enigma of Reason, in which they present their theory of argumentative reasoning. They believe that in the course of evolution, reason was shaped so that we would survive and pass on our genes, but for this it needed a more specific function: the ability to communicate with others and convince them that we’re right. Why? Because it’s only when we agree that we can attempt to reach a higher level of development. As the poet Vladimir Mayakovsky wrote: “What’s an individual? No earthly good. One man, even the most important of all, can’t raise a ten-yard log of wood, to say nothing of a house ten stories tall.”

This would also explain why we do so badly at Wason’s test. Sperber and Mercier stress that we’re lazy in our thinking; Kahneman wrote that most people don’t bother to think through a problem. Family differences of opinion on a micro scale, and social media on a macro scale, demonstrate perfectly that we don’t really want to think, and show how far we allow emotions to act in our name. Yet if we perform Wason’s test using not cards but situations from everyday life, its solvability dramatically increases. This may confirm the thesis that we are evolutionarily predisposed to solve problems in relations with others, not to solve mathematical puzzles.

Among the evidence the French scholars present for their—note the name—“argumentative” theory is the fact that one of the most common cognitive errors is confirmation bias: whether in family quarrels or political disputes, we enthusiastically reach for arguments and evidence that confirm our thesis, and minimize the significance of contrary evidence.

Sperber and Mercier speak more broadly about myside bias. Take a random argument from Facebook. Who can put their hand on their heart and say that they try to play devil’s advocate and really listen to their opponent’s arguments? We’re great at justifying our own side. fMRI scans actually show that sticking to your guns activates the regions responsible for feeling pleasure, while changing your mind triggers those associated with unease and even disgust.

This would explain why the scientists of the Karolinska Institute, called upon to evaluate Dr Macchiarini’s work, turned their negative emotions on those who called his achievements into question.

An equally drastic example of wallowing in cognitive error, cited by Sperber and Mercier and pregnant with consequences, is the failure of Alphonse Bertillon, the famous criminologist, in the Dreyfus affair. Bertillon insisted to the end that the Captain had written the spy’s note, even though even a layman could see that the real author was somebody else. Bertillon ‘rationalized’ this by claiming that Dreyfus had intentionally disguised his handwriting, so people would think that somebody wanted to incriminate him. But when he was presented with a sample of the real spy’s handwriting, he declared that Dreyfus had obviously imitated it, in order to incriminate the other man. In Bertillon’s ‘defence’, we can only point out that at the time, antisemitic passions had made off with the reason not only of the detective, but also of tens of thousands of his compatriots.


Free will isn’t so free, memory is useless, logic lies in ruins. Well, and so what? Our memory is collective anyway, like our knowledge. Google remembers the capital of Suriname for us, and our household members remember that we’re running out of toilet paper.

In most cases, collective intelligence is better at solving problems than an individual alone. The Wason test is solved correctly by under 10% of individual participants, but by as many as 80% of small groups. A comparative analysis of English-language Wikipedia entries (the largest language edition) and Encyclopaedia Britannica showed that Wikipedia has fewer errors.

But ethical systems, and the judicial systems built on them—even if the concept of free will is key to determining guilt—exist primarily to protect society against the effects of bad acts. If somebody kills, the law seeks to keep them from ever repeating the act, whether by imprisonment, isolation in a special institution, or taking their life. If someone has done evil, they should pay: with their own blood, compensation, apologies. From this point of view, it’s not so important to what degree a person is responsible for their own actions. At the same time, our growing knowledge of psychological disorders has shifted the arbitrary boundary beyond which we no longer hold people responsible. Just think how far we have come since the Middle Ages, when human courts sentenced animals to the same punishments as people.

Sperber and Mercier themselves hope that their theories will be most useful in public debate and in politics. Rather than forcing people to understand Cartesian logic (which I struggle with to this day), it would make sense first of all to learn how to disagree properly and to deliberate. Because the more of us there are, the higher the chance that from the jungle of self-biased views, some kind of collective wisdom will emerge.

But perhaps that’s only theirside bias. After all, scientists are people too.

* You need to turn over two cards: the E and the 7.
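For readers who like to check such things mechanically, here is a minimal sketch in Python (the two-letter, two-number card universe and the helper names are my own simplification, not from Wason’s paper) that brute-forces which visible cards must be turned over:

```python
# Wason selection task: which visible cards must be turned over to test
# the rule "if a card has an E on one side, it has a 2 on the other"?
# Each card has a letter on one side and a number on the other.

LETTERS = ["E", "K"]   # possible letter faces in this simplified universe
NUMBERS = ["2", "7"]   # possible number faces

def rule_holds(letter, number):
    """The rule: a card showing an E must have a 2 on its reverse."""
    return letter != "E" or number == "2"

def must_turn(visible):
    """A card must be turned iff some possible hidden face would falsify the rule."""
    if visible in LETTERS:
        # hidden side is a number
        return any(not rule_holds(visible, n) for n in NUMBERS)
    else:
        # hidden side is a letter
        return any(not rule_holds(l, visible) for l in LETTERS)

cards = ["E", "K", "2", "7"]
print([c for c in cards if must_turn(c)])  # → ['E', '7']
```

The K is irrelevant because the rule says nothing about cards without an E, and the 2 is irrelevant because the rule is not violated whatever is on its reverse—exactly the two cards our matching intuition wrongly reaches for.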
