Nice To Err

Szymon Drobniak

Reading time: 10 minutes

If it were possible to eliminate all the errors in the world, take into account all circumstances and predict all consequences, then science would become unnecessary, and life would be boring and repetitive. Without mistakes, there would be no evolution or diversity. We would either be at a standstill, or completely extinct.

Let’s imagine a tall, shallow glass case. Along its narrow base, there is a row of several small, equal-sized compartments. Above the compartments are rows of wooden pegs driven into the back of the case in a pattern of repeating quincunxes––four pegs making up the corners of a square, and a fifth in its center––so that each row is offset from the neighboring rows by half the distance between the pegs. The front of the case is covered with a glass pane, and the sides are wooden panels. At the top, in the very center, there is a small hole with a protruding funnel.

"Bean Machine," photo: Matemateca (IME USP)/Rodrigo.Argenton; CC-BY-SA 4.0
“Bean Machine,” photo: Matemateca (IME USP)/Rodrigo.Argenton; CC-BY-SA 4.0


This device is called a “bean machine” and requires a handful of beans to be poured into the display case through the hole at the top. Instead, let’s imagine tipping in a large quantity of metal balls. A shining stream pours from the funnel, the balls hit the first wooden pegs––and thus begins a phenomenon that is hard to explain. The balls are quickly distributed throughout the case, clattering as they hit the wood. It’s absolute chaos, the balls striking the pegs at random, appearing to leap from one side of this strange device to the other. When the balls reach the bottom, they land in one of the compartments. But they’re not distributed randomly, as one might expect from observing the chaotic swarm of bouncing metal particles. Instead, the balls fall into an orderly pattern at the base: most of them accumulate in the central section, and their number gradually decreases as the compartments get further from the middle. The outer compartments contain only a few balls.

How does this strange wooden device “know” how to separate the balls in such a spectacularly even way? If thousands of balls were poured into a version of the device with several dozen compartments, the outcome would be similar. Not only would the largest number of balls collect in the middle, but their level in subsequent compartments would form that same familiar, harmonious shape––a bell curve, also known as the Gaussian distribution. Its very “emergence” from the seeming turmoil is an extraordinary thing. But underlying this observation is something much more fundamental.
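
For anyone who would like to see this shape emerge for themselves, the device is easy to mimic in a few lines of code. The sketch below is only an illustration (the number of balls and of peg rows are my own arbitrary choices, not taken from Galton’s device): each ball is reduced to a series of left-or-right decisions, and the compartment it lands in is simply the number of times it happened to bounce to the right.

```python
# A minimal simulation of the bean machine: each ball makes `rows`
# independent left/right bounces, and the compartment it lands in is
# simply the number of rightward bounces. With enough balls, the counts
# pile up into the bell shape described above.
import random
from collections import Counter

def drop_balls(n_balls: int, rows: int, p_right: float = 0.5) -> Counter:
    """Count how many balls end up in each compartment (0..rows)."""
    counts = Counter()
    for _ in range(n_balls):
        compartment = sum(random.random() < p_right for _ in range(rows))
        counts[compartment] += 1
    return counts

if __name__ == "__main__":
    rows = 12  # an arbitrary board size, chosen for illustration
    counts = drop_balls(n_balls=10_000, rows=rows)
    for compartment in range(rows + 1):
        print(f"{compartment:2d} | " + "#" * (counts[compartment] // 100))
```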

Apparent Chaos

Every time a metal ball hits a wooden peg, two things can happen: it will bounce either to the right or to the left. If everything has been assembled correctly, none of the bounces will be “rigged”: each time, the ball has the same chance of continuing in either direction––it’s a completely random event. However, the combination of many such events is more predictable. There is a relatively small probability that a ball will make its way through the device always bouncing in the same direction, and therefore end up in one of the outer compartments. It will most likely experience about the same number of bounces to the right as to the left, and will eventually land in one of the middle compartments. Magic? Far from it! It’s simply the confluence of many random “decisions” made by each ball, producing a “centered” result. This has quite interesting consequences: if the two directions of bounce were labeled “success” and “failure,” each ball could be seen as making a random choice between the correct and incorrect directions. The combination of many right and wrong decisions would, on average, produce a neutral but nevertheless variable result, extending to the right and left of center. It would of course be possible to disrupt the randomness of the device’s operation––for example, by tilting it slightly to the right or left. The final arrangement at the bottom would be shifted to one side, but the balls still wouldn’t group exclusively in the outermost compartment. Each time a ball bounced off a peg, there would still be a risk of “error,” creating a long “tail” of compartments filled with fewer and fewer balls, extending away from the side toward which the device is tilted.
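
The arithmetic behind these random “decisions” can also be written out explicitly. The short sketch below is merely illustrative (the twelve rows of pegs are an assumed figure): it uses the binomial formula to show how unlikely a ball is to bounce the same way every time, and how heavily the probabilities favor balanced paths. Lowering or raising the bounce probability mimics the tilted board described above.

```python
# The binomial formula behind a single ball's path, assuming an
# illustrative board with 12 rows of pegs. With a fair 50:50 bounce,
# an "all one way" path is very unlikely, while balanced paths dominate;
# changing p mimics the tilted board described in the text.
from math import comb

rows = 12
p = 0.5  # probability of bouncing right; try 0.6 for a slightly tilted board

for k in (0, 6, 12):  # all-left, perfectly balanced, all-right paths
    prob = comb(rows, k) * p**k * (1 - p) ** (rows - k)
    print(f"{k:2d} rightward bounces out of {rows}: probability {prob:.4f}")
```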

The bean machine was invented in the 1870s by British researcher and polymath Francis Galton, Charles Darwin’s cousin. Among the numerous topics of Galton’s research was the issue of randomness and its consequences for the inheritance of human and animal characteristics. He constructed the bean machine (also known as the Galton board) to show that making seemingly erroneous, chaotic decisions can result in the creation of something orderly, visible only at the level of the entire human or animal population. His research laid the foundations for modern statistics, where one of the most important goals is to analyze how measured objects or individuals differ from each other and what these differences mean in the context of the entire measured group. The darker side of Galton’s achievements was his deep belief in the social value of eugenics––in fact, it was he who proposed this term to describe the policy aimed at promoting desirable human pairings, perpetuating the presence of the “right” characteristics in the next generation.

But let us return to statistical error. It always crops up when a large population of objects or individuals behaves in a more or less random way. Even if their behavior is governed by predictable rules, the sheer number of objects involved generates inevitable variability in the final result: this is statistical error. Anyone who speaks to a scientist––especially a biologist––will begin to suspect that they are actually an explorer of errors, an analyzer of chaos and unpredictability. The word “error” comes up time and again: scientists are always explaining errors, correcting results for errors, eliminating errors from tested patterns. It might seem surprising that a field which ought to be characterized by certainty and precision is so full of mistakes. But the truth is, science is the art of error navigation, and scientists are essentially qualified specialists who have been trained in tracking and cataloging these errors. The philosophy of constructively sifting through the ubiquitous randomness and chaos actually paved the way for one of the fastest-developing fields of our time: metascience. But one thing at a time.

Hasty Conclusions

As with the Galton board, where the process governing the behavior of the balls is unknown until a whole legion of them has passed through the device, there is little that can be said in science about the hidden truth that governs the entire reality under the microscope. One can only feel one’s way, making individual, imperfect measurements (conducting a carefully planned experiment, for example), and then weighing the likelihood that the outcome was the result of an error against the likelihood that it describes a genuine property. If only four balls are passed through the bean machine, there is a high probability that they will fall into four different compartments. Moreover, the chaotic nature of this process may well keep them from landing in the four middle compartments. Perhaps they will land somewhere one third of the way across the device, to the left of the center. Does this mean that the process that determines the bouncing of the balls is not “fair”––that it doesn’t follow the 50:50 rule, randomly directing the balls to the right or left, but favors the left-hand side? Not necessarily. If a scientist were to draw this conclusion, they would be making a big mistake. This mistake is so important that it even has a name: a “type I” error. It might manifest itself in the overly hasty announcement of a discovery (“Galton’s board works asymmetrically!”), when the reality is completely different. This mistake would have significant consequences, because not only would the result of the experiment be categorized prematurely as evidence for the way a process works, but it would also be based on a pathetically small sample of just four balls.
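
How easily could such a type I error be committed with only four balls? A quick simulation (again only a sketch, with an arbitrary twelve-row board and an arbitrary threshold for what counts as “suspiciously far to the left”) suggests that a perfectly fair device produces this kind of misleading result surprisingly often.

```python
# How often does a perfectly fair board "look" left-leaning when only four
# balls are dropped? The board size and the threshold for what counts as
# suspicious are arbitrary choices made purely for illustration.
import random

def final_compartment(rows: int = 12, p_right: float = 0.5) -> int:
    """A ball's compartment = the number of times it bounced right."""
    return sum(random.random() < p_right for _ in range(rows))

rows = 12
center = rows / 2
trials = 100_000
false_alarms = 0

for _ in range(trials):
    mean_position = sum(final_compartment(rows) for _ in range(4)) / 4
    # Hasty "discovery": the four-ball average sits a full compartment left of center.
    if mean_position <= center - 1:
        false_alarms += 1

print(f"A fair board looked left-leaning in {false_alarms / trials:.1%} of trials")
```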

Would it help to pour a thousand balls through the device? Yes and no. It would enable much greater confidence in the outcome of the experiment, meaning much less error in the announcement of the scientific discovery. But error will never be eliminated completely. Even if a titanic effort were made to test a million balls, the shadow of this error would remain. The task of the researcher is to decide at what point the error is insignificant enough (and the research sample large enough) that it can be accepted, and the experiment deemed a trustworthy source of knowledge about a selected fragment of reality. So when the media is raving about a new discovery that is revolutionizing some field or other, it doesn’t mean that scientists are one hundred percent certain about the nature of the subject of that research. Usually (or perhaps very unusually?), the scientists have simply found that, in drawing these conclusions rather than others, the risk of being wrong is acceptably small. This should not induce panic––in fields concerning human health and life, the limits of acceptable error are so strict as to make such a mistake almost impossible.
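
The way this error shrinks without ever vanishing can be captured in a single, well-known relationship: the uncertainty of an estimated proportion falls roughly with the square root of the sample size. A minimal sketch, with sample sizes chosen purely for illustration:

```python
# Why more balls help but never remove error entirely: the standard error
# of an estimated bounce probability shrinks like 1/sqrt(n). The sample
# sizes below are illustrative.
from math import sqrt

p = 0.5  # true probability of a rightward bounce
for n in (4, 100, 1_000, 1_000_000):
    standard_error = sqrt(p * (1 - p) / n)
    print(f"n = {n:>9,} observed bounces -> standard error ≈ {standard_error:.4f}")
```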

Rooting Out Errors

Even in situations where the statistical error rate is reduced to an absolute minimum, error still lurks in the data––except this time it is expected, even desired. This type of error is familiar in biology: basically, everything studied in biology can fall into the category of “error.” But in this context, it doesn’t mean a “mistake.” Living organisms are constantly “erring,” deviating from some hypothetical norm. In evolutionary biology, this is called variability; it is one of the key measures by which to assess whether a given organism or its characteristics are suitable for research. Without variability, everything would be the same, and react to external factors in the same way. Evolution would stop in the blink of an eye; populations of invariable clones would either all live endlessly boring, identical days, or completely cease to exist. This “erroneous variability” of life can be hard to explain. It involves genes that are yet to be discovered, environmental factors that are yet to be explored, and all sorts of other considerations that people may not even have thought of measuring or describing. This is where the error lies. Any revolutionary scientific discovery will be loaded with it. It is not so much about the imprecise measurement of a parameter, but about the impossibility of taking into account all the circumstances that could lead to one particular result rather than another. So are we just stumbling around blindly? Is every scientific revelation merely the ephemeral achievement of a local laboratory, a specific research group that hit the jackpot, discovering something that others (for some reason!) couldn’t find?

If we see science as a sequence of loud gunshots interrupted by long periods of silence, then yes. However, this simplified––and rather pessimistic––way of thinking raises considerable doubts. Where would general knowledge about the world lie between these individual revelations from various fields? This frustration––combined with the desire to distill pure truth from a chaotic flood of research results of varying quality––has given birth to the new, fascinating branch of science known as metascience. Empirical scientists focus on refining methods and protocols so as to minimize statistical error during experiments. Metascientists go to the next level, taking these meticulously published results and extracting their very essence, revealing and explaining details of reality that are inaccessible to scrupulous empiricists. Metascience is therefore a synthetic approach, a field that takes the available research and synthesizes it into more general patterns and discoveries.

Since the birth of the modern scientific method, a multitude of research papers have been published around the world. One could spend an entire life (and career) sifting through this ocean of observations and results and still not get to the bottom of everything. But metascience isn’t just the arduous reviewing of existing research. It also allows scientists to reach a consensus and tame the inevitable––especially in biology––“error” of variability. It lifts the curse of biological experiments being so difficult to repeat with exactly the same result. Metascience can rein in this additional level of chaos, explaining it, often giving the variability a name and indicating its source. It can also expose practices that are not entirely transparent, such as publications that focus solely on revolutionary, groundbreaking, and shocking results, omitting those that are less spectacular but equally important.
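
What that consensus-building step might look like in miniature is sketched below, using a handful of invented study results and the simplest possible inverse-variance weighting; the numbers are not drawn from any real meta-analysis.

```python
# A toy version of the pooling step in a meta-analysis: several noisy study
# estimates are combined into one consensus value using inverse-variance
# (fixed-effect) weights. All the numbers are invented for illustration.
studies = [
    # (estimated effect, standard error) from hypothetical experiments
    (0.42, 0.20),
    (0.10, 0.15),
    (0.31, 0.25),
    (0.25, 0.10),
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled estimate: {pooled:.3f} ± {pooled_se:.3f}")
```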

The democratization of science, and the increasing pace at which scientific discoveries are being made, will inevitably shift the emphasis and priorities of the scientific community. More importance will be attached to the repeatability of research results and building a scientific consensus than to the newsflashes of experimental scientists––which, like fireworks, quickly fade. The metascience revolution will most likely turn out to be less spectacular than that of the Enlightenment, when the modern research method was born. But it might take science to a new level, where “error” will not only cease to be a waste product, but will become a valuable research material in itself. After all, it is variability––so readily lumped in with other sources of error––that lies at the heart of many fields of science, especially those that study us humans. Fundamentally variable, different from each other in so many ways, and yet similar and predictable.
