The CBS sitcom “The Big Bang Theory” is, among other things, particularly remarkable for its many references to physics, science, the “geek culture” it portrays, and even subjects like history or philosophy, the first scientific allusion of course already being its very title. So I thought it might be fun to research some of them and explain them here. I tried not to assume too much prior scientific knowledge beyond basic arithmetic, not even simple algebra. Perhaps that also means I will have to ask better-informed readers for their patience. This is intended to be the first of several parts (probably three or four). Here it goes:

**1. Free fall and basic classical mechanics**

Let’s begin with a scene from “The Gorilla Experiment” (Season 3, Episode 10): Penny, the only non-scientist main character of the show, wants to surprise her physicist boyfriend Leonard by trying to understand what he’s working on. She therefore asks his string theorist roommate Sheldon to tutor her in physics, but is quickly lost:

Sheldon: Now, remember, Newton realized Aristotle was wrong, and that force was not necessary to maintain motion, so let’s plug in our 9.8 meters per second squared as a, and we get force – earth gravity – equals mass times 9.8 meters per second per second. So, we can see that m x a equals m x g, and what do we know from this?

Penny: We know that… Newton was a really smart cookie… Oh! Is that where Fig Newtons come from?

Sheldon: No. Fig Newtons are named after a small town in Massachusetts… No don’t write that down! Now, if m x a equals m x g, what does that imply?

Penny: I don’t know.

Sheldon: How can you not know, I just told you! […]

In the 17th century, Galileo Galilei and Isaac Newton founded classical mechanics, which is concerned with the movement of objects. This is generally seen as the birth of modern science in general and physics in particular, and some of it is already taught to middle school kids (most of whom, of course, forget all about it), so it makes sense to start a physics course there. Or *would* make sense, as Sheldon begins his tutoring with the ancient Greeks. Nevertheless, the physics of antiquity as found e.g. in the writings of Aristotle provides us with some prominent spokespeople for prejudices and wrong intuitions a beginner in modern physics might share, so that’s also not necessarily a bad idea. One thing that both Aristotle and even some people today believe is that for an object to maintain motion with a constant speed and in a constant direction, a force must continually act upon it. This seems to be somewhat confirmed by everyday experience: If a ball rolls over the floor, it gets slower and slower, until it stops at some point. You may wonder how the ball manages to move at all once it has lost contact with your feet, but the Aristotelian explanation, or rather, what I was once told was the Aristotelian explanation and have never bothered to actually check, is that there is air moving behind the ball that continues to propel it for a while.

Anyway, Newton realized this was wrong: If you set something in motion in the vacuum of outer space, it *would* continue to move in a straight line, with constant speed, for all eternity. In fact, if it was a spaceship and you were in it, there would be no way to experimentally prove that you are the one moving, rather than the rest of the universe moving in the opposite direction while your ship is standing still. Instead, the role **force** plays in the universe is to *change* the motion of an object, either by changing its direction or its speed. Physically, both of these are called **acceleration**, even though in everyday language, the term seems mostly reserved for changes of speed only. The reason the ball eventually comes to rest is that the ground it moves on provides **friction** as a counterforce, slowing it down until it has zero velocity. The resistance a given object puts up against acceleration by a force is measured by its **mass**. Hence Newton’s formula that F = m x a – force equals mass times acceleration.

The next thing to consider here is that the **weight** of an object depends on its mass. That can be seen from everyday experience: A ball is lighter than an elephant, and it is easier to accelerate the ball to some speed you want it to have than it would be for the elephant – one can be done with the smallest expenditure of muscle force, the other not so much. In fact, the weight of an object is *proportional* to its mass, meaning that the weight can be calculated as “mass multiplied by some constant number”. The constant number is known as “g” and varies slightly depending on where on earth you are (it is a little smaller at the equator than at the poles, and decreases with altitude), but, as Sheldon states, it’s roughly 9.8 meters per second squared (we will get to what the unit means in a moment) everywhere on the earth’s surface. Hence, the weight of an object is m x g. (As a sidenote, what most people call their weight actually *is* their mass, but since the two depend on each other, it doesn’t matter much in your everyday life, at least as long as you remain on earth and the factor g stays roughly the same. On the other hand, if you went to the moon, the factor between mass and weight and hence your weight would change, while your mass itself would still be the same.) But “weight” is nothing but an expression for “the force that pulls something down on earth”, and when there is no other force to counter that something’s weight (such as the material stability of a platform you’re standing on), we get the process commonly known as “falling”. Therefore, the m x g must in this case be equal to the m x a which gives the force by Newton’s law (more usually called “Newton’s second axiom”): m x a = m x g. Since mass times “something” is equal to mass times “something else”, “something” must equal “something else”, so the acceleration a of a falling object is equal to g, i.e. 9.8 meters per second squared. The unit “meters per second squared” is the same as “meters per second per second”, i.e. if you experience an acceleration of 9.8 meters per second squared, you get 9.8 meters per second faster every single second that passes.
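To make the unit tangible, here is a small Python sketch (my own illustration, not anything from the show) that tabulates the speed of a freely falling object second by second:

```python
# Free fall, ignoring air resistance: velocity grows by g every second.
g = 9.8  # acceleration due to earth's gravity, in meters per second squared

for t in range(5):  # t seconds after the object is dropped
    v = g * t       # velocity = acceleration x time
    print(f"after {t} s: falling at {v:.1f} meters per second")
```

Each printed line is exactly 9.8 meters per second faster than the one before it – and that is all “meters per second squared” means.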

The interesting thing here is that this overthrows another intuition of both the ancient Greeks and many modern-day people: An object falling to earth gets faster and faster, but the acceleration is the same, no matter the exact mass or weight it possesses. That means, contrary to what one might have thought, *heavier objects need exactly as long to fall down from the same height as lighter ones*. At least that is true as long as other forces don’t play a significant role – e.g. air resistance of course makes a feather fall slower than a stone, even though both would be equally fast if there was no air on earth. Still, according to a famous (though possibly apocryphal) story, Galileo Galilei demonstrated our new insight by dropping a cannonball and a wooden ball from the Leaning Tower of Pisa: They hit the ground at the exact same moment and provided one of the first experimental confirmations of classical physics.
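Sheldon’s little derivation can also be handed to a computer. In this sketch (again just an illustration of the argument, with function and variable names of my own choosing), the mass cancels out exactly as it does in the algebra:

```python
g = 9.8  # meters per second squared

def fall_acceleration(mass_kg):
    """a = F / m, where the force F is the object's weight m x g."""
    force = mass_kg * g       # weight: F = m x g
    return force / mass_kg    # Newton's second axiom: a = F / m

# A heavy cannonball and a light wooden ball accelerate identically:
print(fall_acceleration(10.0))  # 9.8
print(fall_acceleration(0.5))   # 9.8
```

Whatever mass you feed in, the m in the numerator and the m in the denominator cancel, so the result is always g.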

**2. Abelian groups**

In Season 6, Episode 3, “The Higgs Boson Observation”, Sheldon receives a package from his mother containing all the scientific (and potty training) diaries he kept in his childhood. His plan to comb these journals for some Nobel prize worthy discovery eventually leads to him hiring a research assistant, but first, Penny asks if she could help with that. In response, Sheldon turns on his trademark condescension:

Sheldon: Really? You can assess the quality of my work? Okay, uhm… here! I wrote this when I was five years old.

Penny: “A proof that algebraic topology can never have a non-selfcontradictory set of Abelian groups.” (Pause, sarcastically:) I’m just a blonde monkey to you, aren’t I?

Sheldon: You said it, not me.

The idea of Sheldon having kept journals on mathematical discoveries since he could barely use a potty might be inspired by the 19th century German mathematician Carl Friedrich Gauss, who famously kept a diary of his mathematical discoveries (though he only began it as a teenager, not a toddler) and who, alongside Archimedes, Isaac Newton and Leonhard Euler, is one of the usual candidates people name for the title of “greatest mathematician who ever lived”. Aside from that, for the purposes of this article, there is a problem with this dialogue: The title of Sheldon’s work is mathematically meaningless. That’s rare for “The Big Bang Theory”, where usually every scientific reference corresponds to something in the real world of physics, mathematics, etc., and even if it is a tad sloppy (we might come to some examples in later installments of this series), it does at least make *some* sense.

Nevertheless, every single one of the expressions “algebraic topology”, “non-selfcontradictory” and “Abelian groups” does have a meaning: Algebraic topology is a mathematical subfield, not some kind of object mathematicians study, which is the reason why Sheldon’s title is senseless. (It should be noted, however, that a **topology** *is* a well-defined kind of mathematical object, as well as the name of the mathematical field that studies said objects.) Entire graduate level textbooks have been written about the subject, and neither the space of a blog nor my knowledge of it is sufficient to tell you much of interest about algebraic topology, so I will just point to Wikipedia. “Non-selfcontradictory” is pretty, uhm… self-explanatory, so only one thing remains to further dissect:

To understand what an **Abelian group** is, think back to your childhood maths lessons: You most likely learned how to add two numbers during that time, such as 4+5=9 or 84+9=93. But at some point, you must also have been taught that addition obeys certain **arithmetical laws**, even if you don’t remember that term: First of all, it is **commutative**, i.e. the order in which you add two numbers does not matter: 4+5=9, and so does 5+4. 84+9=93, and so does 9+84. And so on. Secondly, it is **associative**. If you want to add three numbers rather than two, e.g. 4+5+84, you may interpret this as first adding the first two numbers 4+5=9 and then adding the third number, 84, to the result and get 93. This is commonly denoted as (4+5)+84, putting the operation that is done first into brackets. Or you may add the last two numbers first, obtaining 5+84=89, and then add the result to 4, getting to 4+89=93, a strategy that would be denoted as 4+(5+84). As you see, the result is the same, and that is what is meant by associative. (Of course, combining this with commutativity, you get that you can actually add the numbers in any order you like and get the same result.) Finally, consider the integers, i.e. the natural numbers that we get by counting, 0, 1, 2, 3, 4, … combined with the negative numbers -1, -2, -3, -4, … Out of all these numbers, 0 stands out for its property of being a **neutral element** of addition, meaning that if you take any integer and add 0 to it, you get the same integer. Moreover, if you take an integer, such as 4 or -1, there is an **inverse element** that you can add to it and get 0, i.e. the neutral element. E.g., if you add -4 to 4, you get 0, and if you add 1 to -1, you get 0.
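If you like, you can let a computer spot-check these laws. The following Python snippet (a brute-force check on a finite sample of integers, of course not a proof) verifies all four properties:

```python
# Spot-check the arithmetical laws of integer addition on a finite sample.
sample = range(-20, 21)

for a in sample:
    assert a + 0 == a            # 0 is the neutral element
    assert a + (-a) == 0         # -a is the inverse element of a
    for b in sample:
        assert a + b == b + a    # commutativity
        for c in (-7, 0, 13):
            assert (a + b) + c == a + (b + c)  # associativity

print("all four laws hold on the sample")
```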

Now, if mathematicians encounter something that has some structure, like the above properties, one thing they might do is abstract from whatever concrete context they have found it in and define the structure itself as a new mathematical object, providing a kind of “logical template” to investigate an infinity of similar things. To see what that means, let’s recapitulate what we know about the integer numbers: We have an operation called “addition” that takes two of them (like 4 and 5) as “input” and spits out another one of them (like 9, the result of 4+5). This kind of thing is called a **binary operation**. And we know that this operation obeys the laws of associativity and commutativity, it has 0 as a neutral element, and every integer number a has an inverse element in -a (recall from 6th grade that -(-a)=a, e.g. -(-4)=4).

Are there other examples of binary operations that obey some of these laws? Well, one that satisfies most of them is found by considering **permutations**: Imagine you are a street con artist working a shell game: Whenever it is played, three shells are being shuffled on a table, with a small coin being placed under one of them. After you are done, the victim of the fraud will make attempts to guess which of the shells the coin is under, all of which are going to be futile thanks to your sleight-of-hand skills. One day, you are bored and, to kill time, try to classify all the ways you could potentially do the shuffling. At the start of the game, the shells are laid out in a row. To keep track of which shell is which, you number the shells from left to right: S1, S2, S3 (“S” standing for “Shell”). After you are finished, they will still lie in a row, but their order will have changed, e.g. from left to right, S2 might now be the first one, S3 might lie in the middle and S1 be the last one. Since you are only interested in classifying the end result, you might describe that in terms of a wizard having turned S1 into S2, S2 into S3, and S3 into S1, as it would give you the exact same configuration. The wizard has many such spells in his arsenal, the only condition all of them have to meet is that, after he has worked his magic, all of the shells we had before must be in one and only one of the three conceivable positions again. E.g., it is not an acceptable spell to change S1 into S2, but turn both S2 and S3 into S3, as this would correspond to the configuration S2, S3, S3, which could not possibly arise from just rearranging S1, S2, S3. Now, let’s denote “turning S1 into S2” as S1 → S2, and correspondingly for all other shells, then the magic charm from above becomes S1 → S2, S2 → S3, S3 → S1. Another spell might be S1 → S1, S2 → S3, S3 → S2 (leave S1 as it is, then switch S2 and S3 by turning each of them into the other). You can easily imagine formulating these kinds of prescriptions for any number of shells, not just three. Mathematically, they correspond precisely to what is called “permutations”.

Now, what would happen if we executed the above two permutations in direct succession? First, we would get from S1, S2, S3 to S2, S3, S1 (still from left to right). Then the second permutation tells us to turn S2 into S3, S3 into S2 and leave S1 alone, so the end result would be S3, S2, S1 – which would also be the result of applying the permutation S1 → S3, S2 → S2 and S3 → S1. In other words, what we did gave us *another* permutation, and thus, we can conceive of applying two permutations in succession as a binary operation taking both of them as input and yielding a new one. This composition of permutations has virtually all the properties that addition of integers had: There is a neutral element (just leave all of your shells where they are), there are inverses (just do a given permutation “backwards”, e.g. if it sent S1 to S2, send S2 back to S1 for your inverse permutation, and so on), and it is also associative (because in the end, applying the new permutation to a given ordering of shells might still be conceived as applying the two permutations you composed it of in order – this one might be slightly harder to wrap your head around if you aren’t used to this). The one thing missing from the picture is commutativity: If you reverse the order in which we executed the two permutations at the beginning of this paragraph, you first get from S1, S2, S3 to S1, S3, S2, and then to S2, S1, S3 – a different end result from what we got before.
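The shell shuffling is easy to translate into code. In this sketch (the dictionary representation and the names are my own), a permutation is a mapping from each shell to the shell it is turned into:

```python
# Two of the wizard's spells, written as dictionaries:
cycle = {"S1": "S2", "S2": "S3", "S3": "S1"}  # S1 -> S2, S2 -> S3, S3 -> S1
swap  = {"S1": "S1", "S2": "S3", "S3": "S2"}  # leave S1, switch S2 and S3

def compose(first, second):
    """Apply `first`, then `second`; the result is again a permutation."""
    return {shell: second[first[shell]] for shell in first}

print(compose(cycle, swap))  # {'S1': 'S3', 'S2': 'S2', 'S3': 'S1'}
print(compose(swap, cycle))  # a different permutation - order matters!
```

The two printed results differ, which is exactly the failure of commutativity: composing the same two spells in the opposite order yields a different spell.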

Now we are ready to understand the term “Abelian group”: A **group** is a structure with a binary operation where the properties of associativity, a neutral element, and inverse elements are present. Both addition of integer numbers and composition of permutations are examples. From these few prerequisites, you can already prove some simple properties that every group must share, e.g. that there can only be one neutral element and that each member of a group can only have one inverse, and then later proceed to more complex stuff. An **Abelian group** is a structure that has all the properties of a group, but also possesses commutativity – like the integers with addition. Another example of an Abelian group would be the fractions greater than zero with the usual arithmetic multiplication. As stated before, as long as we are only interested in the structure that the binary operation possesses, it actually doesn’t matter what kind of objects (numbers, permutations, in other contexts stuff like symmetries, geometric transformations and matrices) we are considering, and you might as well just write down a bunch of abstract symbols like a, b, c, …, then assign the result of what happens when you apply the operation to any two of them (e.g. a composed with c is d) in a way that satisfies the properties necessary for a group.
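For the positive fractions under multiplication, Python’s standard `fractions` module lets us spot-check the Abelian group axioms with exact arithmetic (again a finite sample check, not a proof):

```python
from fractions import Fraction

# A sample of fractions greater than zero.
sample = [Fraction(p, q) for p in range(1, 6) for q in range(1, 6)]

for a in sample:
    assert a * 1 == a                          # 1 is the neutral element
    assert a * (1 / a) == 1                    # 1/a is the inverse of a
    for b in sample:
        assert a * b == b * a                  # commutativity
        for c in sample[:5]:
            assert (a * b) * c == a * (b * c)  # associativity

print("the sampled fractions satisfy all Abelian group axioms")
```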

**3. Münchhausen trilemma**

In “The Bad Fish Paradigm” (Season 2, Episode 1), Sheldon finds out that Penny, who has been dating Leonard, is insecure about her lack of formal education, so much so that she lied to him about finishing community college. Unable to keep a secret to himself, Sheldon opts for the, in his mind, second-best option: Moving out of his shared apartment with Leonard. Here is how their dialogue goes:

Sheldon: Leonard, I’m moving out.

Leonard: What do you mean, you’re moving out? Why?

Sheldon: There doesn’t have to be a reason.

Leonard: Yeah, there kinda does.

Sheldon: Not necessarily. This is a classic example of Münchhausen’s trilemma. Either the reason is predicated on a series of subreasons, leading to an infinite regression, or it tracks back to arbitrary axiomatic statements, or it’s ultimately circular, i.e. I’m moving out because I’m moving out.

Leonard: I’m still confused.

The term “Münchhausen trilemma” was coined by the German philosopher and sociologist Hans Albert in his 1968 book “Treatise on Critical Reason” (“Traktat über kritische Vernunft”). Albert was a professor at the University of Mannheim who had come under the influence of philosopher of science Karl Popper’s notion of **critical rationalism** in the 1950s.

Popper, who had closely followed the development of physics in the early 20th century, had seen the doctrines of classical mechanics (see above) partially overturned by Einstein with his theory of relativity and, even more radically, by the founders of quantum mechanics, which we might get to in a later entry in this series. Central parts of the scientific worldview that had, for several centuries, been confirmed by every single experiment were found to be wrong. In fact, some people in the late 19th century had believed that physics was an almost utterly completed endeavour and we were approaching a perfect understanding of the universe we live in. While strictly speaking, the new developments meant classical physics was a wrong theory, scientists and engineers continue to use it to the present day for physical calculations, ranging from construction jobs to astronautics. That is because it is *approximately* true in most of the realm of our everyday experience, where speeds don’t get close to the speed of light, masses are very large compared to those of the elementary particles our world is composed of, but gravity does not get *too* large (see the film “Interstellar” for further reference on the last point).

This led Popper to formulate a theory of how science should proceed, not by attempting to prove that certain theories are correct, but by scientists using their imagination to essentially *guess* audacious theories and then attempting to **falsify** them, i.e. using experiments and further theoretical work to find out what, if anything, is wrong with them, using the results to modify them into better theories that give a more accurate description of reality, attempting to falsify the new theories again to improve them even more, and so on, and so on. This way, Popper argued, while we would never arrive at an absolutely certain truth, we *could* be certain that, as theory building and critical examination of the theories go hand in hand, we get closer and closer to it. Rather than attaining absolutely “true” scientific knowledge, we thus have to go for “truth-similarity” (Popper’s own term: “verisimilitude”). By Popper’s interpretation, the Greek philosophers Socrates and Xenophanes already held more or less the same view.

What does this have to do with the Münchhausen trilemma? Well, it is the starting point for Albert’s argument in favour of critical rationalism. The more traditional approach to science, going back to ancient Greek thinkers like Aristotle, demands a reason for every scientific claim that forces every rational person to accept it as true. Afterwards, all doubts about the claim’s validity have been erased and it becomes absolute knowledge. This is called the *principle of sufficient reason* (well, actually, there are several versions and formulations of this principle, but this is the one Albert refers to in the aforementioned book). But that immediately leads to a problem: What if someone is determined to ask “Why?” again and again, much like a three-year-old would? What if this person demands to hear a reason why your scientific reason is true, and then a reason for the reason for the reason, and so on? This leads to three alternatives:

- The questioning procedure does, indeed, go on forever, and you have to answer infinitely many “Why?” questions. This is called *infinite regress* and does not seem like a very satisfying state of affairs.
- You enter a *logical circle* where you have some statement that you take to be justified by another statement, but when you ask for its justification and the justification for any further reason that follows it, you ultimately get back to the first statement, leaving the entire construction hanging in the air. E.g., you may believe, “Pandas are God-like creatures.”, and ground that on the fact that “The great prophet Pandatagoras told us so, and he is absolutely trustworthy.”. When asked to give a reason to believe that, you reply that, “The great prophet Pandatagoras was sent to us by the giant pandas to spread their gospel, and a person chosen by them is absolutely trustworthy.” But why, proceeds your insistent questioner, is such a person absolutely trustworthy? “Because pandas are God-like creatures.”, you might reply, and you have now gone full (logical) circle. Obviously, that does not seem acceptable either.
- The only remaining option is to *dogmatize* some statements as “obviously true” and “beyond questioning”, and then build all of your logical reasoning on these, as Sheldon puts it, “arbitrary axiomatic statements”.

The last option was, according to Albert, indeed how the Münchhausen trilemma (so named after the 18th century German Baron Münchhausen, who claimed to have once pulled himself out of a swamp by his own hair) has traditionally been resolved: You can just “see” the truth of certain claims, which he calls the “revelation model of knowledge” (my translation, I am not familiar with the English edition of his book). The most obvious example would be religions which claim that the truths in their holy scriptures are plain for everyone to see, thanks to the benevolence of some higher being that chose to reveal them to us. But that immediately leaves us with the problem of deciding on what grounds we believe that some particular piece of writing is or isn’t sent to us by some higher power, not to mention the question of its correct interpretation.

But Albert also counts rationalism (which tries to comprehend the world purely by rational thinking) and empiricism (which tries to ultimately ground all human knowledge in sense perception), in the form in which 17th and 18th century philosophers advocated them, as variants of the “revelation model”: Rationalists like René Descartes claimed that some truths could be so clearly and evidently comprehended by the human mind that no room for doubt was left. The counterargument Albert gives is that there are many examples in the history of science where this hasn’t worked out. E.g., Aristotle and his disciples might have claimed that the premises of their physics were “immediately evident to be true”, but they were still shown to be wrong (see above). And if we have even one example where the feeling of something being “obviously correct” is wrong, it might be argued it can never again serve as a basis of absolute certainty about anything. Empiricists like Francis Bacon, on the other hand, believed you can only trust your sensory observations and then generalize them to more and more universal natural laws (e.g. you proceed from, “I dropped this stone and it fell down. And then I did it again. And again. And again.”, to, “This stone always falls down when I drop it.”, to, “All stones fall down when I drop them.”, to, “Every object heavier than air falls down when I drop it.”). But this so-called **principle of induction**, Albert says, either has a theoretical basis in some rational argument, or it is some sort of theoretical dogma/immediately evident axiom itself, in which case you are back to where we were before, or you try to base your belief in induction itself on observations and thus on induction, which is a logical circle. Either way, you haven’t escaped the Münchhausen trilemma.

A possible solution, Albert claims, is to embrace his and Popper’s method of critical examination, as it dispenses with the principle of sufficient reason altogether and replaces it with a continued process of trying to falsify old theories and find better new ones. I am not sure if a majority of natural scientists would fully embrace critical rationalism, although it has certainly had some influence (Albert Einstein is reported to have sent a telegram to Popper saying that he “agreed about most things” with him). Neither does Albert’s argument seem entirely uncontroversial among philosophers themselves (he apparently clashed over it quite a bit with another German philosopher, Karl-Otto Apel). In any case, given that only one or two of Hans Albert’s roughly 40 books have ever been translated into English, one of the central terms of his philosophy making it onto a mainstream American sitcom is a rather impressive feat.

**Next time: Cats! Magnets! Pasta! And so much more…**