At the beginning of the film “Inglourious Basterds”, Col. Hans Landa, the Nazi “Jew Hunter” and main antagonist of the story, pays a visit to a French farmer who is hiding a Jewish family in his home. In the course of their conversation, Landa mentions how most people are disgusted by rats for no apparent reason except an instinctive repulsion, while squirrels – close relatives of rats that, like their less popular cousins, can bite and transmit diseases – don’t get the same hostile reaction; in fact, people like the farmer might find them adorable and put out a bowl of milk for them. He compares this to the racial hatred of National Socialism: the whole “persecution of Jews” thing isn’t about some discernible difference between them and other races or ethnicities; it’s just a matter of course for some groups of people to be naturally repelled by some other groups of people, just as with animals.
Now I get that, in the context of the film, this isn’t meant to be a sophisticated ethical discussion on the merits and discontents of… well, genocide, but a piece of intimidating dialogue meant to strike fear into the farmer’s heart. Still, I think of this scene whenever I hear a certain type of argument that I find quite annoying, yet which keeps appearing in discussions of topics ranging from politics to technology, and is put forward even by extremely smart people in some contexts. It goes along the lines of, “Well, in case A you demand some reason to believe or do X, but in case B, which is at least superficially similar, you believe or do X without having any good reason whatsoever.” This seems to appeal shamelessly to the laziness of one’s audience, to their unwillingness to either give up an unjustified distinction between cases A and B, or find some reason why these cases are and should be distinguished. There is, of course, an inconsistency to be resolved, but not necessarily the way the proponent of what I’ll call the Hans Landa fallacy (HLF) wants us to. Now, I realize that others may have written on this topic before me, and also given this kind of fallacy a less offensive name, but I got a blog to run. Let me still emphasize that any fallacious argument may be used to justify anything from mass murder to the wrong choice of furniture for your living room, and my nomenclature is meant as a dumbass pop culture reference, not as an attempt to equate everyone making these kinds of points to a Nazi.
So let’s look at two of the most prominent examples:
- The most obvious case seems to be conservatives and religious fundamentalists who argue against gay marriage. Some of them will say something like, “Well, you claim that there is no rational reason why a secular state should let heterosexual couples marry but refuse that possibility to homosexuals, yet you would also discriminate against some people’s wishes to have a marriage, e.g. you would refuse someone the right to marry an animal, or the right to marry more than one person at the same time. Isn’t it time for you to admit that discrimination in accordance with certain traditional values is basic to the functioning of our society?” Now, usually this argument is presented in the more breathless format of, “If we allow ’dem gays to marry, why can’t I marry a cow?”, which the other side of the debate will then (not, I believe, correctly) interpret as equating homosexual love with sodomizing animals, and be greatly outraged by it. Hence I have tried to somewhat improve the framing of the argument and make it sound slightly more sophisticated than it does 90% of the time. Nevertheless, it’s still a terrible point to make: I personally believe there are good, and rather obvious, reasons why people should not be allowed to marry a cow (e.g. along the lines of animal protection, or the legal safeguards usually associated with marriage, e.g. for divorce, making absolutely no sense here) and some weaker, but possibly good enough reasons against polygamy (regarding efficiency and the huge organizational clusterfuck I imagine it would produce). None of them are applicable to gay marriage. But assuming that you can show my arguments to be invalid, yes, perhaps I should be in favour of legalizing polygamy. (And, less likely, the wish to marry a cow.)
Of course, a social conservative might turn the tables and argue that this also happens on the other side, namely whenever advocates for gay rights say something like, “You claim the purpose of marriage is producing children, but then why can the old, the infertile, or those not planning on children marry?” That might very well be an instance of HLF, but even so, I don’t see a resolution that a conservative gay-marriage opponent might like: He or she could 1) just bite the bullet and favour a ban on all marriages between people who can’t conceive children, and moreover advocate the nullification of all marriages that do not produce children after at most 10 years (or something). Then, of course, the conversation would immediately have to move on to adoption rights for gay couples. Alternatively, they could 2) argue for the complete abolition of governmental marriage, which would then become a matter of the private life of the citizens and whatever private organizations or churches they decide to join. Some of them might allow gay marriages, others most certainly wouldn’t. Or they might 3) give some good reason for the above distinction, but I don’t see one, or even the possibility of one (Bible verses don’t count). Still, I think that the other two options are perfectly legitimate and defensible, and if opponents of gay marriage want to take these positions (although I don’t see such a thing really happening), they may have a point and should be applauded for their intellectual consistency. But I also think it’s a legitimate and defensible position to believe that a legal framework for the long-term union of two people might be a useful thing to have in a given society, and if you agree with that, the social conservative position on this matter seems indefensible.
- Now we get to the case where I believe the type of reasoning criticized here is advocated by some extremely smart people: what might be called the “Turing test argument”, as it – as far as I know – originated in mathematician Alan Turing’s 1950 essay “Computing Machinery and Intelligence”, which marks the beginning of modern thinking on artificial intelligence. Turing argued that a machine could be seen as intelligent in the same way as a human being once it had learned to play something called the “imitation game”: an interrogator communicates with both a representative of our species and a computer program, but without knowing which is which. The machine wins the game if the interrogator is unable to discern the human from the machine by conversing with and asking questions of both. This later became known as the “Turing test”. Now, advocates of “strong artificial intelligence”, or briefly “strong AI”, believe that our brain is essentially a computing organ and that any of its functions can be simulated by a machine. And they often argue that, if an artificial intelligence can pass the Turing test, it should be seen as fully equivalent to a human being, inviting fun moral questions such as, “If I shut down my computer, is it murder?” But an obvious objection might be that we might still just be talking about a machine that has learned to imitate human intelligence, as the name of the game implies, but does not possess human consciousness and hence cannot be said to actually think. It just delivers us a good show of it. Turing’s reply:
This argument is very well expressed in Professor Jefferson’s Lister Oration for 1949, from which I quote. “Not until a machine can write a sonnet or compose a concerto because of thoughts and emotions felt, and not by the chance fall of symbols, could we agree that machine equals brain, that is, not only write it but know that it had written it. No mechanism could feel (and not merely artificially signal, an easy contrivance) pleasure at its successes, grief when its valves fuse, be warmed by flattery, be made miserable by its mistakes, be charmed by sex, be angry or depressed when it cannot get what it wants.”
This argument appears to be a denial of the validity of our test. According to the most extreme form of this view the only way by which one could be sure that a machine thinks is to be the machine and to feel oneself thinking. One could then describe these feelings to the world, but of course no one would be justified in taking any notice. Likewise according to this view the only way to know that a man thinks is to be that particular man. It is in fact the solipsist point of view. It may be the most logical view to hold but it makes communication of ideas difficult. A is liable to believe “A thinks but B does not” whilst B believes “B thinks but A does not.” Instead of arguing continually over this point it is usual to have the polite convention that everyone thinks.
(Imagine all of this being read in Benedict Cumberbatch’s voice, if you like.)
Now, to be fair, Turing actually states a little later:
I do not wish to give the impression that I think there is no mystery about consciousness. There is, for instance, something of a paradox connected with any attempt to localise it. But I do not think these mysteries necessarily need to be solved before we can answer the question with which we are concerned in this paper.
That seems like a fair enough pragmatic viewpoint, but if you read modern AI researchers, you will frequently encounter the argument, “So, you don’t believe a program that could converse like you would engage in actual thought the same way you do. But when you ascribe consciousness to your fellow human beings, you base it on exactly the same evidence on which I base my belief in a machine’s ability to think, namely just them behaving as conscious beings. You have never actually processed any conscious thoughts but your own.” But this places us exactly in the HLF situation: because I supposedly ascribe conscious thought to other people who engage in certain behaviour without good reason, I should be happy to do the same for machines without having a really rational justification for it.
The more plausible course to me would be to either a) admit I in fact do not know if other people have conscious thoughts (or even if some of them are capable of them, while others are walking robotic zombies, without me being able to ever find out who is who), and that I can only assume that they do in my everyday life as a matter of both ethical caution and me not wanting to be alone in this world, or b) – the viewpoint I would take – note that other people are much, much more similar to me than anything that would conventionally be called an AI: We have similar material composition and brain structure, and both of us resulted from a more or less “random” process of evolution, in the sense that there was no plan on the part of some engineer or other intelligent being to make us engage in acts expressing states of consciousness (leaving aside the question of God here, which would complicate matters further, although it is even briefly addressed in Turing’s essay). I do know that I possess conscious thought (or at least I think I know, some philosophers still seem to quarrel here) and that my behaviour expressing it goes together with actual mental states of mine, such as being hungry or sad or happy or… But I don’t know what precisely in the structure of my brain or the composition of my body is the reason I am a conscious being. So if I see a being with features very similar to my own, that came into the world through a very similar biological process as I did, without anything in this biological process predetermining that its outcome must be able to write sonnets, tell me it is sad, go to a party, etc. – in short, another homo sapiens – I think I am much more justified in assuming it to be a conscious being than I would be with an electronic machine.
I don’t, BTW, claim to have refuted strong AI here. It’s possible that its advocates are right and consciousness is a feature of organized matter performing calculations, and it doesn’t matter whether they are done by the “meat” in my brain or a bunch of transistors. I am just saying that, as far as my tiny mind can see, a machine being able to win the imitation game would be a long way from proving that point of view.
The Hans Landa fallacy also seems somewhat similar to the slippery slope fallacy, without being quite the same. Both commonly appear when someone can’t give a really good reason or justification for their opinion in itself. That’s why I’d also want to somewhat distinguish it from counterexamples or unpleasant thought experiments: If you perform those, the premises of someone’s argument are consistently applied to a situation where she wouldn’t be willing to embrace the consequences, while in the case of HLF, you have given your counterpart no reason to believe you have any argumentative ground to stand on, and try to distract from that by pointing out some situation where she purportedly is in the same predicament. Or, in other words, HLF is the specific case where the only premise of the opposing argument you apply to your thought experiment/counterexample is the demand to have a logical justification to believe or do something, whether it’s banning gay marriage, regarding a machine as a conscious being, or anything else. Yet this kind of premise seems much stronger than any real or fictional counterexample you can possibly hurl at it.