The practical fallacy
Students are not usually terribly fond of theory, especially when it comes to language teaching methodology: “I don’t need theory, I need practice”, they will tell me. If, then, I give in to their wishes and discuss actual classroom procedure, they will come up with sentences like these: “Of course, you need a sound grammatical basis before you can start communicating” or “The teacher should only correct the most important mistakes”, little realizing that this, of course, is theory, i.e. a general idea, a principle which is held to be true. The fact that these sentences may not be very profound, or may not be true, does not matter: they are still theories. In other words, whatever you do in practical terms, there will always be an underlying principle, an idea, a theory, even if you could not explain this principle yourself, or are not even aware that there is one.
The causal fallacy
A rather wacky version of this, not to be taken too seriously on the surface, but otherwise quite “true”, runs like this: “After all, it is the stork who brings the little babies. There used to be many more storks, and there used to be many more babies. Now that the number of storks has gone down, the number of babies has also gone down. Obvious, isn’t it?” Here there are two phenomena, the decreasing number of storks and the decreasing number of babies, which are related, but only in so far as they emerge together, not in any other way: one has not brought about the other. In technical terms, they are correlates, but they are not causally related. Although this sounds terribly obvious, it is a fallacy that is often encountered in academic research. If, in language teaching, for example, you have two groups of students, one of which, over a limited period of time, is taught in an innovative way while the other is taught in a traditional way, and the results of the innovative group are better (or worse) than those of the traditional group, this is often claimed to be the result of the method, i.e. the claim is made that the students have improved because of the method. However, it may be that the group has improved quite independently of the method, or even in spite of the method, i.e. the two may just correlate.
The realistic fallacy
Students (and other people), when talking about literature, often use ‘realistic’ as if it were a synonym of ‘good’. However, the Odyssey isn’t realistic, nor is Gulliver’s Travels, and the same can be said of Harry Potter and most comic strips. Whether these are good or bad is quite independent of the question as to whether they are (or aren’t) realistic. When you come to think of it, most literature is not realistic, nor does it pretend to be.
The circular fallacy
The argument approximately runs like this: “Machines will never be like humans, because they will never be able to speak like humans, and machines will never be able to speak like humans, because they will never be like humans.” This, of course, is a simplified version, and is not likely to appear in just this form in any text, but it is often the underlying train of thought, i.e. the proposition to be proved is assumed to be true at some point in the argument, so that the conclusion merely follows from the premise. This fallacy is more difficult to avoid than it seems, and eminent scholars have fallen into the trap, or at least have been accused of having fallen into the trap. Darwin, for example, was accused of circularity of argument in his theory of natural selection: “Who survives? – The fittest! – How do we know he’s the fittest? – Because he survives!”
The usefulness fallacy
In the current debate about the reform of the university system, it is often demanded that universities teach their students something “useful”. Students can hardly be blamed for sharing this utilitarian view of the world. However, it is not so easy to decide what is useful and what isn’t, and even if it were, it would be next to impossible to know what may be useful for a particular student at a particular time in his future career. These are rather “practical” problems, but the concept of usefulness can also be criticised for much more fundamental reasons: you often just do not know whether something you are doing might turn out to be useful. Take the little yellow stickers most of us use. They were developed because something went wrong when a new type of glue was to be created. The new glue was so bad that it would have been thrown into the dustbin if it hadn’t been for the fact that one of the scientists sang in a choir and all of a sudden realized that the imperfect glue would be just ideal for stickers to mark the different hymns to be sung on different occasions in his prayer book. The useful little product, which nobody had wanted, was the result of coincidence and failure.
Just in case yellow stickers do not convince you, here are some more “serious” cases: Galileo discovered the pendulum laws after observing a chandelier in the Cathedral of Pisa, Mendel discovered the laws of genetics observing the peas in his garden, Roentgen discovered x-rays when leaving his laboratory in the twilight, Viagra was developed as a cure for heart diseases, and when Rutherford succeeded in splitting the atom, he actually denied that any practical benefit could be derived from it. The prime reason for attempting to split the atom was to find out what was inside.
The scientific fallacy
The word “scientific” has almost magical qualities in modern society, and when we are told that something has been “scientifically proved”, this has the same effect that a charm or a spell used to have in older times. The problem is, of course, that scientists may be wrong, and that sometimes one thing which has been “scientifically proved” may differ from or even contradict another thing that has been “scientifically proved”, and we do not know whether the former or the latter or both or neither is true.