Many mistakes in logic occur due to problems with our premises – the information or assumptions we use as our starting point when we begin to reason. But some are down to the way we put those premises together to draw conclusions – that is, they occur because we use bad logical form.

Learn more about logical form.
Hasty Generalization
A hasty generalization is an oversimplification of an issue. It occurs when we draw conclusions that are not adequately supported by the information/reasons we have—either we jump to conclusions too quickly, or we make sweeping generalizations that overreach the evidence.
Don’t buy a foreign car. My brother had one, and he had one problem after another. The carburetor went bad and his brake fluid leaked—he ended up spending a fortune on repairs. Foreign cars are crap.1
This “argument” jumps to conclusions, making a sweeping generalization about all foreign cars on the basis of one example (or a very few examples). It’s an error of inductive reasoning. One example is never enough information to draw a responsible conclusion about a whole class or category of things. (Maybe the brother’s problems had nothing to do with the car being foreign; maybe the car had 300,000 miles on it.)

A major problem in our society is that we often make hasty generalizations not just about things, but about people.
Stereotypes are one common kind of hasty generalization: images or examples in the media—or even in our own experience—lead us to make sweeping judgments about all people who share certain characteristics, such as race, religion, age, or gender.

Next-level bad form.
Non sequitur
Hasty generalizations are errors in moving from premises to conclusions. Though they are based on evidence, they jump to conclusions before there is enough evidence to justify them. But sometimes people draw conclusions that don’t seem to have any grounding at all in the evidence. When there doesn’t seem to be any logical connection between the conclusion and the premises, we call this a non sequitur.
The term non sequitur is often used in casual conversation; it refers to a remark or idea that appears random or unrelated to its context.

In reality, this e-mail from J. Crew had nothing to do with puppies.
Non sequitur is Latin for “it does not follow”; a random mention of puppies does not seem to follow if you’re talking about a sale on preppy, overpriced – though sometimes admittedly cute – clothing.
In logic, a non sequitur is an unwarranted conclusion, one which does not follow from the information presented. It may not be completely random, but it has only a superficial relation to the reasons that supposedly support it.
John should be good at tennis. He’s in great shape.2
John may be in good shape, but it doesn’t necessarily follow that he’ll be good at tennis. He could be uncoordinated and a complete klutz.
Post Hoc Fallacy
Drawing conclusions that do not truly follow from our premises is more common than you might think. The Post Hoc Fallacy is an extremely common error in reasoning that rears its ugly head just as often in the field of scientific research as it does in everyday life.
This fallacy is often referred to by the Latin phrase post hoc, ergo propter hoc, which translates to “after this, therefore because of this.” The fallacy consists in assuming that because A happened and then B happened, A must have caused B.
Imagine that you call your apartment’s building manager and ask him to fix a broken pipe. He tells you that you’ll be responsible for the cost of replacing the pipe. After all, he reasons, it must be your fault that the pipe broke—he’d never had a problem with the plumbing before you moved in.
While it’s possible that you did, in fact, do something to break the plumbing, the simple fact that it happened after you moved in certainly isn’t enough evidence, on its own, to allow the building manager to conclude this. It may have been just coincidence or bad luck; maybe the plumbing had been poorly maintained for years, but the pipe just happened to burst now.
The above example makes the error in reasoning obvious, but the post hoc fallacy can sometimes be more subtle. Let’s say a friend observes to you that whenever he goes to bed with his contacts in, he wakes up with a terrible headache. He naturally concludes that sleeping with his contacts causes headaches. But what your friend neglects to mention is that he goes to bed without removing his contacts whenever he comes home drunk. In this case, though it’s true that sleeping in his contacts is correlated with his headaches, it turns out that there’s a hidden third factor that’s producing both of these effects: 6 Jaegerbombs.

This will not end well.
Mistaking correlation for causation – that is, assuming that because two things occur together, or one after the other, there must be a cause-effect relationship between them – is a big no-no in statistics, as well as the sciences and social sciences.
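If you like to see things with numbers, here is a minimal simulation sketch in Python of the contacts-and-headaches story. Everything in it is invented for illustration—the probabilities, the variable names, and of course the friend. By construction, drinking causes both sleeping in contacts and waking up with a headache, while the contacts cause nothing; the simulation still finds a strong correlation between contacts and headaches.

```python
import random

random.seed(42)

# Illustrative model (all numbers invented): drinking causes BOTH sleeping in
# contacts and waking up with a headache; the contacts themselves cause nothing.
nights = 10_000
contacts_nights = contacts_headaches = 0
bare_nights = bare_headaches = 0

for _ in range(nights):
    drank = random.random() < 0.3                        # ~30% of nights
    slept_in_contacts = drank or random.random() < 0.05  # rarely happens sober
    headache = drank and random.random() < 0.9           # hangover, not contacts

    if slept_in_contacts:
        contacts_nights += 1
        contacts_headaches += headache
    else:
        bare_nights += 1
        bare_headaches += headache

print(f"P(headache | slept in contacts) = {contacts_headaches / contacts_nights:.2f}")
print(f"P(headache | removed contacts)  = {bare_headaches / bare_nights:.2f}")
# Prints roughly 0.81 vs 0.00: a strong correlation, produced entirely by the
# hidden confounder, even though the contacts never cause a single headache.
```

The same trap shows up in real data analysis whenever we compare two correlated variables without checking for a confounder that drives both.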
Hidden Assumption
Non sequiturs are not connected to the premises that supposedly support them. But sometimes we draw conclusions that don’t appear to be supported by reasons because the premise or premises that help us to draw the conclusion remain unstated. These hidden premises are sometimes referred to as hidden assumptions. Often we don’t state all of our premises because we are relying on things we assume to be true: things we think are so obvious to everyone that they don’t need to be stated, or things that seem so obvious to us that we don’t even realize we are making assumptions.
The really sneaky thing about unstated premises is that we actually use them all the time. In some cases, the premises we use are so ingrained in our social norms, and so obvious to everyone, that it actually seems ridiculous to state them. Take a look at this argument:
Don’t let children play with plastic bags. They may put the bags over their heads, which would stop them from breathing.
If we look at just what is explicitly stated, this argument is incomplete. It relies on premises that are missing.
For one, it relies on our knowledge and acceptance of certain facts: that not breathing is bad, and could result in death. This is such a basic fact that it seems inconceivable that anyone might be unaware of it, or refuse to accept it as true. It seems like a pretty safe assumption.
In addition to the point of fact, there’s also a point of value that’s being taken for granted. Ask a sociopath, and s/he might say, “So the kid stops breathing. So what? Why is that a reason not to let him play with the bag?”
As you’re inching towards the exit, you could explain that pretty much everyone else would think that suffocating children is a bad thing, and should be avoided. You didn’t think you needed to explain that part. Yikes.

I assume you like Huey Lewis and the News.
In this example argument, the unstated premise won’t really cause any problems – at least, for 96% of the population it won’t. It’s hard to imagine anyone who would dispute the unstated premise that we should make saving children a priority; it’s a statement of belief/values that nearly everyone would accept, so it really does seem unnecessary to state it. Leaving “obvious” things unstated is a common way we all communicate. Having to list everything we believe to be true anytime we make a claim would get really tedious, really quickly. But arguments like this get us in the habit of not always stating our premises – and, sometimes, not noticing when we don’t.
And this is why hidden assumptions may be one of the most dangerous logical fallacies. They often represent a lack of critical thinking, wherein we don’t recognize that the beliefs we take for granted may not be shared by everyone. This can lead us to be biased or to be blind to the viewpoints of others.
Even worse, in trying to persuade us, people sometimes deliberately neglect to state an assumption that they know will be controversial, or expose them to criticism, so that they can make their argument seem stronger than it really is.
Consider these two arguments, from opposite ends of the political spectrum:
Killing is morally wrong. Therefore, abortion should be illegal.
Killing is morally wrong. Therefore, capital punishment should be illegal.
Both arguments rely on hidden assumptions. They provide reasons for their claims, but fail to state all of their premises. As the arguments are stated, there’s actually no explicit connection being made between the premise and the conclusion.
Each of these arguments starts by stating the premise that killing is morally wrong. But each also assumes that anything that is morally wrong should also be illegal. That is the unstated premise that actually produces their conclusions. But there are many who might disagree with that premise, claiming that the law is there to protect people’s rights, not impose a code of morals. By leaving that premise unstated, these arguments take the fundamental support for their claims and hide it from scrutiny.
As these examples show, hidden assumptions are problematic because, in many cases, the premises we use to build our arguments are actually themselves conclusions of some other line of reasoning. As such, they are claims that need to be supported with their own reasons.
Notice that even the stated premises of the arguments above are actually claims that killing is always morally wrong, in any circumstances. But there are many people who would not agree with this; they might say that it is justified to kill in self-defense, in a “just war”, or to serve some greater good. For the arguments above to be persuasive, one would need to further support the claim that killing is always morally wrong, or would have to provide specific reasons why killing in this instance is morally wrong.
Unexamined assumptions, whether they are hidden or not, can lead us to make illogical decisions, since we sometimes don’t realize that they are influencing our reasoning. In these cases, they can also present an obstacle to effective writing, since we will fail to make arguments that are persuasive to those who don’t share our assumptions.
Circular Reasoning
Circular reasoning occurs when one fails to adequately support his/her claims with reasons because the reasons provided are not truly independent of the claim. The claim depends on the reasons, but the reasons depend on the claim, so you end up running in a circle, like a dog chasing its tail.
Consider this argument:
You shouldn’t go swimming in the river. It has a strong undertow, which can cause even experienced swimmers to be pulled underwater and drown.3
In this argument, I’ve provided reasons for my claim that you shouldn’t go swimming in the river; these reasons are independent of the claim, and could be verified.

You are definitely doing it wrong.
But what if I can’t be bothered to come up with good reasons? I might settle for something like this:
You shouldn’t go swimming in the river. The river is a bad place to swim.
I’m really just restating the same thing in different words. The second statement doesn’t provide any additional information or justification beyond the first; it’s not a reason for the claim, just a reformulation of it.
This fallacy is sometimes referred to as begging the question, from the Latin petitio principii: instead of answering the important question involved, the argument simply assumes the answer from the start.
Simple forms of circular reasoning, like the swimming example, don’t provide reasons at all; the person making the argument simply repeats him or herself. But circular reasoning can take more complex forms:
Tough antidrug laws will reduce drug use. If these laws are implemented, recreational drug use will be criminalized. This means that anyone caught using drugs can be arrested and prosecuted. This will deter people from using drugs.
This “argument” does a good job of appearing to present reasons, but what it mostly does is describe what happens when something is made illegal. It doesn’t say why these conditions will deter drug use.
This person claims that antidrug laws will reduce drug use because laws deter us from doing things that are against the law—but the whole point is that this person is supposed to prove that the law will work! By relying on the notion that all laws work, this person has entirely sidestepped the question of whether this law will be effective; he or she assumed from the get-go that it would be effective.
Either-Or Fallacy
The Either-Or Fallacy is also sometimes referred to as a false dilemma. It’s a way of oversimplifying an issue by making it seem as if there are only two options. You may have heard the saying,
You’re either with us, or against us!
This statement is usually an attempt to gain support by forcing people who disagree into a corner and making it seem that there is no room for compromise.
False dilemmas are a common propaganda technique, used to pretend that there are only two possible outlooks on issues: conservative or liberal, Democrat or Republican, patriotic or un-American, etc. And of course, one of the alternatives is often presented so negatively as to force us to choose the other one.
But false dilemmas aren’t always foisted upon us by pundits who want us to choose sides. Sometimes our own expectations or assumptions prevent us from thinking “outside the box” and recognizing that there may be creative ways to solve a problem that have not been tried or considered. Often, the Either-Or Fallacy represents the limitations of our own ways of talking or thinking about a particular topic.
For more info on logical form and logical fallacies, check out the other posts in this series:
- Best Life Hack: Logical Reasoning
- Logical Fallacies Exposed
- Fallacies, Episode II: The Use and Abuse of Evidence
- Fallacies, Episode III: Misdirection
1. Adapted from Robert B. Donald, et al. Writing Clear Essays. 3rd ed. Prentice Hall, 1996. p. 300.
2. Adapted from Robert B. Donald, et al. Writing Clear Paragraphs. 6th ed. Prentice Hall, 1999. p. 337.
3. Adapted from Gilbert H. Muller and Harvey S. Wiener. To the Point: Reading and Writing Short Arguments. Pearson Longman, 2005. p. 14.