Feb 23, 2025
I declare the Worst Argument In The World to be this: “X is in a category whose archetypal member gives us a certain emotional reaction. Therefore, we should apply that emotional reaction to X, even though it is not a central category member.”
Scott Alexander gives the example:
“Taxation is theft!” True if you define theft as “taking someone else’s money regardless of their consent”, but though the archetypal case of theft (breaking into someone’s house and stealing their jewels) has nothing to recommend it, taxation (arguably) does. In the archetypal case, theft is both unjust and socially detrimental. Taxation keeps the first disadvantage, but arguably subverts the second disadvantage if you believe being able to fund a government has greater social value than leaving money in the hands of those who earned it. The question then hinges on the relative importance of these disadvantages. Therefore, you can’t dismiss taxation without a second thought just because you have a natural disgust reaction to theft in general. You would also have to prove that the supposed benefits of this form of theft don’t outweigh the costs.
I agree with Scott: I hate this argument, I hear normies making it constantly, and I think they should stop. What I want to do is connect it with the idea of fuzzy/plausible reasoning, as contrasted with deductive reasoning. I like the way the great Bayesian E. T. Jaynes introduces the idea:
“The actual science of logic is conversant at present only with things either certain, impossible, or entirely doubtful, none of which (fortunately) we have to reason on. Therefore the true logic for this world is the calculus of Probabilities, which takes account of the magnitude of the probability which is, or ought to be, in a reasonable man’s mind.” — James Clerk Maxwell
Suppose some dark night a policeman walks down a street, apparently deserted; but suddenly he hears a burglar alarm, looks across the street, and sees a jewelry store with a broken window. Then a gentleman wearing a mask comes crawling out through the broken window, carrying a bag which turns out to be full of expensive jewelry. The policeman doesn’t hesitate at all in deciding that this gentleman is dishonest. But by what reasoning process does he arrive at this conclusion? Let us first take a leisurely look at the general nature of such problems.
A moment’s thought makes it clear that our policeman’s conclusion was not a logical deduction from the evidence; for there may have been a perfectly innocent explanation for everything. It might be, for example, that this gentleman was the owner of the jewelry store and he was coming home from a masquerade party, and didn’t have the key with him. But just as he walked by his store a passing truck threw a stone through the window; and he was only protecting his own property.
Now while the policeman’s reasoning process was not logical deduction, we will grant that it had a certain degree of validity. The evidence did not make the gentleman’s dishonesty certain, but it did make it extremely plausible. This is an example of a kind of reasoning in which we have all become more or less proficient, necessarily, long before studying mathematical theories. We are hardly able to get through one waking hour without facing some situation (e.g. will it rain or won’t it?) where we do not have enough information to permit deductive reasoning; but still we must decide immediately what to do.
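The policeman's plausible reasoning can be sketched as a Bayes'-rule update. Every probability below is made up purely for illustration; the point is only that overwhelming-but-not-certain evidence comes out as a posterior close to, but short of, 1:

```python
# A toy Bayes'-rule version of Jaynes's policeman example.
# All numbers are invented for illustration.

prior_dishonest = 0.001           # P(random passerby is a burglar)
p_scene_if_dishonest = 0.5        # P(masked man, broken window, bag of jewels | burglar)
p_scene_if_honest = 1e-6          # P(same scene | innocent explanation, e.g. the owner)

# Total probability of observing the scene at all
p_scene = (p_scene_if_dishonest * prior_dishonest
           + p_scene_if_honest * (1 - prior_dishonest))

# Bayes' rule: P(dishonest | scene)
posterior = p_scene_if_dishonest * prior_dishonest / p_scene
print(f"P(dishonest | scene) = {posterior:.4f}")
```

The posterior comes out around 0.998: the gentleman's dishonesty is "extremely plausible" but not a logical certainty, which is exactly the gap between deduction and plausible reasoning.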
I think people making the Worst Argument are trying to do deductive reasoning: theft is bad, taxation matches the formal definition of theft, therefore taxation is bad. This is confused because the right way to reason about things like this is probabilistic, and when you try to use neat definitions to do deductive reasoning you're just playing useless word games.
Perhaps it’s somewhat controversial to claim that we should be reasoning about morality probabilistically. In fact I’m not sure I believe this myself! It feels awkward to me to claim that there’s an “80% probability” that taxation is moral. It’s more that “taxation is moral in some ways, in others not quite, in a way which comes out to it being mostly moral.” The underlying process is more like some data clustering thing, where my brain’s neural-network black-box compression scheme places taxation close to the cluster of “moral” things in the-space-of-possible-things across most dimensions, but a bit off in some other dimensions. But also, I feel like a mathematically inclined reader would be able to convince me that this “fuzzy” reasoning scheme, where morality is “kind of” true, is actually just boring old probability theory, if we want the scheme to fulfill some simple desiderata. This is what E. T. Jaynes does in the cited book: he imagines we’re programming a robot to reason about the world, gives some simple properties that we obviously want the robot’s reasoning scheme to have, and then shows that the robot has to be reasoning using probability theory as we know it, with Bayes’ rule and all.
Things are a bit less awkward if you believe in some kind of objective morality, although I will note that I think this is silly. Then there’s some true answer to the question of “is taxation moral” and I don’t get this intuitive ick from thinking of P(taxation is moral). There’s still a slight awkwardness in that it’s a bit like asking, “what’s the probability of P = NP?”, but a good Bayesian thinks of probabilities in terms of degrees of certainty so it’s probably fine. Though actually, I’m not so sure about this either - don’t we then have to make sense of probabilities conditional on something which is logically false? How do we condition on P = NP if it’s not true?
But ugh, this is also the problem with hypothetical scenarios in general if you’re being really autistic about it: if the universe is deterministic, it makes no sense to think about the probability that you’re late to work tomorrow conditional on it raining, because if it won’t in fact rain, then it raining tomorrow is impossible by the laws of physics and the conditional is undefined. I’m sure someone has thought about this and “solved” it. Maybe information theory comes to the rescue somehow?
In any case, dear reader, I apologize for my philosophical confusion and I hope you found the link between the Worst Argument and plausible reasoning interesting.