Starving Orphans, Come Back! — On Arguments That Don't Work
Some arguments feel compelling but are structurally broken — they would prove too much, or prove the opposite, or simply restate the conclusion in different words. This article identifies and names the most common patterns of non-working argumentation, with examples from technology debates.
Rhetoric has always distinguished between arguments that are valid and arguments that merely feel valid. The distinction matters: an argument that feels compelling but doesn't actually work can lead us to wrong conclusions, waste our time, and poison the well of productive debate.
The title of this article refers to a classic argumentative fallacy — the appeal to emotion through an irrelevant extreme case. Imagine debating whether a government programme should continue. Someone opposed to any cuts says: «But if we cut this programme, starving orphans will die!» Even if true in some extreme hypothetical, this argument is broken: it would prove that any programme, however wasteful, should never be cut, because cuts always have some negative consequence somewhere. An argument that proves too much proves nothing.
Let's take a tour through the taxonomy of non-working arguments, with particular attention to patterns that appear frequently in technology and internet discussions.
1. Arguments That Prove Too Much
The «starving orphans» argument is an example of a broader class: arguments whose logic, if accepted, would justify far more than the speaker intends.
In technology debates, this appears constantly. «We can't restrict data collection because it would slow down AI research.» This argument, if accepted, would justify virtually unlimited data collection under any circumstances, since AI research is always ongoing and always benefits from more data. The argument proves too much.
The test: ask whether the same argument structure would also justify cases the speaker would clearly reject. If it would, the argument proves too much to prove anything.
2. Motte and Bailey
This is one of the most useful patterns to know. The «motte» is a defensible but modest claim (the fortified tower in a medieval castle). The «bailey» is a bold, interesting, and controversial claim (the desirable land around the castle). The move: advance the bailey claim, and when challenged, retreat to the motte and act as though that's what you were saying all along.
Example in a technology context:
- Bailey: «AI will replace most programmers within 5 years.»
- Motte: «AI tools are making developers more productive.»
When pressed on the bold claim, the arguer retreats: «Well, obviously AI is changing development. I don't know why you're disputing that.» The modest claim is true and uncontroversial; the interesting claim was never properly defended.
3. Unfalsifiable Claims
A claim that cannot be disproved by any conceivable evidence is not a scientific or rational claim — it is an article of faith. This matters because the rational force of an argument depends on the possibility of being wrong.
«Agile will fix your team's problems.» How would you know if Agile had failed? If the team is still dysfunctional, you didn't implement it correctly. If the team succeeds, it was Agile. The claim is structured to be immune to disconfirmation.
Unfalsifiable claims are common in management, productivity, and self-help discourse. The move to watch for: any claim where failure is redefined as incorrect implementation rather than evidence against the theory.
4. Circular Reasoning (Petitio Principii)
The conclusion is smuggled into the premises. The argument appears to prove something but simply restates the claim in different words.
Classic example: «We should trust the official documentation because the official documentation is authoritative.» Why is it authoritative? Because we should trust official documentation.
In technology: «This framework is better because developers prefer it.» Why do they prefer it? «Because it's better.» The argument is a loop.
Circular arguments are often hard to spot when the circle is large — when many steps separate premise from conclusion, and the restatement is disguised by different vocabulary.
5. The Nirvana Fallacy (Perfect Solution Fallacy)
Rejecting a proposed solution because it's not perfect, without acknowledging that the alternative is also imperfect.
«We shouldn't use static type checking because it doesn't catch all bugs.» True — static typing doesn't catch all bugs. But the comparison is not between typed code and bug-free code; it's between typed code and untyped code. If typed code has fewer bugs, the argument against it fails even if it doesn't eliminate all bugs.
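The point can be made concrete with a minimal Python sketch (the function names are illustrative, not from any particular codebase): a static checker such as mypy rules out one class of errors before the code runs, while a logic bug sails through unflagged. Catching some bugs is the realistic comparison, not catching all of them.

```python
def average(values: list[float]) -> float:
    """Return the arithmetic mean of a non-empty list."""
    return sum(values) / len(values)

# A static checker such as mypy rejects this call before the code ever runs:
#   average("12345")  # error: incompatible type "str"

# But type checking does not catch logic bugs. This version type-checks
# cleanly, yet divides by the wrong count:
def buggy_average(values: list[float]) -> float:
    return sum(values) / (len(values) + 1)  # off-by-one in the denominator

print(average([2.0, 4.0]))        # 3.0
print(buggy_average([2.0, 4.0]))  # 2.0 — silently wrong, and no type error
```

The checker eliminated the first class of error entirely and did nothing about the second — which is exactly the trade the nirvana fallacy refuses to evaluate.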
The nirvana fallacy appears in policy debates constantly. Any real-world proposal can always be criticised for not being the perfect solution to the problem, but the relevant comparison is with the realistic alternatives.
6. Appeal to Nature
The claim that something is good because it is «natural» or bad because it is «artificial» — without any explanation of why naturalness or artificiality is morally or practically relevant.
The canonical form is «organic food is healthier because it's natural». In technology discourse the same structure appears as hostility to tools that mediate or augment human activity: «real programming is done without AI assistance». Why? What is the normative significance of naturalness here?
Note: this is not an argument against considering consequences of technologies on human skills, social structures, and so on. Those are legitimate concerns. The fallacy is specifically the move of treating «natural» as a terminal value rather than as a proxy for something more specific.
7. The Moving Goalpost
When a position is falsified, the speaker shifts the standard of evidence rather than updating their belief.
«Quantum computers won't be practically useful within 10 years.» [Ten years later, quantum computers achieve a significant milestone.] «Well, they still can't solve practical optimisation problems faster than classical computers.» [They start to do that.] «Well, they're not economically accessible yet.» And so on.
This pattern is a sign that the speaker is not actually updating on evidence but is defending a pre-formed conclusion. The tell: every time the original prediction is falsified, a new condition appears that «really» was the point all along.
8. False Dilemma
Presenting two options as exhaustive when other options exist.
«Either we collect all possible user data or our product will be completely useless.» This false binary ignores the large space of intermediate positions: collecting some data with user consent, anonymising data, using on-device processing, building less data-dependent features.
False dilemmas often disguise motivated reasoning. When someone presents only two options, ask: why exactly these two? Who benefits from restricting the option space?
9. Appeal to Consequences (for Truth Claims)
Arguing that something is true because we'd prefer it to be true, or false because the consequences of it being true are undesirable.
«AGI cannot be dangerous because if it were, we'd have to restrict AI research, and that would slow down beneficial applications.» The consequences of a proposition being true are irrelevant to whether it is actually true.
Note: appeal to consequences is a legitimate argument when the proposition in question is a normative claim (what we should do) rather than a factual claim (what is true). Confusing factual and normative claims is itself a common source of broken arguments.
10. Galaxy-Brained Reasoning
A long chain of individually plausible steps that arrives at a conclusion that is obviously wrong or harmful — yet the arguer, having walked the chain step by step, is convinced it must be correct.
This pattern is particularly relevant to AI safety, the context in which researchers named and discussed it. A sufficiently capable reasoner can construct seemingly valid arguments for almost any conclusion, so the apparent strength of an argument is not sufficient evidence that its conclusion is correct.
The defence against galaxy-brained reasoning is to hold a strong prior against conclusions that seem obviously wrong, and to be willing to distrust an argument chain as a whole rather than concluding that, because you can't find the flaw, the conclusion must be right.
Why This Matters
The goal of identifying these patterns is not to «win» arguments by pointing at fallacies. It is to improve the quality of our own thinking. All of these patterns are traps that we fall into ourselves, not just mistakes made by others.
The starving orphans gambit is tempting because it recruits our genuine compassion for a rhetorical purpose. The motte-and-bailey is tempting because it lets us claim credit for bold positions while retreating to safety when challenged. Circular arguments are tempting because they feel tautologically certain.
The antidote is a habit of asking: «Would this argument work against a case I'd want to reject? Does it prove too much? Am I using 'true' where I mean 'good' or vice versa? If I'm wrong, what would that look like?»
These are not natural cognitive habits. They require practice. But they are among the most valuable intellectual skills available — especially in an information environment where broken arguments spread faster than corrections.