Argument in a high-trust environment
This post describes a pattern I tend to follow (unconsciously) when arguing, and suggests some cases where that pattern may work very poorly.
The method, in one line
I often begin an argument by making an extremely strong claim, and then trying to guide the collective mind of the arguers towards showing me why that claim is false.
A concrete example
A couple of days ago, I claimed to some of my team at work that “functional programming is more intuitive than imperative programming”. My state of mind before making this claim was: a tentative suspicion that the claim was idealistic and not necessarily true; no concrete reason why it would be false; and a preference, on ideological and tribal grounds, for it to be true.
Then someone replied to the effect that my assertion was false. (Hooray! We’re about to have a fun argument.)
There was some back-and-forth in which we pinned down more precisely what my claim and the counterclaim actually were; we ended up with something more like “someone who has never programmed before, and has never encountered the imperative programming model before, will find functional programming easier to pick up, and this is for fundamental reasons to do with how people think”.
Then I started trying to construct counterfactuals and hypotheticals describing what the world would look like if the hypothesis were true and what it would look like if it were false, and I came up with recipe books as an example of an everyday thing that people are generally expected to be able to use.
We had a bit of a discussion about whether recipe books appear to be “functional” or “procedural”, and eventually I was persuaded that they are primarily procedural. I tried to think what a recipe book would look like if it came from a civilisation of beings who naturally thought using the functional paradigm, and the result appeared to be quite different from what we see in recipe books.
So I am now decently convinced that people naturally think procedurally.
Why does this method work?
The method works in a high-trust environment, where everyone understands that everyone else is operating in good faith and can have their opinions changed by data. It’s basically a clumsy and inconsistent way of finding cruxes in the LessWrong sense: we are essentially binary-chopping through the space of hypotheses to find the minimal points on which our opinions actually differ.
Why is the method a bad one in general?
Well, it is entirely framed around an adversarial conversation, which might be mind-killing in itself! The method may well tend to get people digging their heels in.
Whenever I remember to, I’ll try to practise the double-crux method in a way that’s closer to its original formulation, because it should be possible to get the benefits without being so adversarial.
Under what specific conditions does the method perform particularly badly?
One easy place where the method fails completely is when there’s a big power imbalance among the people in the conversation, such that some people feel they can’t contradict others or point out obvious flaws. It also fails when people aren’t used to a culture of calling out statements that are false.
Appendix: what would a functional-paradigm recipe book look like?
It seems to me that a functional-paradigm recipe book would be phrased less in terms of the instructions you perform to get from one state to the next, and more in terms of descriptions of the states and the possible transitions between them. Recipe books currently appear essentially as an imperative list of instructions; a functional recipe book might appear more like a graph of the intermediate states you expect to reach during the recipe.
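To make the contrast concrete, here is a minimal sketch in Python (the helper names like sift, whisk and mix are made up and stubbed out purely so it runs; they aren’t any real library) of the same pancake recipe written both ways. The imperative version reads like today’s recipe books: a numbered sequence of instructions mutating one bowl. The functional version reads as an expression built from named intermediate states.

```python
# Stubbed kitchen operations, purely to make the sketch runnable; the names
# are illustrative only, not a real library.
def mix(bowl): bowl.append("mixed")
def pour_into_pan(bowl): return {"contents": list(bowl)}
def flip(pan): pan["flipped"] = True

def sift(ingredient): return ("sifted", ingredient)
def whisk(*ingredients): return ("whisked", ingredients)
def combine(a, b): return ("combined", a, b)
def fry(batter): return ("fried", batter)

# Imperative style, roughly how recipe books read today: a linear sequence
# of instructions, each one mutating the state of the same bowl/pan.
def pancakes_imperative():
    bowl = ["flour"]            # 1. Put the flour in a bowl.
    bowl += ["eggs", "milk"]    # 2. Add the eggs and the milk.
    mix(bowl)                   # 3. Mix well.
    pan = pour_into_pan(bowl)   # 4. Pour the batter into a hot pan.
    flip(pan)                   # 5. Flip when bubbles appear.
    return pan

# Functional style: the dish is an expression; each named intermediate state
# is a value built out of earlier values, and nothing is mutated.
def pancakes_functional():
    dry = sift("flour")
    wet = whisk("eggs", "milk")
    batter = combine(dry, wet)
    return fry(batter)

if __name__ == "__main__":
    print(pancakes_imperative())
    print(pancakes_functional())
```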
I think recipe books might be better if they took some inspiration from this! A recipe book which was more explicit about the dependency order of the steps (rather than cleaving strictly to the “linear sequence of instructions” idiom), and which described each state thoroughly (perhaps with pictures), might well be easier for a beginner to follow/understand/learn from.
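As a rough sketch of what that could look like (the data format and the small ordering helper below are my own invention for illustration, not any real recipe standard), the recipe might be stored as a dependency graph: each intermediate state records what it is built from, the step that gets you there, and what the result should look like. A linear instruction list then falls out of the graph rather than being the primary representation.

```python
# A hypothetical "graph of intermediate states" recipe format. Each state
# names the states it needs, the step that produces it, and what the result
# should look like, so a beginner can check their progress.
recipe = {
    "dry mix": {
        "needs": [],
        "step": "sift the flour with the baking powder",
        "looks like": "a fine, lump-free powder",
    },
    "wet mix": {
        "needs": [],
        "step": "whisk the eggs with the milk",
        "looks like": "a pale, slightly frothy liquid",
    },
    "batter": {
        "needs": ["dry mix", "wet mix"],
        "step": "fold the wet mix into the dry mix",
        "looks like": "a pourable batter with a few small lumps",
    },
    "pancakes": {
        "needs": ["batter"],
        "step": "fry ladlefuls in a hot pan, flipping when bubbles appear",
        "looks like": "golden brown on both sides",
    },
}

def linearise(goal, graph, done=None):
    """Flatten the dependency graph into one valid linear ordering of states."""
    done = [] if done is None else done
    for dependency in graph[goal]["needs"]:
        linearise(dependency, graph, done)
    if goal not in done:
        done.append(goal)
    return done

if __name__ == "__main__":
    for state in linearise("pancakes", recipe):
        entry = recipe[state]
        print(f"{state}: {entry['step']} (should look like {entry['looks like']})")
```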