How to Actually Change Your Mind by Big Yud

Technically the second book in a giant epic entitled "Rationality: From AI to Zombies". The author, Eliezer Yudkowsky, is a well-known rationalist and an active member/founder of lesswrong.org, the Center for Applied Rationality (CFAR), etc.

Essentially, this is a series of essays written and published as "The Sequences" around 2009. They are loosely related and cover a wide array of topics, many of them highlighting irrational modes of thought. Much of the work centers on the biases at the heart of behavioral economics, especially Kahneman-style results. But the author goes beyond that, and also offers plenty of opinion on how a rationalist should behave. At the same time, there is a tendency to be incredibly nerdy, which is alternately endearing and borderline autistic. I found myself asking the question: if one becomes a purely rational agent, isn't a computer strictly better? On the path to rationality, which aspects of humanity are worth preserving?

Here's some of the new stuff I learned; a fair amount of the book covers behavioral economics concepts that I had already read about in TF&S (Thinking, Fast and Slow).

Litanies

Aumann’s Agreement Theorem suggests that no two rationalists can agree to disagree: if they share a common prior and their posterior beliefs are common knowledge, those posteriors must be equal.
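A compact statement (my paraphrase of Aumann's 1976 result, not Yudkowsky's wording), where the two agents hold private information I₁ and I₂ and a shared prior P:

```latex
% Aumann (1976), informally: common priors plus common knowledge
% of posteriors implies equal posteriors.
\[
  q_1 = P(E \mid \mathcal{I}_1), \qquad q_2 = P(E \mid \mathcal{I}_2)
\]
\[
  (q_1, q_2) \text{ common knowledge} \;\implies\; q_1 = q_2
\]
```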

Litany of Gendlin:

What is true is already so. Owning up to it doesn't make it worse. Not being open about it doesn't make it go away. And because it's true, it is what is there to be interacted with. Anything untrue isn't there to be lived. People can stand what is true, for they are already enduring it.

Litany of Tarski:

If the box contains a diamond, I desire to believe that the box contains a diamond; If the box does not contain a diamond, I desire to believe that the box does not contain a diamond; Let me not become attached to beliefs I may not want.

Against black and white thinking

Yudkowsky is especially effective in his attacks on binary thinking. For example, on partisanship:

Politics is an extension of war by other means. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side, otherwise it's like stabbing your soldiers in the back - providing aid and comfort to the enemy. People who would be level-headed about evenhandedly weighing all sides of an issue in their professional life as scientists can suddenly turn into slogan-chanting zombies when there's a Blue or Green position on an issue.

On the tendency and fallacy in binary thought:

There is a natural tendency to treat discussion as a form of combat, an extension of war, a sport; and in sports you only need to keep track of how many points have been scored by each team. There are only two sides, and every point scored against one side is a point in favor of the other. Everyone in the audience keeps a mental running count of how many points each speaker scores against the other. At the end of the debate, the speaker who has scored more points is, obviously, the winner; so everything that speaker says must be true, and everything the loser says must be wrong.

The horns effect - all negative qualities correlate:

Stalin also believed that 2 + 2 = 4. Yet if you defend any statement made by Stalin, even “2 + 2 = 4,” people will see only that you are agreeing with Stalin and you must be on his side.

And a very nice summary of a better way of thinking:

Not all arguments reduce to mere up or down. Lady Rationality carries a notebook, wherein she writes down all the facts that aren’t on anyone’s side.

Real belief vs. belief in belief:

Roger Zelazny once distinguished between “wanting to be an author” versus “wanting to write.” Mark Twain said: “A classic is something that everyone wants to have read and no one wants to read.” Criticizing yourself from a sense of duty leaves you wanting to have investigated, so that you’ll be able to say afterward that your faith is not blind. This is not the same as wanting to investigate.

Assume your interlocutor is good

"To argue against an idea honestly, you should argue against the best arguments of the strongest advocates". This, and the closely related concept of the principle of charity aka Steelmanning (the opposite of strawmanning), which I heard from a Sam Harris interview, sent me on a long reading tangent of arguments for and against. Insightful tidbit from that last link:

First, seek to understand the actual viewpoints people you disagree with are actually advocating. Second, seek out intelligent and well-informed advocates of viewpoints you disagree with. You don’t have to make up what your opponents believe! As it happens, you have many smart opponents! Third, whenever possible, try to switch conversations from a debate focus to a collaborative truth-seeking focus.

Back to Big Yud. Some wisdom on focusing on the argument, not on the person:

Someone once said "Not all conservatives are stupid, but most stupid people are conservatives". If you cannot place yourself in a state of mind where this statement, true or false, seems completely irrelevant as a critique of conservatism, you are not ready to think rationally about politics.

A variation on the reasonable person principle (which harks back to my time at CMU).

To understand why people act the way they do, we must first realize that everyone sees themselves as behaving normally. Don’t ask what strange, mutant disposition they were born with, which directly corresponds to their surface behavior. Rather, ask what situations people see themselves as being in. [...] Realistically, most people don’t construct their life stories with themselves as the villains.

Deciding which side to argue

A great distinction between rationality and rationalization. It's very related to Haidt's position that beliefs are intuitive but their defense is rational, though Haidt makes no such distinction. I would love to hear his thoughts on it.

Rationality is not for winning debates, it is for deciding which side to join. If you’ve already decided which side to argue for, the work of rationality is done within you, whether well or poorly. But how can you, yourself, decide which side to argue?

Eliezer suggests enumerating the evidence: "Lady Rationality carries a notebook, wherein she writes down all the facts that aren’t on anyone’s side". Here's how to construct an honest ultrarational argument for a particular political candidate:

  • Gather all evidence about the different candidates
  • Make a checklist which you will use to decide which candidate is best
  • Process the checklist
  • Go to the winning candidate
  • Offer to become their campaign manager
  • Use the checklist as the campaign literature

The future is hard to predict

Herd instinct in venture capitalism:

The majority of venture capitalists at any given time are all chasing the same Revolutionary Innovation, and it’s the Revolutionary Innovation that IPO’d six months ago. This is an especially crushing observation in venture capital, because there’s a direct economic motive to not follow the herd.

And what to do about it: DFJ (a VC) has a rule that lets a passionate minority outweigh a negative majority. This also reminds me of the Tenth Man Rule.

Only two partners need to agree in order to fund any startup up to $1.5 million. And if all the partners agree that something sounds like a good idea, they won’t do it.

Movies and books have a huge effect on the human psyche. This will probably compound with more immersive storytelling media:

So far as I can tell, few movie viewers act as if they have directly observed Earth’s future. [...] But those who commit the fallacy seem to act as if they had seen the movie events occurring on some other planet; not Earth, but somewhere similar to Earth.

Predicting numbers is especially difficult:

I observe that many futuristic predictions are, likewise, best considered as attitude expressions. Take the question, “How long will it be until we have human-level AI?” The responses I’ve seen to this are all over the map.

Avoid having THE Great Idea and get granular and specific

Avoiding partisanship by focusing on the minimum viable argument reminds me of Sunstein's judicial minimalism:

But try to resist getting in those good, solid digs if you can possibly avoid it. If your topic legitimately relates to attempts to ban evolution in school curricula, then go ahead and talk about it—but don’t blame it explicitly on the whole Republican Party; some of your readers may be Republicans.

Avoid overly large uhh Thingies, and chop them up.

Cut up your Great Thingy into smaller independent ideas and treat them as independent. For instance, a Marxist would cut up Marx's Great Thingy into theories of 1) the value of labor, 2) political relations between classes, 3) wages, and 4) the ultimate political state of mankind.

Other interesting stuff

Taber and Lodge's "Motivated Skepticism in the Evaluation of Political Beliefs" describes six predictions that are very Haidt-y: a list of political thinkos driven by behavioral-economic biases.

Beliefs don't need to be completely bulletproof. But this seems to contradict the classic picture of science, where a single counterexample can topple a theory.

A probabilistic model can take a hit or two, and still survive, so long as the hits don’t keep on coming in. Yet it is widely believed, especially in the court of public opinion, that a true theory can have no failures and a false theory no successes.
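To make that concrete, here's a toy Bayesian update in Python (my sketch, not the book's; the likelihood numbers are made up): a single low-likelihood observation dents the hypothesis without killing it, but repeated hits drive the posterior toward zero.

```python
def update(prior: float, p_obs_if_true: float, p_obs_if_false: float) -> float:
    """Posterior P(H | obs) via Bayes' rule, for a binary hypothesis H."""
    numerator = prior * p_obs_if_true
    return numerator / (numerator + (1 - prior) * p_obs_if_false)

belief = 0.9  # strong prior that the theory is true

# Each observation is a "hit": the theory predicts it with probability
# 0.3, while the catch-all alternative predicts it with probability 0.8.
for hit in range(1, 6):
    belief = update(belief, p_obs_if_true=0.3, p_obs_if_false=0.8)
    print(f"after hit {hit}: P(theory) = {belief:.2f}")

# after hit 1: P(theory) = 0.77   <- took a hit, still survives
# after hit 2: P(theory) = 0.56
# after hit 5: P(theory) = 0.06   <- the hits kept on coming
```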

On the uselessness of "Deep Wisdom":

Surely the wisest of all human beings are the New Age gurus who say, “Everything is connected to everything else.” If you ever say this aloud, you should pause, so that everyone can absorb the sheer shock of this Deep Wisdom. There is a trivial mapping between a graph and its complement. A fully connected graph, with an edge between every two vertices, conveys the same amount of information as a graph with no edges at all.
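The graph point is easy to verify mechanically. A quick sketch in plain Python (my illustration, not from the book): complementation maps graphs to graphs one-to-one, so the fully connected graph and the empty graph carry exactly the same amount of information.

```python
from itertools import combinations

def complement(vertices, edges):
    """Return the complement graph: flip every possible edge."""
    all_edges = {frozenset(pair) for pair in combinations(vertices, 2)}
    return all_edges - edges

v = {1, 2, 3, 4}
full = {frozenset(pair) for pair in combinations(v, 2)}  # "everything is connected"
empty = set()                                            # "nothing is connected"

assert complement(v, full) == empty
assert complement(v, empty) == full
assert complement(v, complement(v, full)) == full  # an involution, hence a bijection

# Because the mapping is a bijection, a graph and its complement convey
# the same information; saying every edge exists pins down no more than
# saying none do.
```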

There's a distinction between Traditional Rationality and Bayesian rationality, and I worry that the Bayesian variety, to which Eliezer subscribes, is a sort of hedgehogginess: a very focused and blindered approach. But I liked the idea that you can go beyond falsification (relinquishing an opinion only when confronted with clear evidence against it) and instead shift your belief incrementally with the weight of each piece of evidence.

I suspect that a more powerful (and more difficult) method is to hold off on thinking of an answer. To suspend, draw out, that tiny moment when we can’t yet guess what our answer will be; thus giving our intelligence a longer time in which to act. Even half a minute would be an improvement over half a second.

"Make America Great Again":

A key component of a zeitgeist is whether it locates its ideals in its future or its past. Nearly all cultures before the Enlightenment believed in a Fall from Grace – that things had once been perfect in the distant past, but then catastrophe had struck, and everything had slowly run downhill since then.