Homo Deus by Yuval Noah Harari
I decided to read Homo Deus because I enjoyed Harari's previous book, Sapiens. The first 2/3 of Homo Deus is basically a more succinct rehash of that book. Despite the repetition, I got a lot out of it, mainly because I listened to Sapiens as an audiobook, so my retention was pretty bad. The last third of the book is very speculative.
Interesting things from the recap:
- In 2016, war killed 120k, crime 500k, suicide 800k, diabetes 1500k.
- Well-stated paradox of knowledge, using Marx as an example: communist revolutions never came to the Western world precisely because its leaders had read Marx.
- Also, this logic conveniently defends Homo Deus's own existence as a way of avoiding the future it describes, while absolving the author of any responsibility. Smart :)
- Big claim that in 1016, Europeans knew what 1050 would be like (whereas in 2016 we can't say the same about 2050). Not sure if this is just hindsight bias.
- Nice crisp description of intersubjectivity, better than in Sapiens.
- Claim: religion is for control, science is for power. I agree, except s/science/engineering/.
- The Vatican was once an institution for technological progress, a Silicon Valley of its day. Monasteries did contribute quite a bit to tech innovation, even recently (e.g. Mendel).
Even in the summary, Harari goes on some bizarre but interesting tangents:
- Several pages railing against lawns. They are purely status symbols and have no practical value.
- Snakes as ancestors. Dragons as wise creatures. Rainbow Serpents of Australian Aborigines. Eve as a snake. The Garden of Eden as an anti-animist fable.
- The garden of Woolsthorpe (where Newton's apple accident occurred) as a founding myth paralleling the Garden of Eden. This is where Newton had his kinematic revelation. Except rather than Newton being punished for his curiosity as Adam and Eve were, his curiosity gave us modern science.
- Why do we have consciousness when there are so many automatic processes that are handled by the body and the brain without any conscious intervention?
- Weird and tenuous analogy of gods to corporations. Corporations actually produce output, like a product or service, whereas gods are purely fictional.
- Judge systems from without: for example, a corporation thrives if it makes a lot of money, but that's not why corporations exist.
To transition from the summary to the meat of the book (speculating about the future), Harari rehashes his branches of humanism:
- liberal: equality for all, economically, artistically, etc.
- social: socialism and communism; deference to the group.
- evolutionary: belief in superiority of some over others.
My critique: this is too broad. Humanism seems to include everything non-religious, because either God is the main thing or humans are. Are there other, better alternatives? Maybe Zen Buddhism?
Harari then dives into an overly stark analysis of the weakness of liberalism, as if the only US allies during the postwar period were dictators. Not true! Most of Europe was liberal. And although the Soviet Union was critical in defeating Nazi Germany, the liberal Allies played a huge role too. Harari then goes into counterfactuals, suggesting that social humanism (communism) defeated evolutionary humanism (Nazism) in World War II, and that the Cold War didn't turn hot only because of the mutually assured destruction guaranteed by nukes.
The book really begins in Chapter 8, where Harari jumps headlong into future speculation. It's unclear to me why Harari jumps to some post humanist fantasy. Currently we are seeing the resurgence of "evolutionary humanism" in the form of authoritarian regimes.
Harari is clearly not an expert in AI and is overly optimistic about it. He claims that "computers programmed with ethical algorithms could far more easily conform to the latest rulings of the international criminal court."
On replacing service-type jobs, especially in medicine: more over-optimism. People like dealing with people, and the supposedly empathetic machine is still a long way off. Even in the case of near-perfect replication, I think acknowledging consciousness and experience on the other side goes a long way.
More reductionist thinking: biology is just an algorithm, and we can do algorithms. Yes it is, but the devil is in the details.
He cites a Frey and Osborne study which suggests that all sorts of jobs are going away. A twenty-year horizon cannot reasonably carry such precise probabilities, and there are many problems with the specifics. A 98% chance that sports referee jobs go obsolete by 2033? Sure, we can call fouls through video. But who will prevent fights? Who will make the difficult calls? Seems like bullshit.
On self-knowledge, Harari again goes too far by claiming that your self is the sum of your biometric data. As long as we don't divulge everything to the machine, aren't we quite far from this vision? I am surprised that a deep meditator like the author believes this.
Harari lists several threats to liberalism posed by this hypothetical future:
One threat is that we will lose our agency and become agents of some networked machine. This is terrifying. We may need some anti-tech revolution to prevent this. Also, it's unclear what the scope would be; I'd like to always be able to disconnect and go into nature.
Another threat is that the rich will become increasingly intelligent, healthy, and successful, further widening the gap between the rich and the poor. I believe that if things get bad enough, the population at large will rebel and redistribute the spoils more evenly.
Here's why. In established first-world countries, the rich know how to behave. They rarely flaunt their money and mostly blend in with the rest of the world. I think this is largely a self-preservation strategy. Many of those who don't follow it aren't respected by giant swaths of the population; overly flashy people, for example, are generally considered douchebags. Trump is an aberration and widely hated. Now imagine if he were also some superhuman demigod. Would his supporters be into that? If a tiny clique of people were visibly richer and healthier, people would rebel.
I think Harari's third point is only viable in a dystopian oligarchy. And if people have become completely useless, why would some small elite group of humans still hold the reins? This scenario folds into the superintelligence one.
We use tools augmented by machine strength and machine intelligence all the time. Harari seems obsessed with the neural augmentation aspect, but is that such a game changer?
Good points that are hard to argue against: human loss of smell, attention, ability to dream. I seek to get it all back. But I wonder: were these all things that came naturally to everybody in some long forgotten utopia, or did rare individuals have to work on developing these skills for themselves while the rest of the population plowed the field or caught the deer?
- Dreaming: It's no accident that Joseph's gift for dream interpretation was so rare, and that there were commonly priest classes for interfacing with dreams.
- Smelling: yes, it seems likely that the masses of the day (hunters and farmers) needed this more long ago.
- Attention: also an elite skill, I think. We are in an age of distraction, but attention today is needed more than ever before. Attention for a hunter was focused by adrenaline. Focusing otherwise is, and probably was, rare.
The focus on technology changing our minds and wills seems premature. And by the way, this is why I think SSRIs are a slippery slope. Maybe the Google normal health index could serve as some humanistic baseline for human norms?
The last section is entirely focused on "Dataism". It sounds like he's describing information theory. I suppose any discipline can already be viewed through that lens, but is that activity likely to produce useful results? Empirical evidence from science points to no.
Harari claims that the phrase "information wants to be free" means that the information itself is the actor. This is surprising, since the phrase is not meant to be taken literally; it is used as a slogan about increasing human access to information. A cult of information worship for its own sake? I don't think this makes sense without actual value to an actual human.
Harari has so far failed to explain who actually believes in Dataism. Maybe some people in SV appear to, but IoT usually has some actual purpose, for example making things easier to use or increasing the productivity of a farm or factory. If you lose the thread, you end up with tech for tech's sake (an existing problem).
Also, Harari's stance on Instagram sharing is really cynical. Yes, sharing with others is part of it, but many people have taken photos in the past for themselves and their friends. It's clear to me that the value of human experience is inherent. Some people are overly addicted to external validation, but this too is an existing problem.
Throughout the book, Harari judiciously uses the word "may". This is good, because, man, the last third especially is just chock full of wild speculation. It strikes me that the way to read this book is as a provocation. Overall I'm left with the feeling that this was a fast follow for Harari. A lot of the solid stuff in the first 2/3 of the book is a rehash of Sapiens. The last part was stimulating, but mainly because it got me thinking about why I disagree.