Don't be a Doomer
[DISCLAIMER: this post is most definitely NOT saying that your anxiety about your bespoke form of societal collapse is unfounded. It’s simply warning against excessive pessimism as a general social phenomenon :]
For a variety of unrelated reasons, quite a number of people in my life or whom I follow online have recently started buying into overarching pessimistic narratives. The thing that’s confusing to me as an outside observer is that the reasons for these pessimisms are so varied, and everyone has their own distinct set! Unfortunately for my friends, the human race can only go extinct once1. That said, some dominant strands I can identify are the following:
Climate - People concerned about climate change
The climate crisis is just getting started and I cannot bear to stand around—tinkering with microcontrollers and Python programs—while my future burns away.
I’m frustrated. […] I wish I could accept what everyone else seems to have already accepted—that our climate catastrophe is inevitable and we will just have to adapt. I’m tempted. However, these are things I cannot accept for accepting such things would be ignoring the truth that individuals can and do catalyze change.
Population decline - the concern that population will collapse
AI X-Risk - The possibility that rogue AI will destroy civilisation or make humanity subservient
Rise of populism - The much discussed “rise of authoritarians” in various countries (examples — just google the phrase)
Falling apart of global order - the idea that the era of Pax Americana ending will lead to multitudinous catastrophes
Vague worries about the internet rotting people’s brains / turning them into degenerates etc. (often wrapped up in concerns about fertility).
This is far from a complete list, and perhaps my list is somewhat of a Rorschach test of what I’m interested in or thinking about. If I didn’t include your personal doom scenario, sorry!
So why don’t I think you should be worried about these things? Well, you probably should! At least to some extent. In each case the data is kind of there that these things are a Big Problem™! The question is how much you should worry about them.
Implicitly or explicitly, the thought process taken by people who worry excessively about problems similar to the above seems to be “X is a problem; I can’t see a solution; therefore society is over.”
This form of “absolute” argument leading to a general vibes-based pessimism is an appealing line of thought. It forms a kind of attractor in the space of ideas — once you are fixated on a problem with no obvious solution (or no solution which you can directly affect), the world feels fundamentally broken in some way, and the actions of others not directly in service of working on this problem feel at best inexplicable and at worst malign.
At the level of reality, however, this line of thinking is unhelpful and potentially malignant. The history of pessimism is littered with examples where a supposedly impending disaster turned out to be a non-event. Moral panics come and moral panics go, and the world moves on. There’s even a Twitter account dedicated to exhibiting past moral panics focused on things which seem laughable today.
In extreme forms, pessimism can be counterproductive at the object level in several ways. The first is that its logical conclusion can spawn an almost self-pitying state of inaction. This is perhaps exemplified by the (slightly tongue-in-cheek) “death with dignity” strategy announced by one of the most prominent groups focused on AI existential risk.
A second trap inherent in doomerism is that it can lead to an almost religious reverence for the problem itself. Advocates for the problem being a problem end up taking a position which I can only characterise as saying “a mere solution will not suffice to solve this problem!” Or, even more bafflingly, they mockingly call anything proposed “solutionism” or “techno-solutionism”, implying that the proposer is hopelessly naïve and has impure motives. Maybe they are and they do, but we can just as easily turn these sorts of ad hominem arguments against the doomers themselves. Further, such reverence for a self-selected problem is risible, to the point that the problem feels like a twisted sort of religious idol, so embedded into people’s identities that it would be painful to see it solved (or turn out not to be an issue in the first place).
Furthermore, followers of a particular doom cult will often claim to completely reject the validity of other problems, or other aspects of life, in the face of their own chosen issue. One of the most interesting things to watch is when two forms of doomerism interact. While both sides agree that Everything Is Terrible And Becoming Worse, they will fight each other to the death over the reasons why we are so doomed. My favourite example of this is the fight between people who are worried about AI for social reasons (amplifying biases in society, etc.) and true AI doomers, who hate each other even more than their common enemy, the techno-accelerationists!
Tellingly, many such extremists still care about other issues in their personal lives. And of course we cannot begrudge them this: unless one is actually a terrorist on behalf of an issue, there is a kind of inevitability here. By definition, doing anything that does not further the cause of an issue is implicitly caring about things outside the value system constructed around that issue.
Furthermore, reality is huge and extravagantly complex, so the remedies to problems of the past are often not only unclear at the time, but unclear in hindsight! An interesting example of this concerns the “population crisis” itself, which used to mean the impending overpopulation of the world. Thanks to falling birth rates, this completely reversed over the subsequent decades, to the point where people now worry about a declining population. That is probably a bad thing, but to say civilisation will crumble soon also seems like a gross exaggeration.
Social dynamics amongst human networks and exponentials in technology are powerful and very hard to predict in advance. This is obviously a double-edged sword — pandemics spread much faster than a linear extrapolation would suggest, and, yes, the intelligence singularity may creep up unexpectedly. To balance these concerns, though, worries about rapid progress should be tempered by an equal and opposite optimism that solutions can emerge just as quickly: the Covid vaccines, which arrived far faster than widely predicted; or, potentially, alignment in the context of AI risk; or batteries and solar in the context of climate.
I like Tyler Cowen’s approach to thinking about the future, which prescribes a dual mandate of radical uncertainty and empiricism. We don’t know what the future will look like, but it will probably look somewhat like the past combined with things we can extrapolate from the current environment, and we have a large degree of agency over it. So while it is important to think about the problems of the future, we should work towards alleviating and, yes, solving them, tackling them with the practical mindset of problem solving rather than being paralysed by vague bad vibes.
N.B. not all the things I list are “existential risks” which would cause human extinction, but you wouldn’t be able to discern that from the alarmist valence of some of their proponents’ comments.