Not for the faint of art.
This BBC article (whose headline resembles a Cracked headline) is a few years old now, but all that means is they might be missing a few recent ones.

In recent history, a few individuals have made decisions that could, in theory, have unleashed killer aliens or set Earth's atmosphere on fire. What can they tell us about attitudes to the existential risks we face today?

Already, I have a bad feeling about this. Sounds like an Agenda, which I don't expect from the BBC.

In the late 1960s, Nasa faced a decision that could have shaped the fate of our species. Following the Apollo 11 Moon landings, the three astronauts were waiting to be picked up inside their capsule floating in the Pacific Ocean – and they were hot and uncomfortable. Nasa officials decided to make things more pleasant for their three national heroes. The downside? There was a small possibility of unleashing deadly alien microbes on Earth.

No. No, there wasn't. They thought there might be such a possibility, so I understand the caution, but in hindsight, there was never any danger from that.

A couple of decades beforehand, a group of scientists and military officials stood at a similar turning point. As they waited to watch the first atomic weapon test, they were aware of a potentially catastrophic outcome. There was a chance that their experiments might accidentally ignite the atmosphere and destroy all life on the planet.

Again: no, there wasn't. They thought there might be such a possibility, and they went on with the tests anyway, which says more about humans than I care to admit.

This sort of writing demonstrates some sort of fallacy that, well, I don't know what it's called, and it's hard to back-search these things. The people in those cases were acting on imperfect knowledge. I mean, we all are, all the time, but not usually with imaginary stakes so high.

Let me give you an example. You're walking through the woods one day, and you come across a pedestal with a single, big, red button on it. There's no writing, no pictograms, nothing to indicate what (if anything) pressing the button might do.

From cartoons, you know that pressing the Big Red Button is generally a Bad Idea, but those are cartoons, not reality. You're human, and you're faced with a mysterious button in a strange place to find one. The urge to push it is real, palpable. How can you, curious ape, just... walk past it? You can't. Billions of years of evolution can't be overcome by the moral lessons in cartoons, not easily.

Maybe it opens a secret cache with 10 million dollars in nonsequential, unmarked bills. Maybe a genie will come out to grant wishes, and you could wish for more than $10M. Okay, that's unlikely. Maybe nothing will happen. That seems most likely. Maybe a piano will drop onto your head when you press it, killing you. Or maybe it connects to a gigantic planet-exploding device hidden deep in the core. Wow, that's really unlikely.

What you're doing, whether you realize it or not, is risk assessment. Each possible event has a probability. Each possible event also has a desirability. Assigning probabilities is tough, and it's even tougher without Bayes' Theorem to help you assess the probability through the lens of prior knowledge. It's also very difficult if you have no way of knowing what every possible outcome is. But the point is, if you can do that, even roughly, and multiply the probability by the desirability, you can more easily rank your choices. Finding a stash of money? Low probability, but desirable. Blowing up the planet? Really low probability, but undesirable. Something like the sketch below.
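Here's a minimal sketch of that probability-times-desirability ranking, in Python. Every outcome and every number is invented for illustration; the point is only the arithmetic, not the values:

```python
# Back-of-the-envelope risk ranking for the Big Red Button.
# Each outcome gets a guessed probability and a desirability score
# (positive = good, negative = bad). All numbers are made up.

outcomes = {
    "cache of $10M in unmarked bills": (1e-6, 1e7),
    "wish-granting genie":             (1e-9, 1e9),
    "nothing happens":                 (0.99, 0.0),
    "piano drops on your head":        (1e-6, -1e7),
    "planet explodes":                 (1e-12, -1e13),
}

# Rank outcomes by expected value: probability times desirability.
ranked = sorted(outcomes.items(),
                key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)

for name, (p, d) in ranked:
    print(f"{name:34s} expected value: {p * d:+.2f}")
```

Notice that with these made-up numbers the piano and the planet come out tied: a vanishingly unlikely catastrophe can weigh exactly as much as a merely unlikely personal disaster. And the hard part remains the probability column. Bayes' Theorem, P(H|E) = P(E|H)·P(H)/P(E), helps you revise those guesses as evidence arrives, but you still need a prior to start from.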
In the end, billions of years of evolution win out, and you press the button. Only through hindsight can you really know if you made the right choice or not.

Getting back to the examples from the article: yes, of course, I'd heard about them before. Take the astronauts thing. High probability: no deadly space microbes. Result: no disaster. Low probability: deadly microbes. Result: astronauts die, Earth saved. Or, result: everyone dies. That last one would be Bad, so I don't blame them for hedging against the "deadly microbes" scenario. It's kind of like the Trolley Problem, only your action shifts the trolley onto a random track where a random number of people are tied.

I don't know if I'm explaining all this very well, but my main point is, the article does a really bad job of distinguishing "what we thought the probability was at the time" from "what the probability actually was, in hindsight." The remainder of the article does a much better job explaining risk and our reaction to it, but the way it's described at the beginning stuck in my craw, so I could no more resist posting about it than you could resist pushing that Big Red Button.