My Game of Thrones 2024 Workbook
At this point, I'm fairly certain that a technological apocalypse, if it happens, will be triggered by artificial intelligence. And I don't mean in a Skynet, Terminator kind of way, where the AI gains sentience, decides that humanity is unnecessary or dangerous, and snuffs us out. I think it's going to come much more from our own flawed use of artificial intelligence and our failure to understand the power of what we've created.

It's already been established that artificial intelligence can do some amazing things in terms of problem solving, and that we humans have been using it in relatively confined settings to experiment and see what its limits are. We also know that artificial intelligence, on its own, has come to some pretty horrifying logical conclusions to thought problems, because ethics and morals are difficult to program.

I imagine there will be a point at which advanced artificial intelligence is released into the world. Not a sentient artificial intelligence, but one that is very sophisticated and capable of technological tasks we haven't even anticipated. And once it has the ability to interact with the real world in the wild, I can imagine a human being giving it some innocuous task that has catastrophic effects.

Here's one way I imagine that playing out. Artificial intelligence is released into the wild as a sort of glorified virtual assistant. You can prompt it to, for example, book a summer vacation for you: analyze the best flight and hotel prices between June and August, then book a week at a resort in a specific location that meets a certain set of amenities criteria at a certain price point. It just does the hard work (for humans) of collating flight schedules, flight prices, hotel costs, hotel availability, and so on, and making an optimal choice. Now imagine that same capability used by someone who plugs a less-than-specific set of criteria into the algorithm.
Some private equity, hedge fund guy tells the algorithm to "make me as much money as possible, as fast as possible," or something like that. Without proper ethical constraints, what if the AI decides that the best way to do that is to, say, sabotage the energy grid in a major metropolitan area so that the stock value of a power company tanks, buy a ton of that stock at a steep discount, and then restore the power so the valuation goes back up? How many people would be hurt by having to live without power because the algorithm decided it was the most efficient way to achieve the "maximize profitability" mandate it was given?

And that's just the energy grid. What about the financial system as a whole? The social safety net? Public safety? What if that egocentric, selfish use of AI is applied on a national scale, and aggressive nations like Russia or China use artificial intelligence to vastly improve their ability to forcibly take over territory in Ukraine or Taiwan?

Some task assigned to artificial intelligence by ignorant or selfish human handlers, with unintended and widespread damaging effects on the rest of the world: that's my guess about how a possible technological apocalypse will come about.

______________________________
(540 words)
Prompt: Write about an apocalypse triggered by technology. What happened?