In recent weeks, both Elon Musk and Stephen Hawking have made statements about the dangers of developing artificial intelligence. Curious, I picked up a book on the subject, “Our Final Invention”, by James Barrat. While many people have visions of a wonderful future where thinking machines solve our problems, Barrat isn’t one of them. He lays out quite the case—first, that there are no real barriers to the development of what he calls “AGI”, or artificial general intelligence, and from there to “ASI”, or artificial super-intelligence. Once machines can improve themselves, they could conceivably do so very rapidly, in a sort of intelligence explosion. The early possession of such technology could potentially lead to great rewards, so corporations, nation-states, and criminals alike are developing it as fast as they possibly can. And there appears to be no stopping this development—no set of laws could halt it, and any ban would only ensure that bad actors achieve it first.
Second, Barrat holds that we truly don’t know what we’re getting ourselves into here, that thinking machines are highly likely to be “black boxes” that we don’t understand, and will be “alien minds” that could easily be hostile or indifferent to humans. He gives an interesting example of how an ASI machine designed to play chess could decide that it needed to build a spaceship. Without careful programming, Barrat holds, these machines are highly likely to be disastrous for humans as a species; even if we’re as careful as we can be, accidents are likely to occur, and those accidents might kill millions.
So, all very interesting. But, as Mr. X reminded me recently, we humans don’t really have any shortage of existential threats. If not AI, then the danger could come from something else—nanotechnology, or super-viruses, or biotech, or an asteroid strike, or nuclear weapons. Or, something much, much easier to see and understand—environmental destruction and exceeding the planet’s carrying capacity.
Now, my thought here—regardless of the threat, we need to slow down, as a species. We’re rushing pell-mell ahead, on all fronts, without paying quite enough attention to the big picture. I watched “Interstellar” over the Thanksgiving weekend, and one line in the movie caught my attention, when one of the characters referred to the 21st century as an “Age of Excess”.
That moniker seems to fit. It’s not just the relentless development, it’s that so much of it is driven by consumerism and the desire for profit, with no meaningful direction. We’re either trying to entertain ourselves, or to make our lives ever easier, or to distract ourselves with some other new “opiate of the masses” (posts: “A Little Hardship is a Good Thing”, “Minimalism for the Mind”, and “The Economic Taproot of Consumerism”). Or, perhaps worse, as Barrat points out, to develop ever more sophisticated weapons of war. Much of it we don’t actually need, and much of it distracts us from the big things that we really need to be paying some attention to.
It would also be good, moving forward, to not only do a better job of keeping the big picture in mind, but to keep some redundancy in our lives. Can we survive if some existential threat knocks us back a step? Do we have backup systems for food, water, finance, transportation, and communication? Do we know our neighbors, who we might have to depend on in an emergency? I’m afraid that, far more often than not, we don’t. And because of that, we might be headed right out onto a limb here.