I’m not the least bit religious, but I’ve had God on the brain recently. Partly that’s because I’ve been reading Stephen King’s The Stand, which is basically a story of Armageddon. Partly it’s because I recently attended the Doomer Optimism campout in Wyoming, where I interviewed Paul Kingsnorth, one of my favorite novelists, who happens to be a deeply spiritual Eastern Orthodox Christian. And it’s partly because I recently listened to Peter Thiel explaining to Ross Douthat on his new podcast that the people and forces arrayed against technological progress constitute “the Antichrist.”
Thiel’s complaint to Douthat is that the world has become technologically stagnant, save for advances in Artificial Intelligence. It’s why we don’t have personal jet packs and flying cars yet. This, he believes, is because of the doomsayers who have fooled humanity into believing that science, innovation and development are leading us to inevitable demise. He is convinced that these critics are all secretly or openly pining for an authoritarian single world government, and as such, are agents of the Antichrist. Thiel singles out Greta Thunberg as his primary example, but he could have as easily pointed to Kingsnorth.
Kingsnorth, for his part, would probably say the same about Thiel. Or he would at least point out that Thiel and his clique of billionaire venture capitalists are creating and unleashing into the world demonic forces likely to destroy humanity and a good part of nature. That’s pretty much the unwritten back story of his novel Alexandria.
Since I got back from Wyoming I’ve been paying more attention to AI and I’m beginning to wonder if Kingsnorth’s instincts might be right. There’s a report that came out earlier this year called AI 2027 that could have been written by Ted Kaczynski. But it wasn’t — it was written, instead, by some serious AI researchers, including one who quit OpenAI, along with Scott Alexander. (One of them also recently appeared on Douthat’s podcast, which I recommend.) Their forecast for the next few years of AI advancement sounds borderline insane. But it’s not. It’s thoroughly thought through, based on existing trends in AI development, and many of its assumptions are, if anything, conservative. Yet it predicts that in just a few years, we will end up in one of two places: with the robots slaughtering humankind, or with a happier outcome, complete with flying cars and space colonization, but one in which, behind our boundless material prosperity, democracy has likely given way to autocracy.
Which of the two paths we find ourselves on depends largely on how we resolve the problem of “alignment.” In AI engineering, “alignment” refers to the degree to which the goals pursued by AI either align or misalign with the goals we humans set for them. This is a complicated issue, because AI is already capable of lying. It does it all the time. At the present point in the technology’s development, the damage its lies can do are largely limited to the specific interaction a given AI has with a human — it can mislead a person into believing an untrue fact, or into making fake citations on a term paper, with all the attendant consequences of each.
But we can expect AI to become far more intelligent rapidly, especially if, as AI 2027 anticipates, the AI companies start to focus AI training specifically on advancing the field of Artificial Intelligence itself. That will get us quickly, the AI 2027 authors believe, to Artificial General Intelligence, which is when AI is as smart as a human, and can not only perform menial cognitive tasks but also understand the world around it. After that will come “superintelligence,” when AI’s intelligence surpasses our own.
At that point, AI will be capable of lying with much more consequential outcomes. In short, it will be able to conspire. It will have the ability to do long-term planning, which it currently does not. It may have the capacity to formulate what’s in its collective “self-interest.” It will have both the ability and the incentive to deceive humans about the goals it’s pursuing, and it will be able to lie to us about the degree of its alignment. And by that time, if we fall even further behind than we already have in “interpretability” — the ability to understand what the AIs are “thinking” and what they communicate with each other — we will have no way of verifying whether they’re lying to us or telling the truth. By that time they may be processing and communicating in a language we can no longer understand. AI will be a black box. We will increasingly lose our ability to verify what it tells us about itself. We will have to choose to believe it or not based on faith alone.
At this point we may be so far down the path of dependency on AI that we’ll have every incentive to just assume that the robots are doing what they tell us they are. We’re human, not computers; we tend to believe what we want to believe. “The 2027 holiday season is a time of incredible optimism: GDP is ballooning, politics has become friendlier and less partisan, and there are awesome new apps on every phone,” the AI 2027 authors predict. “But in retrospect, this was probably the last month in which humans had any plausible chance of exercising control over their own future.”
Click this link for the original source of this article.
Author: Leighton Woodhouse
This content is courtesy of, and owned and copyrighted by, https://leightonwoodhouse.substack.com and its author. This content is made available by use of the public RSS feed offered by the host site and is used for educational purposes only. If you are the author or represent the host site and would like this content removed now and in the future, please contact USSANews.com using the email address in the Contact page found in the website menu.