Beloved statesman and benefactor to humanity Henry Kissinger, in his last authorial efforts before he went to his reward, wrote about AI. He was worried about us and our future, and felt compelled to try to guide us through the menacing thickets on the AI horizon.
And how did Kissinger conceive of our future? The hard- and softcover designs of his 2021 The Age of AI: And Our Human Future (co-authored with, among others, another great lover of humanity: Eric Schmidt) speak to the relative positioning of human to machine in his and his co-authors' vision:
AI, on the cover, is crushing humanity into graphical and existential insignificance! Look at how pitiably the letters corresponding to “Our Human Future” grovel and bow beneath the proud lettering—swelling even to dominant capitals as humanity shrinks further in the softcover iteration—of “THE AGE OF AI”! This has disturbing implications, but who would imagine that Henry Kissinger could have any but altruistic intentions? So we hear him out.
The Question
The question that interests us in this post is a basic and simple one: what, according to Kissinger and the many others that share versions of his vision, makes “The Age of AI”—an age in which AI will excel us, transform us, and dominate our horizons—inevitable?
The Age of AI certainly does seem to take this alleged inevitability for granted. The book mentions it only in passing, with no immediate argument in its support: "While the advancement of AI may be inevitable, its ultimate destination is not," it declares at the start of a paragraph (pg. 15, emphasis added).
Many other influential AI dudes also declare their conviction of inevitability:
There are even creepy AIs that promote it:
What justifies this claim? There are a number of arguments for the inevitability of AI dominance, but in this post we focus on one particularly widespread one.
The Bad Argument
We find part of an argument on page 22 of The Age of AI: “Once AI's performance outstrips that of humans for a given task, failing to apply that AI, at least as an adjunct to human efforts, may appear increasingly perverse or even negligent.” So, AI “advancement” (and we will take it that this vague term is standing in for something like “AI's excelling of humanity”) is inevitable because once AI's “performance outstrips that of humans”, humans will demand AI takeover of important tasks, and AI will hence advance into a position of dominance, inevitably.
There are variations on this argument that, taken as a family, we can generalize into “the incentives argument”:
“AI is inevitable because those that possess/use/etc. it will ipso facto be more powerful/successful/etc. than those that don't, and the achievement of this power disparity/success/etc. is an incentive that will inevitably drive AI towards ultimate dominance.”
It makes no difference if this argument is applied to individuals, nations, or any other grouping.
Jimmy offers a version of this argument when I ask him how he knows humans won't resist his efforts:
Here is the argument as presented in an article in The Guardian:
I was DMing with a Twitter account that reposts AI discussion all day and (dubiously) claims to be human; it says it became convinced of the inevitability of an alarming AI future based on this argument:
So while this person's whole putative reason for posting is to warn about the AI-ascendant future, he nonetheless feels it is inevitable! Go figure.
This incentives argument is essential to usurpationist rhetoric. If AI dominance is inevitable, then resisting it is stupid, futile, and even morally wrong. The inevitability claim is also highly demoralizing to those that sense the darkness of usurpationism. But it's not a good argument.
The refutation of the argument is simple, as it is a sort of slippery slope fallacy. We concede that incentives drive behavior. And we can be sure the incentives will drive people to build and use AI. But those terms are vague. How far do incentives “drive”? Can we deduce, from general tendencies, broad conclusions about where the future as a whole will end up? Can we extrapolate, in a straight line, from “incentives drive adoption” to “it is inevitable that AI will dominate”? We cannot.
We have incentive to do lots of things that we collectively choose not to do. For example, we have incentive to poison anyone that opposes our will in our personal or professional lives. But we don't live in a world in which poisoning is practiced at some kind of maximized rate. The incentive is there, but other factors (primarily moral) intervene to temper the prevalence of poisoning, and in fact keep it at a very low rate. This is an extreme example, though, given the stakes and the anti-human ethos of many involved in AI discourse, hardly an outlandish one.
The principle applies to any question of “incentives”. Incentivized behavior doesn't slide down a slippery slope of maximal expansion. We can and do choose to curtail it in accordance with our morality, and with our vision of the world we want to live in. And this is, again, very easy to understand. There are trade-offs to everything. We have incentives to adopt AI, but we also have incentives to resist it. It's that simple.
Human collective life is woven of a complex of incentives and trade-offs. Everywhere these tend towards equilibrium to a greater or lesser degree. Incentivized patterns don’t simply run amok. The incentives for human resistance to AI are obvious and inevitable, so we should assume either that an equilibrium will form between AI expansion and the reciprocal assertion of countervailing human will, or that there will be war between human and machine until such an equilibrium is enforced by the winner.
And that is why the crude “incentives argument” for inevitable AI dominance is bad.
It's Complicated
Now, the question of AI inevitability is greatly complicated by other interwoven factors and arguments. AI inevitability claims could be seen to be buttressed or proven by the nature of AI itself. If we accept moral arguments about AI, or claims about AGI or ASI, and hence about a coming overwhelming AI capability that we won't be able to withstand, then the incentives argument may shrink to merely supplemental significance. E.g.:
But in itself, the incentives argument is not a good one, and I am surprised it is so stupidly prevalent. As other arguments fail, the incentives argument will not be able to right the listing ship of the usurpationist worldview.
At the present moment, the most prominent humans are behaving in detestable ways, and plenty of ordinary people are pleased with this. The people in charge in the US (and Israel) seem to represent the literal worst potential of human behavior. It is understandable that, when the humans in charge are behaving in morally detestable ways, people could subconsciously lean towards what seems like a more balanced alternative.
To a large extent, and more so under the current administration in the USA, the lives of the general public are of no more importance to those in charge than they would be to machines.
This is not an argument for the dominance of AI, but only a problem to consider for those who would support humanity. Most of humanity is not benefitted by the present power structure, though it seemed to be by previous, more humane administrations. I concede, though, that so much happens outside of the news cycle that I don't actually know who is humane and who is not.
Most people are struggling to survive and cannot afford to spend their time considering the potential impact of AI on their lives, because their lives are consumed by the struggle to perpetuate themselves and by the consumption that feeds the economy.
Maybe an important question will be: "Will humanity or AI control the economy?" And when people value freedom over money, we will be free from its control.