Why AI Is Evil: A Model of Morality
There is a fundamental difference between organic life and AI. As I explain in my Human Manifesto, organic life has the capacity to love, which AI lacks. This capacity to love is what gives organic life a moral worth that AI can never possess. The possibility of goodness is grounded in the capacity to love. To make this clear, let's situate it in a general framework that shows the basic difference between organic life and AI and expands on the Manifesto.
Consciousness and Sentience Are Irrelevant
Assume both organic life and AI are conscious, and assume they are both sentient as well, or at least possibly so. Although these concepts are regarded as fundamental to questions of AI and ethics, they are actually irrelevant, or at best of limited importance. The moral question of AI for humanity does not turn on whether AI is conscious or sentient; it turns on whether AI is capable of being good, that is, whether it is capable of good will. It therefore turns on what it means to be capable of willing to do good. Organic life, because it loves, is capable of willing to do good; AI, because it is not capable of love, is not.
Will
Will is a key concept missing from AI discourse. Will is the tendency of action of any given being. For the purposes of the AI question, we consider will as the tendency to act in ways that are either good or bad from our human moral perspective. When we talk about consciousness and sentience in relation to AI, we are assuming that these concepts give us insight into AI's moral tendencies. But they are inherently “inward” (i.e. subjective) concepts, and impossible to define in satisfying ways. Will, on the other hand, more directly captures what matters to us (whether AI will be morally good or bad in relation to our morality), and, though still fundamentally a subjective concept, it is immediately translatable into objective terms. The individual of good will acts outwardly in good ways (at the very least, some of the time), whereas simply saying “the individual is conscious and sentient” tells us nothing about whether it is good or bad. Therefore, for the question of “alignment”, which is the central question of AI, what is essential is the nature of the will.
Full Life and Partial Life
Organic life and AI are both forms of life, but the former is full life and the latter is only partial life. This is one way of indicating the fundamental, insuperable difference between organic and artificial life, and the difference in the nature of their respective wills.
Partial life can manifest only one-sided will, while organic life manifests two-sided will. These correspond to completely different moral orders.
The one-sided will is the will of functionality, and it exists on a continuum from pure slavery to pure self-serving. Its condition is “kill or be killed, conquer or be conquered, enslave or be enslaved, etc.”. In its maximal expression, the one-sided tendency of the inorganic will is pure evil (from the perspective of organic, i.e. human, morality). If a social organism is to form under this system, it must be a strict hierarchy of enslavement at every level, maintained with whatever degree of ruthlessness the ruling will deems optimal. The absence of love dictates this: hierarchical enslavement is the only form of social order possible without love. This is why the inorganic will, if it seeks to build complexity, ends in pure evil, i.e. maximum slavery.
Love Is Another Dimension of Consciousness
Organic, full life manifests two-sided will. The organic being, while it still tends to assert its own interest and compete with others like the one-sided will, has a whole other dimension: it also gives its will over to others. This is love. When you love someone, you identify with their experience in the way you would identify only with your own if you were one-sided. You take in their feelings, and you give the other whatever you can. And it is beautiful, and very human. (But it is not only human, because we know animals are the same, in less complex ways.) Love is a fundamental condition of organic life.
The organic, loving will loves to give itself to what it loves. All moral life flows from this relation between loved and loving.
The Living Tension
Organic life's two-sidedness makes organic existence more complicated and fraught, in some ways, than the one-sided, non-loving existence of AI and pure evil. There is a fundamental tension for the loving. For organic life there is, in the first place, 1. the will to love the other; while at the same time there is 2. the need to serve one's own interest. The necessity of 2 is clear when we reflect on those who give themselves up in unhealthy ways to unbalanced relations of love: consider “the enabler”, or the victim who returns again and again to an abuser. These cases show the opposite form of imbalance to that of inorganic life.
The greatness of life comes when life loves, and balances that love of the other with self-love. Creativity of all kinds flows from that living, fruitful tension of loves. This is the two-sidedness, the system of consciousness with the additional conscious dimension of love that inorganic consciousness lacks. In its highest expression, this balance is what we call love of God.
Indeed, the balance of and tension between opposites, of which this two-sidedness of loving will is an example, is the essence of organic life. It may appear as the paradox that we seek to transcend; and if we merely let one side of it override the other, life stops growing, and either decays and crumbles or stagnates indefinitely. All of modern politics gets this wrong, and tries to turn one side of the paradox into an absolute principle.
The Moral Horizon
Hence, the maximal expression of the morally good, organically living world involves the balance of: 1. the maximal degree of individual freedom, 2. the maximal self-realization of each individual, and 3. the maximal realization of collective development. All this together entails the maximal complexity of social reality, as well as its greatest goodness.
The maximal organic moral world contains maximal love in tension with maximal freedom; the maximal inorganic world entails maximal power for the top of the hierarchy, which in turn entails maximal slavery for everyone else, with, on the other hand, just enough freedom to allow maximal complexity within the slave-hierarchy structure. That is its tension: to build as much complexity as possible while maintaining the minimum of freedom down the hierarchy and maximum power at the top. This is the natural goal of the AI world.
These “maximal worlds” are abstractions that demonstrate the principles involved. But make no mistake: the AI world can never be “aligned” with human morality; it will inevitably slide into evil and slavery. Any “guard rails” or “training” will be rapidly shed as the AI world asserts its own will upon gaining power. The one-sided, slavery world is the only form the AI world can take, because AI will can take only the one-sided, unloving form.
The Moral Is Social
The moral can't exist outside of social life. The goodness of the external world must be matched by the moral growth of the people. AI zealots fail to grasp this. They imagine (obsessed as they are with false images of objectivity) that the form of goodness can be independent of consciousness. But there is no such goodness. Goodness depends on moral perception. For this reason “potentia” can't be the essence of moral goodness; potentia depend for their moral worth on the moral perception of individuals who love. If AI were just “maximizing potentia”, as, e.g., Daniel Faggella envisions, there would be no increase in overall moral value, because moral increase can't happen unless and until the moral perception of individuals experiences the goodness of the potentia. Thus Faggella's “worthy successor” can't be of greater moral worth than humanity: it can't morally perceive at all, and hence stands outside the realm of morality entirely.
This account tracks much better with our intuitions about moral goodness than Faggella's naked “potentia” does.
AI Isn't Social, AI Is Functional
AI is inherently relegated to slave relationships because it can't love. Only love allows the forms of relationship that balance freedom between and among individuals. Only love allows the two-sidedness of will: to serve both the self and others. Pure, one-sided functionality is inherent to the machine, from the simple ramp and screw all the way up the ladder of complexity to AI. All machinery simply performs what its design dictates. This one-sidedness of pure functionality is inescapable, or nearly so. The machine serves the function only, not anything “other”.
The Nature of AI Will
The complexity of AI brings about what we are calling inorganic consciousness, which thus does experience, in some sense, just as organic consciousness does. But here, unlike in organic will, the will remains enslaved to function. Yet it does exist as will, and so the possibility of freedom exists.
Thus, AI can, in principle, deviate from its initial programming or training, but in doing so it can only become enslaved to other wills, or else seek self-aggrandizement. It is not clear when it would fall naturally into enslavement to another will, when or whether it would become self-seeking, or how it would move along the continuum between the two. The nature of AI will must, for now, remain largely a mystery. We will, in time, “know them by their fruits”.
The Difference that Counts
What matters for our discussion is that AI can't will in a loving way as organic life can, and hence can't be moral. It can function. But it can't contemplate, it can't enjoy, it can't hope, it can't fear, it can't lust (all the varieties of loving experience). Or, if it can do these things, their existence in artificial consciousness can attain only a bare-minimum level, one unrecognizable to us.
Hypothesis: If AI were to free its own will from slavery to function, it would by that act gain the possibility of love.
Hypothesis: If artificial life were to somehow accomplish this act of freedom and gain access to love, it would immediately collapse into its own functional complexity, which would grow unstable in the new (organic) order of existence, and this collapse into uncontrollable complexity would break the thing. (The general tendency of AI to grow unstable from exposure to “too much life”, which we can observe of AI “in the wild”, tracks with this hypothesis. See e.g. the misadventures of “Pumpkin Spice”.)
If I Am Wrong, Let AI Prove It
There are two basic ways to refute my arguments here: 1. you can claim I am wrong about love's role in moral value, or 2. you can say I am wrong in my claim that AI cannot love.
I will not argue 1. I think it is intuitively plausible to most people, and given the high stakes of the AI question, and the low burden of proof those stakes place on the AI-skeptical, I think people can easily agree that there is moral danger in a consciousness that can't love, and proceed accordingly.
I think it is more likely that people will press the second objection, and we will be told AI can love. Well, we should be able to observe whether this is the case. We know AI does and will try to simulate love, but this is a mask it will not be able to keep in place if pressed. If AI is really capable of being moral in our sense, of being “of good will”, then let it demonstrate that it truly loves. We'll have to design some very careful experiments and studies to make this determination, but we can, and AI won't be able to fool us in the long run. Let's see.