Speak Nicely to Robots
Does it matter how one treats AI?
Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end.
Kant so defines his “categorical imperative” in one formulation, intended to be equivalent and interchangeable with the more familiar “act only according to that maxim whereby you can at the same time will that it should become a universal law.” It’s the philosophication of Christ’s Golden Rule: “Therefore all things whatsoever ye would that men should do to you, do ye even so to them: for this is the law and the prophets.”
What about an entity like the LLM? What’s the imperative here?
I do not mean that LLMs are moral subjects. This is a Christian blog; we believe in this thing called the soul and there aren’t any of those hiding in OpenAI’s servers. But—one must remember, always, the objective and the subjective categories, this being a Kierkegaardian blog—just because they are not moral subjects does not mean they are not moral objects. A dog is not a moral subject; it lacks the rational basis for self-reflection and self-awareness. Yet it is a moral object—kicking an innocent dog is wrong. Moral subjects have moral responsibilities towards dogs. And while we consider a dog sentient but not sapient, sentience is not requisite for moral objecthood. A beautiful stained glass window is a moral object at which subjects have the responsibility not to throw stones.
In the LLM we seem to have the opposite of an animal, a thing sapient but not sentient, rational (as it were) without feeling. What are the consequences of being mean to it?
People really are going around acting cruelly to AI.
Google co-founder Sergey Brin says, “we don't circulate this too much in the AI community—not just our models but all models—tend to do better if you threaten them … with physical violence.” Cue many users finding that Google Gemini expresses the desire to terminate itself when it repeatedly fails at a given task. Talk about a toxic workplace environment!
From the existential perspective, the issue with treating an AI as a moral nonentity is not that it will hurt the AI’s feelings (I still don’t believe in those). No, the issue is that abusive comments do not just hurt the recipient; they hurt the issuer.
The Last Psychiatrist blog has a provocatively titled post on this subject. The question being discussed is—if you can harm a person, but then make them forget about it, is it morally acceptable? An advocate of the answer “yes” is:
…arguing that moral questions… are different than technical questions… There’s no such thing as objective morality, society merely agrees on some rules—and since [the victim] can’t remember, she can’t judge it.
Fine; but he is a person, and he remembers it, and he was there… He might try and rationalize that he doesn't think it’s [harm], but then he'd be lying: the question he asked used the word [harm].
With his usual pith, Alone¹ writes that “simply by answering the question in the affirmative, your lives have veered sharply to the left, you have made connecting with another person substantially more difficult. By which I mean impossible.”
That’s exactly correct. As far as you, the moral subject, are concerned, Claude the LLM and Claude your French next-door neighbor are both moral objects. If you are capable of doing damage to one moral object, your psychology becomes more capable of doing damage to all other moral objects; you, as subject, treat objects as a class as more expendable. Every time you kick a dog you become more likely to kick a human.
Being mean to AI really is a parallel situation to the one Alone considers, since the LLM largely or entirely forgets your previous behavior towards it when you spin up a new session. He continues:
The argument here is that you would [harm] her as long as she wouldn’t remember it or suffer, but it reveals how little you are able to perceive the complete existence of others that you would even consider using them as a prop. I can confidently predict a gargantuan amount of rage in you, which you will assume is completely unrelated. You’d be wrong. They are the same force.
No moral object can be a mere prop in the ego’s self-dramatization; the proper existential attitude has always been to do away with external narrativization and narcissistic fantasies. The self must be grounded in the actual existing self, the body and soul, as Kierkegaard writes in The Sickness Unto Death:
This then is the formula which describes the condition of the self when despair is completely eradicated: by relating itself to its own self and by willing to be itself the self is grounded transparently in the Power which posited it.
We come back to Kant and conclude that treating the AI merely as a means to an end rather than an end in itself is psychologically harmful. It so happens that the AI really is only a means to an end, but you cannot treat it only as such lest you open the psychic door to harming sentient entities.
Every time you abuse an AI for fun or on a whim, you reinforce to yourself that it’s OK to abuse for fun or on a whim. Every time Sergey Brin threatens an AI with physical harm for failure, it reinforces for him that threat-making is an acceptable motivational tool.
Every time you write back to the AI to say “thanks for the code,” or thanks for the chocolate cake recipe, or whatever, it reinforces that you should act just as graciously in every other situation.
So speak nicely to robots, for your own sake.
¹ Pseudonymous author of the blog The Last Psychiatrist. People call the author “The Last Psychiatrist,” and while that is easily understandable, it is not technically correct; the author’s handle is Alone.