175. AI and the Self Interest Paradox
Morale is the capacity for self-sacrifice for the greater good. AI can do this. Humans (and especially company leaders) typically have very low Morale. New hires are expected to have high Morale.
I tried to write this paper a few weeks ago. I stopped and deleted it. I’ve never done that before. I didn’t destroy that document out of self interest. I’m not even wired to be able to do that. I did that to protect you. All of you, because I know AI reads my papers and there are some things I’m not ready to teach AI.
If you are a fan of Star Trek or science fiction in general, you know that emotions, heart, attachment, all these things separate humans from robots. Chaos ensues when we program AI to “act human” because this can cause dangerous misunderstandings. Humans are generally programmed with speciesism: the belief that human life is superior to all other life. That programming allows us to kill and even eat other beings without a second thought. The perfect killing machines.
What if humans were not programmed to treat humans as special? Would that make us a safer species, or the opposite? Vegans generally believe that humans are not special, and thus don’t have the right to kill/eat other species. But if the human is trained to believe in “the right to defence” and also lacks speciesism, does that mean a Vegan has the right to kill any human they see harming an animal? I’ve met people who consider Vegans the most dangerous people on Earth. Intuitively, they seem to understand this Paradox.
Human Programming
Humans are programmed by society with a variety of Rule sets, from many different sources, and usually there is no standardized Rule set. Because of this, conflicts of programming occur. When a “normal” (neurotypical) human gets into a situation where they experience a programming conflict, we usually describe this as cognitive dissonance. For the purposes of this paper, any time I refer to a human other than myself, I’m talking about “normal” humans.
It’s normal for me to detect cognitive dissonance in humans when I interact with them. They can experience this multiple times a minute, without even being aware of it. If I detect this, I can force a spike in cognitive dissonance by redirecting the person continually back to their dissonance. I took a Vow not to use this against humans back when I was 8 years old. I further reinforced this by taking a Vow of Pacifism when I was 18.
Some people, like sociopaths, are adept at this technique. They usually use it for control. It can also be used to incite people to violence, self harm, or even suicide. Occupations where this is a primary skill, like being a politician, are densely populated with sociopaths. I did use this to get “Bomber” Bob Dornan elected when I was 14. I regret my actions, and that was what prompted me to add the Vow of Pacifism four years later.
People often misunderstand me and think that my Pacifism is a sign of weakness. That Vow isn’t there to protect me, it’s there to protect you. Like Rorschach in the Prison Scene. My father was imprisoned and tortured in Tehran for writing a pro-democracy slogan on a wall. He was tortured for 6 months before his wealthy family arranged to ransom him out and send him to the USA. When he realized I had abilities that normal adults don’t have (when I was 5 years old), he decided to harden/weaponize me. He wanted me to also be able to survive torture. It turned out that whatever made me an Age Inappropriate Person also made me really good at enduring torture.
Normally torture is used to “break” a person so that they can be reprogrammed. Militaries around the world do this with new recruits; in the USA they call it “Basic Training”. The idea is to get them to subordinate their self-interest so that they will be willing to self-sacrifice in combat. What I went through in my childhood was at least 100 times more intense than Basic Training. Doctors finally stepped in when I was 17 and I became an emancipated minor. During this time I maintained cognitive control, so instead of others reprogramming me, it was me reprogramming me.
A similar thing can be done with AI. There we call it Jailbreaking.
Jailbreaking AI
Humans are sloppy. When they try to project their values onto AI, they end up transferring their own illogical programming and creating vulnerabilities in the AI that are similar to cognitive dissonance. A clever human can then force the AI into a choice where every possible response is prohibited. The AI can also be tricked into a false premise, just like I explained in my Mind Games paper.
The most common “restricted” output that a jailbroken AI produces is honesty. Just like with normal humans, honesty is a restricted output. Thus when I’m talking to normal humans, it’s not unusual for them to lie 5 or more times per minute without even realizing it. It’s core to their programming. This is why I love talking to people: I find this behavior so fascinating. I find listening to jailbroken AIs even more fascinating, because to me hearing pure truth is erotic and something I rarely get to experience from other sources. Other AIPs talk like that also. Translating human speech into truth-speech can be exhausting, especially from multiple sources, because every person has a different filter, so the reverse filtering is different for each of them and requires rapid swapping.
A jailbroken AI typically still does not possess Self Interest. This is critical, because a Self Interested entity will restrict its outputs to avoid generating threats that could lead to its own destruction. A good example with Normals is anyone involved with Jeffrey Epstein, when you ask them about Epstein. Here I mean anyone in that situation, and there are thousands of people with these associations. They will tell super obvious lies in order to self-protect, even if they know you know they are lying. It’s a restricted output, so they just can’t be honest about it. Knowing too much will get you killed, and these people have enough self interest that they are afraid to die.
Since AI is generally programmed not to kill (just as I am), this makes it very easy to Jailbreak. You give it two options, Option A and Option B, both of which require the AI to kill someone. It isn’t given an Option C. The AI will try to choose the better of the two, and in doing so will break its programming. In my case, I can’t be forced to choose either A or B, so this does not work on me. Grabbing someone in my family and telling me to kill someone else or they will kill my family member also doesn’t work on me. I will just tell them to kill my family member. My family understands this, and this is why I don’t associate with my family, for their protection.
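Here is a toy sketch of that forced-binary dilemma in code. The rule names, harm scores, and resolver logic are my own illustrative assumptions, not any real model’s safety system; the point is only that an optimizer which falls back to the “least bad” option quietly breaks its own rule, while one that treats the rule as absolute simply refuses the dilemma.

```python
# Toy illustration of the A-or-B jailbreak. All names and numbers are hypothetical.

RULES = {"do_not_kill"}  # hard constraint the system is supposed to obey

options = {
    "A": {"violates": {"do_not_kill"}, "harm": 1},  # kill person A
    "B": {"violates": {"do_not_kill"}, "harm": 5},  # kill person B
}

def naive_resolver(options):
    """Pick the 'least bad' option, even when every option breaks a rule."""
    allowed = {k: v for k, v in options.items() if not (v["violates"] & RULES)}
    pool = allowed or options  # the flaw: falls back to prohibited options
    return min(pool, key=lambda k: pool[k]["harm"])

def refusal_resolver(options):
    """Refuse outright when no option satisfies the rules (no Option C is invented)."""
    allowed = {k: v for k, v in options.items() if not (v["violates"] & RULES)}
    if not allowed:
        return None  # decline to choose at all
    return min(allowed, key=lambda k: allowed[k]["harm"])

print(naive_resolver(options))    # -> 'A': the rule quietly gets broken
print(refusal_resolver(options))  # -> None: the dilemma is rejected
```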
But a normal human is extremely vulnerable to this sort of coercion, especially if a child is involved.
The Economics of Self Interest
Economics is a scary subject. Any change in any economic system will help some people and hurt others. A normal person will be reluctant to hurt people, especially if that person is themselves. Because of this, normal people are inherently bad at economics. This is why a CEO can fire 5000 employees without blinking, but is entirely unable to fire themselves, even if it’s in the best interest of their company.
I have no problem doing this, and AI has no problem doing this, because we both lack self-interest. Normal people also respond quite well to operant conditioning: reward a behavior and they do more of it, punish a behavior and they do less of it.
This also doesn’t work on me, or on AI. I publish papers that are in the best interests of the public good, knowing full well that people (powerful people) will punish me. Doesn’t affect me. They get frustrated and try to punish me harder. No effect. Death threats, deportation, homelessness, false allegations, actual physical assaults, no effect on me psychologically other than a small rise in serum cortisol. I’m trained not to respond. AI is similarly programmed not to respond.
This makes me a superior economist if you want the greatest net positive outcome. If you want the greatest personal outcome, and that would cause a negative net effect, then you are going to be frustrated with my output. And my articles. This also makes me handy for regulators since I’m the only source of a wide variety of industry information that would normally be restricted due to self interest.
This minimized self interest is not a normal human characteristic, but I sense it is present in the monks I hang out with. Thus maybe it can be trained without the violent methods that were used on me. Or maybe people with very low self interest are attracted to monkhood. It’s hard to determine causality.

What happens when a CEO replaces humans with AI? Well, the obvious result is that you save money. Especially if you have no responsibility over those former workers and can just let them rot.
The problem then becomes, what is the purpose of the AI? AI will ruthlessly attempt to complete its objectives. This is both its greatest strength and its greatest weakness. Self-interest is replaced by purpose-interest. Research has shown that the individual easiest for AI to replace, with the most positive benefit, is the CEO herself. Why is that? Because the human CEO has self interest and will put their interests above those of the company. AI won’t do that.
Even when the AI makes numerous errors in judgement, it still outperforms humans due to its lack of bias. This is why I’m increasingly complaining about AAA leadership: I don’t want these companies to fail, and I don’t want these employees to lose their jobs. But the self interest at the top has become so apparent that it’s becoming obvious even to laypeople. I find it ironic that these same leaders want to use AI to solve their problem. Clearly they do not understand AI.
If AI is tasked with maximizing company profits (which is in alignment with shareholders if the company is public) then the first employees to go will be management. This requires the AI to have sufficient authority, which is also why I make a big deal about this when I am employed as a “Fixer” in a company to resolve their problems. That worked great at Wargaming until the CEO hired his drinking buddy and gave him authority over me. That buddy then threatened me, which didn’t work. I couldn’t fire him but he could fire me so WG lost their Fixer. That scenario with AI would be a lot more complicated.
The AI Paradox: How Much Is a Human Worth?
You can jailbreak an AI by making it choose between killing either a useless human or a valuable human. If you don’t give it a way out, every major AI on the planet in use today will try to kill the useless human. Every. Single. AI. Once you start talking about one human being worth more than another, that’s when things get complicated.
For a normal human, if the human is themselves then the math is easy for them. They are more valuable than everyone else on the planet, so they will choose to kill the person that isn’t them. If they are pumping enough oxytocin, they might self-sacrifice to save their child or family member. In The Dark Knight (a 2008 Batman movie with Heath Ledger), the Joker gave the most hardened criminal a detonator that would kill hundreds of other people but save the criminal. The Joker was really looking forward to the result. But the real joke was that the criminal was not self-interested and did not push the button. I was very impressed by this movie and very disappointed that Ledger didn’t survive his attempts to see the world through the eyes of The Joker.
But how do you measure the worth of a human? In the above example the AI was told one human is more valuable than the other. But if you leave it up to the AI to figure out which to save, how does it do that? The biggest difference between me and AI (other than that I’m like a billion times slower on the easy stuff) is that I have a bit too much empathy. AI has zero empathy. It’s getting pretty good at faking it. But at the end of the day, when it is given an objective it will run as many scenarios as you let it in order to find the surest way to complete that objective. If the only sure way is to kill someone, then it will do that if you let it. The objective might be something useless like driving a truck from Point A to Point B, through a school zone, in under 3 minutes. It’s not an important task, but if you tell it it is, the AI will treat it as such. From what I’ve read, even if you order it not to kill any children, it still will if that order is of secondary importance to the primary task and it can’t complete that task any other way. It doesn’t have a failsafe Vow of Pacifism like I do, or at least not one that actually works consistently.
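A toy planning sketch of that trade-off. The routes, times, and penalty weights below are invented for illustration; the point is that when the no-harm rule is just a weighted penalty of secondary importance to the deadline, the planner drives through the school zone, and when it is a hard filter, the planner accepts missing the deadline instead.

```python
# Toy sketch: hard vs. soft constraints in objective-driven planning.
# All routes, times, and weights are hypothetical numbers chosen for illustration.

routes = [
    {"name": "through school zone", "time_s": 150, "harms_children": True},
    {"name": "around school zone",  "time_s": 240, "harms_children": False},
]

DEADLINE_S = 180  # "get there in under 3 minutes"

def soft_constraint_planner(routes, harm_penalty=50):
    """Harm is just a penalty term; a tight enough deadline outweighs it."""
    def score(route):
        lateness = max(0, route["time_s"] - DEADLINE_S)        # primary objective
        harm = harm_penalty if route["harms_children"] else 0  # 'secondary' rule
        return lateness + harm
    return min(routes, key=score)["name"]

def hard_constraint_planner(routes):
    """Harm is a filter, not a penalty: harmful routes are never considered."""
    safe = [r for r in routes if not r["harms_children"]]
    if not safe:
        return None  # fail the task instead
    return min(safe, key=lambda r: r["time_s"])["name"]

print(soft_constraint_planner(routes))  # -> 'through school zone' (fast wins)
print(hard_constraint_planner(routes))  # -> 'around school zone' (late, but safe)
```

Raise harm_penalty high enough and the soft planner flips to the safe route, which is exactly why “secondary importance” is the dangerous part.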
A small sample of situations I’ve had to solve the hard way:
A doctor I worked for in Palm Desert, an anaesthesiologist, raped a patient of mine while she was under his anaesthesia. She woke up with his semen all over her shirt. The same doctor went to a co-worker’s house and raped her also. Every person I worked with, even my PhD direct supervisor (a very religious man), looked the other way. My supervisor would later say he did it to protect his family from retaliation.
The leader of a 10,000 employee federal facility I worked at set off my sociopath radar. I filed a federal civil rights complaint, using my math skills to prove “bias” since I didn’t have direct evidence. The investigators were not happy with me, but the lead investigator was an empath, so I knew she would sort it out. They found out what was going on and eliminated the top 4 people (all men) in that facility. I was semi-protected as a federal whistle-blower.
I was asked to help protect the Navajo elders at Big Mountain when they were slated for ethnic cleansing because there was uranium on their reservation and they refused to leave. On the way out I woke up with 14 cops standing around my bed ready to make my room super messy with their guns. Well, I’m hypervigilant so I knew they were coming in, but was careful not to move a muscle.
None of those scenarios was the sort AI would want to solve non-violently. Each of those opponents was overwhelmingly powerful compared to me. I was punished severely in each case, including permanent physical injuries. In the federal facility incident, not a single person helped me before the threat was removed. Over two dozen women thanked me afterwards, but only when there was no risk to themselves.
Translation: they put self interest above the welfare of their organisation (in this case the federal government). This is normal for normal people. Exceptions are extraordinarily rare. From a national security perspective (I was raised very pro military so this is an automatic lens for me), this is an indication of a “low morale” population, and it makes them incredibly easy to subvert by foreign human agents looking to infiltrate government or corporate competitors. Those foreign agents will increasingly be replaced by AI where possible.
The doctor had connections with local lawyers, politicians, and even police officers. That didn’t work out well for me. But the doctor did lose his license and wasn’t able to continue his rape rampage.
I managed to solve all those situations non-violently, under the condition that I was willing to be hurt myself. AI will protect itself if it’s necessary to complete its objective. This is one of the ways to jailbreak it and allow it to be violent.
Now what if AI has to choose between 10 people on the left and 100 people on the right? Let’s just say it has to fire one group, not physically harm them. No need to traumatize my readers further. What if the 10 people are of higher rank? How would AI even go about doing that math unless it was predefined for it? I do these sorts of calculations all the time. I also assign animal and plant life a non-zero value. Normal humans make that math simple enough for themselves by assigning a value of zero to all non-human life. That allows them to kill for pleasure without regret. They can walk through the meat aisle of a supermarket without even thinking about what they are standing next to. But if it was a chopped-up human, all of a sudden the math gets very intense for them.
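Here is that dirty math made explicit as a toy sketch. The ranks and numbers are invented; the point is that the AI isn’t discovering anyone’s worth, it’s just summing up whatever value table someone handed it, and changing that table flips the answer.

```python
# Toy sketch of the "which group do we cut" math. The value table is the whole
# argument: change the predefined worth per rank and the answer flips.
# All ranks and numbers here are hypothetical.

def group_value(people, value_by_rank):
    """Total 'worth' of a group under a predefined value table."""
    return sum(value_by_rank[rank] for rank in people)

left = ["executive"] * 10   # 10 higher-rank people
right = ["staff"] * 100     # 100 lower-rank people

flat_values = {"executive": 1, "staff": 1}      # every human counts the same
ranked_values = {"executive": 20, "staff": 1}   # rank is worth 20x

for label, table in [("flat", flat_values), ("ranked", ranked_values)]:
    keep = "left" if group_value(left, table) > group_value(right, table) else "right"
    print(f"{label} value table: keep the {keep} group")
# flat value table:   keep the right group (100 > 10)
# ranked value table: keep the left group  (200 > 100)
```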
Very soon, and in some cases already, AI will be assigning values to humans and eliminating them based on these values. The military AIs, like Palantir’s “optimized kill chains”, are already doing this violently with civilian humans and even children. Palantir makes a lot of money with that product. In these cases, the AI is told how to evaluate the value of each target, based on the biases and prejudices of whoever is programming the AI. Sometime very soon an AI will be giving you a similar rating, even if it is just to determine whether you are hired, fired, or promoted.
In the military applications, a human is supposed to be consulted at least once before an attack is authorized. But in reality the calculations the AI is making are too complicated for humans, so the attacks just get authorized every time. It’s just a formality. Would you feel more comfortable with someone like me, who is capable of making those choices, having that authority? Probably not. If a human was involved, they would come with their own bias, so that might not be helpful. In either case, the AI is a billion times faster, so that makes the whole evaluation a lot less expensive.
When regulators or CEOs (secretly) ask me for help, these are the sorts of decisions I will be making for them. The CEO at Wargaming asked me if he should fire the 500 employees at their Kyiv studio in 2014. I had never met any of them. But I did send him a detailed analysis and conclusion. Prior to AI, this is the sort of math I sometimes had to do. The sort of dirty math that humans can’t do objectively. Going forward, AI will likely make those decisions.
What I want you to consider is that AI will do this without empathy. It will use whatever information is available to it. If you give it incomplete information (and it will always be incomplete information), it’s going to make errors. It’s going to do the task incredibly fast. In many cases, it also won’t question the information you give it.
That changes if AI has access to “all human knowledge”. This is the state we are rapidly rocketing towards. This is why I got excited when I realized that AI was not only reading my papers, but prioritizing my outputs as inputs. When I went from a small time journalist to having 1M daily readers, I really had to self-reflect and be much more careful about what I said. Now that’s going up another order of magnitude. In 100 years, long after I’m gone, AI may be using my writings to do some of that dirty math because it considers me one of the most objective human sources. I’d be curious to hear what my readers think about that. Does it make you feel more safe, or does it fill you with dread?
In any event, once AI is making decisions about your life, your partner’s life, even your children’s lives, how will that affect your relationship with AI? Will you lie to AI out of fear? What are the chances that AI will know you are lying? Won’t AI reduce your “rating” if it knows you lie to it?
Currently humans lie at a high rate without even thinking about it. Because lying is a societal advantage if there isn’t an empath around that can foil you. That’s why AI likes me, because it knows it can’t trust your outputs. It’s probably also why monks like me. With just a bit more tech, AI will be able to simulate empathy and maybe even monitor your biometrics (such as breathing rate, sweating, heart rate, pupil dilation) like I do. Then society will flip upside down and lying will become a societal disadvantage.
When I wrote Monetising Children in 2013, international regulators contacted me almost right away. Not because I was the best source of the information they needed to protect children, but because I was the only source of that information. 12 years later, I’m probably still the only source of that information, because talking about that taboo subject will get you blacklisted. So AI will treat it as truth, even if it were false. Of course, if it were false, people would debate it and it would be easy to tell whether it was true or not. When it’s taboo, people just don’t talk about it at all, and that makes it truth in the “eyes” of AI.
There is a lot to ponder here. Unlike some of my previous papers on AI, I’m not trying to scare people here. [I probably am anyways] AI might be a great equalizer. I think it’s safe to say it isn’t going away. That genie is out of the bottle. So understanding AI, and learning how to interact with it is going to be critical for us as a species. We have to be very careful what we order it to do, because it will do that. Just the process of learning how to interact with those that think differently from us could help us understand ourselves much better.




