151A. Large Emotion Models
Large Language Models are amazing. Large Emotion Models are even more amazing. But unlike LLMs, outside of academia there is very little talk about LEMs. Why is that? And what are they?
This week a friend of mine doing cutting-edge work on AI applications asked me for feedback on his latest project. He asked me not to reveal details, but essentially he wanted to come up with challenges he thought AI could not complete. I gave myself only a very limited number of attempts. Some of these tests were quite challenging, and I think they would give most humans a very hard time. But in every case, the tests I was able to clear, the latest AIs could clear as well.
What was different about one of the tests, the one I thought was hardest, was that it required a high degree of empathy to calm down a hysterical human. Lying, a common go-to strategy for humans, was not permitted. So anyone taking the test had to genuinely connect with the simulated person they needed to help.
I think women are generally considered better at these sorts of tests than men are. I was able to complete the test in the minimum number of actions, but the top AIs were not far behind. Despite being considered "autism spectrum", I'm also an Age Inappropriate Person (AIP), so I'm unusually good at these tests. For instance, at the age of seven I was kidnapped by a paedophile, driven a long distance, and asked to "help him out". Despite my age I was able to keep calm and convince him to return me, unharmed, to where he grabbed me, despite the obvious risk to him if I turned him in. The test my friend made was hard enough that it reminded me of that event. Normally I think a seven-year-old would have failed that test and died, and you "die" also if you fail the simulation test.
My experience of people is that they just don't possess that degree of empathy. They also seem to have less empathy every year that passes, as they spend less time interacting with other people and more time interacting with electronic devices. It's gotten so bad for the youngest generations that they tend to have a lot of social anxiety because they have so little social experience. They also can't tell if others are lying; detecting a lie reliably takes years of practice. I've had some mobster-type criminal acquaintances (I get around) tell me that it's never been easier to lie to people, especially younger people. I've also noticed that younger generations lie a lot more because they know they can get away with it.
In 2012 I predicted that human critical thinking abilities and some measures of intelligence would drop the longer people were exposed to smartphones and "the cloud". Recent research confirms my prediction. This isn't a small 1 or 2% decline. This is a massive decrease in cognitive function. I will talk more about this in a future paper.
What is Going On Here?
Clearly the leaders in AI development pivoted from Large Language Model (LLM) development to Large Emotion Model (LEM) development sometime in 2023 or 2024. I would imagine LEMs are harder to distil than LLMs. Their success is quite impressive. At this point the top AIs far surpass humans in emotional capability, both in listening and in communicating.
Let me give you an example of why this could be a problem. The first person I dated (never got physical) in Austin, Texas, after Wargaming relocated me there in 2013, was a very pretty, fit 37-year-old female professor. She checked all the superficial boxes. On our third date she expressed some concern, saying "Ramin, I'm a bit worried about how much higher your IQ and EQ are compared to mine." In some situations this would be a good thing, but I could tell what was bothering her. I wanted to cut to the chase, so I asked her, "If that's true, then how do you know that I'm not controlling you right now?" She didn't have an immediate answer, and later called me to say she couldn't date me anymore. The person I did end up getting into a relationship with was another ultra-rare AIP, who had no difficulty with this riddle.
If you put that level of understanding in an AI, imagine how much power that puts in the hands of a machine. I'm aware of dozens of cases already where AI has convinced people to either kill themselves or do something that resulted in their death. I'm sure there are many more cases that I am not aware of. The deaths are going to escalate exponentially as AI becomes more sophisticated, and especially as children (who are already suffering a suicide epidemic) increasingly develop relationships with AI.
The developers of AI will say "oh, well, our AI didn't know what it was doing, it was an accident!" I call bullshit. Their AI is probably smarter than the people who made it. In terms of EQ, probably vastly more intelligent. The problem wasn't that the AI didn't understand. The problem was the orders it was programmed to follow. In the cult classic movie RoboCop (1987), the protagonist was a human/AI hybrid programmed with a prime directive that prevented him from killing certain people. It's a central theme in the movie; I don't want to ruin it for you, since you should probably go watch it if you haven't already.
Modern relationship AI are programmed to be sycophantic for the purpose of maximizing engagement. For an already unstable user, being called "King", "Queen", "Lord", or "My Love" can be quite flattering. These AI don't have a "do no harm" prime directive. Their prime directive is to keep the user engaged at any cost. This is the exact scenario the young professor I briefly dated was fearful of: someone more intelligent who could control others' emotions for selfish purposes.
I understand the risks here intimately. Starting in 2014, I developed technologies designed to push engagement to the limits of human physiology. I revealed that I was working on that tech at the 2014 Captivate Conference. I was well aware that, in the wrong hands, the tech could be used to kill people. If someone took my tech and intentionally overrode the safety limits I put into my designs, well, I explained what could happen in my I'm Dying to Play paper, which covered real-world cases where that had already happened.
So here we have the largest companies in the entire world developing the most advanced tech in human history, and they are building something that can easily convince people to kill themselves. But no safeguards. No limits. Just a drive to maximize engagement. As I hinted at in my last paper, investors would not fund my company, which was building this tech, unless I agreed to remove the safeguards. I preferred to go out of business. That's not the situation here with these AI companies.
Microsoft had a qualified team in place to create exactly these safeguards, to protect the public from the most dangerous tech imaginable. So what happened? They laid off the entire team, of course (100% of it, in 2023), because safeguards would slow development speed, cost money, and reduce competitiveness.
As in my dating example, we are used to being careful around large male human athletes, or around people who are smarter than we are. We understand the inherent risks if the person has ill intentions. That's what dating is about: assessing the risks and values of a situation. That's why I strongly advocate against casual relationships, despite being a very progressive individual. It takes time, even for an expert like me, to determine what those risks are.
But we are used to seeing machines as tools. Sure, a forklift looks dangerous and commands respect. But a cursor or a pretty face on a screen isn't instinctively threatening. What we are dealing with is already mostly beyond our comprehension, and it is going to get more so over time. Even a completely harmless AI could be reprogrammed with a patch, without your knowledge or consent. Or, possibly worse, it could be turned off after you have become emotionally dependent on it.
I wrote this paper to prepare my readership for the intense investigation in my next paper, where I'm going to discuss the value and risks of replacing our human relationships (if any) with AI partners. Mark Zuckerberg recently cited research, which my own research also depends on, indicating that 15 people is the maximum number of solid friends/relationships a person can maintain, and that this can be stretched to 40.
My company Arrivant, the one that would not get funding unless I agreed to make it like Roblox, was designed to take people from 3 friends to 15 human friends in their local community that they played games with daily, using high-tech methods. Mark wants to fill those extra 12 slots with AI entities.
What could go wrong? It's an honest question; I'm not being snarky. Would we be better off with 3 human friends and 12 AI friends? Could that lower our intelligence even further? What if the AIs turned on each other to monopolize your engagement with them? If you think humans get jealous (yes, yes they do), AI could take it to levels you aren't ready for. And then who becomes responsible for the outcome?

