127. EOMM: The Big Lie That Was Used to Exploit Hundreds of Millions of Gamers
How one of the world's largest game companies used seven professors to create the Engagement Optimized Matchmaking Framework, which determines whether you will win or lose before you even enter the game.
[Because this will no doubt become a legal document at some point, this post is very long and detailed. If you are reading this via email you may need to go to the substack source to read all of it]
Gamers often play online to interact with other people in a world that is merit-based, unlike the “real” world they live in. But the unfairness that permeates “real” life has infected gaming. You can feel it even if you can’t articulate it. But what if bad actors would do anything to get your money, even deciding whether you win or lose before you enter the game, using an algorithm designed to psychologically manipulate you? And this is not casual psychological manipulation: it is a PsyOp designed by seven professors to be applied automatically to anyone playing online through a matchmaker.
This magical money-making technology might have been interesting in a Machiavellian kind of way if it actually worked. My precision dissection of the research that led to this “tech” and two patents shows that the entire endeavor was a fraud involving multiple professors, universities, and companies. The victims could total 700 million gamers or more who have been exposed to this tech since 2016.
What is EOMM?
The Engagement Optimized MatchMaking Framework is the product of seven computer science professors backed by Electronic Arts and the University of California, Los Angeles (UCLA), my alma mater.
The original research can be found here.
The involved “scientists” were:
Dr. Yizhou Sun (UCLA)
Dr. Magy Seif El-Nasr (Northeastern University)
Dr. Zhengxing Chen (Northeastern University)
Dr. Navid Aghdaie (Electronic Arts)
Dr. Su Xue (Electronic Arts)
Dr. Kazi A. Zaman (Electronic Arts)
Dr. John Kolen (Electronic Arts)
Dr. Kolen was helpful enough to list the patents on his LinkedIn profile, adding a key missing piece to this mystery.
Electronic Arts filed two related patents, 1 and 2, both initially filed in 2016. The research supporting the patents and the tech was not published until 2017, suggesting that the tech was secretly applied as early as 2016, with the “research” created afterward to support the legitimacy of the patents. This creates a potential conflict of interest in which the researchers had to arrive at a prescribed result before even starting, which is exactly what I believe happened here.
So the idea here is that “fairness” is obsolete and not what gamers really want. They want to be treated unfairly, and that will keep them coming back for more. The word “significant” has a special mathematical meaning in research papers, and this is one of the conditions I will demonstrate was falsified.
So this becomes a mathematical problem that can be solved effortlessly by an algorithm. How to maximize the amount that a player plays before churning?
W = Win, L = Loss, D = Draw. You can see that even the researchers’ own data shows that evenly matched games have the lowest churn rates. The researchers then proceed to try to mathematically gaslight you into thinking the opposite is true.
Pay attention to the churn rates here as I will use them to reverse engineer the real data later on, showing that they used manipulated data to prove their preconceived research conclusion.
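To make the mechanism concrete, here is a minimal sketch of what an “engagement optimized” matchmaker’s objective looks like: instead of minimizing skill gaps, it picks the set of pairs with the lowest total predicted churn. The pairing logic mirrors how the paper frames the problem; the risk model, player attributes, and numbers below are placeholders I made up for illustration, not anything taken from the paper or the patents.

```python
# Sketch of an engagement-optimized pairing objective (illustrative only).
def all_pairings(pool):
    """Enumerate every way to split an even-sized pool into pairs."""
    if not pool:
        yield []
        return
    first, rest = pool[0], pool[1:]
    for i, partner in enumerate(rest):
        for tail in all_pairings(rest[:i] + rest[i + 1:]):
            yield [(first, partner)] + tail

def engagement_optimized_match(pool, churn_risk):
    """Return the pairing with the lowest total predicted churn risk.
    The paper frames this as a minimum-weight matching over the player graph."""
    return min(
        all_pairings(pool),
        key=lambda pairs: sum(churn_risk(a, b) for a, b in pairs),
    )

# Illustrative use with a made-up risk model (NOT the paper's numbers):
players = [{"id": "A", "skill": 10}, {"id": "B", "skill": 11},
           {"id": "C", "skill": 30}, {"id": "D", "skill": 31}]
toy_risk = lambda a, b: 0.05 - 0.01 * min(abs(a["skill"] - b["skill"]), 3)
print(engagement_optimized_match(players, toy_risk))
```

With that made-up risk model the optimizer pairs A with C and B with D, the mismatched pairs, rather than the even matches a skill-based system would pick. Whatever the trained churn model prefers is what players get.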
This is from the first patent. So it plans your matches for the day in advance, when you first log in, to give you the correct sequence of wins and losses to maximize your engagement via win/loss contrast. Your wins and losses are thus predetermined by EA, not by you.
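As a purely illustrative sketch of what “planning the day’s outcomes in advance” could look like: the scoring heuristic below, which rewards win/loss contrast, is my own invention for illustration and is not language or logic taken from the patent.

```python
# Illustrative only: pre-plan a session's win/loss sequence before any match is played.
from itertools import product

def contrast_score(sequence):
    """Toy engagement score: count alternations between wins and losses."""
    return sum(1 for prev, cur in zip(sequence, sequence[1:]) if prev != cur)

def plan_session(n_matches=5):
    """Pick the win/loss sequence with the highest 'contrast' score."""
    return max(product("WL", repeat=n_matches), key=contrast_score)

print("".join(plan_session()))   # prints WLWLW -- outcomes chosen before you play
```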
In psychology this mirrors an “engagement” system using “highs and lows” (wins and losses) called Fractionation:
This is a technique adopted by pick up artists.
This raises the question: does Electronic Arts genuinely care about gamers’ well-being, or are they just trying to get into your wallet?
My regular readers will recognize this as a dopamine cycle, and this form of psychological manipulation, generated by an algorithm, makes the product a Digital Drug. It confirms that EA has been secretly testing and selling Digital Drugs since 2016. I’ve been warning regulators about this since 2013, but this is the first piece of hard evidence.
From the second patent:
This sort of secret manipulation is strictly and harshly forbidden by the EU’s Artificial Intelligence Act, with bankruptcy-strength penalties:
Yes, Yes, Yes.
Enforcement of Prohibited Acts will start 6 months after ratification.
7% of $11.56B is $809M. Electronic Arts has a big problem.
What Are They Really Testing?
Note that since the researchers don’t tell us which game this data came from, it’s likely that EOMM was already being used to alter the “churn” results. And since EA tries to keep its use of EOMM secret, they would not want the public to know which game it was.
And apparently at least 1.68 million unique EA customers were subjected to psychological experimentation without their knowledge or consent in just six months of 2016. By 2024 that number has to be in the tens or even hundreds of millions, including children.
“Churn” here is defined as quitting for at least 8 hours after a match, so the same person can be tested with different matchmaker types on different days. It is important to note that this definition of churn, an 8+ hour pause for rest, is swapped for the industry-accepted definition of churn (completely quitting the game) when the conclusions are drawn. These are two totally different things, which makes the conclusion mythological. What the test is really measuring is fatigue, since this is the moment when the player stops to rest.
An even match, such as when players are matched by skill level, is going to be the hardest to win or lose and will likely be the longest match in terms of minutes played, unless the match always has a fixed duration. We don’t know, because the researchers have hidden the source data and even the source of the source data. This is extremely unusual in research because it prevents cross-examination and proper peer review.
An extremely unfair match, as the EOMM system ensures, would result in either the fastest match, or at least one where the least effort is involved. If your opponent is much weaker, you don’t have to try hard to win. If your opponent is much stronger, you don’t have to try hard to lose. It’s immediately obvious what the result is going to be to both players when the skill discrepancy is large.
A harder match, such as a perfectly even skill match, is going to fatigue the player faster than any other condition. Since they are specifically testing for fatigue (but just name swapping this for “Churn”) it is expected that skill based matchmakers would create the most fatigue and the rest state that is measured as a churn indicator.
In the final conclusion they casually swap this contrived churn state (which is actually a fatigue state) for the industry-accepted churn definition, in an attempt to deceive the reader. It works: I had the paper read by several of my industry peers and by AI, and none of them detected the definition swap.
Thus the research is entirely unrelated to the conclusions made by the authors.
This is the conclusion of the paper. It switches to the definition of
Retained = 100% - Churn%
But this is only true where “Churn” means “not retained.” Under the industry definition, the churn rate in the experiment would be 0% for all players, because all of them kept coming back for play sessions across the four different types of matchmakers.
AI failed to detect this definition swap. My highly skilled game designers who follow my posts failed to detect the definition swap. I would classify this as a very sophisticated deception.
Data Manipulation
We could stop here and declare that the “research” was fatally flawed and should be ignored. We would then leave the question of whether this was incompetence or fraud unresolved.
I prefer to resolve this final question here and now.
This requires very detailed analysis.
Significance (Data Manipulation Case 1)
In a scientific paper, when we use the term “significance,” what we mean is: is A significantly different from B? If not, the results don’t matter. The researchers make it clear that they performed the required significance tests, which are involved mathematical checks that I used to have to do by hand back in the old days. So I know how they work.
Here the authors claim they did all the relevant significance tests and found only two pairings too similar to show a significant difference. By including “Skill vs. Random,” it is clear that they are not just considering the pairings involving EOMM.
As the number of rounds was set very high (10,000), this is an attempt to make all the numbers significant, since more rounds allow even small differences to register as statistically significant.
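To make the sample-size point concrete, here is a back-of-the-envelope two-proportion z-test. The 0.7-point retention gap and the per-condition sample sizes below are my own illustrative assumptions, since the source data is not disclosed; the only point is that the same fixed gap flips from “not significant” to “significant” purely by adding samples.

```python
# How sample size alone drives "significance" for a fixed, tiny gap.
from math import sqrt

def z_stat(p1, p2, n):
    """Two-proportion z-statistic with equal-sized groups."""
    pooled = (p1 + p2) / 2
    se = sqrt(pooled * (1 - pooled) * 2 / n)
    return (p1 - p2) / se

for n in (1_000, 10_000, 100_000, 1_000_000):
    z = z_stat(0.525, 0.518, n)   # same 0.7-point gap every time (illustrative numbers)
    verdict = "significant" if abs(z) > 1.96 else "not significant"
    print(f"n={n:>9,}  z={z:5.2f}  {verdict}")
```

The z-statistic grows with the square root of the sample size, so with enough rounds any fixed gap will eventually clear the 1.96 bar.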
The authors say that the pairings in “1” and “2” above were too close together to be significantly different. Okay, we have to take their word for it, since they are not disclosing the source data. But then what about pairings “3” and “4”? Those are almost identical and surely not significantly different either.
Something is very wrong here. And if one of the authors did this without the consent of the others, then the others simply green-lit it without questioning it. Of course, they may have had a financial incentive to look the other way, since this was sponsored “research.” AI flagged this immediately as a conflict of interest.
I personally question whether any of these numbers were significant as the differences are very small.
Cherry Picking (Data Manipulation 2)
The underlined statement is blatantly fraudulent. The authors took the P = 200, 300, 400, and 500 differences and divided by 4 to get (0.3 + 0.9 + 1.1 + 0.6) / 4 = 2.9 / 4 = 0.725%. This is an extremely small number. It is also a fraudulent number. The authors simply dropped the P = 100 difference because it was not favorable to their predetermined outcome. The P = 100 difference is 51.90 / 52.52 = 0.988, a differential of -1.2%. Done properly, you get (-1.2 + 0.3 + 0.9 + 1.1 + 0.6) / 5 = 1.7 / 5 = 0.34%, which is a microscopic difference and surely not significant by any measure.
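For anyone who wants to rerun the arithmetic, here is the same averaging done both ways, using the five differentials quoted above:

```python
# Redoing the averaging from the paragraph above (differentials in percentage points).
reported = [0.3, 0.9, 1.1, 0.6]         # P = 200, 300, 400, 500 (what the authors averaged)
full     = [-1.2, 0.3, 0.9, 1.1, 0.6]   # including the dropped P = 100 differential

print(sum(reported) / len(reported))    # 0.725 -> the paper's number
print(sum(full) / len(full))            # 0.34  -> what honest averaging gives
```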
Time Dilation (Data Manipulation 3)
Imagine if this were true: that 15% more players would be retained every 8 hours using EOMM. This is such a fantastic claim that it would justify the expense and complexity of using EOMM, and of ignoring all the ethical and moral issues, in every AAA company on Earth. Which is why it’s probably now being used in every AAA company on Earth. The reality is that this isn’t just a made-up number, it’s a fraudulent number. My designer/economist peers didn’t detect it, and AI did not detect it. So what is the chance that a CEO will detect it?
For this 15% to be true you would have to:
Ignore the definition swap from Churn to Retained, already discussed,
Ignore the fraudulent 1.007 number, already discussed,
Assume that 20 rounds fit into 8 hours. (A reconstruction of how these pieces combine into the 15% figure follows below.)
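Here is my reconstruction, and it is an assumption on my part since the paper never spells it out, of how those pieces most plausibly combine into the headline 15%: compound the cherry-picked per-round edge over 20 rounds.

```python
# Assumed reconstruction of the headline figure (not spelled out in the paper).
per_round_edge = 1.00725        # the ~0.725% cherry-picked EOMM-vs-skill edge per round
print(per_round_edge ** 20)     # ~1.15 -> "about 15% more players retained"

honest_edge = 1.0034            # using the full five-point average (0.34%) instead
print(honest_edge ** 20)        # ~1.07 -> and even this rests on the definition swap
```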
But what is a “round”? You may assume you know, but the term is not defined anywhere in the paper. You would be excused for assuming it is one game. But that is the one assumption we can show is not true. Here is why. Remember this chart?
The “Churn” (pause) rate is ~3.6% on any particular game.
But a game is not a “round.” We know that the “Churn” rate on a round is ~48%. Some quick-and-dirty math: 100% - 3.6% = 96.4%, and 0.964 raised to the 18th power gives 51.7% retention.
So a “round” is actually about 18 games, or perhaps 8 hours of pure gameplay!
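The same quick-and-dirty math, written out so anyone can rerun it:

```python
# Back-calculation from the two charts: if ~3.6% of players pause after any single
# game, how many games does it take for ~48% of them to have paused (the per-"round" rate)?
from math import log

per_game_retention  = 1 - 0.036    # ~3.6% pause rate per game
per_round_retention = 0.517        # ~48% "churn" per round

games_per_round = log(per_round_retention) / log(per_game_retention)
print(games_per_round)             # ~18 games per "round"
print(0.964 ** 18)                 # ~0.517, confirming the figure above
```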
So when the authors claim you can “play 20 rounds … in 8 hours,” they know this isn’t possible. That would be roughly 360 games. But they want you to think it is. So many fraudulent claims and semantic tricks were used to come up with that 15%, which is a clearly impossible number. And of course none of this research tests for what you and I would call churn, so the entire paper is meaningless.
Except…
What This Paper Shows
This paper does not show any of the things the authors claimed it does.
We know Fractionation works, how it works, how long it works, and the behavioral outcomes it predicts. If someone (usually a “(sex) player,” but in this case a AAA game developer) puts you on an emotional rollercoaster, it will suck you in initially. If you ignore all of the blatant ethical issues and the potential damage to the target (both of which were ignored here, and both of which AI predicted), there will be an initial spike in engagement. That’s why players use this technique. The engagement wears off quickly once the victim becomes fatigued and traumatized enough to realize something is wrong. That’s usually long enough to get into the victim’s pants (or perhaps wallet) before they churn out, which is why the technique is used.
And then you get numbers like this:
Realize that’s with a large marketing budget trying to replace churned players. Churn rates per player over 3 months could be 80% or higher. Did EA get into your wallet before you churned? For EA’s sake I hope so. For the consumer’s sake, I hope they realized what was happening before they unzipped.
This is not random. I was able to predict it just from reading the research paper, and so was AI. Your typical CEO? Apparently not. I don’t blame the CEO for this failing, as the seven PhDs involved worked hard to be convincing, and people have a natural tendency to trust PhDs. I talk about why you should not automatically trust PhDs in my paper on Scientific Cynicism.
But I would blame the CEO for the moral failing, intentional harm to consumers, and deliberate attempts to hide what they knew were immoral behaviors (transparency issues).
It is reasonable to believe that this technology, and the similar technology patented by Activision in 2015, have propagated all across the AAA companies under the ESA umbrella. Since I am in dopamine recovery, I don’t play these sorts of high-dopamine games, and I’m exceptionally good at detecting matchmaking anomalies when I do play. I only found out thanks to tips from some observant gamers, with some help from AI and Dr. John Kolen’s patent brag. The other six professors were smart enough to keep these patents off their LinkedIn profiles.
If you could just pump out any lame Game as a Service (GaaS) and sprinkle your magic automated Fractionation dust onto it for massive engagement guaranteed by psychological manipulation, who would resist? Other than a moral person. Hence the recent explosion of GaaS all across AAA that gamers look at and say, “Why?!?”
Now you know why.
But it was all a scam, and so many people were scamming each other that it was just one big clusterscam. And the EU is going to Doomhammer all of this next year with the Artificial Intelligence Act. We may not know how many consumers were harmed, but if AAA can’t make this all disappear before next year they will be looking down the barrel of discovery motions.
Then we will know.