How Is Harry Potter Fan Fiction Connected to OpenAI?

Have you ever experienced the unsettling combination of vomiting and diarrhea simultaneously? I have, and during that unfortunate episode, I was listening to a fan-made audiobook of Harry Potter and the Methods of Rationality (HPMOR), a fan fiction crafted by Eliezer Yudkowsky.

No, the simultaneous bodily distress wasn’t triggered by the story, yet the two experiences are forever linked in my memory. Years later, I was surprised to discover how the 660,000-word fanfic I consumed while unwell intersects with influential figures in the tech world, particularly those connected to the recent OpenAI upheaval.

For example, 404 Media uncovered an Easter egg in HPMOR that I, a dedicated reader of the text, overlooked: a Quidditch player mentioned only once named Emmett Shear. Yes, that's right—the same Emmett Shear who co-founded Twitch and recently stepped in as interim CEO of OpenAI, one of the most significant companies of the 2020s. Shear was an admirer of Yudkowsky’s work, following the serialized story as it was published online, and received a cameo in the fanfic as a birthday gift.

Yudkowsky’s fan fiction has captivated many in the artificial intelligence sector, becoming his most prominent work. HPMOR reimagines the Harry Potter series with a single changed premise: Harry’s Aunt Petunia married an Oxford biochemist instead of the unkind Vernon Dursley. Harry grows up as a precocious, know-it-all child deeply committed to rationalist thinking, which prioritizes empirical, scientific approaches over emotional or religious ones. The story kicks off with Harry citing the Feynman Lectures on Physics in an attempt to resolve a dispute with his adoptive parents about whether magic exists. If you found the original Harry Potter frustrating for never asking the right questions, get ready for a version of Harry who could rival “Young Sheldon” in inquisitiveness.

It’s no surprise that Yudkowsky mingles with prominent figures in AI today, as he has been an AI researcher for many years. In a 2011 New Yorker piece about Silicon Valley's techno-libertarians, George Packer recounts a dinner at billionaire venture capitalist Peter Thiel's home. Among the guests were PayPal co-founders David Sacks and Luke Nosek, former Google engineer Patri Friedman, and, of course, Yudkowsky.

Additionally, a recent selfie taken by ousted OpenAI CEO Sam Altman features Grimes and Yudkowsky, showcasing their interconnected networks in the tech world.

While Yudkowsky isn’t as widely recognized as Altman or Elon Musk, he frequently pops up in narratives surrounding companies like OpenAI, and even in the romance that produced children named X Æ A-Xii and Exa Dark Sideræl. Musk once intended to make a joke about “Roko’s Basilisk,” a thought experiment about artificial intelligence that originated on LessWrong, the community blog Yudkowsky founded, only to find that Grimes had beaten him to it: her music video for “Flesh Without Blood” had already referenced a character called “Rococo Basilisk.”

HPMOR serves as more than just an inventive retelling; it acts as a recruitment tool for the rationalist movement, showcasing Yudkowsky’s philosophy. Through an engaging narrative set in the beloved world of Harry Potter, he demonstrates rationalist principles in action, showing how Harry overcomes cognitive biases to become an adept problem-solver. In the story’s climactic encounter with Professor Quirrell, who embodies the dark side of rationalism, Yudkowsky paused the serialized story and invited readers to submit rationalist solutions that would let Harry escape an apparently hopeless situation. Fortunately, the community rose to the challenge, and the story earned its satisfying resolution.

However, the central message of HPMOR goes beyond simply becoming a better rationalist. "Much of HPMOR emphasizes that while rationality can lead to incredible effectiveness, being highly effective does not preclude being incredibly evil," a friend who also read the fanfic explained to me. "Ultimately, rationality means little if your intentions are misaligned."

Perceptions of good and evil differ, of course, but the debates now roiling OpenAI turn on a fundamental issue of alignment in AI development. OpenAI aims to build artificial general intelligence (AGI) that aligns with human values, averting, say, catastrophic AI-induced scenarios. Ironically, this “alignment research” is Yudkowsky’s specialty.

In March, thousands of notable figures in AI signed an open letter calling on all AI labs to pause the training of systems more powerful than GPT-4 for at least six months. Among the signatories were engineers from Meta and Google; founders of Skype, Getty Images, and Pinterest; Stability AI founder Emad Mostaque; Steve Wozniak; and even Elon Musk, who resigned from OpenAI’s board in 2018. Notably, Yudkowsky did not sign the letter, instead publishing an op-ed in TIME Magazine arguing that a six-month pause did not go far enough.

“If a powerful AI is developed under current conditions, I expect that every human being and all biological life on Earth would face extinction shortly thereafter,” Yudkowsky contended. “There isn't a clear plan for how we could navigate such an outcome and survive. OpenAI’s stated intention is to delegate our AI alignment tasks to a future AI. Just learning that this is the plan should alarm any sensible individual.”

Yudkowsky’s calls for caution land amid a rapidly shifting landscape of AI leadership and competing ideologies. Emmett Shear, now interim CEO of OpenAI, has become one of the more influential voices in this discourse, sharing memes about the various factions in the AI debate.

The tech community has split into camps. Techno-optimists advocate unrestrained technological growth, trusting that any problems it creates will be solved by further technological advancement. Effective accelerationists (e/acc) echo that optimism while invoking the second law of thermodynamics as a rationale for their stance. Safetyists, or “decels,” push for slower, more cautious development and stronger regulation. Doomers, by contrast, believe that a sufficiently advanced AI will inevitably pose a lethal threat to humanity.

As a leading figure among the doomers, Yudkowsky has long associated with many of the people on OpenAI’s board. Speculation around Altman’s ousting held that the board wanted a leader more closely aligned with its “decel” leanings. Enter Shear, who, inspired by Yudkowsky, identifies with both the doomer and safetyist camps.

As the situation at OpenAI evolves, uncertainties prevail. The narrative shifts frequently, echoed in social media discussions surrounding decel versus e/acc ideologies. Amid this turmoil, I’m captivated by the realization that much of this complex drama can be traced back to an unexpectedly intricate Harry Potter fan fiction.
