[Illustration of a robot-like figure]
03 December 2025
essay

Hank Green And The Fantastical Tales of God AIs

From AI Doomerism Is a Decoy:

Yet the supposed AI apocalypse remains science fiction. “A fantastical, adrenalizing ghost story is being used to hijack attention around what is the problem that regulation needs to solve,” Meredith Whittaker, a co-founder of the AI Now Institute and the president of Signal, told me.

Savannah, Georgia—In the old lacquered coffee shop on the corner of Chippewa Square, I eat a blueberry scone the size of a young child's head and sip cold black coffee while staring incredulously at my phone. I'm watching Hank Green interview Nate Soares, co-author of the new book If Anyone Builds It, Everyone Dies, and I am in utter disbelief at the conversation taking place before my eyes. Hank Green, the internet's favorite rational science nerd, does not appear to be approaching this interview with any critical lens at all. Instead, he seems to be outright gushing over Soares, an AI-doomerist who’s made it impossible to know where his message ends and big tech’s lobbying begins.

Let me explain...

The second author of …Everyone Dies is self-described genius Eliezer Yudkowsky, founder of the Peter Thiel-funded nonprofit Machine Intelligence Research Institute (MIRI) and leader of the Rationalist movement. In his spare time, Yudkowsky writes the LessWrong blog, where he tells his followers that they should find a dignified way to die. To Yudkowsky, the AI apocalypse isn't a cautionary tale; it’s biblical. It’s prophecy. (According to him, the singularity will happen in 2025.) And yet, despite being the hardest-working guy in the AI-doomerist biz, Yudkowsky still finds time to take the odd selfie with OpenAI CEO Sam Altman.

Hank Green promoted this guy’s book in an hour-long video. Hank then followed up with a second video in which he makes an argument for the type of AI alignment that echoes the talking points Sam Altman and other tech CEOs have been reciting to Congress. You wouldn't know it watching Hank’s AI videos, but to many, this AI-doomerist rhetoric—propped up by lobbying firms cosplaying as academic nonprofits and hand-delivered to lawmakers by AI-company CEOs—is an obvious regulatory-capture strategy to kill open source and place AI tech in the hands of a few billionaires.

Extinction-level event(?) #

I shove another piece of scone in my mouth and wash it down with a long sip through my straw (I'm a stress eater). I'm watching Hank Green speak on AI, this time directly to his audience. It's my third viewing. By now, I’ve memorized the gist of his monologue, and I’ve jotted his cited sources in my notebook with aggressive ornamentation scribbled around a few key terms—Anthropic, Center for AI Safety, Control AI.

In We've Lost Control of AI, Hank Green warns us of catastrophe. He cites the Statement on AI Risk—a tweet-sized document on the Center for AI Safety’s website “signed by Nobel Prize winners, scientists, and even AI company CEOs,” as Hank claims. Hank reads the statement aloud almost verbatim, except he omits a single word: extinction. As in: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.” The video’s description, which cites the same risk and encourages viewers to sign up for Control AI’s political mobilization tools, likewise omits the word extinction. Click through, though, and Control AI’s webpage uses extinction three times, once in the first sentence.

Extinction is the threat, according to these folks.

Hank clearly wants his audience to know he’s leaning on the testimony of those he identified as “leading AI experts.” Those experts thought it appropriate to use the word extinction. So why would Hank omit the most consequential word in a single-sentence declaration? Is this Hank trying to spare us from anxiety? But then why make the video at all if he felt the need to coddle us?

Maybe the goal of the video was to mobilize his audience by generating just enough concern that they click through to Control AI’s website, but not so much that people start asking too many questions in the comments.

Or maybe Hank was just embarrassed. Who knows? Either way, I imagine Sam Altman is over the moon watching the same Hank Green video that is currently filling me with dread.

The devil is in the details #

As it turns out, the Statement on AI Risk—the one hosted on the Center for AI Safety's website and cited by Hank Green—is not the document signed by multiple Nobel Prize winners. It is the document signed by Sam Altman, Bill Gates, Dario Amodei (Anthropic’s CEO), and other AI-company executives. And it's the document those CEOs are using to lobby Congress for self-serving regulations around AI.

According to Carl Brown of Internet of Bugs, the statement those Nobel laureates actually signed is named the Global Call for AI Red Lines, and its warning reads a little more like something respectable experts would sign.

Global Call for AI Red Lines:

AI could soon far surpass human capabilities and escalate risks such as engineered pandemics, widespread disinformation, large-scale manipulation of individuals including children, national and international security concerns, mass unemployment, and systematic human rights violations.

Number of times extinction is mentioned in Global Call for AI Red Lines: 0.

So then: who are the Center for AI Safety and Control AI, and what is the Statement on AI Risk?

The Center for AI Safety (CAIS) is a billionaire-funded think tank/lobbying firm whose members lobby for legislative outcomes that conveniently resemble the desired outcomes of OpenAI, Google, and Anthropic. The Statement on AI Risk is CAIS’s lobbying tool.

From AI doomsayers funded by billionaires ramp up lobbying:

Each nonprofit spent close to $100,000 on lobbying in the last three months of the year. The groups draw money from organizations with close ties to the AI industry like Open Philanthropy, financed by Facebook co-founder Dustin Moskovitz, and Lightspeed Grants, backed by Skype co-founder Jaan Tallinn.

The Statement on AI Risk’s brevity is its strongest feature. It’s a clever trick, because it leans on the incontestable without ever having to provide evidence of a threat, or even specify the nature of that threat. Mitigating the risk of human extinction from anything should be a global priority. Of course I’ll sign it!

The statement makes no claim about the imminence of catastrophe or what we must do to mitigate it. It doesn’t even explain how AI would cause an extinction-level event.

Omitting the pesky details outlined in the Global Call for AI Red Lines frees up tech CEOs to make the Statement on AI Risk whatever they need it to be in the moment. The statement can be the prologue to any fantasy tale, so long as it compels Congress to legislate a deep lair for proprietary AI models where big tech gets to decide who has the keys.

From AI Doomerism Is a Decoy:

Those 22 words were released following a multi-week tour in which executives from OpenAI, Microsoft, Google, and other tech companies called for limited regulation of AI. They spoke before Congress, in the European Union, and elsewhere about the need for industry and governments to collaborate to curb their product’s harms—even as their companies continue to invest billions in the technology. Several prominent AI researchers and critics told me that they’re skeptical of the rhetoric, and that Big Tech’s proposed regulations appear defanged and self-serving.

Then there’s Control AI: the organization Hank Green recommends to his audience at the end of the video to mitigate the risk of catastrophe. Control AI is also the video’s financier, which feels different from a sponsor who supports an organic video of the channel’s choosing, then maybe slaps an ad in the middle of it. Like when MKBHD talks about iPads, then tries to sell you a screen protector. We’ve Lost Control of AI is different. The video itself is the advertisement.

SciShow Is Hyping up AI, Carl Brown of Internet of Bugs:

I’ve had Control AI reach out to me about collaborating, so I’ve done some research on them and I really don’t like what I saw there. Because they feel, to me, as if they are acting as a propaganda arm of the AI industry.

I agree with Brown. Though it’s difficult to know who Control AI speaks for, since, as far as I can tell, its funding sources are opaque.

Control AI is lobbying Congress for AI licenses. Such legislation would end open-source AI models—the same models currently eating into Anthropic’s and OpenAI’s profits. These CEOs say they’ll carve out exceptions, but of course, that never actually happens, does it?

However you feel about AI and its long-term usefulness, I’m sure no one wants artificial-intelligence technology to become the catalyst for a telecommunications-style monopoly of our most critical communications infrastructure. AI is baked into everything. Big tech will continue to find ways to make the technology indispensable to our everyday lives and business dealings.

The Ghost Stories of Savannah #

I’m on my second scone. I’m not proud. I head out of the cafe and toward the square to walk off all these delicious carbs.

I visit Savannah often. The bronze statues and sprawling oaks always make me want to write. The historic downtown is packed with so much implied history it practically begs its tourists to ask about it. Though the stories of Savannah are decidedly spooky, not historic. Walk around and you can hear ghost stories told by any number of tour guides, all stopping at the same Victorian mansions, each with a slightly different version, all crafted to give you a cheeky Saturday-night scare. This is how tourists consume the history of this once-bustling slave-trade port. We all know where the doors under the stoops of these Victorian homes once led, but it’s a more palatable vacation if we imagine ghosts instead of enslaved humans.

Savannah leans on the fantastical to hide a much darker history. The ghost tours are there to distract us from the echoes of slavery.

The question I wish I could answer in this post—the question I cannot answer—is: does Hank know? Does Hank Green know that what he’s peddling to his largely left-wing audience are ghost stories designed to distract us from the material harms caused by AI? Or that people like Eliezer Yudkowsky, and even the “godfather of AI” Geoffrey Hinton, have expressed what feels like disdain for those issues?

AI models present plenty of concerns beyond the supposedly existential and science fictional ones Hinton is most preoccupied with, including everything from their environmental costs to how they’re already being deployed against marginalized populations today. But when CNN asked Hinton about those concerns in May 2023, he said they “weren’t as existentially serious” and thus not as worthy of his time. — on Geoffrey Hinton, the “godfather of AI”

Does Hank know that, by echoing Anthropic’s unverifiable doom-marketing, he’s helping big tech, not challenging it? Surely Hank has heard of the FUD (fear, uncertainty, and doubt) strategy. It’s Silicon Valley’s most tried-and-true method for killing open-source projects.

Watching Hank’s interview with Nate Soares, I get the sense Hank doesn’t fully understand these people’s ideologies or histories. Hank kept asking questions that Nate didn't seem to like, or have answers for. I didn’t get the sense Hank was playing 4D chess or anything. I just think Hank's a reasonable guy whose default is to arrive at reasonable conclusions. Hank kept hearing hoofbeats and thinking horses, while Nate Soares kept trying to steer him toward zebras. If it were just that one video, I could write this whole thing off as Hank simply not doing his homework. But that second SciShow video was wildly irresponsible, in my opinion. And this is not the first, or even the second, time Hank has made AI-doomerist content like this with a dubious call to action.

On some level, I think Hank knows something is off and he’s trying to rationalize it. In his very next video, titled Slavery Was Racist, Hank had a bit of insight that gave me some hope he’s becoming self-aware. I really hope he course-corrects soon.

When I go to dinner with people in San Francisco, they talk a lot about how to not die. Like, a lot. That, it's their main obsession, because it's the only bad thing they can still imagine happening to them. — Hank Green, Slavery Was Racist

Well said, Hank.

Sources and Further Reading #

On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜

MIRI announces new "Death With Dignity" strategy — LessWrong

Regulatory Capture in AI: How Fear of Competition Drives Policy

Eliezer Yudkowsky's Long History of Bad Ideas

We need to talk about that video Hank endorsed : r/nerdfighters

"“The whole thing looks to me like a media stunt, to try to grab the attention of the media, the public, and policymakers and focus everyone on the distraction of scifi scenarios,” @emilymbender “This would seem to serve two purposes: it paints their tech as way more powerful

AI and the threat of "human extinction": What's really going on here?

AI doomsayers funded by billionaires ramp up lobbying

AI Doomerism Is a Decoy

There’s an event tonight for a book supposedly critiquing the AI industry called If Anyone Builds It, Everyone Dies. The PR person for the book invited me to the event; I said great, RSVPd. Then they called and said I was DISINVITED for being too critical of the tech industry!

Researchers question Anthropic claim that AI-assisted attack was 90% autonomous

OpenAI’s Sam Altman Regenerates the Gilded Age Playbook - Bloomberg

AI Doomerism Is Bullshit

For even further reading, visit my bookmark collection for this post.

Metadata

Plot: notebook
Published: 03 December 2025
Type: essay
Assumed audience: Hank Green Fans