I write (and think) about AI for a living. In any given 30-minute period, I waver between worrying that AI will destroy everything I know and love, and believing -- or at least wanting to believe -- that it could change humanity for the better.
Dread turns into optimism, which seeps into ambivalence, which then turns back into dread-induced cynicism. Rinse, repeat. Goodness, my central nervous system needs a break.
That debate is at the heart of a new documentary arriving in theaters today, March 27. The AI Doc: Or How I Became an Apocaloptimist (104 minutes) premiered at Sundance in January and later screened at SXSW. The film explores the wild industry and mind-melting world of artificial intelligence. It takes an unflinching look at the tension between those who feel extreme doom and those who feel extreme optimism about the AI boom, and how to make sense of that polarity.
The documentary's two directors, Daniel Roher and Charlie Tyrell, were soon-to-be fathers during the filmmaking process, their kids born a week apart. Through the lens of fatherhood, the documentary makes use of hundreds of interviews, both onscreen and offscreen, with key technology and risk experts worldwide -- from OpenAI CEO Sam Altman to Dan Hendrycks, executive director of the Center for AI Safety -- to explore whether AI is the greatest existential threat we've ever known, or the most singularly exciting technology we've ever known, or something else entirely.
Roher won the Academy Award for Best Documentary Feature for Navalny (2022), and Tyrell was on the Oscar shortlist for his documentary short My Dead Dad's Porno Tapes (2018). The AI Doc was also produced by the teams behind Everything Everywhere All at Once (Daniel Kwan and Jonathan Wang) and Navalny (Shane Boris and Diane Becker).
I spoke with Tyrell this week, before the documentary's theatrical release, to discuss fatherhood, the two-and-a-half years of making this documentary, inspirations, goals and society's future with AI.
The interview below was edited for length and clarity.
I know you've made documentaries before, but how did you prepare, going from a deeply personal short documentary to a documentary like this, one that looks at something as big and impactful as AI?
Tyrell: I mean, there was no preparing. Daniel Roher is the one who brought me into this film, and I can't remember how many features he had made before this, but more than me. And it was just confidence in each other. And not just in Daniel Roher, but in the rest of the team to be going through it together and kind of, "We don't need to have a plan, we'll make the plan as we go." And not necessarily being cavalier about it, but just knowing we had a job to do and a goal, and just keep moving forward toward that.
So how did I navigate? Just with faith in the people around me. Coming from a personal short before this, I still tried to apply a lot of my personal sensibilities and POV to this story. It's through the lens of fatherhood, and I became a father the same week that Daniel did. So a lot of his feelings were my feelings, and vice versa.
I was really touched by the fatherhood lens. It was very tender and took me a little by surprise. Was that an organic process, or did you know going in with Daniel that it would be the framing?
Tyrell: It happened quite organically, but also so early in the process. I think it was in our first or second group meeting with Dan Kwan and Jonathan Wang and Shane Boris that it was presented as an idea of a way we could go about this. And we started kind of entertaining it right out of the gate.
And you said Daniel is the one who brought you on. Do you think your shared upcoming fatherhood was part of that?
Tyrell: Definitely. I can't recall if this project came up before or after we were aware of each other's babies around the corner. But definitely. I lean into serendipity, and I believe that Daniel does, too. So it was nice to have a companion when you know you're going to go through a thing like a behemoth of a feature film, for a behemoth of a topic like AI. And to know that, "OK, I'm going to be going through this other huge thing in my life of having a kid," and, "OK, someone else is going to be sharing that experience a little bit." It was just so reassuring to know that.
Of course, you have the panic of "how am I going to be able to navigate my job with a kid?" And just knowing that wasn't going to be done all by myself gave me quite a sense of security. And actually, my kid is in the film a couple of times. There are some snuck-in frames and moments in there.
In an interview with CBS, you said a goal was making AI more democratic. Who do you think really benefits from the current AI boom, and who gets left out?
Tyrell: Well, one of the first people to benefit is going to be the tech industry, and these valuations that are happening for their companies for these, in some cases, absurd, unheard-of amounts. It's making a lot of people very wealthy, and it's making a lot of people very powerful. So that's one of the first who benefits.
And then there are the people it's not benefiting. Speaking to data centers, people are losing some of their resources that they need, like water. Some people are being displaced from their homes for these data centers. I'm mostly just speaking to the Western world and North America and the United States specifically. It's a tricky thing and overwhelming sometimes to follow the back end of this technology ... In this field, there are spaces in the world where there are individuals looking at screens and upvoting and downvoting data [to train AI], and some of it is horrific material to look at. There's still a human being assessing what's going into [data sets] and being exposed to, in some cases, some awful material and awful media -- and not being paid very well to do it.
Was there a certain perspective that most stood out to you during the process of making this documentary? Was there one person in particular who just really had a ton to say that really stuck with you?
Tyrell: The film, including the experience of making it, really was a chorus of voices. But one that really does stand out for me, just off the cuff, was Deb Raji [a computer scientist and researcher at UC Berkeley, specializing in algorithmic auditing]. She was really able to speak to the ways this technology is deployed, at the pace it is, without the regulation that maybe it should have. Right now, today, there are people who are becoming victims because of the faults of the technology. There are people who are ending up spending the weekend in jail because facial recognition software powered by AI misidentified them, confusing them with someone who did commit a crime.
As this technology gets deployed into things like mortgages and loans and that kind of bureaucratic stuff that people need to live -- it needs to go well and go right, because their lives and their wellness and their stability are depending on it. These systems are not a human being with something like compassion. They're binary systems that will ultimately give a yes/no, without much room for pushback, because we take it as data and absolute truth. So people are being impacted by that.
Daniel conducted the interview [with Deb Raji], and I was zoomed in more as an observer, but I was really just taken aback by a lot of what she said because it took me out of my kind of bubble that I live in. And one thing she says is that if you feel like the negative impacts of these technologies won't affect you because of your place in life or your privilege, it's just a matter of time. Because it just scales up.
I felt very seen at times during this documentary because on a daily basis, I'll flip-flop like, "AI's going to ruin everything." And then I'm like, "No, it's going to be OK. We're all going to be fine." Humanity's gone through really pivotal shifts before, and we've done OK. Were there any moments where your perspective on AI was flip-flopping back and forth? How many times did that happen?
Tyrell: The whole time and continues to now. And that's the reality of this technology. It is both things at the same time. One of the messages of the film is exactly that this is going to have these amazing capabilities, as well as these horrible capabilities. And to wield it, we need to acknowledge and understand that's what it's going to be. We can't have a belief that it's only going to be good, or it's only going to be bad, because it's always going to be both.
Was there a target audience for this? Because I live and breathe AI and think about it all day, every day, but I loved this documentary, and it taught me things. Did you make it with the approach that this would be more for people who have a vague idea of what AI is, or was it for everybody?
Tyrell: What we were striving for here was a bit of a primer, a bit of a first date into the technology. And with that, we could say that the audience was people who are maybe not interested or willing to engage with this technology or this landscape -- people who are maybe more content to ignore it. We wanted to make an entertaining film that would be engaging but also informative. It's a very overwhelming topic. I personally find that when I'm overwhelmed with information, I kind of want to shut off and look away. Like, let me not have another issue to deal with in my life, right? That's normal human nature for many people.
We wanted to make the film so that it was accessible and, in a way, a start for most people, a beginning of a conversation for people. And with that, I don't mean we're being super reductive with any of it or overly simplistic, but it was made for general audiences. It was made to meet most people where they're at when it comes to this technology.
Are there any questions about AI you wish more people asked?
Tyrell: In terms of people using it, I hope that there becomes more illumination on the energy usage to create a silly image of yourself in a different scenario and setting. I wish that there was more transparency or metrics on: "To make this image, this is how much water you've used, or this is how much power you've used." And if people saw that, maybe they would still try to get the exact picture-perfect image of them as a centaur or something, but maybe instead of trying 50 attempts to find the right one, they would cap it off at a couple. That would be something I would like to see baked into some of the interfaces of the models.














































