A jury says Meta and Google hurt a kid. What now?


Today on Decoder, we’re talking about the landmark social media addiction trials that just resulted in two major verdicts against Big Tech. There’s one case in New Mexico against Meta, and another in California against both companies, which have said they plan to appeal.

These are complicated cases with some huge repercussions for both how these platforms work and the very nature of speech in America, so to help us work through it all, I’ve brought on two heavy hitters: my friend Casey Newton, who is founder and editor of the excellent newsletter Platformer and co-host of the Hard Fork podcast, as well as Verge senior policy reporter Lauren Feiner. Lauren was actually in that Los Angeles courtroom where executives like Mark Zuckerberg took the stand in the case of a 20-year-old woman named Kaley, who successfully argued Meta and Google negligently designed their platforms in ways that contributed to her mental health issues.

These cases, the first in a wave of injury lawsuits targeting tech companies, are about the design decisions of platforms like Instagram and YouTube. They argue that the platforms have fundamental flaws that harm users, especially teenagers, and that these companies knew about these problems and were negligent in shipping these features anyway. These cases are part of a much larger set of moves that aim to fundamentally change the legal mechanisms that might regulate social media platforms.


When we say harm, we’re not just talking about addictive design that brings users back compulsively. It’s also about features like algorithmic recommendations and camera filters that make issues like anxiety, depression, and body dysmorphia worse. This emphasis on how the platforms work, as opposed to focusing solely on the content, is part of a movement that’s been building for years. It focuses on the argument that social media is not and cannot be healthy — that it might in fact be defective, the same way that cigarettes, when used as designed, cause cancer.

There are a lot of complex ideas, and Casey, Lauren, and I really spent some time working through them. The first of these ideas is whether there is a distinction between product features — like recommendations, autoplay video, infinite scroll — and the types of harmful yet legal speech served to young people on these platforms using these tools, like eating disorder videos or posts designed to convince young men to hate women.

But it’s very difficult, if not unconstitutional, to force these companies to moderate this kind of content in specific ways. The First Amendment obviously prohibits the government from regulating what speech these companies promote and moderate, and private lawsuits are usually blocked by Section 230 of the Communications Decency Act, which protects tech platforms from being held responsible for the content their users post.

It’s really hard to pull all these ideas apart. An algorithmic feed with no content in it simply isn’t a compelling product, let alone a negligently defective one that causes harm. A lot of smart people who we’ve had on this show and on The Verge these past few years have said these rulings are just an end run around 230 — just a way to make platforms liable for what, ultimately, is just speech, in a way that will cause more speech to be restricted. You’ll hear us talk a lot about that idea, and whether the growing calls to repeal Section 230 entirely have any logical connection to these cases, or whether they’re just politically opportunistic.

But there are many more ideas at play here and even more layers of complication. You will hear Casey and me even crash out a few times in this episode, because we have both been covering tech regulation for so long that it feels silly to act like everything is working well for regular people, who have negative experiences with social media all of the time. Section 230 is three decades old now, and it’s unclear whether the world it was designed to help create ever came into existence.

You’ll hear Lauren talk about how the authors of Section 230 are open to changes, particularly around AI and speech online. At the same time, any changes to that law run headlong into the First Amendment and potentially open the door to government speech regulations at scale. Like I said, it’s complicated, and I’m very curious to hear what you all think about this, because it’s clear a lot of this is about to be up for grabs.

Okay: Platformer’s Casey Newton and Verge senior policy reporter Lauren Feiner on the major social media lawsuits. Here we go.

This interview has been lightly edited for length and clarity.

Lauren Feiner, you’re senior policy reporter here at The Verge. Casey Newton, you’re founder and editor of Platformer, and I would say forever Silicon Valley editor here at The Verge.

Casey Newton: I do continue to identify as the Silicon Valley editor of The Verge, so I’m glad you feel the same way.

You can check out, but you can never leave, buddy. Welcome, both of you, to Decoder. I want to talk about these trials that a bunch of social media companies faced in California and New Mexico. Lauren, you were in the room for at least the trial in California. I think Snap and TikTok settled that one; they were out. YouTube and Meta just lost a jury verdict. At a high level, describe what happened in those trials and what you saw in the courtroom while you were there.

Lauren Feiner: At their core, these trials were about the design decisions that social media companies make and how users are going to interact with what comes across their feeds. They were trying to get at a problem that has been hanging over tech for a long time: can you separate design from content on these platforms? And what came out at trial in the courtrooms were a lot of internal documents from these companies. In the LA case, it was Meta and YouTube. And in New Mexico, it was just Meta.

We saw lots of internal documents, lots of former Meta employees turned whistleblowers take the stand to discuss the decisions they made and the things they saw. In LA, we even saw the head of Instagram, Adam Mosseri, and the CEO of Meta, Mark Zuckerberg, take the stand.

Casey, we call these bellwether trials on The Verge. The whole industry has decided that this is a word we’re going to use. Can you just quickly explain what that means? You’ve been covering attempts to regulate these companies forever. And the idea that these trials are a bellwether seems particularly meaningful here.

CN: Yes. As you know, Nilay, for the past 20 years, companies have been able to use Section 230 as a shield. Whenever there is any remotely content-related challenge to any of these platforms in court, they just get dismissed out of hand. The reason that these cases are bellwethers is that if they were successful, it would open up this new front for litigation and these companies could no longer just automatically use Section 230 as a shield. And that now indeed has happened and we’re expecting there will now be dozens more lawsuits proceeding along exactly these same lines.

I’m hoping by this point Decoder listeners know Section 230, but it’s the law that says the platforms are not liable for what their users post. If I put up a post on Instagram or TikTok that says, “Casey Newton is horrible, Hard Fork is my sworn enemy. It should be made illegal,” Casey can sue me, but you can’t sue Instagram.

That has always been really important because it means that whenever anyone says they’re harmed by the platforms, the platforms can say, “It wasn’t us, it was actually the speech that you’re mad about. And our role in distributing or promoting that speech is actually the same as the speech itself.”

It seems like this trial did a better job of making that argument than attempts in the past. I’m thinking of cases like Herrick v. Grindr, or the famous case against Snapchat over the speedometer filter, where a teenager drove too fast while trying to capture himself running his car as fast as he could in Snapchat. Those cases were not bellwethers in the same way. What set these apart, and why was that argument more successful this time?

CN: The Lemmon v. Snap case was a really important precedent. Snapchat used to offer this filter where you could turn it on and take a video of yourself in your car and it would show how fast you were going. Plaintiffs successfully argued that this had created an incentive within the app for people to go really, really fast and do dangerous things. And indeed in this particular case, there was a dangerous crash.

The reason that that was important was that all of a sudden the 230 shield wasn’t absolute. There had already been a couple of minor exceptions like, “The platforms have to remove terrorism and CSAM.” But now we’re saying, “You can’t offer a filter like this because it might incentivize terrible behavior.” This is what opens up the rest of the landscape for the plaintiffs’ attorneys.

They’re able to say, “What other design features are there of these platforms and what incentives are they creating? We’re not going to talk about the actual messages that are being traded back and forth on Snapchat or the actual content of the post on the Instagram feed, but we are going to ask about things like infinite scroll and autoplay video and push notifications that arrive continuously throughout the night and might disrupt your sleep.” And all of a sudden they were able to find purchase because they had that initial precedent.

The thing that really gets me about that is that Snapchat had made that filter. That was Snapchat’s speech. They were saying, “Well, if you drive fast, we’ll generate a speedometer reading for you.” And in this case, it’s still not the platform’s speech. You can make an infinite scroll, you can make autoplay videos, and those are just ways that they are managing the speech of others.

Did the plaintiffs have to overcome that? Because that seems like where you would hit the 230 rocks over and over again and they would say, “We’re just managing the speech of others. It’s still the First Amendment.”

CN: The plaintiffs were able to successfully argue that infinite scroll is not the speech of others. There’s no other person’s speech involved here; someone built a product, and the product is defective. They were able to successfully liken these things to cars without seatbelts, and it really resonated with jurors.

It’s worth taking a minute to talk about why that might be, because this is something that the people that I talk to at the social media companies never seem to understand. Everybody knows someone who has a huge problem with Instagram. This person is probably in your immediate family. They have deleted it a hundred times off their phone and they always reinstall it. They’ve set the screen time limits, but they keep coming back over and over again and they hate themselves for it. This is a near universal experience in America now. When you sit a jury down and you say, “There’s something wrong with Instagram,” it’s pretty easy to find a lot of people who say, “That sounds right to me.”

One of my feelings was that if any of these cases ever got to a jury, the thing Casey is describing would kick in. Everybody has these negative experiences with these social media platforms and the companies themselves always tell us that statistically these problems are small, but their user numbers are so vast that even a small percentage is many, many millions of people. I think the platforms never got their heads around that either.

Did you feel the same way there, that once you put Mark Zuckerberg in front of a jury, there was just no way that the social media platforms would win a case?

LF: It was really hard to know. First of all, why were these jurors selected? Were they selected because they’re the sort of people who don’t use social media a lot, or who don’t know about a lot of good experiences with social media? That was the wild card in watching them: how were they really taking in this evidence? At the same time, it can be hard to hear some of this evidence. A lot of us know someone who’s been through a mental health issue or who has struggled with using their phone or social media too much, if we’re not those people ourselves. That’s definitely going to affect them in some way on a human level.

When I was watching Mark Zuckerberg on the stand, he was talking about a certain beauty filter that they had and how one of his own employees pushed back on including it and talked about, I believe, having daughters and thinking about how something like this would affect them. It’s maybe that these people don’t have as much experience with social media or don’t have the exact same experiences that this plaintiff had, but they certainly know other people in their lives who’ve probably experienced something similar.

CN: It also seems relevant to say that TikTok and Snap settled before the trial. That was the moment when I said, “Okay, they must be really, really scared.” I was actually waiting for Meta and YouTube to settle as well. Once that happened, I think it was clear they were in a lot of trouble.

The comparison here that everyone has been making is to big tobacco, to junk food, to sugar, right? We all know these things are bad for us. “Nicotine is awesome, so we can’t stop ourselves.” There should be some regulatory framework or we should make these companies at least communicate the risks. Does that framework hold for you?

LF: One big difference between this moment and the big tobacco moment is the saying that there’s no safe cigarette. A lot of studies show that’s not really the case for social media; some level of social media use actually has a positive or at least neutral effect on people. It’s really that overuse, that compulsive use, that is the main problem here and really the problem that people talk about. Social media does connect people with their friends; it lets you stay in touch with people and have social connection outside of your immediate community. But obviously it also has really harmful sides to it, and using it too much can cut you off from real social connection.

That’s a big difference here. When people compare this to that moment, I do think that’s really something we need to think about, that these aren’t really one-to-one scenarios. That said, I think the comparison is made to pull out how these companies are finally having a lot of their documents come to light in front of juries, just like what happened in the big tobacco trials. That is really the point to take away from that comparison.

Casey, you and I have talked about this a lot. We owe our careers to social media in very real ways. The idea that the internet lets us bypass gatekeepers and go reach our audiences, it’s very important to us. The flip side of that is, boy, a lot of bad people got to do a lot of bad things. How would you draw these lines?

CN: It is very tricky and you have to articulate it with some degree of nuance. To me, I separate the internet problems from the platform problems. Really, Nilay, the internet is what gave us our careers. The internet is what knocked down the gatekeepers and let us, in my case, hang out a shingle on the internet and say, “Hey, I’ll email you for money.” That is something that did not exist in the pre-internet times.

The platform problems are different. They have a lot to do with algorithmic amplification, yes. But also with these design features. This feeling that we’ve been talking about: “I don’t want to look at TikTok as much as I’m looking at it. I don’t know how to stop. I have tried to stop.” Or “I bought some device that bricks my phone when I walk in the door.” These are the problems of creating a platform whose only incentives will ever be to get you to look at it as much as humanly possible. That’s why the scrutiny is finally drifting over to those things.

We don’t want to get rid of the internet. We don’t want to get rid of your right to be able to post your opinion online. We want to get rid of this machine that increasingly seems like it’s taking more and more of your time and attention in ways that make you feel bad.

That is the story of the case. They went up, they lost. We’ll see what happens next. The real turn here is what do they all do now? They’ve been held liable for these product features. There’s some conversation that we should have in the industry, that the United States of America is going to have, about the difference between free speech and product features. We’ll come back to that.

But in the meantime, they’ve got to do something. They’ve got to change something about how their products work to avoid ongoing liability from anyone else who might look at these cases and say, “We’re going to sue you too.” Casey, this feels like a trust and safety problem, right? This is your audience, these are the people you talk to the most. What is their reaction to this?

CN: Their reaction is really negative. In particular, talking to people who still work there, and what they’ll say is even if you buy the plaintiff’s arguments here, fixing this is really tricky. Because again, even if you believe that this individual teenager had a horrible time looking at these platforms for too long and it made all of her problems worse, which design feature of this platform are you going to remove and how is that going to fix her problem? If Instagram and YouTube did not have autoplay video, if it didn’t have infinite scroll, if it didn’t have push notifications, would that have improved her mental health to a point where she no longer would have sued the company saying this is a defective product? I don’t know.

I think that the problem that we just have as a society right now is we don’t know what safe social media is. We don’t know what features are really the most dangerous. We have instincts. There are experiments that we should run, but it’s not as simple as, well, just turn off the autoplay video and all the teenagers will go play outside again.

CN: Here’s the thing. As somebody who writes more about social media than anything else, I have been shocked at the degree to which I am just throwing in my lot with Jonathan Haidt. Because I also don’t know. I do not know which are the features that we should get rid of that are going to make all the teenagers safe. What I can tell you is nobody who works at the platforms cares enough about any of your teenagers for me to trust your teenagers with them. So I would rather say, “Don’t look at it until you turn 16,” because I know that’s going to be better for you than looking at it.

We can hear Casey, who talks to the people who work at the platform companies, fully crashing out about that experience. Lauren, you talk to policymakers all day long. Nominally, you are our policy reporter in DC; you cover Capitol Hill. We don’t send you to courtrooms all day and all night, although that’s what you’ve been doing. On that side of the house, what are the policymakers doing in reaction to these verdicts?

LF: So far we’ve seen a big push from the lawmakers who are behind some of the biggest social media reform laws, like the Kids Online Safety Act, saying, “This just shows that we need these new laws, or we need to repeal old laws like Section 230, in order to make kids safe.” That is the big push right now. It’s still really early days, though.

I am going to be really interested to see if that is where the momentum moves, or if there’s even a counterbalance that says, “Let’s slow down, because actually the sort of cases we thought wouldn’t be able to go through the courthouse are moving forward, and they’re doing so even with Section 230 in place, even without KOSA.” I’m really curious which way that argument goes and whether it speeds up or slows momentum in either direction.

All right. I warned you both that I was also having a crash out about all of this. And Lauren, you’ve just arrived at it. The notion that those laws have anything to do with these trials, and that these trials should let the government pass what amounts to very strict speech regulations is just making me feel personally crazy.

“The platforms had some design features that made them addictive, so we should pass KOSA, which will restrict the speech of marginalized groups,” does not have any throughline to me. Josh Hawley is saying we should get rid of Section 230 and these trials prove it. I can’t tell you why that is. I cannot make the link in my brain between “the platforms were optimized for virality and engagement and negative sentiment,” and “making them responsible for the speech in a way that will force them to take down more speech is the way to solve that problem.” I cannot link those ideas together. Can either of you?

CN: No. No. Truly, I have read so many of the interviews with the Republican policymakers when they get asked about this stuff, and none of them seem to understand that if they do in fact get rid of 230, platforms will over-moderate content because they will be in terror that a wide variety of things that can now be linked back to them could potentially result in legal liability. And they’re going to hate it. These are the guys that hate all content moderation. And if you delete Section 230, you’re going to get more of it. So no, it doesn’t make any sense.

Lauren, you’ve covered bipartisan attempts to reform 230, bipartisan attempts to do age verification, and laws like KOSA. What’s the view on the Democratic side?

LF: There are a lot of Democrats who support KOSA and are fully on board with those kinds of changes to the law. They have definitely acknowledged some of the critiques, like that this might harm marginalized communities or make it harder to access certain kinds of content that gets politicized on the internet. But they generally think those concerns have been dealt with in the language of the statute, that the harms aren’t really going to come to pass, and they’ve accepted that this is the best way forward. Certainly it’s not all Democrats. Obviously Ron Wyden, who co-authored Section 230, has not supported KOSA.

There really is broad bipartisan support for these kinds of measures. That’s going to be the challenge for some of the hardliners defending Section 230 and opposing KOSA right now: is nothing ever going to change on these issues, or is there going to be some kind of change that we have to figure out how to live with?

Here’s where it gets really complicated for me, and you two are just going to help me process these feelings together as a family. I look at this and think: okay, there’s a big trial that got lost. These companies are liable for more of what happens on their platforms, in a narrow way. And now there’s a group of people who want to say, “You’re actually responsible for everything. We’re going to tear down 230, and you’re responsible for the content that you’re distributing, and that will lead to even more liability, and maybe you’re going to take even more steps.”

And then I think, “Well, that’s bad. Taking down 230 is bad.” I’ve felt that way for 20-odd years. There’s an infinite amount of coverage on The Verge about why tearing down 230 is bad. And then I sit there for one more turn and I think, “Well, why?”

We’ve all talked to Sen. Ron Wyden. Ron Wyden has been on the show. Lauren, I think you just recently spoke to him as well. Ron Wyden’s a nice guy. Chris Cox, who wrote 230 with Wyden, is a nice guy. The world that they were trying to create with Section 230 never happened. It literally does not exist. This law is 30 years old. It was written in a time when AOL and Usenet existed and were the dominant ways of communicating online.

Their goal was to create a competitive marketplace of moderation: if you wanted your computer to be safe for your kids, you would literally download software and run it locally on your computer and it would sit in front of CompuServe and filter the internet for you. That just never happened. It never existed. Now I’m in this place where I’m required to boldly defend a 30-year-old law whose policy goals were never achieved. And I don’t know why. Casey, I know you’ve been wrestling with this too. How should I feel about this?

CN: Yeah. I have complicated feelings too. I want Section 230 to exist so that platforms can host political speech, all sorts of speech. It creates the possibility for platforms that are very rich and vibrant and fun. At the same time, there is this 230 case that I paid a lot of attention to as a gay guy, about Grindr; you guys, I’m sure, are familiar with it. Basically, there was this horrible ex who was like, “I’m going to get back at my ex by posting his photos on Grindr, and I’m going to send everyone his physical address and say, ‘Go to this guy’s house and he’s going to indulge your craziest fantasies and give you drugs.’” The victim sues Grindr saying, “This is awful. You’ve got to do something.” Grindr says, “230.” And the case gets tossed out.

That seemed really awful for the victim of that case. If I were in that situation, I’d be really mad at Grindr too. At the same time, why should 230 be the thing that gets that person justice? Why don’t we just take online harassment and violence more seriously in this country? So this is how I square the circle: Section 230 in general does still support the internet that I want, and for a lot of the harms that absolutely do get enabled and protected by 230, mostly not the ones we’re talking about today, I think we can probably find other ways of addressing them.

But here’s another thought experiment. What if the brain trust over at Meta got together and said, “What would Instagram look like if it were great for teenagers?” Do you think it would look a lot like the Instagram that we have today? Or do you think it would look a lot different? I bet it’d be the latter. I bet it would look really, really different. We don’t live in this world, but I think that there’s another world where the executives at Instagram did do that and said, “You know what? We’re actually going to put out that version of Instagram for teens. And look, it’s mostly educational content. It’s actually not personalized to your teen at all. We’ve disabled all the communication features. You can only use it during daylight hours.”

You can imagine a million things that would probably just make this a safe product. So on some level, yes, it’s tricky to figure out what the right version of Instagram would be that would not get Meta into trouble. On the other hand, you actually could kind of sketch it out. So my curiosity is to what extent are they going to try to go down that road, because I’m sure they’re going to be desperate not to be sued by every teenager in America. To what extent are they just going to, I don’t know, try something shady and underhanded that I haven’t thought of yet?

I mean, they’ve announced Instagram for younger people, right? These tools for younger people just get dumped on for being cynical and trying to target kids. Do they have the social capital to say this product is safe anymore?

CN: No. My nihilistic view on this is ultimately what solves the Meta problem is that they just get outcompeted by another company that maybe is better in certain dimensions. But I don’t think the change is going to come from within with these guys because all they care about is just winning. And for them, winning looks like maximum time engaged.

To be fair, Mark Zuckerberg is currently busy hiring and firing hundreds of AI researchers every week. Again, there is some goal that is yet to be defined. The idea that he’s going to stop and put all of his attention on an Instagram that’s safe for kids—maybe only existential amounts of litigation will make him do that. But I honestly wonder if Mark Zuckerberg is the right face of teen safety in America. I think the answer is flatly no.

CN: Yeah. I don’t think the track record really would lead you to putting him in charge of that particular project. Again, and I think it’s important to underline this for folks: for Meta, addiction looks like success. They have huge teams inside the company, cognitive scientists who work to understand the human brain so that they can get you to pick up your phone and look at it as many times as possible. And this is why I feel so bad for the people who are mad at themselves for all the time they spend looking at Instagram. You were not in a fair fight. You lost a rigged game. The reason that Meta is doing that is not because they’re literally evil, it’s that they feel like the incentives of their business require them to do this. So unless those incentives change, no, Nilay, Meta is not going to be the place to go to look for moral leadership on teen safety.

The last piece of the puzzle, which I haven’t really touched on here, but is definitely a throughline, is the First Amendment, freedom of speech. We are talking about platforms that regulate and control vast amounts of speech from almost everybody in the country all the time. When you talk about changing the limits on these platforms and what they are liable for and how their products work, you are very directly talking about how speech is amplified and distributed in this country.

There are a lot of people who have built entire businesses based on understanding how Meta will make their stuff go viral. You can have a lot of feelings about what those businesses are and what they look like and what they’re doing to the brains of teenagers, but there are a lot of people who have built really big businesses on the backs of these platforms.

Are we just going to run headfirst into the First Amendment here? Is it impossible? Mike Masnick, who runs Techdirt—he was just on the show, good friend—thinks it’s a disaster for the First Amendment. Taylor Lorenz, a friend, thinks this is a disaster for the First Amendment. Their argument is that you cannot separate the product from the speech. The product itself means nothing. It is the speech that the product is distributing that is the problem.

So, you are just trying to backdoor your way into speech regulation by making the product liable for whatever harm. There’s a part of me that buys this, but Casey, I know you think you can pull the two apart.

CN: I agree that this is tricky and we should be careful, and lawsuits are often not the best way to work through this stuff, because in general, I would rather have lawmakers and policymakers writing really careful versions of this. At the same time, why is infinite scroll speech? Why are streaks speech? Why is autoplay video speech? At a certain point, you can get yourself all the way to, “Why do we make Ford put seatbelts on their cars? You’re compelling speech.” No, you’re compelling a seatbelt. You should be able to compel product safety features once it becomes clear that you actually have a product safety issue.

Now I should say, there are things that I would actually love to compel these platforms to do that are just obviously unconstitutional. I would love to compel them to show educational content to children in the same way that Congress once passed a law saying that broadcasters needed to provide at least three hours of educational programming a week.

I think that was really good for society. It turns out that, at least when you apply it to social media, that’s just obviously unconstitutional. So I do think that you have to be really careful here, but if you’re going to tell me that every single product feature of every social media app is speech, you truly are caping for these platforms in a way that makes me uncomfortable.

Lauren, one thing that I’ve been thinking about a lot is what happens to 230 in a world where the platforms are generating more and more of the content directly with AI. Google’s AI overviews, that is probably Google’s speech, even though it’s synthesized from the speech of millions of other people on websites. Do any of these regulatory regimes or attempts to change any of these laws contemplate that problem?

LF: That’s the new Wild West that we’re going to be running into here, with probably new lawsuits. But even Ron Wyden, who we’ve discussed many times today, has said that AI outputs aren’t necessarily protected by Section 230. Those will likely be treated differently. We won’t really know until we see a court case come out on it, but that’s going to be a big question. And the thing to remember with Section 230 is that it’s really a procedural tool that stops lawsuits in their tracks; how cases get decided in the end is based on the First Amendment. Unless you’re going to get rid of the First Amendment, getting rid of Section 230 doesn’t completely get rid of the problems that some people think it would.

CN: I want to ask you guys what you think about something, because I’m still working through this in my own mind. We were talking earlier about which specific feature leads to the mental health problems suffered by Kaley and some of the other folks in these bellwether cases. I suspect that autoplay video, infinite scroll, and endless push notifications all have something to do with it, but I suspect the strongest factor is algorithmic personalization. It’s “I searched for one video about how to get skinny, and now all of a sudden I’m in a nightmare wasteland of eating disorder content. And that actually does increase my depression and intensify my eating disorder.”

As a society, I think we want to stop that. We don’t want you to get dragged down that rabbit hole. We don’t want you to develop that eating disorder. Can we regulate that? This is actually the trickiest issue to me. Because on one hand, I could see Congress passing a law saying, “Hey, if you’re 16 and younger, we just want to disable algorithmic personalization, at least at the level of the individual. Maybe we’ll group you into a bucket and we’ll say, ‘16-year-olds in America seem to like this kind of content and we’re okay with that. But you personally know we’re going to block that for you because we don’t want you to get dragged down a rabbit hole.’” But is that constitutional under the First Amendment? I don’t know. I’m just curious what you guys make of that.

I’ve been thinking about this a lot, and I keep thinking back to having Barack Obama on Decoder. We talked a lot about regulating AI, and he wanted to talk about it with me because he felt he had failed to regulate social media. You could see the connection in his brain. It was clear as day. He was like, “We failed social media. We have to get AI right.” I kept asking about the First Amendment over and over again: “How are you going to get past the First Amendment?”

At the end he said, “Look, you just need a hook. You just need to find a hook the way that we found a hook to regulate broadcast television.” In the case of broadcast television, the hook is very obvious, right? There’s only so much spectrum; it’s a scarce public resource, so we can make some regulations to make sure we make good use of that resource.

You can immediately see the danger in that, which is that Brendan Carr has power over broadcast television, and now we have an unrestrained speech regulator in this country. That’s not good. At the same time, the idea that Barack Obama’s like, “You just need a hook,” is a reflection of the standard in the law, which is called strict scrutiny: you can do a speech regulation under the First Amendment if it’s narrowly tailored to achieve a compelling government interest.

These are the words and the precedent: “strict scrutiny,” “narrowly tailored,” “compelling government interest.” I don’t want a bunch of 16-year-old girls to get eating disorders. That feels like a very compelling government interest, and you can attach a very narrowly tailored rule to accomplish it. And I’m very curious if the future is one where we say, “This stuff causes harm. Here’s one rule to stop this content. With the power of AI, Mark Zuckerberg, you can now use all those GPUs to detect the eating disorder content and get rid of those communities.”

I think that’s just as bad. That’s just as bad as Brendan Carr as an unrestrained speech regulator. That’s just a bunch of government speech regulations. But 230 prevents mass litigation against the platforms because, as Lauren’s saying, it’s a procedural mechanism that says, “You can’t sue us at all.” If you have to jump through these hoops of “it’s product design features,” but no one can identify the specific product design features, I think a bunch of state regulators are going to say, “Look, there’s some stuff we know is bad, and we’re going to pass those laws and we’re going to take those to this Supreme Court and say, ‘These are narrowly tailored to meet a compelling government interest.’”

I don’t know if that’s how this will play out. I suspect that’s where it’s going to start, and I certainly don’t know if that’s good, but you can see that that is the next escape hatch here, because that is the standard for a law that regulates speech in this country.

LF: Casey, that’s exactly the right question about algorithms, because it’s much easier to make the argument that infinite scroll or autoplay isn’t really about content; it’s not really even much of a decision by the platforms. But what a company chooses to program its algorithm to recommend or not recommend, those are deliberate choices. We’ve already had a Supreme Court decision saying that content moderation is basically editorial discretion. That’s where it gets really tricky. You’re right, that is exactly the sort of thing that people who are advocating for these changes want to see changed, but it’s probably the trickiest one to do.

[The Verge’s] Adi Robertson wrote a piece for us a while back on how America turned against the First Amendment. It’s this notion that we all say we care about free speech, and then you push on it and everyone wants a little bit more speech regulation than before. And that has only been growing over time.

Even the people who are like, “I love Elon.” We’re watching texts between Mark Zuckerberg and Elon Musk come out in the Elon Musk-Sam Altman trial, where Zuckerberg says, “I’m deleting all content that identifies the people in DOGE.” And Elon’s like, “Great. Do you want to buy OpenAI with me?” Mr. Free Speech Warrior is like, “Yeah, delete that stuff.” And Zuckerberg is saying, “I will never ever cave to the government again,” while he’s emailing government employees saying, “I’m deleting the names of government employees.” This is crazy to me.

It seems like we are entering a period where there’s more pressure from the government on speech than ever before. Everyone is a little more okay with it than ever before. And we are all still pretending we all care about free speech the most. Casey, that feels like a nightmare in the trust and safety context. You wrote at the beginning of Trump 2 about how trust and safety was out of favor and no one was pushing back anymore. That was a while ago. What does it feel like now?

CN: I wrote this piece and the headline was, “Is Anyone Left to Stand Up for Trust and Safety?” Trust and safety used to be a really vocal part of the tech industry, and they advocated for a lot of good pro-social civic values. They talked a lot about human rights. They tried to bake human rights principles into the policies that these platforms observed when they were moderating content. I had a natural affinity for them. In my view, these were the good guys.

Then Trump gets swept back into power. A bunch of layoffs happen. Every platform decides almost without exception that their best move is to try to curry favor with the Trump administration. And all of these folks just get pushed aside. The ones who were the most vocal about human rights principles disappear and all of a sudden, you have people like Joel Kaplan at Meta running the policy operation. His main job is essentially to get Donald Trump to like Mark Zuckerberg and try to ensure that they get whatever they want.

It’s been hugely effective for them, by the way. Mark Zuckerberg has gotten an insane number of things from Donald Trump, and I’m sure he’ll get more as the years go on. I got a lot of pushback from the trust and safety community when I wrote this piece because I was essentially calling them out just being like, “Hey, where are you guys? Are you actually going to get on a microphone anywhere and say, ‘Hey, it’s really bad what is happening to our industry’?” And what they told me very justifiably was, “We do not have the power that you think we have. When we do speak up and when people do know our names, we get death threats, and we get hounded to the ends of the earth and it’s really scary. You’re asking us to sacrifice maybe even our lives to speak out in favor of these principles. It’s a big ask.” All of that is fair.

And yet, fast-forward to almost a year later now, and I think the question still stands. What happened when these people stopped speaking out was they just gave free rein to the oligarchs to run these platforms as they see fit. That’s a really scary thing to me, that trust and safety is no longer meaningful at any of these platforms except as a compliance function to keep them in line with various regulations. The result is now you just have a bunch of oligarchs trading favors over Signal.

Lauren, I want to end with you. Obviously the regulatory side of this is at full throttle right now, right? They have something that at least shows that Meta is bad, that YouTube is bad, and you can make some moves. What do you think happens next on that side of things?

LF: We’re going to see a lot of discussion in Congress about whether to pass these new laws or to repeal Section 230. But most of the action has been in the states, and we’ll probably continue to see that move forward. In the courts, we’ll see these cases appealed, and at the same time, we’re going to see new cases brought. In the LA litigation, there are still over 1,500 cases behind this one, and several more bellwether trials in that set of cases are already scheduled; the next one is going to be in a few months. There’s a totally different set of bellwether trials in a federal version of these cases, with the first one kicking off in June.

There are school districts, state AGs, and individual plaintiffs. This is not going to slow down at all. If nothing else, what these trials have done is bring to light a lot of information about how these companies work. They’ve brought more awareness among the general public about what to be thinking about when their kids are using social media.

It does feel like a perfect description of the experience of being in America right now. The states are going to set a mishmash of policies across the country until everyone pays enough money to the lobbyists to get a law passed that solves the problem. That feels at once like the most nihilistic, cynical thing I can say and also just how everything works all the time. Do either of you see an off-ramp from that?

CN: Recent history would suggest that, no, there’s not really an off-ramp, because again, all the incentives are for these companies to get you to look at their app for as long as they can get you to do that. Until the pain of those incentives is worse than the benefits of the revenue that brings in and what it does to their stock price, I don’t see a big change coming.

Lauren, do policymakers sense that they’re trapped in this doom loop?

LF: Yeah. The policymakers who’ve decided that KOSA is the way, or that repealing Section 230 is the way, that is their focus. I don’t think there’s much new discussion about how exactly we should do this. We have seen some newer approaches with things like app store age verification, and there are different variations on how that could potentially work, whether it’s real verification or age assurance.

Policymakers have chosen what they think the solution is, and that’s how this conversation is going forward. If people want to change what the mechanisms of that conversation are, they’re really going to have to inject new solutions or think differently about the incentives here.

Here are my three ideas just to end with. I’m curious about your thoughts. One, I think a federal privacy law is long overdue. That doesn’t feel like it offends the First Amendment. Two, Casey, to your point about algorithmic personalization, I think just requiring algorithmic transparency would go a long, long way. Show us why you are showing us the things you’re showing us. Make your algorithm transparent.

And then third, require them to do the research. Publish it so there’s not this incredible negative incentive to avoid knowing anything ever. I look at all that and I’m like, “Oh, that’s the European approach.” I’m just describing Europe. Have any of those things worked in Europe yet or is it just too early to tell?

CN: It’s too early to tell. Some of the transparency requirements that they’ve implemented have been good. There’s now a database, accessible to the public, where the platforms essentially have to file a lot of the moderation decisions that they’ve made. I think these are good things. What we haven’t seen yet is consensus on the specific problem we’re trying to solve and the exact right mechanisms for solving it. Again, it’s because it gets so mixed up in these speech issues.

We need to continue to try to narrow in on what the exact problem we’re trying to solve is. And then from there, try to build some consensus around what we can really say in an empirical way is going to protect the teens from having horrible outcomes. We have to keep driving at those things or otherwise we’re just going to continue to spin our wheels.

Casey writes Platformer. He podcasts with Kevin Roose at Hard Fork, which is wonderful. Although they’re my sworn enemies, and I think they should be illegal. Lauren’s work is all over The Verge. Lauren, you’ve been on Decoder so much recently. Thank you for coming on yet again.

Let us know what you think. I’m dying for feedback on this episode because unlike so many Decoder episodes, I think you can feel none of us quite know what’s going to happen next, or maybe more troubling, what should happen.

Questions or comments about this episode? Hit us up at [email protected]. We really do read every email!

