NY could force TikTok, YouTube, and Instagram to roll out age verification


A New York law could require social media platforms to implement age verification. On Monday, New York Attorney General Letitia James released the proposed rules for the Stop Addictive Feeds Exploitation (SAFE) For Kids Act, which would force platforms to confirm that someone is over 18 before allowing them to access an algorithm-driven feed or nighttime notifications.

New York Governor Kathy Hochul signed the SAFE For Kids Act into law last year as part of efforts to “protect the mental health of children.” It joins a growing number of online child safety laws across the US, many of which have faced legal challenges and concerns over user privacy.

But the legal landscape surrounding online age verification could be shifting: recent Supreme Court rulings have allowed age-gating on porn sites and, at least temporarily, opened the door to similar requirements for social media platforms. California is on the cusp of passing a bill that would require device makers and app stores to carry out age verification, while South Dakota and Wyoming have begun forcing platforms to implement age verification if they host sexual content.

Under the proposed rules for New York’s SAFE For Kids Act, social platforms must serve unverified users and users under 18 only chronological feeds or posts from accounts they follow, and must block notifications from 12AM to 6AM. (The office is requesting comments about how exactly to define a nighttime notification.) Companies can confirm a user’s age with a “number of different methods, as long as the methods are shown to be effective and protect users’ data.” Platforms must also offer at least one alternative to uploading a government ID, such as a face scan that estimates whether a user is at least 18 years old.

Kids can gain access to a platform’s “addictive” algorithmic feeds only by getting permission from a parent, which involves a similar verification process. The proposed rules state that platforms must delete identifying information about a user or parent “immediately” after they’re verified.

As noted in the rules, the SAFE For Kids Act would apply to companies that surface user-generated content and “have users who spend at least 20 percent of their time on the platform’s addictive feeds.” A 2023 version of the bill defines an addictive feed as one that generates content based on information associated with users or their devices. That means it could potentially affect major platforms like Instagram, TikTok, and YouTube. Companies that violate the law could face a fine of up to $5,000 per violation, in addition to other potential remedies.

The SAFE For Kids Act may not go into effect for a while. The proposed rules kick off a 60-day public comment period, after which the Office of the Attorney General will have one year to finalize the rules. The law will go into effect 180 days after the rules are finalized, though it’s bound to face scrutiny. NetChoice, the Big Tech trade association that has sued to block age verification bills around the US, called the SAFE Act an “assault on free speech” last year. The Electronic Frontier Foundation also said that the law would “block adults from content they have a First Amendment right to access.”
