Social media companies in the US brace to battle onslaught of legal challenges

Photograph: Chesnot/Getty Images

Social media companies in the United States are bracing themselves to battle an onslaught of new state and federal legislation and legal challenges with far-reaching regulatory implications this year.

The majority of US state legislatures have introduced or passed bills attempting to reform how social media giants moderate their content and increase security measures for American users.

Elsewhere on the legal front, the supreme court will hear no fewer than four high-profile cases against tech giants, ranging from liability in terrorist attacks to alleged censorship of conservative viewpoints on their platforms.

State and federal lawsuits, two of which were announced this month, also take aim at how social media apps and their highly effective algorithms negatively affect the mental health of American teenagers.

On 6 January, Seattle public schools and Kent school district filed a lawsuit against TikTok, Instagram, Facebook, YouTube and Snapchat, alleging they promote “harmful content to youth, such as pro-anorexia and eating disorder content”. In a statement, Seattle public schools said: “We cannot ignore the mental health needs of our students and the role that social media companies play.”

Only a couple of weeks later, Utah’s Republican governor, Spencer Cox, announced that the state plans to file a similar lawsuit, saying the yet-to-be-filed complaint will be aimed at protecting young people.

The concerns cited by public officials about the effects of social media on teenagers are not unfounded. Last year, the Facebook data scientist turned whistleblower Frances Haugen leaked internal documents to the Wall Street Journal showing that teens who used Instagram experienced harm as a result of “social comparison, social pressure, and negative interactions with other people”.

In 2019, the company now called Meta – the parent company of Facebook and Instagram – obtained market research data showing that Instagram use led 40% of American teenagers to feel they had to create a perfect image, and to think they were unattractive or didn’t have enough money, the leaked documents showed. One in five teens in Meta’s market research said that Instagram made them feel worse about themselves; teenagers reported the social media app exacerbated their existing mental health issues.

Meta’s CEO, Mark Zuckerberg, responded to the Wall Street Journal’s report about the leaked documents and Haugen’s congressional testimony by calling it a “mischaracterization of the research into how Instagram affects young people”. He pointed to a Facebook Newsroom response, penned by its vice-president and head of research, Pratiti RayChoudhury, which said: “It is simply not accurate that this research demonstrates Instagram is ‘toxic’ for teen girls. The research actually demonstrated that many teens we heard from feel that using Instagram helps them when they are struggling with the kinds of hard moments and issues teenagers have always faced.”

While pressure is mounting on public officials to legally address the harms social media causes children, states have been waging a legislative war against social media platforms over content moderation for the past two years. Politico reported that 34 states introduced or passed more than 100 bills primarily attempting to ban censorship or restrict hate speech. As legislative sessions kick off this year, that number is expected to increase.

A California law went into effect this January requiring social media companies to disclose their hate speech, extremism and disinformation policies in their terms of service.

“Californians deserve to know how these platforms are impacting our public discourse, and this action brings much-needed transparency and accountability to the policies that shape the social media content we consume every day,” said the California governor, Gavin Newsom, after he signed the bill last fall.

Starting next year, tech companies will also have to provide data about how those policies are enforced in biannual reports to the California attorney general, Rob Bonta.

Legislation passed by conservative lawmakers in Texas and Florida argues that social media platforms are censoring rightwing political speech. Appellate courts struck down Florida’s law, arguing it violated the first amendment, but upheld the legislation in Texas with one judge saying it “chills censorship”. The supreme court narrowly ruled to temporarily block the Texas law last May.

This week, the supreme court asked the US solicitor general, Elizabeth Prelogar, to weigh in on whether states can stop social media companies from eliminating some forms of political rhetoric on their platforms. Because the supreme court has asked for Prelogar’s opinion on the stalled cases, it’s anticipated that its ruling will be delayed until the next session in October 2023.

In late February, the supreme court will hear arguments in two controversial cases – Gonzalez v Google and Twitter v Taamneh – both of which raise questions about whether social media platforms are liable for spreading Islamic State content that ultimately resulted in the 2015 Paris attacks and the 2017 Istanbul nightclub attack. Google has argued publicly that a ruling against the company in its case would undermine Section 230 of the Communications Decency Act, harming “free expression online” and making the internet less safe from spam and offensive content.

In a measure that took effect this month, the federal government banned TikTok from all of its government-issued devices unless employees are using the app for law enforcement or national security purposes. The ban came on the heels of public concerns raised by the FBI director, Christopher Wray, about China using TikTok to infiltrate American users’ cellphones, collect personal data and peddle influence. More than 20 states have followed suit, requiring TikTok to be permanently removed from state-issued devices, the Associated Press reported.

As with state officials’ concerns about social media’s harms to young people, federal and state governments’ heightened anxiety about TikTok’s security implications isn’t entirely baseless.

ByteDance, TikTok’s parent company, confirmed in December two China-based and two US-based employees tasked with investigating press leaks had improperly accessed the personal data of a BuzzFeed and a Financial Times reporter, CNN reported. ByteDance fired all four employees and TikTok’s CEO, Shou Chew, called the breach “unacceptable” and a misuse of the employees’ authority.

For some lawmakers, like the Florida senator Marco Rubio, banning TikTok on government-issued devices doesn’t go far enough.

“This isn’t about creative videos – this is about an app that is collecting data on tens of millions of American children and adults every day,” Rubio wrote in a statement on his website. “We know it’s used to manipulate feeds and influence elections. We know it answers to the People’s Republic of China. There is no more time to waste on meaningless negotiations with a CCP-puppet company. It is time to ban Beijing-controlled TikTok for good.”

ByteDance has argued that its data is held in the US and Singapore, not China, and that the Chinese government has never asked the company for data.