The EU continued its streak of tackling Big Tech in a way the U.S. has been either unable or unwilling to. This time, the bloc took aim at TikTok, the Chinese-owned app that has raised concerns both for its highly addictive algorithm and as a possible national-security threat.
Brussels will investigate allegations that TikTok failed to properly moderate content shown to minors and that some of the features built into the platform are designed to keep people scrolling. The investigation falls under the Digital Services Act, a landmark law that went into effect last August and holds internet companies accountable for the content posted on their sites. The law gives the EU sweeping powers to investigate companies for failing to remove illegal content, and it limits both the data companies can collect on users and the types of ads those users can be targeted with.
The EU’s investigation into TikTok could serve as a blueprint for how governments can not only regulate large online platforms but also punish them should they violate those regulations. In the U.S., lawmakers introduced several new bills meant to rein in Big Tech after public frustration with social-media companies reached a fever pitch over the last year.
“People are increasingly conscious of the fact that these algorithms are being used to change their behavior in ways that are very insidious and difficult to counter, given the current architecture of the Internet,” says Tomicah Tillemann, president of Project Liberty, an initiative that supports reforming social media.
Social media has a reputation for being a “weapon of mass distraction” that keeps teenagers glued to a screen. Instead of studying or paying attention in school, they scroll endlessly through algorithmic feeds designed to keep them doing just that. For some teens, though, addiction to social media has turned what seemed like online fun into real-world danger. Some young girls developed severe body-image issues. Kids struggling with mental-health problems found themselves served increasingly disturbing content about self-harm. And the algorithms that serve content based on a user’s interests made it easier for individuals seeking to sexually exploit children to find and, at times, contact them.
On more than one occasion, parents have filed lawsuits against social-media companies alleging their algorithms were designed to keep users scrolling ad infinitum, essentially addicting their children to the platform. Some of the lawsuits came after a whistleblower at Meta, the parent company of Facebook and Instagram, released internal documents showing the company was aware its products had especially negative effects on young girls. Critics say these instances are examples of negligence on the part of social-media companies, which disregarded warning signs about the harm their products might cause minors.
For others, these issues are endemic to social media. There is “an infrastructure problem when it comes to the internet,” Tillemann says. “The way the models have been optimized for the aggregation of private information and the use of that information to manipulate our behavior has created a bunch of big problems.”
A recent survey conducted by Project Liberty found that a majority of parents blame the social-media companies themselves. The survey, released last week, found that 59% of respondents in the U.S. hold social-media companies responsible for online safety. More specifically, 69% of U.S. respondents are “very concerned” social media could expose their children to “inappropriate sexual content,” and a further 62% said they were “very concerned” social media would serve their kids content about self-harm.
But the Project Liberty survey also uncovered another fact about parents in the U.S.—one that’s uniquely American. People in the U.S. are lukewarm about the role the government should…