Major technology companies, including Facebook owner Meta and Google, said Tuesday they would join forces in a new program to combat child sexual abuse and exploitation online. Online child safety is a pressing topic for regulators, and tech companies are eager to show they are taking appropriate measures to protect children and teens.
In the new program, called Lantern, major tech companies will share activity signals that violate their child exploitation policies so platforms can respond more quickly to identify, remove and report problematic content.
Signals can include email addresses, specific hashtags or keywords used either to lure young people into abuse or to buy and sell material depicting child abuse and exploitation.
“Until now, there has been no consistent process for companies to work together against predatory actors who evade detection across services,” said Sean Litton, executive director of the Tech Coalition, which brings technology companies together on the issue.
“Lantern fills this gap and shines a light on cross-platform attempts at child sexual exploitation and abuse online, helping to make the internet safer for children,” Litton added.
The Tech Coalition’s other members include Snap, Discord and Mega, a privacy-focused platform based in New Zealand.
According to the Tech Coalition, Meta removed more than 10,000 Facebook profiles, pages and Instagram accounts during a pilot of the program, after Mega shared data with it.
Meta reported the affected accounts to the US-based National Center for Missing & Exploited Children and shared the findings with other platforms for their own investigations.
“Predators do not limit their attempts to harm children to individual platforms,” said Antigone Davis, Global Head of Safety at Meta.
“The tech industry must work together to stop predators and protect children on the many apps and websites they use,” she added.
Lantern’s announcement came on the same day that a former senior Meta engineer told a Senate hearing in Washington that top executives, including Mark Zuckerberg, had ignored his warnings that teens were unsafe on the company’s platforms.
Arturo Bejar told lawmakers that in an internal survey of 13- to 15-year-old Instagram users, 13 percent of respondents said they had received unwanted sexual advances on the platform within the previous seven days.
“Meta knows the harm that children suffer on its platform, and its leaders know that their actions are not addressing it,” Bejar said.
(Except for the headline, this story has not been edited by NDTV staff and is published from a syndicated feed.)
Source: www.ndtv.com