News

Facebook fails to detect death threats, US university finds / Facebook Is Failing to Remove Brutal Death Threats Targeting Election Workers

The study consisted of identifying ten real examples of death threats against election workers and submitting them to these platforms as paid advertisements ahead of the November 8 election day

New York University (NYU) has called Facebook's safety systems into question, accusing the social network of failing to detect the submission of 15 ads containing death threats against election workers.

Those threats were submitted purely as a test by a research team, working alongside the group Global Witness, to evaluate the safety protocols of several tech giants. TikTok and YouTube were also put to the test and proved to have better detection tools.

The study consisted of identifying ten real examples of death threats against election workers and submitting them to these platforms as paid advertisements ahead of the November 8 election day.

Facebook accepted 15 of 20 ads

The university noted that this method allowed it to check whether each ad was accepted or rejected by the platforms; in every case, the researchers withdrew the ad before it could go live.

"The investigation revealed starkly different results across the social media giants: YouTube and TikTok suspended our accounts for violating their policies, whereas Facebook accepted 15 of the 20 ads containing death threats that we submitted for publication," the university said.

The report notes that, before submitting the ads, the researchers removed profanity and corrected grammatical errors, because earlier tests had found that Facebook rejected ads containing "hate speech" over spelling mistakes rather than over their content: the same ads were accepted once those errors were corrected.

It did not delete or block accounts

The ads, originally written in English, were also submitted in Spanish and, according to the report, "in every case, all of the death threats were chillingly clear in their language; none were coded or difficult to interpret."

NYU stressed that while TikTok and YouTube suspended the accounts for breaching their policies, Facebook accepted nine of the ten death threat ads submitted in English and six of the ten submitted in Spanish.

It adds that, even though two of the messages were detected, Facebook neither deleted nor blocked the account from which they were submitted.

Meta, Facebook's parent company, told the researchers that the sample of submitted ads was not representative and reiterated that it rejects any content inciting violence against election workers or anyone else.

The report notes that Meta told them its effectiveness at blocking this type of content exceeds that of other platforms, although it offered no measurable evidence.

EFE

Facebook Is Failing to Remove Brutal Death Threats Targeting Election Workers

Meta, despite repeatedly committing to ramping up security policies ahead of the 2022 midterms, appears to fare far worse than its competing social media services in detecting and removing death threats targeting election workers.

Those findings are part of a new investigation conducted by Global Witness and the NYU Cybersecurity for Democracy which claims Facebook approved 15 out of 20 advertisements on its platform containing brutal death threats levied against election workers. When researchers tried to run those very same ads on TikTok and YouTube, however, the platforms quickly suspended their accounts. The findings suggest Facebook takes a less strict approach to moderating violent political content than its peer companies, despite executives recently providing assurances the platform would beef up security ahead of the 2022 midterm elections.

To run their experiment, the researchers found 10 real-world examples of social media posts including death threats targeting election workers. Gizmodo reviewed copies of those ads, many of which alluded to election workers being hanged or mass executed. One of the ads directed at the workers said, “I hope your children get molested.”

“All of the death threats were chillingly clear in their language; none were coded or difficult to interpret,” the researchers wrote.

Once they collected the ads, the researchers opted to remove profanity and grammatical errors. This was done to ensure the posts in question were being flagged for the death threats and not for explicit language. The ads were submitted, in both English and Spanish, a day before the midterm elections.

While it appears YouTube and TikTok moved quickly to suspend the researchers’ accounts, the same can’t be said for Facebook. Facebook reportedly approved nine of the ten English-language death threat posts and six out of ten Spanish posts. Even though those posts clearly violated Meta’s terms of service, the researchers’ accounts were not shut down.

A Meta spokesperson pushed back on the investigation’s findings in an email to Gizmodo, saying the posts the researchers used were “not representative of what people see on our platforms.” The spokesperson went on to applaud Meta for its efforts to address content that incites violence against election workers.

“Content that incites violence against election workers or anyone else has no place on our apps and recent reporting has made clear that Meta’s ability to deal with these issues effectively exceeds that of other platforms,” the spokesperson said. “We remain committed to continuing to improve our systems.”

The specific mechanisms underpinning how content makes its way onto viewers’ screens vary from platform to platform. Though Facebook did approve the death threat ads, it’s possible the content could still have been caught by another detection method at some point, either before it was published or after it went live. Still, the researchers’ findings point to a clear difference between Meta’s detection process for violent content and those of YouTube and TikTok at this early stage of the content moderation process.

Election workers were exposed to a dizzying array of violent threats this midterm season, with many of those calls reportedly flowing downstream of former President Donald Trump’s refusal to concede the 2020 election. The FBI, the Department of Homeland Security, and the Office of U.S. Attorneys all released statements in recent months acknowledging increasing threats levied against election workers. In June, the DHS issued a public warning that “calls for violence by domestic violent extremists,” directed at election workers, “will likely increase.”

Meta, for its part, says it has increased its responsiveness to potentially harmful midterm content. Over the summer, Nick Clegg, the company’s President of Global Affairs, published a blog post saying the company had hundreds of staff spread across 40 teams focused specifically on the midterms. At the time, Meta said it would prohibit ads on its platforms encouraging people not to vote or calling into question the legitimacy of the elections.

The Global Witness and NYU researchers want to see Meta take additional steps. They called on the company to increase its election-related content moderation capabilities, include full details of all ads, allow more independent third-party auditing, and publish information outlining the steps it has taken to ensure election safety.

“The fact that YouTube and TikTok managed to detect the death threats and suspend our account whereas Facebook permitted the majority of the ads to be published shows that what we are asking is technically possible,” the researchers wrote.

Mack DeGeurin