
Facebook gives non-US users less protection from harmful content to save money: Haugen

Feb 02, 2022 Hi-network.com
Image: Tom Williams/CQ-Roll Call, Inc via Getty Images

For people living outside of the United States, Meta deliberately provides less help, less reporting of online abuse, and less safety on its Facebook platform in order to save on costs, Facebook whistleblower Frances Haugen told Australia's Select Committee on Social Media and Online Safety on Thursday morning.

Haugen testified that Facebook takes down the "bare minimum" when it comes to harmful content, especially content in languages that are not widely spoken in developed countries, as there is minimal criticism from these underrepresented users.

"It can consistently underinvest in safety, and particularly, it really under invests in safety outside the United States because, disproportionately, their safety budget is spent on paving the United States," Haugen said. "I'm sure on a per capita basis there is less help, less support, and less safety for Australians because Facebook knows it operates in the dark. Where they don't have to, they don't apologise about anything."

Providing an example of Facebook doing the bare minimum, Haugen claimed an intervention screen for eating disorder or self-harm content previously touted by Meta global safety head Antigone Davis was only being shown hundreds of times per day as of last year.

On Monday, the Department of Home Affairs shared similar findings with the committee, singling out Meta as being "frequently the most reluctant to work with government" when it comes to promoting a safe online environment, adopting a safety-by-design approach, and taking adequate proactive measures to prevent online harms.

Haugen provided this testimony to the committee as part of its social media inquiry into the practices of major technology companies to curb toxic online behaviour. The inquiry was approved by the federal government at the end of last year with the intention of building on the proposed social media legislation to "unmask trolls".

As in her previous appearances before governments in other jurisdictions, Haugen flagged the core issues with the Facebook platform as its algorithms, which push extreme content, and its decision to allow a higher rate of inappropriate content to remain online to avoid mistakenly removing appropriate content.

Haugen explained that Facebook's algorithms push extreme content as that is what receives the most engagement, and thus profits. To balance the push of extreme content, the platform also has a content monitoring system, which it says aims to demote content that is in violation of Facebook's policies.

According to Haugen, however, Facebook has little appetite for mistakenly demoting acceptable content, as over-demotion would affect its profit margins. The result is that much harmful content is not demoted, or is only demoted slightly, and continues to hold a prominent place online.

When Meta ANZ policy director Mia Garlick appeared before the committee a fortnight ago, she said Facebook does not make its algorithms lower the threshold for what could be classified as inappropriate content, such as hate speech, as it wants to give users more control over determining what content appears on their newsfeed.

In response to a committee question referring to Garlick's comments, Haugen said Meta has been "really good at reframing debates" in ways that put the onus of safety on victims.

She explained that if Facebook did not have a system of amplification to give the most reach to extreme content, there would not be a need to have a conversation about whether social media platforms are doing enough to remove harmful content.

"As anyone who remembers programming a VCR, figuring out settings on systems that you don't touch very often is really hard, right? To expect the average person to learn how to set all the dials on the safety systems for Facebook, that isn't is an unrealistic and unreasonable ask," she said.

During the testimony, committee chair and Liberal MP Lucy Wicks asked Haugen what easy regulatory changes could be enforced by government. Haugen said that requiring large groups and pages to have moderators, and limiting forwarding, would go a long way towards reducing the spread of conspiracies and content that incites distrust.

Over the past year, a trend has emerged among Meta's critics of pointing to the company's use of algorithms to paint it as negligent. Alongside the various government probes around the world into Meta's conduct, Rohingya refugees recently launched two class action lawsuits valued in excess of $150 billion against Meta for its role in sparking the Rohingya genocide in Myanmar.

In those lawsuits, the plaintiffs, like Haugen, have accused Meta of acting negligently by prioritising user engagement and growth over safety through using algorithms to push extreme hateful content.

Related Coverage

  • Home Affairs singles out Meta as most reluctant to stop online abuse
  • Google and Meta on the defensive in Australian social media probe
  • Meta sued in excess of $150 billion for its role in Rohingya genocide
  • Can Meta prevent its algorithm from facilitating a communal bloodbath in India?
  • More violent events driven by social media are bound to happen, says Facebook whistleblower
  • Facebook whistleblower: 'Morally bankrupt' social giant will have to 'hook kids' to grow

Hot tags: Business, Social Media
