Guy Rosen: ‘Making upgrades to keep off harmful content, but no perfect solution’

Over the last month, Facebook Inc, which has since rebranded itself as Meta Platforms Inc, has been in the eye of the storm, fielding several allegations, including that the company chose to grow at the cost of its users’ safety and its own integrity. In an emailed interview with Pranav Mukul and Aashish Aryan, the company’s vice president of Integrity, Guy Rosen, rejected the claim and said the platform takes steps to keep people safe even when doing so hurts its bottom line. Edited excerpts:

There have been multiple instances when Facebook employees as well as external experts have said that the company’s growth has come at the cost of integrity, a claim Frances Haugen has repeated. How would you respond to that?

As a company, we have every commercial and moral incentive to try to give the maximum number of people as much of a positive experience as possible on Facebook. Growth in the number of people or advertisers using Facebook means nothing if our services aren’t being used in ways that bring people closer together.

That’s why we take steps to keep people safe even if it impacts our bottom line. When we make these decisions, we need to balance competing social equities, like free expression with reducing harmful content or enabling research and interoperability with locking down data as much as possible.

We have made massive investments in safety and security, with more than 40,000 people working in this area, and we are on track to spend more than $5 billion on safety and security in 2021. I believe that’s more than any other tech company, even adjusted for scale. As a result, we believe our systems are the most effective in the industry at reducing harmful content.

The documents taken (by whistleblower Haugen) seem to have been selected to leave the worst possible impression about what we do and why. I do feel that they don’t come close to reflecting the true nature and depth of our work, or the thousands of people who do it.

While Facebook has repeatedly said that, for the long-term health of its platforms, it is working to remove problematic content, it has often come up short on that front. What are your thoughts on that?

The vast majority of content on Facebook isn’t problematic or borderline. Today, the prevalence of hate speech on our platform is down to 0.03% … This number has decreased by more than half in the last year, which shows we are having an impact.

We take a comprehensive approach to addressing problematic content, which includes investing in both people and technology. We remove violating content, reduce its distribution so fewer people see it, and route suspected violating content to our content reviewers so they can investigate it. For issues like hate speech, which are often complex and where context is critical, our human review teams play a crucial role. While we have more work to do, we have made meaningful progress and remain committed to getting this right.

As a platform, Facebook is often seen as a reflection of society. The hate speech and violence that is present on the platform could, therefore, be a function of the nature of the market itself. With that in mind, do you think government interventions would be needed to prevent people from engaging in hate speech and violence online?

Our policies are designed to give everyone a voice while keeping them safe on our apps. But drawing these lines is difficult. We have repeatedly called for regulation to provide clarity on these issues because we don’t think companies should be making so many of these decisions on their own.

Has Haugen’s complaint to the SEC and other regulators pushed Meta to re-examine some of its practices and policies?

We continue to make significant improvements to keep harmful content off of our platforms but there is no perfect solution. Our integrity work is a multi-year journey. That progress is in large part due to the team’s dedication to continually understanding challenges, identifying gaps and executing on solutions.

We invest in research to help us discover these gaps in our systems and identify problems to address them. We welcome scrutiny and feedback – but these documents are being used to paint a narrative that we hide or cherry-pick data when in fact we do the opposite. We iterate, examine and reevaluate our assumptions, and work to address tough problems.

Facebook has also been accused of going soft on celebrities and other actors or pages that bring in big views, even when these figures tend to post content that borders on the problematic. How do you respond to that?

Our policies are universal and we apply them without any regard for an individual’s popularity or political affiliations. We have removed and will continue to remove content posted by public figures in India when it violates our Community Standards.

What are the new policy measures that you plan to adopt, apart from the ones already in place, to further contain hate speech and violence on the platform?

We do not want to see hate on our platform, nor do our users or advertisers. And while we will never take down 100% of hate speech, our goal is to keep reducing its prevalence. We report on prevalence to show how much hate speech we missed, so that we can continue to improve.

We reduce the prevalence of violating content in a number of ways, including improvements in detection and enforcement, and by reducing problematic content in News Feed. These tactics have enabled us to cut the prevalence of hate speech on Facebook by more than half in the past year alone.

This is an evolving challenge, so we are always working to refine our policies and our approach to addressing problematic content. That means continuing to develop our policies and processes in collaboration with experts across the globe, responding to emerging trends, and making sure we address harmful content in the most effective ways we can.
