If algorithms radicalize a mass shooter, are companies to blame?

  • A court case in New York is testing whether social media companies can be held responsible for radicalizing a mass shooter.
  • The lawsuit, brought by Everytown for Gun Safety, claims that Meta, Amazon, Discord, Snap, and other companies’ design features, including recommendation algorithms, promoted racist content to the shooter.
  • The shooter, Payton Gendron, killed 10 people at a supermarket in Buffalo, New York, in 2022, livestreaming the attack on Twitch and sharing a lengthy manifesto and a private diary on Discord.
  • Gendron claimed to have been inspired by previous racially motivated attacks, said he was radicalized in part by racist memes, and intentionally targeted a majority-Black community.
  • The case tests whether social media companies can be held liable despite Section 230, a foundational piece of internet law that shields online platforms from liability for user-generated content.

In a New York court on May 20th, lawyers for the nonprofit Everytown for Gun Safety argued that Meta, Amazon, Discord, Snap, 4chan, and other social media companies all bear responsibility for radicalizing a mass shooter. The companies defended themselves against claims that their respective design features, including recommendation algorithms, promoted racist content to a man who killed 10 people in 2022 and then facilitated his deadly plan. It’s a particularly grim test of a popular legal theory: that social networks are products that can be found legally defective when something goes wrong. Whether that theory succeeds may depend on how courts interpret Section 230, a foundational piece of internet law.

In 2022, Payton Gendron drove several hours to the Tops supermarket in Buffalo, New York, where he opened fire on shoppers, killing 10 people and injuring three others. Gendron claimed to have been inspired by previous racially motivated attacks. He livestreamed the attack on Twitch and, in a lengthy manifesto and a private diary he kept on Discord, said he had been radicalized in part by racist memes and intentionally targeted a majority-Black community.

Everytown for Gun Safety brought multip …

Read the full story at The Verge.

Q. What is the legal theory being tested in this case?
A. The legal theory being tested is that social networks are products that can be found legally defective when something goes wrong.

Q. Who brought a lawsuit against Meta, Amazon, Discord, Snap, 4chan, and other social media companies?
A. Lawyers for the nonprofit Everytown for Gun Safety brought the lawsuit.

Q. What was the incident in question that led to the lawsuit?
A. In 2022, Payton Gendron drove several hours to a supermarket in Buffalo, New York, where he opened fire on shoppers, killing 10 people and injuring three others.

Q. What did Gendron claim inspired and radicalized him?
A. Gendron claimed to have been inspired by previous racially motivated attacks and said he was radicalized in part by racist memes.

Q. Where did Gendron livestream the attack?
A. Gendron livestreamed the attack on Twitch.

Q. What platform did Gendron use to keep a private diary?
A. Gendron kept a private diary on Discord.

Q. What was the purpose of the lawsuit against social media companies?
A. The lawsuit aims to hold the companies responsible for radicalizing the shooter, claiming that their design features promoted racist content to him and then facilitated his attack.

Q. What is Section 230, and how does it relate to this case?
A. Section 230 is a foundational piece of internet law that shields online platforms from liability for user-generated content. Whether the product-liability theory against the companies succeeds may depend on how courts interpret it in this case.

Q. Why is the outcome of this lawsuit significant?
A. The outcome may set a precedent for how courts interpret Section 230 and for whether social media companies can be held accountable for promoting harmful content.

Q. What is the potential impact of the lawsuit on social media companies’ design features and moderation policies?
A. If the court rules in favor of Everytown for Gun Safety, it could lead to changes in social media companies’ design features and moderation policies to prevent similar incidents in the future.