French Open organizers offer players a way to block social media abuse

MICHEL MARTIN, HOST:

Players at the French Open in Paris are battling opponents on the courts and toxic comments on social media. American tennis star Sloane Stephens, who's Black, recently called out the racist abuse she gets.

(SOUNDBITE OF ARCHIVED RECORDING)

SLOANE STEPHENS: My entire career, it's never stopped. If anything, it's only gotten worse. I have a lot of key words banned on Instagram and all of these things, but that doesn't stop someone from just typing in an asterisk or typing it in a different way, which obviously software most of the time doesn't catch.
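
A minimal sketch of the failure mode Stephens describes, assuming a made-up banned-word list and character map (neither is Instagram's nor Bodyguard's actual filtering): an exact keyword ban misses the same word typed with an asterisk or a swapped character, and even a simple normalization pass only recovers some of the variants.

```python
# Illustrative only: why exact keyword bans miss obfuscated abuse.
# The banned list, character map, and sample messages are made up.
BANNED = {"loser", "choker"}
SUBSTITUTIONS = str.maketrans({"*": "", "0": "o", "3": "e", "@": "a", "$": "s"})

def exact_keyword_filter(message: str) -> bool:
    """Block only if a banned keyword appears verbatim."""
    return any(w.strip(".,!?") in BANNED for w in message.lower().split())

def normalized_filter(message: str) -> bool:
    """Map common stand-in characters back to letters before matching."""
    cleaned = message.lower().translate(SUBSTITUTIONS)
    return any(w.strip(".,!?") in BANNED for w in cleaned.split())

for m in ["what a loser", "what a l0ser", "what a l*ser"]:
    print(f"{m!r:18} exact={exact_keyword_filter(m)} normalized={normalized_filter(m)}")
```

The asterisk-masked version still slips past both checks, which is the gap Stephens points to and one reason the tournament's app leans on context rather than word lists.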

STEVE INSKEEP, HOST:

French Open organizers are offering players a new tool, an app for their phones or other devices designed to block social media abuse. It uses artificial intelligence to screen comments.

MARTIN: The app is called Bodyguard.

MATTHIEU BOUTARD: What's important is not the keywords that are used. It is very much who the message targets and what's actually toxic inside it. And if we think that it's toxic based on different criteria, we would actually remove the content from social media.

INSKEEP: Co-founder Matthieu Boutard says the app goes beyond finding keywords.

BOUTARD: AI is a lot more complex in a sense that it understands context, which is pretty much the essence of moderation. So it's a very different ballgame.

MARTIN: And he says if there is a ballgame that needs social media moderation, it's tennis.

BOUTARD: Being a tennis player is very difficult. It's an individual sport. So if you lose a game, that's your fault. You're very exposed because a lot of people are actually betting on sport and tennis specifically, which means a lot of haters going after you if you lose a point, if you lose a set or if you lose a game.

INSKEEP: World No. 1 Iga Swiatek told reporters she is using the Bodyguard app.

IGA SWIATEK: After tournaments, I had this ritual of going just to see what people thought about my matches, but right now I stopped doing that because even when I had, I don't know, two tournaments - one I won, the other one I was in the final - I went on social media, and people were unhappy. So it frustrated me a little bit, and I realized that there's no sense to read all that stuff. So the app, I think it's a great idea.

MARTIN: There are those who wonder whether the technology could be used to block legitimate criticism of public figures who arguably warrant a certain level of scrutiny. Kate Klonick is an assistant professor of law at St. John's University, and she studies online speech.

KATE KLONICK: You can imagine how something like bodyguard.ai could block a lot of politicians or public figures or people who maybe it's important that they see some of the criticism leveled against them from ever kind of seeing that type of public reaction.

INSKEEP: Boutard says his company isn't doing that.

BOUTARD: We don't remove criticism, but we remove its toxicity. The line is actually pretty clear. If you start throwing insults, being racist, attacking a player, using body-shaming, that's not a criticism, and that's actually toxic to the player.
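
A rough sketch of the line Boutard draws, under stated assumptions: the categories come from the examples he names (insults, racism, attacks on a player, body-shaming), but the threshold, the score format, and the idea that a context-aware classifier supplies those scores are illustrative, not Bodyguard's actual implementation.

```python
# Illustrative decision logic only: hide a comment when it is aimed at the
# player AND confidently trips a toxicity category; criticism alone passes.
# Categories follow Boutard's examples; the threshold and scores are assumed
# and would come from a context-aware classifier in practice.
CATEGORIES = ("insult", "racism", "attack_on_player", "body_shaming")

def should_hide(targets_player: bool, scores: dict[str, float],
                threshold: float = 0.8) -> bool:
    if not targets_player:
        return False
    return any(scores.get(c, 0.0) >= threshold for c in CATEGORIES)

# A harsh but non-toxic critique of the player's performance stays visible...
print(should_hide(True, {"insult": 0.1, "racism": 0.0}))   # False
# ...while a racist comment aimed at the player is hidden.
print(should_hide(True, {"insult": 0.6, "racism": 0.95}))  # True
```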

INSKEEP: And he insists the technology is working.

BOUTARD: Out of all the messages that were sent to the players, 10% of them were toxic. And we blocked more than 95% of them.

INSKEEP: With a goal of letting the players focus on the game.

(SOUNDBITE OF MUSIC)

Transcript provided by NPR, Copyright NPR.