Paris, 6 August 2024 (TDI): The International Olympic Committee (IOC) has introduced an Artificial Intelligence (AI)-powered system that reviews social media content to keep athletes safe from cyberbullying and abuse.
Elite athletes have repeatedly been reported to face appalling online abuse during the ongoing Paris Olympics, and the IOC has decided to shield them from it.
According to Kirsty Burrows, head of the Safe Sport Unit at the IOC, the organisation has recently placed a growing focus on mental health in its efforts to protect athletes’ wellbeing. Social media now plays an enormous role in the mental wellbeing of sportspersons.
As per IOC estimates, the 2024 Summer Olympics is set to generate more than half a billion social media posts, not including comments. Assuming each post carries an average of 10 words, that adds up to a body of text around 638 times longer than the King James Bible; reading every post, giving each just one second of attention, would take about 16 years.
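As a rough sanity check of that reading-time figure, a minimal back-of-the-envelope sketch (assuming exactly 500 million posts and one second of attention per post, both taken from the estimate above) works out to roughly 16 years of non-stop reading:

```python
# Back-of-the-envelope check of the reading-time estimate (input figures assumed from the article).
POSTS = 500_000_000                  # "more than half a billion" posts, comments excluded
SECONDS_PER_POST = 1                 # one second of attention per post
SECONDS_PER_YEAR = 365.25 * 24 * 3600

reading_years = POSTS * SECONDS_PER_POST / SECONDS_PER_YEAR
print(f"Non-stop reading time: {reading_years:.1f} years")   # prints roughly 15.8, i.e. about 16 years
```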
Some 15,000 athletes and 2,000 officials will be the subject of a mountain of attention during the Olympics. Alongside the cheering and expressions of national pride will come hate, coordinated abuse, harassment and even threats of violence.
The head of the Safe Sport Unit has further noted, “Interpersonal violence is something that can be perpetuated in physical form, but it can also be perpetuated online, and what we’re seeing across society is online abuse and vitriol is getting higher. AI isn’t a cure-all, but it’s a crucial part of the effort to fight back. It would be impossible to deal with the volume of data without the AI system.”
The IOC announcement said that, during the Games, Threat Matrix will scan social media posts in over 35 languages, in partnership with Facebook, Instagram, TikTok and X, to identify abusive comments directed at athletes, their entourages and officials at the Olympic and Paralympic Games.
It will then categorize the different types of abuse and flag posts to a team of human reviewers. Athletes can opt out if they prefer.
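The announcement does not describe how Threat Matrix is built. Purely as an illustration of the flow it outlines (scan posts, categorize abuse, queue flagged posts for human reviewers, honour opt-outs), a minimal sketch might look like the following, where the class names, categories and keyword rules are all hypothetical stand-ins rather than anything from the real system:

```python
from dataclasses import dataclass

# Hypothetical abuse categories; the real taxonomy is not described in the announcement.
CATEGORIES = ("harassment", "threat_of_violence", "coordinated_abuse", "other")

@dataclass
class Post:
    author: str
    target_athlete: str
    text: str
    language: str

def classify(post: Post) -> str | None:
    """Toy placeholder classifier: a production system would rely on multilingual ML models.

    Returns an abuse category, or None if the post looks benign."""
    lowered = post.text.lower()
    if "kill" in lowered or "hurt" in lowered:
        return "threat_of_violence"
    if any(word in lowered for word in ("loser", "cheat", "disgrace")):
        return "harassment"
    return None

def triage(posts: list[Post], opted_out: set[str]) -> list[tuple[Post, str]]:
    """Scan posts, skip athletes who opted out, and queue flagged posts for human review."""
    review_queue = []
    for post in posts:
        if post.target_athlete in opted_out:
            continue  # athletes can opt out of monitoring
        category = classify(post)
        if category is not None:
            review_queue.append((post, category))
    return review_queue
```

The key design point reflected here is the one the IOC describes: automation handles the sheer volume of scanning and sorting, while the final judgement on flagged posts rests with human reviewers.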