London, 12 January 2024 (TDI): AI-driven disinformation threatens the global economy; the World Economic Forum (WEF) warns that false information fueled by advanced AI poses a significant risk to the global economy, democracy, and social cohesion.
The Global Risks Report highlights environmental risks as major long-term threats. It precedes the annual elite gathering in Davos and is based on a survey of 1,500 experts and leaders.
The report names misinformation as the biggest risk over the next two years, emphasizing that rapid advances in AI technology are creating new problems or worsening existing ones.
The authors are concerned that the rise of generative AI chatbots like ChatGPT means people no longer need specialized skills to create complex synthetic content that can be used to manipulate groups of people.
Tech chief executives, including Microsoft CEO Satya Nadella, OpenAI CEO Sam Altman, and Meta's chief AI scientist, are expected to attend the upcoming Davos meetings, where artificial intelligence is likely to be a major topic of discussion.
According to the report, billions of people in several nations, including large economies like the United States, Britain, Indonesia, India, Mexico, and Pakistan, are set to cast ballots this year and next, which heightens the risk posed by AI-powered misinformation and disinformation.
“You can leverage AI to do deepfakes and to really impact large groups, which really drives misinformation,” said Carolina Klint, a risk management leader at Marsh, whose parent company Marsh McLennan co-authored the report with Zurich Insurance Group.
“Societies could become further polarized” as people find it harder to verify facts, she said. Fake information also could be used to fuel questions about the legitimacy of elected governments, “which means that democratic processes could be eroded, and it would also drive societal polarization even further,” Klint said.
The rise of AI brings several other concerns, she noted. It could empower "malicious actors" by making cyberattacks easier to carry out, for example by automating phishing attempts or creating sophisticated malware.
With AI, “you don’t need to be the sharpest tool in the shed to be a malicious actor,” Klint said.
AI-generated fake information can also end up poisoning data scraped off the internet to train other AI systems, which is "incredibly difficult to reverse" and could further embed biases into AI models, she said.