AI | By Lottie | September 2023

Challenges of AI-Driven Disinformation: Insights from TechCrunch Disrupt 2023


In today's digitally connected world, discerning fact from fiction has become an essential skill. Recent surveys suggest that 90% of American adults now fact-check their news, and an overwhelming 96% want the spread of false information curbed.

However, the emergence of generative AI tools has made stemming the surge of disinformation increasingly challenging. A discussion at TechCrunch Disrupt 2023's AI panel, featuring Sarah Brandt of NewsGuard and Andy Parsons of Adobe's Content Authenticity Initiative (CAI), tackled this very concern, especially in the face of upcoming elections.

Parsons candidly emphasised the gravity of the situation, asserting that shared, objective truth is the backbone of any democracy. Without it, the very foundation of democratic conversations is at risk.

While Brandt and Parsons both conceded that disinformation isn't a novel concern, AI's role in it certainly is. Parsons pointed to the manipulated 2019 video of Nancy Pelosi as an earlier example, while Brandt drew attention to a newer worry: generative AI is drastically simplifying the creation and mass distribution of fake news. NewsGuard's findings are alarming: it has detected hundreds of unreliable, AI-driven websites, some publishing thousands of articles daily.

But why this sudden increase in AI-manufactured content? Parsons offered an explanation: "It's largely about volume and revenue. Many aim to flood search engines, hoping to generate ad revenue, while others have malicious intentions of spreading misinformation."

Moreover, a recent NewsGuard study highlights a potential concern with OpenAI's text-generating model, GPT-4. Compared to its predecessor, GPT-3.5, GPT-4 seems more adept at propagating convincing yet deceptive narratives across various formats.

So, how can we address this growing challenge?

Parsons noted that Adobe, with its AI suite Firefly, actively integrates preventive measures against misuse. The CAI, co-founded by Adobe, the New York Times, and Twitter in 2019, promotes a consistent industry standard for data provenance. But the standard is voluntary, so universal adoption isn't guaranteed, and it can still be sidestepped.
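To make the provenance idea concrete, here is a minimal Python sketch of the mechanism such a standard relies on: binding signed claims about an asset's origin to its cryptographic hash, so that any later tampering with either the asset or the claims is detectable. The manifest fields, the `SIGNING_KEY`, and the HMAC-based signature are simplified stand-ins for illustration only; the real CAI/C2PA specification uses certificate-based signatures and a much richer manifest format.

```python
# Toy illustration of the data-provenance idea behind the CAI:
# attach signed claims (who made this, with what tool) to an asset's
# SHA-256 digest. NOT the real C2PA format; the key and HMAC signature
# are simplified stand-ins for certificate-based signing.

import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-key-not-for-production"  # hypothetical key


def create_manifest(asset_bytes: bytes, creator: str, tool: str) -> dict:
    """Bind provenance claims to the asset's hash and sign them."""
    claims = {
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "creator": creator,
        "tool": tool,
        "created_at": int(time.time()),
    }
    payload = json.dumps(claims, sort_keys=True).encode()
    signature = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    return {"claims": claims, "signature": signature}


def verify_manifest(asset_bytes: bytes, manifest: dict) -> bool:
    """Check the manifest is untampered and still matches the asset."""
    payload = json.dumps(manifest["claims"], sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, manifest["signature"]):
        return False  # the manifest itself was altered
    return manifest["claims"]["asset_sha256"] == hashlib.sha256(asset_bytes).hexdigest()


image = b"...raw image bytes..."
manifest = create_manifest(image, creator="Newsroom Photo Desk", tool="Firefly")
print(verify_manifest(image, manifest))            # True
print(verify_manifest(image + b"edit", manifest))  # False: asset changed
```

The design point the sketch captures is that provenance travels with the asset: a consumer can verify the claims without contacting the original publisher, but only if tools along the way preserve the manifest, which is exactly why a voluntary standard can be sidestepped.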


Watermarking emerges as another promising solution. Various firms, including DeepMind and startups like Imatag and Steg.AI, are pioneering watermarking technologies for AI-generated media. These techniques embed marks that are invisible to humans but detectable by specialised tools.
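As a rough illustration of the principle (and not the actual technique any of these firms uses), the toy Python sketch below hides a mark in the least-significant bit of successive pixel bytes: each pixel changes by at most one intensity level, imperceptible to a viewer, yet trivially recoverable by a detector that knows where to look. Production schemes such as DeepMind's SynthID are far more robust to cropping, compression, and re-encoding.

```python
# Deliberately simple least-significant-bit (LSB) watermark: the mark
# perturbs each pixel byte by at most 1, invisible to the eye but
# readable by a detector. Illustrative only; real AI-media watermarks
# use much more robust, model-based embedding.


def embed_watermark(pixels: bytearray, mark: bytes) -> bytearray:
    """Hide `mark` in the lowest bit of successive pixel bytes."""
    bits = [(byte >> i) & 1 for byte in mark for i in range(8)]
    out = bytearray(pixels)
    for i, bit in enumerate(bits):
        out[i] = (out[i] & 0xFE) | bit  # overwrite only the lowest bit
    return out


def extract_watermark(pixels: bytearray, length: int) -> bytes:
    """Read `length` bytes back out of the low bits."""
    mark = bytearray()
    for byte_idx in range(length):
        value = 0
        for i in range(8):
            value |= (pixels[byte_idx * 8 + i] & 1) << i
        mark.append(value)
    return bytes(mark)


image = bytearray(range(256)) * 4          # stand-in for pixel data
marked = embed_watermark(image, b"AI-GEN")
assert max(abs(a - b) for a, b in zip(image, marked)) <= 1  # imperceptible
print(extract_watermark(marked, 6))        # b'AI-GEN'
```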

Brandt remained hopeful, emphasising the market-driven need for AI-generated content to be trustworthy: generative AI companies must ensure content reliability to retain users. In her view, the future of generative AI hinges on its ability to provide accurate, trustworthy content.

But as open-source AI models devoid of safeguards become commonplace, can we truly stay ahead of this challenge? The journey ahead remains uncertain, but one thing's clear: the clock is ticking.

Enjoyed This Tech Insight?

Don't miss out on the latest trends and developments: subscribe to our Tech Roundup Newsletter, where every two weeks we share the biggest tech headlines from around the world.

Make sure to subscribe here!