Deepfake of a woman (Image: stock.adobe.com/Fractal Pictures)
Published: July 1, 2022

Generative AI: Fact or fake?

Exposing fakes generated by artificial intelligence takes more than just technical solutions. Creating trust in the digital world is a task for us all.

By Ulrich Hottelet


“I advise you to lay down your arms and return to your families. This war is not worth dying for.” This call to surrender by Ukrainian President Volodymyr Zelensky spread quickly across the Internet a few weeks after Russia’s invasion of Ukraine, though it was soon clear that it was a fake. The video featured an artificially generated Zelensky speaking words the man himself had never uttered. An example of artificial intelligence (AI) at work, deepfake technology is a form of machine learning that is trained on existing audio and video data to generate deceptively realistic new content. This computer-generated creativity has advanced considerably in recent years, and the possible applications go well beyond fake speeches. Technologies capable of churning out “people” and fake content in mass quantities make it possible to manipulate the entire digital realm: imagine fake news reporters influencing public opinion, stock market hype conjured out of thin air to bolster share prices, and forged customer reviews distorting competition. Perhaps worse still, fake comments generated at such scale on social networks can create the illusion of political majorities, and even courts are vulnerable to falsified evidence. Can we believe anything we see online anymore?

Screenshot from a fake video featuring Ukrainian President Volodymyr Zelensky (Screenshot: Twitter)

An AI-generated President Zelensky appears to announce Ukraine’s surrender. While not a success in technical terms, the video is considered historic as the first documented attempt to profoundly manipulate political developments using deepfake technology.

Dr. Sebastian Hallensleben, who runs the Digitalization and Artificial Intelligence competence area at VDE, does not see things in quite such drastic terms yet. Generative AI is expensive and has its limits, he says: “Deepfake technology is not yet capable of simulating war footage, for example.” Hallensleben sees the primary threat in faked videos, images and texts that are combined with mass tweets generated by bots. “Automation will enable this on a massive scale in the future,” he warns. The scalability of generative AI is key to its appeal among those who want to abuse it.

Fake images can be both generated and detected by generative adversarial networks (GANs). A GAN consists of two artificial neural networks: a generator and a discriminator. The generator creates an image, and the discriminator judges whether it is real or generated; the generator learns to fool the discriminator, while the discriminator learns from its mistakes. Round after round of this contest steadily improves the quality of the generated data, as the sketch below illustrates.
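To make the principle concrete, here is a minimal, hypothetical sketch of the adversarial training loop in Python (using PyTorch, which the article itself does not mention). The “real” data here is just a 2-D point cloud rather than images, but the generator/discriminator contest is the same one deepfake systems run at far larger scale:

```python
# Minimal GAN sketch (PyTorch): a generator learns to imitate "real" data
# (here: points from a 2-D Gaussian), while a discriminator learns to tell
# real samples from generated ones. Illustrative only; deepfake models are
# vastly larger and operate on images, video or audio.
import torch
import torch.nn as nn

torch.manual_seed(0)

generator = nn.Sequential(          # noise vector -> fake sample
    nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 2))
discriminator = nn.Sequential(      # sample -> probability "real"
    nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 2) * 0.5 + torch.tensor([2.0, -1.0])  # "real" data
    fake = generator(torch.randn(64, 8))

    # Discriminator update: push output toward 1 for real, 0 for fake
    d_loss = bce(discriminator(real), torch.ones(64, 1)) + \
             bce(discriminator(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator update: try to make the discriminator say "real" for fakes
    g_loss = bce(discriminator(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

After enough rounds, the generator’s samples become hard for the discriminator to separate from the real distribution – the same dynamic that makes deepfaked faces and voices increasingly convincing.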

A digital arms race

This process of constant improvement is making it ever more difficult to recognize deepfakes, even with the online tools available for examining videos. Jörn Hees, who leads the team for multimedia analysis and data mining at the German Research Center for Artificial Intelligence (DFKI), fears an arms race between AI-generated fakes and the processes used to detect them. “Today’s detection methods are based in part on certain tell-tale signs that generation processes often leave behind, such as abnormalities in the high-frequency spectral range that are invisible to the human eye. The improvements in detection naturally feed into the development of new generations of the technology used to create fakes,” says Hees. This game of cutting-edge whack-a-mole is a familiar one to experts in the field of IT security: there, technical progress enables new forms of attack, and new defensive techniques are developed to counter them. Attackers respond by creating new offensive methods, and on and on it goes.
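To give a rough idea of what such a spectral tell-tale might look like, here is a toy Python sketch (an illustration of the general idea, not an actual detector from DFKI or anyone else). It measures how much of an image’s energy sits in the high-frequency part of its Fourier spectrum – the kind of statistic that some published detection methods build on, since certain generation pipelines leave unusual energy there:

```python
# Toy illustration of a spectral cue: compute the share of an image's
# spectral energy that lies outside a low-frequency disc. Real detectors
# learn such statistics from data rather than thresholding a single number.
import numpy as np

def high_freq_energy_ratio(img: np.ndarray, cutoff: float = 0.25) -> float:
    """Fraction of spectral energy above `cutoff` * Nyquist (grayscale image)."""
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    power = np.abs(spectrum) ** 2
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each frequency bin from the spectrum's center
    dist = np.hypot((yy - h / 2) / (h / 2), (xx - w / 2) / (w / 2))
    return float(power[dist > cutoff].sum() / power.sum())

# Smooth images concentrate energy at low frequencies; noisy or
# artifact-laden images push it outward toward high frequencies.
smooth = np.outer(np.hanning(128), np.hanning(128))
noisy = smooth + 0.1 * np.random.default_rng(0).standard_normal((128, 128))
print(high_freq_energy_ratio(smooth), high_freq_energy_ratio(noisy))
```

The arms race Hees describes follows directly: once a statistic like this becomes a known detection signal, the next generation of fakes is trained to suppress it.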

General loss of trust reflected in the digital arena

But the struggle for reliable information is not just a technical problem, as VDE expert Hallensleben points out. Trust, he says, plays an equally important role: an information source – a blogger, for example – must appear reputable, and transparency and trust help us make better judgments about that. At the moment, however, doubt and distrust are on the rise, says Andreas Kaminski, head of the Philosophy of Science & Technology of Computer Simulation department at the High Performance Computing Center Stuttgart (HLRS). “Disinformation often has to do with the lack of relationships between people and institutions. A better form of protection would therefore be to address people’s own negative experiences. Many people have felt neglected and overlooked by institutions or social groups, and this has harmed their relationships with them. For these individuals, simply putting out more information on deepfakes is unlikely to help,” he explains.

Identifying fakes: too big an ask for individual users

Distrust of bloggers or influencers is one thing. The problem of trust becomes much more pernicious, says Kaminski, when “faith in democratic institutions is undermined.” This can make fake AI-generated content a full-scale societal threat.

An EU working group headed by Dr. Hallensleben has worked on this very issue and concluded that the tools used in Europe to identify artificially generated content are insufficient. The working group’s report makes a series of recommendations to political decision makers and other stakeholders (see box). “The existing standards are only of limited effectiveness, partly because adherence to them is voluntary,” says Hallensleben. He is also well aware of the weaknesses of individual measures. Take state intervention, for example: governments can ban fake content or require it to be labeled, he says, but international enforcement is difficult, especially since “the creators are often outside the EU.” For Hallensleben, better media literacy on the part of users is essential. However: “Deepfake technology is becoming so good that individual users will struggle to identify it.” So what can be done? Hallensleben recommends a new communication platform for citizens, organized and paid for by governmental institutions. There have already been some initial developments, such as the research project “noFake.” According to Hallensleben, creating more trust on the Internet requires not isolated ideas but a broad strategy. “We need to use all the options at our disposal,” he declares.


Ulrich Hottelet is a freelance journalist in Berlin.