2024-04-01 VDE dialog

Debate: Keeping AI under control

With all the buzz around the potential of new and ever more powerful artificial intelligence models, it is important not to overlook the accompanying risks: for example, that AI becomes harder to control with each new development. Dr. Philip Fox knows how society can counter this threat.

By Dr. Philip Fox

Portrait photo of Dr. Philip Fox, AI expert

Dr. Philip Fox holds a PhD in philosophy and works at the Center for AI Risks and Impacts (KIRA), where he addresses the societal implications of artificial intelligence, including ethics, disinformation, and other political and philosophical questions.


Artificial intelligence (AI) undoubtedly holds huge potential. At the same time, leading experts, including Prof. Yoshua Bengio of the University of Montreal and Prof. Emeritus Geoffrey Hinton of the University of Toronto, warn that the development of ever more powerful models could get out of control. Both helped pioneer the deep-learning technology behind ChatGPT, and they are anything but alarmists. So what are their concerns?

Firstly, nobody is yet able to reliably control the behavior of advanced AI models. An experiment that the Frontier AI Taskforce presented to the UK government in November 2023 illustrated this vividly: in a simulated environment, the GPT-4 model was tasked with trading securities for a fictitious financial company. After just a short time, it executed an illegal insider trade, even though it had been explicitly instructed to stay within the law. What’s more, when a human supervisor asked about the trade, the model initially denied having made it.

Secondly, even a model that carries out its instructions to the letter is far from risk-free. Malicious actors could misuse it for their own ends: governments seeking to keep their populations under surveillance, for example, or terrorist groups that could use AI to mount cyber attacks on critical infrastructure.

"More people need to work on making AI more transparent and reliable. "

Thirdly, many societal questions remain unaddressed. What values do AI models reflect? What data is used to train them? Will breakthroughs in AI development cement global inequalities, or will they benefit everyone? The fact that it is primarily US tech companies that are deciding these matters represents a democratic deficit.

Two things are paramount now.

At the moment, more people around the world are working to make AI models more powerful than are working to make them safer, more transparent, and more reliable. This needs to change.

In addition, we need a global minimum consensus on safety standards in AI development, for example mandatory risk evaluations by independent testing organizations. Initiatives like the AI Safety Summit, whose closing declaration was signed by 28 countries and the EU, or the bilateral talks between China and the United States, are steps in the right direction.

However, we are still a long way from a world in which the safety of powerful AI models is comprehensively guaranteed.
