Can artificial intelligence act in line with certain values?
The basis for trust in AI, however, is the ability to recognize in the first place how transparent a system is – and to do so according to standardized criteria. VDE has decided to tackle this challenge. The association wants to create a VDE SPEC that will make it possible to measure how well AI systems comply with certain values. “We’re using the well-known energy efficiency labels for household appliances as a model,” says Sebastian Hallensleben, VDE’s leading AI expert and head of the VDE SPEC AI Ethics project. “We use a scale from A to G to show how a system meets specific requirements.” This scale is then applied to different categories such as transparency, fairness and robustness to show users how closely a solution conforms to relevant specifications.
As a scientific basis, the experts are relying on the VCIO model, which defines values, criteria, indicators and observables in a tree structure. At the top is a particular value, such as transparency. This is followed by specified criteria – the origin and characteristics of the training data or the comprehensibility of the algorithm, for instance.
“A set of questions is then defined for each of these criteria in order to refine everything. There’s also a whole range of possible answers for each question,” Hallensleben explains. This is how a result is produced for each category. These findings are of interest to more than just users; AI developers can also gain insights from them. They can see what is needed to improve the system in question and raise its fairness rating, for example, from level E to level A.
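To make the tree structure described above more concrete, the following is a minimal sketch in Python of how such a model could be represented: a value branches into criteria, each criterion carries questions with a fixed set of answer options, and the answers are folded into a rating on an A-to-G scale. The class names, example questions, scores and the averaging rule are all illustrative assumptions; the actual VDE SPEC defines its own criteria, questions and scoring rules.

```python
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    options: dict[str, int]          # answer -> illustrative score (0 = worst)
    answer: str | None = None

@dataclass
class Criterion:
    name: str
    questions: list[Question] = field(default_factory=list)

@dataclass
class Value:
    name: str                        # e.g. "transparency"
    criteria: list[Criterion] = field(default_factory=list)

    def rating(self) -> str:
        """Map the average answer score onto an A-G scale (illustrative rule only)."""
        answered = [q for c in self.criteria for q in c.questions if q.answer]
        if not answered:
            return "unrated"
        max_score = max(max(q.options.values()) for q in answered)
        avg = sum(q.options[q.answer] for q in answered) / len(answered)
        bands = "GFEDCBA"            # seven bands from worst to best
        return bands[round(avg / max_score * (len(bands) - 1))]

# Hypothetical example: two criteria under the value "transparency"
transparency = Value("transparency", [
    Criterion("training data", [
        Question("Is the origin of the training data documented?",
                 {"no": 0, "partially": 1, "fully": 2}, answer="partially"),
    ]),
    Criterion("algorithm", [
        Question("Can the model's decisions be traced by a domain expert?",
                 {"no": 0, "partially": 1, "fully": 2}, answer="fully"),
    ]),
])

print(transparency.rating())  # prints "B" under these invented scores
```

In this reading, a developer who wants to move a system from a poor rating to a better one can see exactly which answered questions pull the average down.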
Explainability has its limits, especially in individual cases
Bosch, Siemens, SAP, BASF and TÜV Süd are involved in the SPEC development. On the scientific side, the participants include the think tank iRights.Lab, the Karlsruhe Institute of Technology and the Ferdinand Steinbeis Institute, as well as the universities of Tübingen and Darmstadt. The consortium aims to develop a universally binding, internationally recognized trust label for AI. An initial version of the label published by VDE at the end of April drew plenty of attention.
Hallensleben emphasizes that it is not a standard specifying how fair or transparent AI systems must be. That depends not only on the application in question, but also on political decisions. Instead, the VDE SPEC is meant to ensure that compliance with such values becomes measurable in the first place.
“This will provide a better basis for deciding whether or not to use AI for a specific application,” Hallensleben says. For example, if a company wants to use a particular system to process credit applications, it can look to the transparency classification on the label – and reconsider its choice if necessary. However, lawmakers could also specify the levels a solution must achieve on the AI trust scale in order to be used for a certain purpose.
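As a rough illustration of the decision Hallensleben describes, a company or lawmaker could express the required levels for a given purpose as a simple profile and check a system's label against it. The categories, letter grades and the comparison rule below are invented for the example; they are not part of the VDE SPEC itself.

```python
LEVELS = "ABCDEFG"  # A = best, G = worst

def meets_requirements(label: dict[str, str], required: dict[str, str]) -> bool:
    """True if every required category is rated at least as well as demanded."""
    return all(
        LEVELS.index(label.get(category, "G")) <= LEVELS.index(minimum)
        for category, minimum in required.items()
    )

# Illustrative label of a credit-scoring system and a purpose-specific profile
system_label = {"transparency": "C", "fairness": "B", "robustness": "D"}
credit_profile = {"transparency": "B", "fairness": "B"}

print(meets_requirements(system_label, credit_profile))  # False: transparency C misses B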
It’s clear that there are many approaches to making artificial intelligence a bit more comprehensible. Explainability has its limits, however. While it may be possible to create transparency by revealing things like the data used to train an AI, we still can’t explain why that data leads the AI to reject a particular loan applicant in an individual case – at least when complex systems such as neural networks are involved, as Hallensleben reports. “We have no idea whether this is even fundamentally possible with neural networks,” he admits.
In the end, opening up the black box entirely may not be absolutely necessary in every case. After all, we use AI because it delivers results or does things that other systems can’t, even if its mode of operation isn’t readily apparent. Perhaps artificial intelligence will always remain a bit mysterious in that regard.
Markus Strehlitz is a freelance journalist and editor for VDE dialog.