Image: A robot stands in front of a blackboard, learning and solving problems (phonlamaiphoto / stock.adobe.com)
Published: 2022-07-01

“AI automates human decisions – and human biases”

AI expert Sebastian Hallensleben leads the VDE SPEC AI Ethics project. In this capacity, he hopes to build public trust in artificial intelligence. In this interview, he explains why people expect AI to be more transparent than humans, why the data used to train such systems has a blind spot, and where he sees the ethical boundaries for AI applications.

Dr. Sebastian Hallensleben, Head of the Digitalization and Artificial Intelligence competence area at VDE (Photo: Uwe Nölke / VDE)

Dr. Hallensleben, the VDE SPEC is intended to make it possible to measure the degree to which AI systems embody certain values, such as transparency or fairness. But if we look at the subject of transparency, there are experts who claim that artificial intelligence doesn’t have to be transparent at all. What’s your view on this?
It depends on the standard of comparison you apply. Human decisions are only partially transparent since we can’t look inside a person’s head and don’t know what life experiences have led them to use their discretion in one way or another. If an AI automates human decisions – in issuing loans or recruiting employees, for instance – we have to consider whether we’re still satisfied with partial transparency or want to raise the bar. I see this as a great opportunity to discuss the introduction of AI not only as a risk, but also as an opportunity for greater transparency, quality and consistency in decisions.
That said, you can also look at the subject from a different angle and ask what the minimum standard of safety should be. When AI controls a robot or an autonomous vehicle, there are experts who say, “We shouldn’t allow anything for safety-critical areas that isn’t 100 percent transparent and understandable.” In this case, you’re arguing for AI transparency from a safety or risk perspective and might arrive at a different conclusion.

How much transparency is even possible? Do we just have to live with the fact that AI will remain a black box to some extent?
You have to distinguish between transparency and explainability. If you look at the criteria for transparency as we’ve defined them in the VDE SPEC, it’s a matter of documenting all the aspects of training datasets and algorithms and ensuring that this information is not only accessible, but also generally comprehensible. In principle, there are no limits to a high degree of transparency. When transparency is restricted, it’s usually because it’s balanced against other requirements, such as privacy in the case of training data containing personal information. Things are much more difficult when it comes to explainability – that is, the capacity to explain an individual decision made by an AI. With a neural network, you’re faced with technical limitations that may even be fundamental in nature. 
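To make the documentation aspect of transparency concrete, here is a minimal sketch of what a machine-readable record about a system's training data and algorithm could look like. The structure and field names are purely illustrative assumptions and are not taken from the VDE SPEC.

```python
# Illustrative only: a minimal, machine-readable transparency record in the
# spirit of "document the training data and the algorithm". The field names
# are hypothetical and do NOT reflect the VDE SPEC schema.
from dataclasses import dataclass, field, asdict
import json


@dataclass
class TransparencyRecord:
    system_name: str
    intended_use: str
    training_data_sources: list[str]      # provenance of the training data
    data_collection_period: str
    known_data_limitations: list[str]     # e.g. groups that are under-represented
    model_family: str                     # e.g. "neural network"
    evaluation_metrics: dict = field(default_factory=dict)


record = TransparencyRecord(
    system_name="credit-scoring-demo",
    intended_use="Support (not replace) loan officers in consumer lending",
    training_data_sources=["internal loan book 2010-2020, anonymised"],
    data_collection_period="2010-2020",
    known_data_limitations=["contains only loans that were actually granted"],
    model_family="neural network",
    evaluation_metrics={"AUC": 0.81},
)

# Publishing such a record is a transparency measure; it does not by itself
# explain any individual decision the system makes.
print(json.dumps(asdict(record), indent=2))
```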
To use the example of lending: I can create transparency by, say, disclosing the training data used to initially teach an AI. But what the AI then does based on this data – how it comes to a decision in each individual case – can’t be explained.
That’s correct, at least when a neural network is used. But I would also distinguish between statistical analysis and looking at individual cases.

In what way?
Think of credit scores: with a huge volume of decisions, I can statistically show that certain groups are treated in a certain way – discriminated against or given preferential treatment, for example. Or that the decisions the system makes are better than those of a human agent. However, if I look at an individual case and ask why the AI didn’t grant a loan to that specific person, it becomes much, much more difficult. We’re not even sure yet whether it’s possible to achieve explainability in a neural network at all.
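As a small illustration of this statistical view, the sketch below compares approval rates across two groups. The numbers are invented; the point is only that this kind of aggregate check is possible even when no single decision can be explained.

```python
# Sketch of the aggregate (statistical) view: with many decisions, approval
# rates per group can be compared, even though no individual decision of a
# neural network can be explained. The data is made up for illustration.
import pandas as pd

decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})

# Aggregate view: unequal treatment becomes visible in the approval rates ...
print(decisions.groupby("group")["approved"].mean())   # A: 0.75, B: 0.25

# ... but this says nothing about *why* a specific applicant was rejected;
# that would require explainability of the model itself.
```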

We’ve talked a lot about using AI for credit applications. There have been highly publicized cases where AI decisions were discriminatory in this context. The problem there was the training data, right?
To put it bluntly: AI automates human decisions, including previous human biases. It’s simply a fact of neural networks that you have to feed them old data for training, and that data can also contain prejudices. If you don’t want to incorporate these prejudices into AI training, you have to clean up the old data very carefully beforehand and ask yourself some hard questions about which imbalances are justifiable and which should be eliminated as unwanted biases. However, there’s also a fundamental problem with data-based learning in such situations.
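One common way to approach the "cleaning up" step is to reweight the historical examples before training, for instance along the lines of the reweighing technique of Kamiran and Calders. The sketch below is a simplified, illustrative version with invented data; deciding which imbalances are unwanted bias and which are justified remains the hard question raised above.

```python
# Simplified reweighing sketch (invented data): audit the historical decisions
# for imbalances between groups, then reweight the examples so that training
# does not simply reproduce the old pattern.
import numpy as np
import pandas as pd

history = pd.DataFrame({
    "group":    ["A"] * 6 + ["B"] * 6,
    "approved": [1, 1, 1, 1, 0, 0,   1, 0, 0, 0, 0, 0],
})

# Audit step: how were the groups treated historically?
print(history.groupby("group")["approved"].mean())   # A: 0.67, B: 0.17

# Reweighting step: positives from the group with the low historical approval
# rate get more weight, its negatives get less (and vice versa for the other
# group). Whether such a correction is appropriate is a human judgment call.
overall = history["approved"].mean()
by_group = history.groupby("group")["approved"].transform("mean")
history["weight"] = np.where(
    history["approved"] == 1,
    overall / by_group,
    (1 - overall) / (1 - by_group),
)
print(history.groupby(["group", "approved"])["weight"].first())
```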

And that would be?
The old data only covers loans that have actually been granted, and for each loan granted, you can see if and when it was repaid. There’s no information in this data on loans that haven’t been approved, though. So there’s no way to know whether a loan that could have been granted despite a negative assessment would have also been repaid. It’s impossible to determine whether a human bank employee might have been too cautious in some cases in the past. All of this means the training dataset for the AI system in question has a big blind spot. The same problem is being discussed in the United States, for example, where AI is used to predict the recidivism of imprisoned criminals who are up for probation. You can only find out if a criminal will end up being re-incarcerated after they’ve been prematurely released on probation. However, you’ll also never be able to determine whether those who remained in prison would in fact have committed crimes again (as predicted by the AI) after being released. 
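The blind spot itself is easy to make visible: outcome labels only exist where a loan was actually granted, so everyone who was rejected never enters the training set. The example data below is made up.

```python
# Sketch of the blind spot: repayment outcomes exist only for loans that were
# actually granted; rejected applications carry no label at all.
# (Made-up data, purely illustrative.)
import pandas as pd

applications = pd.DataFrame({
    "applicant": ["Ada", "Ben", "Cem", "Dana"],
    "granted":   [True,  True,  False, False],
    "repaid":    [True,  False, None,  None],   # unknown for rejected applicants
})

# Only granted loans provide a label the model can learn from ...
training_set = applications[applications["granted"]]
print(training_set)

# ... so the model can never learn whether Cem or Dana would have repaid,
# i.e. whether the human decision to reject them was too cautious.
```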

Are there any specific applications or areas where AI should be categorically excluded?
In my view, this is a political decision. Standardization needs to create opportunities for answering this question on a more solid basis. By that, I mean clearly defining the risk classes and describing the ethically relevant system characteristics. This is a very important standardization task. 

But what is your personal opinion?
I think that the first rough definition of the strictly prohibited cases proposed by the EU Commission is a very good starting point. One example is the mass use of biometrics – that is, the ability to routinely recognize all faces recorded on the street. I personally wouldn’t really want to live in such a society. The same applies when AI is used for large-scale manipulation, like when a social network feed is designed to steer masses of users in a very specific direction. For me, that’s another case where ethical limits have been reached. But this needs to be discussed and decided within the political process, not within the framework of standardization.

Interview: Markus Strehlitz