Image generator DALL-E 2

The image generator DALL-E 2 by OpenAI helped illustrate this article.

| https://openai.com/dall-e-2/
2023-02-13 publication

Artificial intelligence put to the quality test

Germany has a strong interest in measuring, and thus proving, the quality of artificial intelligence. Only then can the "Made in Germany" seal also work for products developed here in this field. A new "AI Quality & Testing Hub" in Hesse aims to pioneer this - and to establish itself as a strong partner for companies and developers.

By Martin Schmitz-Kuhl

VDE dialog - the technology magazine

VDE and State of Hesse open AI Quality & Testing Hub

Made in Germany. That's more than a label, it's a promise. Products bearing this distinction are usually not among the cheapest - but they are among the best in terms of quality. This business model used to work excellently and still works quite well today, not least because it is comparatively easy to prove the quality of cars or machines, for example, and to demonstrate their superiority over competing products. For products from the field of artificial intelligence (AI), however, it is not so easy. "That's because we don't yet have any recognized evaluation criteria for the quality of AI, no testing tools and no standards to certify against," says Dr. Sebastian Hallensleben, head of AI and digitalization at VDE. He adds that testing is also simply difficult, because newer AI systems - that is, anything containing neural networks in any form - are largely opaque black boxes. In other words, you can never be sure exactly what is going on inside them, and sometimes not even what comes out at the other end.

A risky game, because the use of low-quality AI can have devastating consequences. If a chatbot fails to answer a question satisfactorily, or a translation tool stumbles over love poetry from the High Middle Ages, that may still be forgivable. But if you entrust your safety to an AI, for example in autonomous driving, error-free functioning becomes a matter of survival. "Here in particular, however, it has been shown that minimal disturbances, such as sticking small adhesive strips onto a traffic sign, can completely sabotage recognition by the AI," Hallensleben reports. In such a case, the AI might register a 50 km/h sign instead of a stop sign - which could prove fatal in practice.
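Hallensleben's sticker example is an instance of what researchers call an adversarial perturbation: each input feature is nudged by a small amount in exactly the direction that hurts the model most. The following is a purely illustrative sketch - the toy linear classifier and all numbers are invented for this example, and real attacks of this kind target deep networks rather than a three-weight model - but the principle of a small, bounded change flipping a decision is the same.

```python
# Toy sketch of an adversarial perturbation in the spirit of the
# sticker attack described above. The linear "classifier" and all
# numbers are hypothetical, chosen only to make the effect visible.

def sign(v):
    """Return the sign of v as -1.0, 0.0 or 1.0."""
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

w = [1.0, -2.0, 0.5]   # weights of a toy linear classifier
x = [0.4, 0.1, 0.3]    # an input the model scores as "stop sign" (score > 0)

score = sum(wi * xi for wi, xi in zip(w, x))  # 0.35 -> "stop sign"

# Perturb each feature by at most eps, against the gradient of the score.
# eps plays the role of the barely visible "sticker": a tiny budget per pixel.
eps = 0.25
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

score_adv = sum(wi * xi for wi, xi in zip(w, x_adv))  # -0.525 -> misread
```

Although no single feature changed by more than 0.25, the score flips from positive to negative, so the same classifier now misreads the sign. This gradient-sign construction mirrors the fast gradient sign method known from the adversarial-examples literature.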

Image generator DALL-E 2

When asked about quality, even AI developers shrug...

Focus on responsible AI

A lot of innovation is still needed in testing the quality of AI systems. The good news, according to Hallensleben, is that innovation leadership is still up for grabs here: "Neither China nor the United States are ahead of us." And quality is ultimately regulated and standardized by whoever masters it technically. Yet even if you ask AI developers themselves how quality is to be achieved, they usually shrug their shoulders. Nor do they know whether it is legally problematic to make promises that are not subsequently kept. "And in fact, they don't even know who to turn to if they want to improve the quality of their AI," Hallensleben explains.

But that is about to change. Together with the state of Hesse, the VDE founded the "AI Quality & Testing Hub" in December 2022, and it has now been officially opened. For the state, the new hub is part of an AI agenda published a good year earlier by Prof. Dr. Kristina Sinemus, Hesse's Minister for Digital Strategy and Development. According to the paper, this strategy is not only meant to build a strong AI ecosystem, but above all to place a focus on shaping responsible AI. To this end, Hesse already has the research and competence network ZEVEDI and the AI association hessian.AI. "With the hub, this AI landscape is now being supplemented and completed by the topic of quality," says Dr. Tina Klug, head of the Digitalization and AI in Business, Research and Society department at the ministry. After all, without appropriate quality assurance, it would not be possible to deal with the topic responsibly.

Service portfolio of the Hub includes information, training and consulting

But what exactly does the hub do? "To find out whether there was a need for such a hub and, if so, how it should be structured, we first conducted an extensive stakeholder dialog," explains Klug. The result: the concept of quality had to be defined quite broadly. It is not just about the performance of an AI, but also about issues such as transparency, non-discrimination, sustainability, legal certainty and robustness. In addition, it quickly became clear that the Hessian hub should not have a regional focus; on the contrary, it should have a European ambition right from the start.

"Building up a network was central for us," explains Klug. This had already begun long before the hub was officially founded. In November 2022, for example, the first AI Quality Summit was held - an international, high-profile event that is now to take place annually and served as something of a kickoff for the hub. In addition, an Expert Council was established, in which all non-commercial stakeholders around the hub come together at irregular intervals to provide input on strategic direction. It will soon be joined by a Business Council, a corporate body that will formulate industry needs and provide corresponding input to the hub.

"As far as the further service portfolio is concerned, we will initially focus on 'information, training and consulting', because here we can quickly offer valuable assistance to AI providers and users alike," explains Dr. Michael Rammensee, managing director of the new hub. At the end of January, Rammensee moved into his new office in the House of Logistics and Mobility (HOLM) at Frankfurt Airport. The location is well chosen - not only because logistics and mobility will be focal points of the hub alongside healthcare and finance, but also because HOLM sees itself as a development and networking platform where companies and startups, universities and research institutions, associations and political institutions meet to drive projects and innovations forward together. Above all, however, the location offers room to grow: "At least 50 employees!" Rammensee answers confidently when asked where he sees the hub in a few years. Admittedly, this is an ambitious goal, especially since even financially stronger AI companies constantly complain about a lack of skilled workers. But Rammensee has an ace up his sleeve: "We are the good guys!" While the bottom line in companies is less about improving the world than about private economic interests, the hub genuinely wants to build a bridge between innovation and responsibility - for the benefit of people. That, he says, is exactly what appealed to him, a physicist and AI specialist, when he was asked whether he would be interested in setting up the new hub.

Prof. Dr. Kristina Sinemus

Hessian Minister for Digital Strategy and Development: Putting AI Regulation into Practice


Prof. Dr. Kristina Sinemus: "With the AI Quality & Testing Hub, we in Hesse are proactively addressing the AI regulation currently being discussed in Brussels. With the hub's services, we are creating the best conditions for putting the upcoming European AI regulation into practice and for giving companies the best possible support in incorporating AI into their business models on a quality-assured basis. With the hub's expertise, we will also contribute to building a European ecosystem for real-world laboratories ('Reallabore')."

Standards are a must

Once the hub has consolidated further, other tasks await. For example, quality-assured data sets must be developed with which AI systems can be trained. Test and simulation environments - essentially digital equivalents of test rigs or workbenches - are also important. "This can be testing software made available to companies, but a complete service is also planned, in which the hub itself puts a company's AI through its paces," explains Rammensee.

None of this, of course, can be done without standards - another important task of the hub, and one that brings us back to Sebastian Hallensleben of the VDE. If "Made in Germany" is to function as a seal of quality for AI as well, it must be possible to determine exactly what is good and what is bad, according to criteria precisely defined in advance. "But this is precisely where the challenge lies," explains the expert, "and a simple seal would not do justice to the complexity of the subject." For this reason, the VDE already developed a template for an internationally valid "AI Trust Label" a year ago in the form of a VDE SPEC, which is to be presented shortly. It is intended to function more like the labels familiar from food ingredients or the energy consumption of household appliances: with standardized ratings for certain sub-aspects of quality, such as transparency, fairness and privacy protection. Such a label is significantly more meaningful and, above all, more useful. "After all, we don't want to put the brakes on innovation; on the contrary, we want to promote it," says Hallensleben.

Interview with ChatGPT


When is an artificial intelligence (AI) good, bad or even dangerous? And how does an AI itself assess this issue? An interview with the chatbot ChatGPT.
