Recent innovations and developments in AI have intensified the debate around data protection. Prof. Dr Anne Riechert, Board Member of the AI Frankfurt Rhein-Main association and professor at Frankfurt University of Applied Sciences, is an expert in this field and will give a presentation on the strained relationship between AI and data protection at the Data Center Expert Summit in Frankfurt on 15 June. In an eco interview, she gives a first insight into the topic and presents the activities of the AI Frankfurt Rhein-Main association.
What are the current challenges when it comes to data protection and AI?
Large amounts of data are often needed to train and optimise AI models. This data may contain personal information, including biometric data, which raises the question of how the lawfulness of the processing can be ensured from a data protection perspective. Data protection law does not apply to anonymous data; however, permanent anonymisation is difficult to achieve in practice, particularly because of the many ways in which datasets can be linked and individuals re-identified. Another challenge is the transparency of AI models: they are often highly complex, making it difficult to understand how they arrive at decisions or predictions. The use of AI can therefore also be associated with (unintentional) discrimination.
What regulatory and societal challenges may arise from the tensions outlined above?
If systems are trained with biased data, the consequences can include incorrect results and, in some cases, violations of ethical values and norms. Under certain circumstances, this can lead to discrimination against certain groups of people and to bias, i.e., systematic distortions. The draft AI Regulation offers approaches to containing the dangers arising from the use of AI systems: for example, it requires that high-quality or representative, error-free and complete training datasets be used to counteract this. In addition, providers of high-risk AI systems are generally held accountable, e.g., by having to affix a CE marking. However, it has also been pointed out that such measures are not sufficient and that an ongoing impact assessment of an AI system would need to be carried out with a view to its fundamental rights implications. In terms of regulation, it should also be noted that the planned AI Regulation neither contains provisions on data protection nor references existing data protection rules. The provisions of the General Data Protection Regulation must therefore be examined separately.
What contribution does the AI Frankfurt Rhein-Main association make to further expanding the AI ecosystem in the Rhine-Main region?
AI Frankfurt Rhein-Main is a network that brings together the various actors and initiatives in the Rhine-Main region, with the aim of pooling their knowledge, expertise and experience and making them transparent. It is precisely this bundling of the knowledge of the various players involved in artificial intelligence that should benefit users and practitioners of the technology. It is also particularly important for us to communicate knowledge about artificial intelligence to the public and, in doing so, to raise awareness among young people in particular of this key topic for the future. My board colleague Rinku Sharma founded Techeroes (Bad Vilbel), which oversees various projects that teach children digital literacy in a playful way. We engage the expert community through events: in June, our member Clifford Chance is hosting an event on data spaces and data sharing, where we will also discuss the new legal framework at EU level.
Thank you for the interview, Prof. Dr Riechert!