29.09.2021

The Parliamentary View: Interview with Alexandra Geese

Alexandra Geese is responsible for the IMCO report on the Digital Services Act (DSA) for the Greens/EFA parliamentary group. She is also deeply engaged with the topic of artificial intelligence. In The Parliamentary View, our new series of interviews, Geese shares her views on and assessments of digital policy issues relating to the DSA, the DMA and AI.

You are responsible for the IMCO report on the Digital Services Act (DSA) for the Greens/EFA parliamentary group. What opportunities and possibilities do you see in the DSA for the EU’s Digital Single Market and for European businesses?

“If we act boldly, the DSA and DMA will give our European digital economy an entirely new impetus. With the help of interoperability requirements and a rethinking of the ad tech business, we can break up the massive concentration of value creation in just a few companies. This in turn will create immense opportunities, especially for small and medium-sized European digital companies.”

You are also deeply engaged with the topic of artificial intelligence. Which aspect of AI do you think receives too little attention?

“This comes down to the discrimination that always arises when people are evaluated by machines. In the future, we need to draw a sharper distinction between such applications and AI for machines; the latter makes our industry more efficient and helps protect the climate, and it should be promoted. For this, we need an investment offensive to advance European know-how and support European companies. On the other hand, we need to set clear boundaries for risky applications directed at humans. They are prone to error and carry a great potential for discrimination, which must be eliminated. I see it as my task to defend European values on this issue.”

How do you rate the EU Commission's current proposals on AI?

“The draft act is intended to eliminate risks for humans but, on the point I’ve already mentioned, it doesn’t go far enough for me. All applications that evaluate people or influence their access to resources must be regularly reviewed, because self-learning systems continue to evolve. Civil society must be involved, because the segments of the population most disadvantaged by AI systems are those least involved in their development. Artificial intelligence is not just a technical issue; it shapes society. The proposed scheme also ignores all applications that pose a high risk not to individuals but to society as a whole. I also call for a general ban on automated facial recognition in public settings, because otherwise we are keeping open a gateway for abuse and discrimination that we should rigorously close. China provides a telling example of the totalitarian potential of such systems.”

Don’t miss the next episode of The Parliamentary View, our new series of interviews. Subscribe to our newsletter eco european: go.eco.de/eco-european

 
