04.11.2025

Deep Fakes: A Challenge for Business and Society

In this interview, Lennart Grunau, Lead Consultant for Cybersecurity at JAMORIE, discusses the growing threat posed by deep fakes, the role of artificial intelligence as an accelerator of disinformation, and the societal and economic consequences of manipulated media content. He explains why companies need to be especially vigilant, what cultural changes are necessary to prevent attacks, and why regulation alone is insufficient to address this development.


Mr. Grunau, fakes existed even before AI – such as photo montages or manipulated audio recordings. What distinguishes modern deep fakes from earlier forms of fake media content, and to what extent does artificial intelligence act as an accelerator or even a catalyst for this development?

Media manipulation has existed for as long as there has been media. The first known photo retouching dates back to 1846, when it was done entirely by analogue means and without criminal intent.

The main difference between modern deep fakes and earlier forms is the effort involved in their creation. While photo retouching and video editing require significant skill and years of experience, there are alarmingly few prerequisites for deep fakes: the image material, a little imagination and a credit card.

As with any technology, the frequency of deep fakes will increase as they become easier to create; their accessibility makes this development inevitable.

Manipulated media content can massively undermine trust in public information. What societal dangers do you see from the spread of deep fakes – especially with regard to political influence and fake news campaigns?

Fake news, false reporting or the good old “hoax” have always existed. False quotes and fake audio or video material could be created and published long before the current hype surrounding deep fakes. Nevertheless, the accessibility of modern deep fakes described above is, of course, alarming.

The media landscape, but above all society, will inevitably have to learn which media and reports are trustworthy – and which are not. Platforms such as Mimikama clearly show that society is capable of recognising fakes and warning others.

Foreign actors have been influencing political and social reporting for many years, sometimes with impressive success. With increasingly realistic image and audio material, I expect this influence to grow further.

In addition to civil society, companies are also direct targets. What kinds of damage are companies currently experiencing as a result of deep fake attacks, and which industries are particularly at risk in your experience?

Currently, deep fakes are mainly an evolution of classic “phishing”. Instead of an email that purports to come from the boss asking you to transfer money to a dubious account, there’s now a deceptively real phone call, sometimes even with video. But the goals remain the same: information, access credentials and financial resources.

Of course, companies in the financial sector are particularly interesting targets, although I wouldn’t describe any industry as “particularly safe”. As with all other attacks, the lower the barriers, the higher the chance of becoming a target – or, in other words, “it can happen to anyone.”

What combination of technologies, processes and employee awareness do you consider most effective for detecting and defending against deep fake attacks at an early stage?

In the long term, I believe that only cultural and procedural changes will be effective. A company that has always operated in an authoritarian, command-driven manner (“the boss orders, the employee executes”) will be significantly more vulnerable than one where communication, independent action and the questioning of instructions are not only tolerated but encouraged.

An open “culture of questions and mistakes” is also a must: at which company is an employee more likely to ask for help when unsure? One that ridicules questions (“Anyone can see that’s fake…”), or one where the employee quickly receives an assessment and assistance without judgment?

Multi-step verification will be needed not only for passwords and logins, but also for decisions. Code words or “never just in a call” processes can help with implementation, as the sketch below illustrates. In addition, each individual will have to learn to handle their own data more carefully.
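To make that concrete, here is a minimal sketch of how a “never just in a call” rule could be encoded as a payment check. Everything in it, including the channel names, the code word and the amount threshold, is an illustrative assumption, not a procedure described in the interview:

```python
# Minimal sketch of a "never just in a call" payment check.
# The channel list, code word and threshold below are illustrative
# assumptions, not a procedure prescribed in the interview.
from dataclasses import dataclass, field

INDEPENDENT_CHANNELS = {"callback_known_number", "in_person", "ticket_system"}
CODE_WORD = "heron-42"        # agreed out of band and rotated regularly
THRESHOLD_EUR = 10_000        # above this, out-of-band confirmation is mandatory

@dataclass
class PaymentRequest:
    requester: str            # who claims to be asking, e.g. "CFO" on a call
    amount_eur: float
    confirmations: set = field(default_factory=set)

def confirm(req: PaymentRequest, channel: str, code_word: str) -> None:
    """Count a confirmation only if it arrives via a channel that is
    independent of the original call and carries the agreed code word."""
    if channel in INDEPENDENT_CHANNELS and code_word == CODE_WORD:
        req.confirmations.add(channel)

def may_execute(req: PaymentRequest) -> bool:
    """A phone or video call alone is never sufficient above the threshold."""
    return req.amount_eur < THRESHOLD_EUR or len(req.confirmations) >= 1

# A deep-faked "CFO call" alone cannot trigger the transfer:
req = PaymentRequest(requester="CFO", amount_eur=250_000)
assert not may_execute(req)                       # blocked: call only
confirm(req, "callback_known_number", "heron-42")
assert may_execute(req)                           # verified out of band
```

The point is not the code itself but the rule it encodes: an instruction heard in a call never suffices on its own; at least one independent channel has to confirm it.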

Regulation usually develops more slowly than technology. How do you see the future role of regulation, labelling requirements and “verified source” labels in dealing with deep fakes – and what gaps still exist today?

Regulation is, of course, necessary, but it doesn’t prevent crimes. With enough criminal energy, any labelling requirement, label or watermark can simply be circumvented or removed. News outlets such as Tagesschau can be required to mark AI-generated content; an attacker trying to infiltrate a company will hardly comply with such a rule.

However, regardless of the potential misuse of deep fakes, I think it makes a lot of sense to label AI-generated content of any kind. It raises awareness of deep fakes within civil society and serves an educational purpose.

In general, I don’t see the greatest risks in the (lack of) regulation, but rather in the (non-technical) detection of, and interaction with, deep fakes.
