07.08.2025

Confidential AI: A New Safe Space for AI Models

AI is only as good as the data it works with – but this is precisely where the risk lies. Confidential AI, however, protects data and models directly during processing. So, is the key to trustworthy AI actually in the cloud?

Anyone who wants to fully exploit the potential of AI needs relevant data. This is particularly true for fine-tuning (adapting) AI models or for their inference (application) to company-specific scenarios. Personal information, internal business figures, intellectual property and other trade secrets are often indispensable for this – but they are subject to strict IT security requirements.

Balancing data sovereignty and AI usage

Processing this data – whether for fine-tuning models or for inference – often takes place in the cloud. This is because setting up your own on-premises infrastructure for this purpose is complex, expensive, time-consuming and lacks scalability. That’s where the issue of data sovereignty comes into play – a topic that has been high on the agenda of many companies since the advent of the cloud. As recently as 2024, 56 per cent of surveyed companies that do not use external data centres cited security concerns as the main reason for avoiding the cloud, according to the study “Spillover Effects of Data Centres” commissioned by the Alliance for the Strengthening of Digital Infrastructures in Germany, which was founded under the umbrella of the eco Association.

This is precisely where the latest developments in the security industry come in: “Technologies such as Confidential AI make it possible to process even highly confidential data in the cloud without the cloud provider gaining access,” explains Prof. Norbert Pohlmann, Board Member for IT Security at eco – Association of the Internet Industry. “What companies once avoided – outsourcing sensitive data to the cloud – is now becoming a viable option for future-proof and confidential data processing thanks to this new security concept.”

What was once considered an insurmountable risk is now being solved by a technological approach. Confidential AI offers what many companies are currently seeking: the freedom to innovate while maintaining trustworthy control over data and models. The combination of data protection, IT security, flexibility and cost-efficiency makes this approach a key element of modern AI strategies.

Confidential Computing meets AI

Confidential AI is a relatively new approach in the field of privacy-enhancing technologies (PETs) and is based on the principle of Confidential Computing. The idea: data is encrypted not only during storage and transmission, but also during processing. This is made technically possible by Trusted Execution Environments (TEEs): hardware-based, isolated areas within a processor. These secure enclaves prevent unauthorised administrators, cloud providers or third parties from accessing the data during computation. This is fully aligned with the zero-trust principle.
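The trust mechanism behind a TEE is remote attestation: before a data owner hands a decryption key to an enclave, it verifies a hardware-signed "measurement" (a hash) of the code the enclave is running. The following Python sketch is purely illustrative – the `HARDWARE_KEY`, function names and HMAC signature stand in for the CPU vendor's real key hierarchy and quote format, which vary by platform:

```python
import hashlib
import hmac

# Hypothetical hardware root key (in a real TEE this never leaves the CPU).
HARDWARE_KEY = b"simulated-cpu-root-key"

def measure(enclave_code: bytes) -> bytes:
    """Hash of the code loaded into the enclave: its 'measurement'."""
    return hashlib.sha256(enclave_code).digest()

def attest(enclave_code: bytes) -> tuple[bytes, bytes]:
    """The hardware produces a signed 'quote' over the measurement."""
    m = measure(enclave_code)
    signature = hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest()
    return m, signature

def verify_and_release_key(m: bytes, signature: bytes, expected: bytes):
    """Data owner: release the data key only if the quote is genuine AND
    the measurement matches the code that was audited beforehand."""
    genuine = hmac.compare_digest(
        hmac.new(HARDWARE_KEY, m, hashlib.sha256).digest(), signature
    )
    if genuine and hmac.compare_digest(m, expected):
        return b"data-decryption-key"
    return None

trusted_code = b"def fine_tune(data): ..."
expected = measure(trusted_code)

# A genuine enclave running the audited code receives the key...
m, sig = attest(trusted_code)
assert verify_and_release_key(m, sig, expected) == b"data-decryption-key"

# ...a tampered enclave does not.
m2, sig2 = attest(b"def exfiltrate(data): ...")
assert verify_and_release_key(m2, sig2, expected) is None
```

In real deployments the quote is signed inside the processor and checked against the vendor's attestation certificates (e.g. for Intel SGX/TDX, AMD SEV-SNP or NVIDIA's GPU attestation); the shared HMAC key above merely mimics that trust relationship in a runnable form.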

Confidential AI use goes far beyond data protection

Confidential AI applies this principle to AI models. Not only is the data protected, but also the model parameters, logic and outputs. This allows highly sensitive information and models to be worked with – even in the public cloud.

Without Confidential AI, sensitive data would need to be anonymised or removed before fine-tuning or inference – which significantly reduces the quality of the results. Only the protected computing process allows genuine, complete datasets to be used without regulatory or security-related compromises.

“Confidential training and inference data are one thing – but the real value often lies in the specifically adapted AI model itself: its structure, its weights, its behaviour. Confidential AI protects this intellectual property from access, theft or reverse engineering – a clear IP protection that goes far beyond traditional data protection,” says Pohlmann.

As confidential as your private safe

Unlike traditional security concepts such as firewalls or email encryption, Confidential AI ensures protection not just through access restrictions, but through technical isolation at the processor level.

A comparison: imagine booking a hotel room and wanting to secure your most valuable possessions. The hotel provides the room, the concierge knows your booking – but you bring your own safe. The key feature: although you are using external infrastructure, only you have the key, not the staff, not the hotel. Likewise, Confidential AI makes it technically impossible for even the cloud provider to access data and models – neither operators nor administrators have access to the ongoing computing process.

On-premises vs. cloud – the same old story

For companies, this means that all information remains under control – even though it’s processed externally. Until now, companies that wanted to use AI under such conditions – especially those in strictly regulated industries, or SMEs with sensitive data – had only one option: on-premises infrastructure. These set-ups are considered secure but, as mentioned above, are expensive, difficult to scale and inflexible.

Confidential AI introduces a new option: data protection in the cloud reaches on-premises levels and is supplemented by maximum IP protection – with greater scalability and lower infrastructure costs. Providers like enclaive offer such security features as a modular software solution that can be integrated into existing cloud services. NVIDIA, in turn, provides the essential hardware and software technology in its GPUs that enables Confidential Computing at this level. Users can then tailor the application to their needs – for instance, adapting technical requirements to a specific use case, such as fine-tuning an internal AI model.

“Companies no longer have to compromise between the agility and scalability of the cloud and the protection of their sensitive data,” says Andreas Walbrodt, CEO of enclaive. “The ability to securely process even highly confidential information in the cloud is fundamentally changing the rules of the game for AI applications and opening up new possibilities that were previously unthinkable.”

Secure use of internal knowledge sources: RAG and Confidential AI

The Retrieval-Augmented Generation (RAG) approach also combines well with Confidential AI. Here, the AI model accesses company-owned sources to generate more accurate responses.

By combining RAG technology and secure enclaves, documents, databases, internal reports and even AI software code can be used without exposing them. The content remains encrypted and access is technically restricted, significantly reducing the risk of data leaks.

Confidential AI: A new AI star in the cloud sky

Confidential AI is no longer just a future concept. In addition to US hyperscalers such as Microsoft, Google and Amazon, European providers such as the Berlin-based company enclaive are also investing in market-ready solutions. The first platforms and interfaces are now available. Their use is currently concentrated in highly regulated areas: pharmaceutical companies can analyse clinical data without compromising patient protection, banks model credit risks using sensitive customer data, and industrial companies evaluate confidential production data – all in the cloud, with complete control.

What does this mean for the coming years?

“In the long term, confidential AI could become the new standard – wherever AI works with sensitive data. Not as an alternative to the cloud, but as its necessary enhancement,” says Walbrodt.
