The rapid advancement of artificial intelligence is fundamentally changing the requirements for data centres – from increased power density and new cooling and redundancy concepts to more flexible power grids and microgrids. In this interview, Michael Würth, Global Head of Datacenter Services GRF at SAP, explains how the architecture and scalability of modern IT infrastructures are evolving in the age of AI, why optimising the energy efficiency of existing sites can be strategically more sensible than constructing new facilities, and what role sustainability goals and ESG criteria play in this context. He also explains why data centres must increasingly be understood not merely as energy consumers, but as active elements within the energy system – particularly in the context of business-critical platforms like SAP HANA and AI-driven workloads.
The current AI boom is fundamentally transforming many industries. From your perspective, how does the growing integration of artificial intelligence specifically change the requirements for architecture, power density, and scalability of modern IT and data centre infrastructures?
The rapid development and integration of AI is leading to a fundamental realignment of the requirements for modern IT and data centre architectures. AI workloads differ significantly from traditional enterprise applications – both in terms of their performance characteristics and their infrastructural dependencies – and are therefore creating a paradigm shift in data centre planning.
Firstly, power density is increasing dramatically. AI accelerators, especially GPUs and specialised AI ASICs, generate thermal loads that far exceed the typical requirements of traditional servers. Where 5–10 kW per rack used to be sufficient, today densities of 50 kW, 80 kW or even more than 100 kW per rack are becoming the norm. This makes the transition to liquid cooling – whether direct-to-chip or immersion cooling – a strategic necessity in order to maintain energy efficiency and ensure the physical stability of operations.
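To put these figures in context, here is a minimal back-of-the-envelope sketch of what they mean for a direct-to-chip cooling loop. The 10 K coolant temperature rise and the assumption that all rack power ends up in the water are illustrative simplifications, not SAP design parameters; the point is simply how quickly the required flow grows with density.

```python
# Back-of-the-envelope coolant flow for a direct-to-chip loop.
# Assumptions (illustrative): all rack power is absorbed by the coolant,
# water properties at ~30 degC, and a 10 K inlet-to-outlet temperature rise.

CP_WATER = 4180.0   # specific heat of water, J/(kg*K)
RHO_WATER = 996.0   # density of water, kg/m^3

def coolant_flow_l_per_min(rack_power_kw: float, delta_t_k: float = 10.0) -> float:
    """Volumetric flow needed to carry rack_power_kw at the given temperature rise."""
    mass_flow_kg_s = rack_power_kw * 1000.0 / (CP_WATER * delta_t_k)  # Q = m_dot * cp * dT
    return mass_flow_kg_s / RHO_WATER * 1000.0 * 60.0                 # m^3/s -> L/min

for power_kw in (10, 50, 80, 100):  # the rack densities quoted above
    print(f"{power_kw:>3} kW rack -> ~{coolant_flow_l_per_min(power_kw):.0f} L/min at dT = 10 K")
```

The jump from roughly 14 L/min at 10 kW to over 140 L/min at 100 kW shows why pipework, pump capacity and redundancy have to be planned into the building rather than retrofitted rack by rack.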
Secondly, AI requires a much more flexible and modular architecture. Training clusters, inference factories and distributed AI pipelines demand optimised networks with extremely low latency, high bandwidth and scalable interconnects. The traditional data centre is increasingly becoming a highly dynamic network of specialised zones in which compute, cooling and power infrastructure operate in close integration.
Thirdly, AI is also changing scalability requirements. AI workloads do not grow linearly, but in stages: new models, new training methods or new business requirements can multiply resource requirements at short notice. Data centres must therefore be designed or modernised in such a way that scaling is possible not only virtually but also physically – for example, through expandable power paths, modular cooling systems and intelligent use of floor space. The topic of high-density readiness in existing facilities therefore becomes essential for developing available resources economically and sustainably.
In summary, the AI boom means that modern data centres must be designed not only to be more powerful but also more adaptive and energy-intelligent. AI is redefining the requirements for architecture, cooling, power supply and scalability – and forcing us to modernise existing infrastructures so they can not only support the next generation of workloads but actively enable them.
Companies developing their data centre landscape often face a choice between new construction and transforming existing facilities. Why is the modernisation and energy optimisation of existing facilities a strategically relevant approach for SAP? What role do sustainability goals and ESG criteria play?
For SAP, the modernisation and energy optimisation of existing data centre locations is of central strategic importance. Our approach is based on the conviction that we must further develop the existing infrastructure not only technically but also ecologically – particularly in light of rising performance demands driven by AI workloads and high-density IT loads.
A key component of our renovation programme is therefore the systematic preparation of our existing sites for high-density environments and liquid cooling technologies. These enable us to manage significantly higher thermal loads while at the same time substantially reducing the energy consumption of cooling. By integrating liquid cooling, optimised airflow management and modularly expandable supply paths, we are laying the foundation for the efficient operation of modern AI infrastructures within our existing footprint. This also enables us to make considerably more efficient use of waste heat.
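A rough illustration of the efficiency lever involved: comparing a legacy air-cooled site with a modernised, liquid-cooling-ready one via the familiar PUE metric. The PUE values and the 5 MW IT load below are generic, illustrative assumptions, not SAP measurements.

```python
# Illustration of how cooling efficiency drives facility energy.
# PUE = total facility energy / IT energy. All figures below are
# generic illustrative assumptions, not measured SAP values.

IT_LOAD_MW = 5.0        # assumed IT load of a site
HOURS_PER_YEAR = 8760

def facility_energy_mwh(pue: float) -> float:
    return IT_LOAD_MW * pue * HOURS_PER_YEAR

air_cooled   = facility_energy_mwh(1.6)  # assumed legacy air-cooled site
liquid_ready = facility_energy_mwh(1.2)  # assumed modernised, liquid-ready site

saving = air_cooled - liquid_ready
print(f"air-cooled  : {air_cooled:,.0f} MWh/year")
print(f"liquid-ready: {liquid_ready:,.0f} MWh/year")
print(f"saving      : {saving:,.0f} MWh/year ({saving / air_cooled:.0%})")
```

On these assumptions the modernised site saves a quarter of its total annual energy – before any credit for reusing the higher-grade waste heat that liquid-cooled loops make available.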
At the same time, the reuse of existing buildings and infrastructure is a direct contribution to sustainability. New construction generates significant amounts of embodied CO₂ – from raw material extraction to construction processes and site development. Transforming existing data centres avoids these environmental impacts and enables us to use resources responsibly without sacrificing technological progress.
Our ESG goals are therefore an integral part of all modernisation decisions. We not only want to operate more efficient systems, but also design our data centres in such a way that they harmonise with the growing requirements for energy efficiency, waste heat utilisation and CO₂ reduction in the long term. For us, modernisation means combining technical future-readiness, economic rationality and ecological responsibility.
Through this transformation approach, our data centres become platforms capable of meeting today’s AI demands while supporting SAP’s sustainability objectives – creating clear added value for our customers, our organisation and the energy system as a whole.
A key focus of the event is on flexible power grids and microgrids. To what extent does SAP see data centres in the future not only as pure electricity consumers, but also as active partners of energy suppliers – for example, through demand response or the use of storage systems?
Data centres in general – and SAP’s data centres in particular – will play a much more active role in the energy system in the coming years. We no longer see them exclusively as large electricity consumers, but increasingly as flexible, controllable energy partners. This shift is enabled by several technological and regulatory developments.
On the one hand, modern data centres have highly automated control mechanisms that allow loads to be adjusted at short notice – whether through demand-response programmes, flexible workloads or the intelligent use of cooling capacities. These flexibilities can help grid operators to better balance peak loads and stabilise the overall system.
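As a minimal sketch of the decision logic behind such demand-response participation – the price thresholds, signal fields and workload classes below are hypothetical illustrations, not a real grid-operator interface or an SAP system:

```python
# Minimal sketch of a demand-response decision loop for a data centre.
# The grid signal shape, thresholds and actions are hypothetical
# illustrations of the mechanism described above.

from dataclasses import dataclass

@dataclass
class GridSignal:
    price_eur_mwh: float    # current wholesale or imbalance price
    curtail_request: bool   # explicit curtailment request from the grid operator

def respond(signal: GridSignal) -> list[str]:
    """Return the flexibility actions for the current dispatch interval."""
    actions = []
    if signal.curtail_request or signal.price_eur_mwh > 200:
        actions.append("checkpoint and pause deferrable batch/AI training jobs")
        actions.append("raise cooling setpoints within the permitted thermal envelope")
    if signal.price_eur_mwh < 0:
        actions.append("charge on-site storage and pre-cool thermal buffers")
    return actions or ["no action: run normally"]

print(respond(GridSignal(price_eur_mwh=240.0, curtail_request=False)))
print(respond(GridSignal(price_eur_mwh=-15.0, curtail_request=False)))
```

The point of the sketch is the structure: flexibility comes from deferrable workloads and from the thermal inertia of the cooling plant, both bounded by hard SLAs on business-critical platforms.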
On the other hand, storage systems – both electrical and thermal – are gaining considerably in importance. Through them, data centres can not only make their own operations more resilient, but in future also provide balancing energy or absorb short-term surpluses of renewable generation. In combination with microgrids or local renewable generators, genuine bidirectional energy hubs are thus emerging. SAP is planning to achieve CO₂ neutrality by 2030. Our data centres, and above all their waste heat potential, will be a significant component in achieving that goal.
In the long term, we therefore view data centres as integral building blocks of a flexible energy system. What is crucial is that this partnership is shaped jointly with internal stakeholders, energy suppliers, technology providers and the public sector – transparently, securely and always with an eye on security of supply and sustainability.
Business-critical platforms such as SAP HANA place particularly high demands on stability and performance. How are redundancy, availability and cooling concepts evolving in the context of AI-driven workloads, and what strategic priorities should operators set today?
Business-critical platforms such as SAP HANA have always demanded the highest levels of stability, availability and performance. However, with the introduction of AI-driven workloads, complexity is increasing significantly – particularly in the areas of cooling and redundancy concepts. Traditional air cooling is reaching its physical limits, while modern AI servers generate thermal loads that can no longer be reliably controlled without liquid cooling. At the same time, however, the introduction of new cooling technologies must never lead to a reduction in the availability of business-critical platforms.
One of the greatest current challenges is developing reliable redundancy concepts for liquid cooling systems. While established air cooling systems have offered mature N+1 or 2N architectures that have been refined over decades, liquid-based systems are still undergoing technological maturation. Questions concerning pump redundancy, failover design, cooling circuit monitoring and interoperability with existing data centre infrastructures must be rethought and rigorously tested. This is precisely where one of our strategic focal points lies.
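One way to make the redundancy discussion concrete is classic k-out-of-n availability arithmetic. The sketch below compares N, N+1 and 2N pump configurations; the 99% per-pump availability is an illustrative assumption, and the model deliberately ignores the hard parts (common-cause failures, shared headers, maintenance windows) that the questions above are really about.

```python
# k-out-of-n availability for a pump group: the cooling loop survives
# as long as at least `needed` of `installed` pumps are running.
# The 99% single-pump availability is an illustrative assumption.

from math import comb

def group_availability(installed: int, needed: int, pump_avail: float) -> float:
    """Probability that at least `needed` of `installed` pumps are running."""
    return sum(
        comb(installed, k) * pump_avail**k * (1 - pump_avail)**(installed - k)
        for k in range(needed, installed + 1)
    )

A = 0.99                              # assumed availability of one pump
a_n   = group_availability(2, 2, A)   # N: two duty pumps, no spare
a_np1 = group_availability(3, 2, A)   # N+1: one spare pump
a_2n  = 1 - (1 - a_n) ** 2            # 2N: two independent full-capacity trains

print(f"N   : {a_n:.6f}")    # ~0.980100
print(f"N+1 : {a_np1:.6f}")  # ~0.999702
print(f"2N  : {a_2n:.6f}")   # ~0.999604
```

Tellingly, in this idealised model N+1 even edges out 2N – which is exactly why paper calculations are not enough and real failure modes have to be tested.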
SAP is therefore building its own Liquid Cooling & High-Density Innovation Lab, in which we evaluate a wide variety of technical approaches, vendor solutions and redundancy architectures under realistic conditions. Our goal is to understand how direct-to-chip cooling and rear-door heat exchangers behave under load shifts, pressure fluctuations, and failover scenarios – and which combinations are suitable in the long term for business-critical SAP platforms. In doing so, we consider not only technical resilience, but also operational stability, maintainability and integration with existing monitoring and building management systems.
Strategically, operators should set three priorities today:
- Redefine redundancy: Redundancy must not only refer to power paths, but must also include cooling circuits, pump systems, heat-rejection components and control logic. AI infrastructures require holistic availability concepts.
- Establish high-density readiness: Operators must modernise existing data centre space so that it can technically accommodate liquid cooling – including space for secondary cooling circuits, sufficiently dimensioned chilled water paths and scalable heat rejection capacity.
- Test before scaling: Introducing liquid cooling into production environments without first determining “proven patterns” in controlled test environments is a risk (a minimal test-matrix sketch follows below). Our own SAP lab is designed precisely to advance this maturation process in a structured manner.
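To illustrate what “test before scaling” can look like in practice, here is a deliberately simplified sketch of a failover test matrix. The scenario names, the injected faults and the supply-temperature invariant are hypothetical; a real harness would drive actual plant controls and telemetry rather than a stubbed sensor.

```python
# Simplified sketch of a failover test matrix for a liquid cooling loop.
# Scenarios, faults and limits are hypothetical illustrations; a real
# harness would act on plant controls and read live telemetry.

SCENARIOS = {
    "primary pump trip":          {"inject": "stop pump A",        "max_supply_temp_c": 32.0},
    "CDU controller restart":     {"inject": "reboot CDU control", "max_supply_temp_c": 32.0},
    "partial loop pressure loss": {"inject": "open bypass valve",  "max_supply_temp_c": 34.0},
}

def run_scenario(name: str, spec: dict, read_supply_temp_c) -> bool:
    """Inject the fault, then check the coolant supply-temperature invariant."""
    print(f"scenario: {name:<28} inject: {spec['inject']}")
    observed = read_supply_temp_c()  # would poll plant telemetry in reality
    ok = observed <= spec["max_supply_temp_c"]
    print(f"  supply temp {observed:.1f} degC (limit {spec['max_supply_temp_c']} degC)"
          f" -> {'PASS' if ok else 'FAIL'}")
    return ok

# Dry run against a stubbed sensor that always reports 30.5 degC:
results = [run_scenario(n, s, lambda: 30.5) for n, s in SCENARIOS.items()]
print("all passed" if all(results) else "failures detected")
```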
Through this approach, we ensure that the next generation of AI workloads – whether for HANA-based analytics, data science platforms or generative AI – can be operated not only with high performance, but also with the operational reliability that is characteristic of SAP. AI is profoundly changing cooling and availability concepts.
The task for data centre operators now is to implement this transformation in a controlled, data-driven manner and with a high degree of technological diligence.
Looking at your own infrastructure, which technical concept or architectural decision best exemplifies future-readiness and shows that your data centres are already prepared for upcoming requirements?
A key element that exemplifies the future-proofing of our data centre infrastructure is our modular high-density and liquid-cooling-ready architecture concept, which we are implementing as part of our comprehensive renovation programme. This concept combines the modernisation of existing sites with a clear technical vision for AI and high-performance workloads.
In concrete terms, this means that we are preparing our data centres so that they can support both traditional air cooling and advanced liquid cooling technologies in parallel – including scalable cooling water networks, redundantly designed pump paths and flexible zones in which power densities of well over 80 kW per rack can be achieved. At the same time, our approach remains fully backward compatible and allows existing server landscapes to continue operating without causing structural or operational disruptions.
This hybrid architecture principle provides maximum flexibility for future requirements: we can integrate AI clusters using direct-to-chip or immersion cooling. We can continue to operate classic workloads efficiently. And we can scale seamlessly between both worlds without needing to build new sites.
A second key element is our new SAP Liquid Cooling Innovation Lab, where we evaluate cooling systems, redundancy concepts and high-load scenarios under real operating conditions. The lab ensures that we do not rely solely on theoretical planning but instead develop proven designs before transferring them into production environments. This minimises risks, accelerates our learning curve, and significantly improves operational reliability – a critical success factor for highly sensitive platforms such as SAP HANA or AI training clusters.
Overall, this architecture clearly demonstrates that our data centres are not merely being modernised, but are also fundamentally aligned with the next generation of high-density, AI and HPC applications. We are designing our infrastructure to remain flexible, resilient, sustainable and technologically adaptable in the long term – and thereby to reliably meet the high demands of our customers and our own platforms well into the future.
Meet the Expert
Michael Würth will speak about the modernisation and use of existing properties at the German event “Future-Proof Data Centres in the Age of AI – New Challenges for Infrastructure, the Power Grid and Existing Facilities” on 14 April 2026. All information about the event and registration can be found here.


