05.05.2026

Data Centres in the Age of AI: How Infrastructure is Evolving

Artificial intelligence is significantly changing the demands placed on data centres. Higher power densities, new load profiles and rising energy requirements are presenting operators with structural challenges. Sascha Horn, Regional Strategic Account Manager for DACH at Vertiv, explains which adaptations are necessary and why a rethink in infrastructure planning is unavoidable. He will also be participating as a panellist at this year’s eco Data Center Expert Summit.

Mr Horn, what infrastructural adaptations are required to retrofit a conventional data centre for AI workloads, particularly with regard to cooling, power distribution and layout?

Conventional data centres quickly reach their limits when dealing with AI workloads. The key difference is that power supply, cooling and IT can no longer be planned separately – they must function as an integrated system. Power densities well above 50 kW per rack require new layouts, shorter paths for power and cooling, and increasingly the use of liquid-based cooling solutions. Although air cooling remains relevant, it is no longer sufficient on its own. At the same time, many operators must retrofit existing infrastructure whilst keeping it operational. This is precisely where modularity and scalability become critical success factors.
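To make the cooling arithmetic concrete, here is a minimal back-of-envelope sketch (not from the interview; the 10 K temperature rise and water-like coolant properties are illustrative assumptions) of the coolant flow a liquid-cooled rack needs at these power densities, using Q = ṁ · cp · ΔT:

```python
# Illustrative sketch: coolant flow needed to remove a given rack heat load.
# Assumes a water-like coolant (cp ~ 4186 J/kg.K) and a 10 K rise across
# the rack -- round numbers for the arithmetic, not vendor figures.

def coolant_flow_lps(heat_load_kw: float, delta_t_k: float = 10.0,
                     cp_j_per_kg_k: float = 4186.0,
                     density_kg_per_l: float = 1.0) -> float:
    """Litres per second of coolant to absorb heat_load_kw with a
    delta_t_k temperature rise across the rack (Q = m_dot * cp * dT)."""
    mass_flow_kg_s = (heat_load_kw * 1000.0) / (cp_j_per_kg_k * delta_t_k)
    return mass_flow_kg_s / density_kg_per_l

for kw in (50, 100):
    print(f"{kw} kW rack -> {coolant_flow_lps(kw):.2f} L/s at a 10 K rise")
```

Even at 100 kW per rack the required flow stays on the order of a few litres per second, which is why direct liquid cooling scales where air movement alone does not.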

How can power densities of 50 to 100 kW per rack be delivered reliably and efficiently? And where do today’s power distribution and UPS concepts reach their limits?

These power densities can no longer be supported using conventional architectures. It is not enough to scale existing systems – the energy infrastructure must be fundamentally rethought. The focus is on taking a holistic approach to distribution, conversion and storage. Every additional conversion stage reduces efficiency, whilst at the same time the demands on flexibility and scalability are increasing due to highly fluctuating AI loads. Traditional UPS concepts are reaching their limits, particularly with regard to efficiency, space requirements and flexibility. The latest innovations in UPS design are introducing features such as dynamic grid support, which are crucial for strengthening the power grid and supporting alternative energy sources such as solar and wind power. The key no longer lies in optimising individual components, but in maximising the overall performance of the system.
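The point about conversion stages can be illustrated numerically. In this hedged sketch (the 97% per-stage efficiency and the 1 MW IT load are assumed round numbers, not figures from the interview), each additional stage compounds the losses between grid and chip:

```python
# Sketch with assumed numbers: power delivered to a 1 MW IT load as
# conversion stages are added, each stage assumed 97% efficient.
# Overall efficiency is the product of the per-stage efficiencies.

def delivered_kw(input_kw: float, stages: int, eta: float = 0.97) -> float:
    """Power remaining after a chain of `stages` conversions at efficiency eta."""
    return input_kw * (eta ** stages)

for stages in (2, 3, 4):
    out = delivered_kw(1000.0, stages)
    print(f"{stages} stages: {out:.0f} kW delivered, {1000 - out:.0f} kW lost")
```

At these load levels, removing even one conversion stage recovers tens of kilowatts that would otherwise be lost as heat, which is why the architecture, not the individual component, is the lever.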

In your view, in which scenarios are high-voltage DC architectures already economically and technically superior to 400 V AC today? And where do obstacles still remain?

As power densities rise, so too does the pressure to deliver energy with minimal losses. High-voltage DC can offer advantages here, particularly in environments with consistently high loads such as AI training clusters. Fewer conversion steps mean potentially higher efficiency and simpler distribution. In practice, however, widespread adoption is still being held back by a lack of standards, limited operational experience and an ecosystem that is not yet fully developed. That is why the focus at present is less on replacing AC with DC, and more on the targeted selection of the appropriate architecture depending on the application.
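The efficiency case for high-voltage DC can be sketched as a chain comparison. The stage counts and per-stage efficiencies below are purely illustrative assumptions chosen to show the shape of the argument, not measured data for either architecture:

```python
# Illustrative comparison only: assumed per-stage efficiencies for a
# conventional AC distribution path versus a shorter HVDC path.
from math import prod

# AC path: transformer -> double-conversion UPS -> rack PSU (assumed values)
ac_path = [0.99, 0.96, 0.94]
# HVDC path: central rectifier -> rack-level DC/DC (assumed values)
dc_path = [0.98, 0.97]

print(f"AC path efficiency:   {prod(ac_path):.3f}")
print(f"HVDC path efficiency: {prod(dc_path):.3f}")
```

With fewer stages the DC path comes out ahead in this toy comparison; in practice, as noted above, standards, operational experience and the surrounding ecosystem decide whether that theoretical advantage can be realised.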

What impact do high-performance GPU interconnects have on the physical infrastructure in the data centre, for instance in terms of cabling, cooling and rack design?

GPU interconnects are fundamentally changing the architecture of data centres. The focus is shifting from conventional network structures towards highly integrated systems in which communication between the GPUs is central. Compute resources must therefore be physically positioned closer together, which has direct implications for rack design, cabling and power distribution. At the same time, the demands on cooling continue to rise. Infrastructure is therefore no longer planned around generic IT, but specifically designed for highly integrated AI systems. This requires a significantly closer alignment between IT architecture and physical infrastructure.
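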

How is the understanding of resilience in the data centre changing, particularly in the context of rising power densities, greater integration and new concepts such as “Bring Your Own Power” or battery storage?

Resilience is understood very differently today than it was just a few years ago. Instead of focusing on the redundancy of individual components, the emphasis is now on the stability and reliability of the overall system. Higher power densities and stronger integration also increase the potential scope of failures. Accordingly, new approaches to fault tolerance must be developed that operate at the system level. At the same time, energy concepts are becoming increasingly important: local energy generation, battery storage and “Bring Your Own Power” models not only help to secure the supply, but also to mitigate grid bottlenecks and manage peak loads. Resilience thus becomes the interface between infrastructure, energy and IT architecture – and it is precisely this interconnection that will determine the operational reliability of modern data centres in the future.
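How battery storage manages peak loads can be shown with a minimal peak-shaving sketch. The load profile, grid limit and battery capacity below are hypothetical values invented for illustration, not anything described by Mr Horn:

```python
# Minimal peak-shaving sketch (hypothetical numbers): a battery discharges
# whenever site load exceeds the contracted grid limit, so the grid never
# sees more than that limit while the battery has charge.

def shave_peaks(load_kw, grid_limit_kw, battery_kwh, step_h=0.25):
    """Return the grid draw per time step, discharging the battery above the limit."""
    soc = battery_kwh                           # battery state of charge, kWh
    grid = []
    for load in load_kw:
        excess = max(0.0, load - grid_limit_kw)
        discharge = min(excess, soc / step_h)   # kW the battery can still supply
        soc -= discharge * step_h
        grid.append(load - discharge)
    return grid

profile = [800, 900, 1200, 1400, 1100, 850]     # kW, 15-minute steps
draw = shave_peaks(profile, grid_limit_kw=1000, battery_kwh=300)
print([round(g) for g in draw])                 # grid draw capped at 1000 kW
```

The same mechanism, run in reverse against a price or grid-frequency signal, is what turns on-site storage into the kind of dynamic grid support mentioned earlier.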
