Digital infrastructures place high demands on power supply, cooling and system availability. Vertiv develops solutions specifically for application areas such as data centres and industrial environments. In this interview, Sascha Horn, Regional Strategic Account Manager DACH at Vertiv, provides insights into current trends and developments in the data centre market.
What trends are you currently seeing in the construction of new data centres in the DACH region?
We are currently observing three important trends in the DACH region: firstly, the increasing demand for AI-enabled data centre infrastructure. Whereas power densities in traditional rack environments tended to rise slowly but steadily – racks above 20 kW were rare – we now regularly see installations with 70 to 100 kW per rack. With modern AI applications, this can sometimes reach 120 kW or more. Secondly, the issue of digital sovereignty is gaining importance – a development that the eco Association has been successfully promoting for years, but which also places new demands on data centre infrastructure. A third trend is certainly the rising connectivity and infrastructure requirements between cloud, edge and on-premise environments.
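A rough back-of-the-envelope calculation shows why such densities push conventional air cooling to its limits. The sketch below is purely illustrative and is not from the interview: it uses textbook values for air, and the 15 K temperature rise across the rack is an assumption.

```python
# Illustrative only: airflow needed to remove a rack's heat load with air cooling.
# Assumptions (not from the interview): air density 1.2 kg/m^3,
# specific heat 1005 J/(kg*K), 15 K inlet-to-outlet temperature rise.

RHO_AIR = 1.2      # kg/m^3
CP_AIR = 1005.0    # J/(kg*K)
DELTA_T = 15.0     # K, assumed temperature rise across the rack

def required_airflow_m3h(rack_kw: float) -> float:
    """Volumetric airflow (m^3/h) needed to carry away rack_kw of heat."""
    watts = rack_kw * 1000.0
    m3_per_s = watts / (RHO_AIR * CP_AIR * DELTA_T)
    return m3_per_s * 3600.0

for kw in (20, 70, 100, 120):
    print(f"{kw:>3} kW rack -> ~{required_airflow_m3h(kw):,.0f} m^3/h of air")
# ~4,000 m^3/h at 20 kW, but ~20,000 m^3/h at 100 kW: hence liquid cooling.
```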
What innovative technologies does Vertiv use in the construction of new data centres – particularly in terms of energy efficiency?
In high-density environments such as AI and edge computing, we increasingly rely on the integration of liquid cooling – either as a direct-to-chip solution for GPU-intensive AI workloads or in the form of immersion cooling with maximum power densities of up to 200 kW per unit. Liquid cooling not only enables PUE values close to 1.0, but also more efficient heat recovery. As a transitional technology, we also offer hybrid cooling concepts that allow operators to switch gradually from air to liquid cooling. Efficiency and reliability are not the only decisive factors here; so is our holistic service approach. Our specialised service offerings for liquid cooling systems cover the entire life cycle – from installation and fluid management to monitoring and preventive maintenance.
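For readers unfamiliar with the metric: PUE (power usage effectiveness) is the ratio of total facility energy to IT equipment energy, so a value close to 1.0 means almost no overhead for cooling and power distribution. The sketch below uses invented example loads, not Vertiv measurements.

```python
# PUE = total facility energy / IT equipment energy (all figures are invented examples).

def pue(it_kw: float, cooling_kw: float, power_dist_kw: float) -> float:
    """Power usage effectiveness for a given load breakdown (all in kW)."""
    total = it_kw + cooling_kw + power_dist_kw
    return total / it_kw

# Assumed example: the same 1,000 kW IT load, cooled two different ways.
air_cooled = pue(it_kw=1000, cooling_kw=400, power_dist_kw=80)     # ~1.48
liquid_cooled = pue(it_kw=1000, cooling_kw=60, power_dist_kw=80)   # ~1.14

print(f"air-cooled PUE:    {air_cooled:.2f}")
print(f"liquid-cooled PUE: {liquid_cooled:.2f}")
```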
How can data centres be designed in a modular and scalable way so that they can respond flexibly to future requirements?
The key to a scalable data centre infrastructure today lies primarily in the use of modular prefab systems that can be easily expanded as needed. We pursue this approach, for instance, with our Vertiv SmartRun, PowerMod and MegaMod concepts. Against the backdrop of digital transformation, modern data centres should be able to respond quickly to new requirements without sacrificing reliability or energy efficiency. Extensive decoupling of the various infrastructure levels is crucial for a high degree of flexibility: if cooling, power and IT infrastructure can be scaled almost independently of each other, the respective investment phases (e.g. for site expansion) can also be managed much more precisely.
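To make the decoupling idea concrete, here is a toy capacity model, not a representation of any Vertiv product: each infrastructure level is an independent pool that can be expanded on its own schedule, and the usable IT capacity is bounded by the tightest pool.

```python
# Toy model of decoupled infrastructure scaling (illustrative, not a real product).

from dataclasses import dataclass

@dataclass
class Site:
    power_kw: float    # installed power capacity
    cooling_kw: float  # installed cooling capacity
    it_kw: float       # installed IT load capacity

    def usable_it_kw(self) -> float:
        """IT load the site can actually support: the tightest pool wins."""
        return min(self.power_kw, self.cooling_kw, self.it_kw)

site = Site(power_kw=2000, cooling_kw=1200, it_kw=1500)
print(site.usable_it_kw())  # 1200 -> cooling is the bottleneck

# Because the levels are decoupled, only the bottleneck needs investment:
site.cooling_kw += 600      # e.g. add one prefab cooling module
print(site.usable_it_kw())  # 1500 -> IT capacity is now the limit
```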
How does Vertiv support the trend towards sustainable data centres – whether through technology, materials or construction?
Effective IT sustainability management is no longer an end in itself, but one of the key competitive factors in IT operations. Moreover, electricity in Germany is an expensive commodity and is becoming increasingly scarce in some regions. Operators must therefore work on their energy efficiency on several levels in order to meet extensive legal requirements, growing customer expectations and tight budgets. This is particularly true against the backdrop of increasing power densities, for example through the integration of local AI applications. At Vertiv, we take a multi-dimensional approach to sustainability. In the area of cooling, the focus is primarily on the integration of highly efficient liquid cooling systems. We also offer a wide range of retrofit options for existing data centres: rear-door heat exchangers, for instance, can be easily integrated into existing racks to increase efficiency and performance per rack. We are also breaking new ground in the area of materials. Our TimberMod modular solutions, for example, are based on wood, a renewable and particularly sustainable building material that helps to minimise resource consumption and reduce CO₂ emissions.
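As an illustration of why such efficiency retrofits pay off, the following sketch estimates the annual energy and cost effect of a PUE improvement. All figures here (IT load, electricity price, the before-and-after PUE values) are assumptions for the example, not numbers from Vertiv.

```python
# Annual effect of a PUE retrofit (all input figures are assumptions).

HOURS_PER_YEAR = 8760

def annual_savings(it_kw: float, pue_before: float, pue_after: float,
                   eur_per_kwh: float) -> tuple[float, float]:
    """Return (kWh saved per year, EUR saved per year) for a constant IT load."""
    kwh_saved = it_kw * (pue_before - pue_after) * HOURS_PER_YEAR
    return kwh_saved, kwh_saved * eur_per_kwh

# Assumed example: 1 MW IT load, PUE improved from 1.5 to 1.3, 0.20 EUR/kWh.
kwh, eur = annual_savings(it_kw=1000, pue_before=1.5, pue_after=1.3,
                          eur_per_kwh=0.20)
print(f"~{kwh:,.0f} kWh and ~{eur:,.0f} EUR saved per year")
# ~1,752,000 kWh and ~350,400 EUR per year under these assumptions.
```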
Besides the cooling infrastructure, other infrastructure components also contribute to low power consumption and high availability. How is Vertiv positioned here?
Our state-of-the-art multimode double-conversion UPS systems achieve efficiencies of up to 99 per cent in ECO mode. Thanks to intelligent operating modes, the UPS system can automatically respond to the current power quality. In addition, our cabinet systems and enclosure solutions ensure effective separation of hot and cold air. The digital level is supported by our monitoring software solutions. Adaptive infrastructure management monitors all sensors and components in the data centre in real time and uses modern AI algorithms to identify energy-saving potential at an early stage and optimise PUE values.
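To put the efficiency figures in perspective, here is a rough comparison of UPS losses in classic double-conversion operation versus a high-efficiency ECO mode. Only the 99 per cent ECO-mode figure comes from the interview; the 96 per cent double-conversion efficiency and the 500 kW load are assumptions for the example.

```python
# UPS losses at a given load and efficiency (figures mostly assumed; see above).

HOURS_PER_YEAR = 8760

def ups_loss_kw(load_kw: float, efficiency: float) -> float:
    """Power drawn from the grid minus power delivered to the load."""
    return load_kw / efficiency - load_kw

load = 500.0                                 # kW, assumed IT load on the UPS
double_conversion = ups_loss_kw(load, 0.96)  # ~20.8 kW lost (assumed efficiency)
eco_mode = ups_loss_kw(load, 0.99)           # ~5.1 kW lost (99% per the interview)

saved_kwh = (double_conversion - eco_mode) * HOURS_PER_YEAR
print(f"ECO mode avoids ~{saved_kwh:,.0f} kWh of UPS losses per year")
# ~138,000 kWh per year under these assumptions.
```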
How important are partnerships with energy providers, municipalities or other tech companies for successful data centre construction?
Collaboration is the foundation of the digital economy, and strong partner ecosystems are indispensable, especially for the planning and operation of modern data centres. Vertiv has been working at various levels with local energy suppliers, municipalities and regional providers for many years. On a technological level, we also partner with leading chip manufacturers such as NVIDIA to develop efficient liquid cooling solutions for the latest AI systems. Our global service network – with over 310 service centres worldwide and more than 4,000 field service engineers – is also a key factor in our success. The increasing complexity of liquid cooling systems will require even more specialised expertise in the future. That is why we are bridging the gap between manufacturers, operators and partners through our Liquid Cooling Services. To this end, our innovative EMEA training centre in Frankfurt will also open in June 2025.
