06.06.2023

Three Questions for Prof. Sandra Thomas, Provadis University of Applied Sciences

Prof. Sandra Thomas is a professor in the Department of Economics at the Provadis School of International Management and Technology AG and an expert in strategy and corporate management as well as technology and innovation management. In this eco interview, she talks about the influence of the data centre and telecommunications industry on current business models and about the future of data processing. She will share more of her expertise at this year’s German-language Data Centre Expert Summit on 14 and 15 June, where she will give a keynote on innovation management, among other topics.

Prof. Thomas, you deal with technology and innovation management at the Provadis School in Frankfurt. Where do you see possible changes in business models in the data centre and telecommunications industry in the future?

From a technological perspective, both sectors are characterised by high innovation rates. The basic business models, however, have not changed much over time: capacities and services continue to be marketed, to a greater or lesser extent, in subscription models to businesses and/or end customers. There is certainly potential for innovation here. Momentum arises both from the demand side and from technological development. Technologies such as edge computing, hybrid or multi-cloud solutions, virtualisation and data encryption can push forward business models that offer specialised services for orchestrating, optimising and securing the corresponding architectures. On the demand side, I observe equally significant stimuli. After all, both industries are enablers of digitalisation and virtualisation across all sectors of industry, and a wide range of business models has emerged as a result. First and foremost, of course, is the platform model, which is the foundation for companies like Uber, Airbnb or Delivery Hero. But it is not only platforms that are relevant for the data centre and telecommunications industry; a number of other developments, such as OTT services and FinTechs, matter as well.

Data gravity plays a decisive role in the establishment of digital ecosystems. However, data traffic has barely appeared on economic balance sheets so far. What could a trade balance look like in terms of data imports and exports?

From a business perspective, the value of data has been recognised for years. Conventional exchange relationships between agents, i.e. goods or services for money, are regularly supplemented by data flows, and entire business models, such as WhatsApp, are based on data exchange. From an economic perspective, there is still a gap here: we measure trade balances and other indicators of economic performance almost exclusively in monetary terms, and so far there is no uniform method for capturing data traffic in economic balances. Approaches that suggest converting data flows into monetary values and then including those values in the trade balance do not fully convince me, as there are too many influencing factors in determining the “true value” of data. I believe the introduction of some kind of “data account” would be an interesting option at this point. Similar to bank accounts, governments and also companies could set up accounts to track and account for data traffic. This would at least make data flows transparent, and this information could complement today’s trade balances.

Moore’s Law has been a reliable constant for the last 50 years. With the advancement of AI, GPU processors and quantum computing, could this change fundamentally?

For several decades, Moore’s Law has indeed served as a guiding principle in the semiconductor industry. It states that the complexity of integrated circuits, measured by the number of circuit components at minimum cost, doubles at regular intervals. However, the future of Moore’s Law has become a subject of debate, particularly in light of advances in AI, GPU processors and quantum computing. We are approaching the physical limits of semiconductor technology, which makes it increasingly complex and expensive to shrink transistors further. The latter aspect, cost, is unfortunately regularly underplayed in interpretations of Moore’s Law.

In recent years, however, we have seen a number of other events and developments that also influence costs, and this too puts pressure on the “law”. Regarding technological advancements, on the one hand we witness the emergence of specialised processors such as AI accelerators and GPUs, whose performance optimisation for specific tasks does not rely solely on transistor scaling. On the other hand, we also observe technological approaches that go beyond traditional semiconductor technology, such as three-dimensional integrated circuits or quantum computing. Moore’s Law applies to these advancements only conditionally, or not at all. In summary, one can certainly say that the familiar regularity in semiconductor technology will change, but exactly how and to what extent, I dare not predict. The future of computing is likely to be a combination of advances in different areas, including artificial intelligence, specialised processors, quantum computing and innovative architectures that can reshape the industry landscape.
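The doubling that Moore’s Law describes can be made concrete with a small back-of-the-envelope calculation. The sketch below is my own illustration, not taken from the interview; it assumes a fixed doubling period of two years, which is one common reading of the law:

```python
def projected_transistors(start_count: int, years: float,
                          doubling_period: float = 2.0) -> int:
    """Project a transistor count forward in time, assuming one doubling
    every `doubling_period` years (an assumed, idealised reading of
    Moore's Law -- real scaling has slowed and varies by technology)."""
    return int(start_count * 2 ** (years / doubling_period))

# Starting from 1 billion transistors, 10 years with a 2-year doubling
# period yields 2**5 = 32 times as many.
print(projected_transistors(1_000_000_000, 10))  # 32000000000
```

The exponential form makes clear why physical and cost limits bite so hard: each further doubling demands as much progress as all previous doublings combined.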

Thank you very much for the interview, Prof. Thomas!


Here you will find the programme for this year’s German-language Data Centre Expert Summit.


Sandra Thomas