15.03.2024

AI and Cybersecurity – eco Expert Group Formulates 5 Theses

Artificial intelligence (AI) heralds the next evolutionary stage of cybersecurity. How will IT teams defend against cyberattacks in the future? The Security Competence Group of eco – Association of the Internet Industry met for an expert workshop and developed five key theses on how to deal with AI in the IT security landscape.

KI und Cybersicherheit – eco Expertengruppe formuliert 5 Thesen
  1. We need precise definitions of what AI is and is not. On the one hand, AI methods have been used in many systems for a very long time without these systems being labelled or perceived as AI systems. On the other hand, the term artificial intelligence suggests that systems could develop at least similar abilities to humans in terms of perception and decision-making. This scares some people and at times prevents an objective discussion of the opportunities and risks of AI.
  2. We underestimate the capabilities of AI systems. The experts warn of growing cybercrime and advocate proactive security strategies, with employee training playing a central role. This is becoming increasingly important: the main attack vectors are attacks via social engineering and psychological manipulation.
  3. Regulatory efforts through legislation are too opaque and granular. Given the slow pace of the EU AI regulation, it is important to create transparency in which cases and how AI should be regulated. The regulation of open source AI systems should be discussed more intensively, as these are subject to different standards and overly granular regulation misses the mark.
  4. AI makes it more difficult to identify phishing emails. Prompt injections in particular, in which the AI is freed from its fixed role description, are increasingly being misused for negative purposes. The experts pointed out various ways of overcoming these problems, such as automatically analysing the website structure of fake websites. This involves scouring the various contents of the website for anomalies. For example, input fields or design elements are checked. The texts are also checked for word frequencies and checked with a Bayesian filter.
  5. We have to adapt traditional security strategies for AI. AI security problems are often basically traditional cyber security challenges. Governance strategies and zero trust approaches are also important for secure AI implementations. The quality of training data plays an important role in the safety of AI systems. This means that data hygiene is becoming increasingly important. Standards are also helpful in making AI safer. These are developed, for example, by the European Telecommunications Standards Institute (ETSI), the Cloud Security Alliance (CSA) and the German Federal Office for Information Security (BSI).

All of the Security CG experts emphasise that a realistic assessment of the risks associated with AI is crucial in order to strengthen trust in these applications and develop them further. Only in this way can the benefits be utilised efficiently – without neglecting the downsides. Recommendations for the use of AI include training, the implementation of multi-factor authentication, regular security audits, transparent use of AI and the continuous development of defence technologies. The Security Competence Group is actively committed to supporting companies and organisations in the secure integration of AI into their security strategies.

AI and Cybersecurity - eco Expert Group Formulates 5 Theses