According to all the rules of Artificial Intelligence






Dr Jeannette Gorzala, BSc, is a lawyer and founding partner of the law firm go_legal. She specialises in commercial law and AI. As Vice President of the European AI Forum, she represents more than 2,000 AI companies in Europe.

The Austrian lawyer Jeannette Gorzala played a significant role in the development of the AI Act, with which Europe becomes the first continent to create a blueprint for regulating Artificial Intelligence (AI). In this interview, she shares her assessment of the regulatory framework and discusses the challenges ahead.

Just recently, an agreement on the AI Act was reached in the EU. Is Europe on the right track with this?

Jeannette Gorzala – The EU is a trailblazer with such a comprehensive regulatory framework and has taken the opportunity to set a precedent. It is already evident that this approach is being embraced globally: the AI Bill of Rights in the USA addresses the same critical areas and makes similar recommendations. The crucial question now is how the framework will be implemented. What will the implementation and guidance look like, especially in high-risk areas and for generative AI models? And how will they be enforced and monitored by the future AI authority?

Could regulations become a hindrance to innovation in Artificial Intelligence in Europe?

Jeannette Gorzala – Naturally, there are always concerns about overregulation, but it must be noted: the AI Act did not exist before, and yet we have not seen major European players in generative AI. Europe recognised the trend too late and, in global comparison, has invested too little. The brain drain is also extremely painful: many European talents are drawn to the USA. Regulation is not the obstacle; the issue is economic, and for that we need to promote Europe as a business location. We should start by supporting start-ups and business relocations and by making it easier for qualified talent to immigrate. It is also often overlooked that the AI Act is not just a regulation but also creates spaces for experimenting with AI, such as the sandboxes for start-ups and SMEs. In addition, the research sector is exempt from the AI Act, and developments in the high-risk area will remain possible with accompanying measures, but under real-world conditions.

How can Europe catch up on the topic of AI?

Jeannette Gorzala – It doesn't make sense to play copycat now and try to build a ChatGPT for Europe. I believe the opportunity lies in the next wave: multimodal models that, for example, combine text and image. Another interesting trend for Europe is smaller, more specialised models. The finance industry has different requirements than, say, medicine or the security sector. I see an opportunity for such targeted models and in the open-source domain. We just need to start extremely quickly and with united efforts. The AI Act creates an internal AI market with equal rules for everyone, which is a significant relief for developers and for businesses using AI.

How do you view the guidelines that have been agreed on the subject of biometric surveillance?

Jeannette Gorzala – All parties endeavoured to reach a compromise here; it was one of the most difficult points. In my view, we have found a solution that takes all positions into account appropriately: safeguarding fundamental rights and freedoms without banning these technologies completely for the security sector.

There is a two-tier risk system for the Large Language Models. Do you think this is a good solution?

Jeannette Gorzala – We still know far too little about these new models, partly from a scientific point of view and partly because the providers are not very transparent. For this reason, it has so far been difficult to assess the risk. The AI Act addresses this point very well, through the documentation obligations and the duties to pass on certain information. It is precisely this transparency that will enable us to better understand in future how these models work and where specific adjustments can be made to mitigate risk. The categorisation according to FLOPs (floating point operations), i.e. the computing power used to train the AI models, is a point of criticism. Under it, only models trained with at least 10²⁵ FLOPs are categorised as posing systemic risk, and such a mathematical value is also easy to circumvent. However, there will be additional implementing acts in future.
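To give a rough sense of scale for the 10²⁵ FLOP threshold mentioned above, the sketch below uses the widely cited 6·N·D approximation (training compute ≈ 6 × parameters × training tokens). The model figures are public estimates for a GPT-3-scale model, used purely for illustration, not official data from the AI Act or any provider.

```python
# Illustrative check against the AI Act's 10^25 FLOP threshold for
# models with "systemic risk", using the common 6*N*D approximation:
# training compute ~ 6 x parameters x training tokens.

THRESHOLD = 1e25  # FLOP threshold named in the AI Act

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute via the 6*N*D heuristic."""
    return 6 * params * tokens

# GPT-3-scale example (public estimates): ~175B parameters, ~300B tokens
flops = training_flops(175e9, 300e9)
print(f"{flops:.2e} FLOPs -> systemic risk: {flops >= THRESHOLD}")
```

On these figures the estimate is about 3×10²³ FLOPs, well below the threshold, which illustrates how high the bar sits and why only the very largest models are captured by it.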



Header Photo: AdobeStock/nilanka
