February 7, 2022

Call to action: With nuanced amendments, the draft AI Act can promote AI innovation across Europe

Xperi’s John Patterson, VP and Associate General Counsel, shares his recommendations on the draft AI Act and the ways in which it could be adapted to support AI innovation across Europe

As we reset in 2022, it’s clear the past two years have been challenging for many industries. Businesses have had to pivot quickly to adapt to rapidly changing circumstances. Technology has been integral to that shift, and innovation has continued apace, especially in artificial intelligence (AI). We have seen phenomenal progress and transformation in AI, and this looks set to continue throughout the year.

AI now permeates the daily lives of European citizens. When the European Commission (EC) released its “White Paper on AI” in 2020, it aimed to realise an ‘ecosystem of excellence’ and an ‘ecosystem of trust.’ The white paper acknowledged the huge impact this powerful technology will have on the world: overhauling industries such as agriculture and healthcare, and greatly benefitting the environment.

The Commission followed the White Paper on AI with a draft AI Act, which addressed many of the issues raised during the consultations. The draft Act is now in the negotiation stage: it is under scrutiny by the European Parliament and has been subject to suggested changes under the Slovenian and French Council presidencies. However, we believe that further changes to the draft Act are needed to promote AI innovation across Europe.

With the subtle changes proposed below, on top of those put forward by the Slovenian and French Presidencies, the AI Act would, we argue, be fit for purpose: a world-first piece of harmonising legislation capable of heavily regulating truly high-risk AI without stifling the ability of European SMEs to innovate in this space.

The amendments to the draft AI Act include:

  • Carrying out an initial risk assessment of AI systems, using rules already laid out in the draft AI Act, to ensure that the proposed rules on ‘high-risk’ AI apply only to AI systems that pose a risk of harm to health and safety, or a risk to fundamental rights, that is not already mitigated.

The definitions in the Act are currently absolute: if an AI system falls within the current description of high-risk, the burdensome and expensive requirements of the legislation must be followed, regardless of the actual risk posed. While the Act anticipates that standards bodies and others will produce conformance guidelines, we argue it can be improved at the outset, using provisions already drafted in the Act, by requiring an initial risk assessment that takes the actual risk posed by the AI in question into account before burdening users with the high cost of compliance. Tests for determining whether AI is high-risk, and a risk assessment, are already included in the Act; with subtle changes to the definitions, these tests can give the Act the flexibility it needs to avoid unnecessarily stifling innovation in Europe.

  • Applying the same EUR 500,000 penalties for non-compliance to commercial and government organisations alike, creating a level playing field and promoting balanced AI innovation across Europe.

The Act establishes a compliance framework for ‘high-risk AI’ and imposes severe penalties for non-compliance on commercial organisations but, interestingly, not on governments, despite AI being an emerging technology. These penalties could discourage investment in AI, particularly by small and medium-sized enterprises (SMEs), which would suffer disproportionately from the proposed financial risks.
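
To make the SME point concrete, here is a minimal sketch (our own illustration, not part of the Act or of our proposal; the fine figures come from Article 71(3) as quoted in the amendments below, and the SME turnover is hypothetical) of how the ‘whichever is higher’ fine ceiling plays out:

```python
# Illustrative sketch only: the draft AI Act caps administrative fines at a
# fixed amount or, for companies, a percentage of total worldwide annual
# turnover, "whichever is higher".

def fine_ceiling(annual_turnover_eur: float,
                 fixed_cap_eur: float,
                 turnover_share: float) -> float:
    """Return the maximum administrative fine a company could face."""
    return max(fixed_cap_eur, turnover_share * annual_turnover_eur)

# Hypothetical SME with EUR 5 million in annual turnover, facing an
# Article 71(3) infringement (6% of turnover = EUR 300,000).
sme_turnover = 5_000_000

current = fine_ceiling(sme_turnover, fixed_cap_eur=30_000_000, turnover_share=0.06)
proposed = fine_ceiling(sme_turnover, fixed_cap_eur=500_000, turnover_share=0.06)

print(f"Ceiling under current draft: EUR {current:,.0f}")   # EUR 30,000,000
print(f"Ceiling under our proposal:  EUR {proposed:,.0f}")  # EUR 500,000
```

For a company of this size, the turnover-based limb (6% of EUR 5 million, i.e. EUR 300,000) sits well below the fixed figure, so the fixed cap alone determines the exposure; lowering that cap from EUR 30 million to EUR 500,000 is what makes the financial risk proportionate for SMEs.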

Call to action

We understand the dangers of high-risk AI and applaud the Commission’s courage and world leadership in proposing this legislation. Set out below are some subtle changes to the AI Act which, if implemented alongside the Slovenian and French Presidencies’ proposals, would, we argue, help preserve innovation in the EU whilst protecting European citizens from harmful AI. We are calling on companies to support us in asking the EU to consider adopting these additional changes in their responses to the Commission.

Our proposed amendments to the draft AI Act

  • Article 6, Item a – the AI system is intended to be used as a safety component of a product, or is itself a product, covered by the Union harmonisation legislation listed in Annex II Section A
  • Article 6, Item b – the product whose safety component is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to the placing on the market or putting into service of that product pursuant to the Union harmonisation legislation listed in Annex II Section A
  • Article 6, Item 2 – In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered high-risk when the AI system poses a risk of harm to health and safety or a risk of adverse impact on fundamental rights that cannot be reasonably mitigated. Determination of the risk of harm to health and safety or of a risk of adverse impact on fundamental rights shall be made by carrying out a preliminary risk assessment of the AI system in question, taking into account the criteria described in Article 7.
  • Article 7, Title – Annex III and Amendments to Annex III
  • Article 7, Item 2 – When assessing for the purposes of Article 6 paragraph 2 or Article 7 paragraph 1 whether an AI system poses a risk of harm to health and safety or a risk of adverse impact on fundamental rights that is equivalent to or greater than the risk of harm posed by the high-risk AI systems in Annex III (for the purposes of Article 7), or that cannot be reasonably mitigated (for the purposes of Article 6 paragraph 2), the Commission, in the case of Article 7, and the user/provider, in the case of Article 6 paragraph 2, shall take into account the following criteria.
  • Article 8, Item 2 – The intended purpose of the high-risk AI system and the risk management system referred to in Article 9 shall be taken into account when determining compliance with those requirements.
  • Article 71, Item 3 – The following infringements shall be subject to administrative fines of up to 500 000 EUR (reduced from 30 000 000 EUR) or, if the offender is a company, up to 6% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Article 71, Item 4 – The non-compliance of the AI system with any requirements or obligations under this Regulation, other than those laid down in Articles 5 and 10, shall be subject to administrative fines of up to 250 000 EUR (reduced from 20 000 000 EUR) or, if the offender is a company, up to 4% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Article 71, Item 5 – The supply of incorrect, incomplete or misleading information to notified bodies and national competent authorities in reply to a request shall be subject to administrative fines of up to 5 000 000 EUR (reduced from 10 000 000 EUR) or, if the offender is a company, up to 2% of its total worldwide annual turnover for the preceding financial year, whichever is higher.