October 7, 2021

Recap of Xperi AI Webinar: The challenges of generic AI regulation – an Industry Perspective

Blog Image 1 - Title Slide

On 22 September 2021, Xperi, in association with Déirdre Clune – MEP for Ireland South – held the first in its new series of webinars, ‘The EU AI Act – an Industry Perspective’.

This first webinar, ‘The challenges of generic AI regulation – an Industry Perspective’, was designed to give an update on the current status of the European Union’s proposals for regulation in the field of Artificial Intelligence (AI) and to give representatives of a broad range of industries a voice in discussing the act.

Moderated by the distinguished Professor Barry O’Sullivan from University College Cork in Ireland, the event opened with an introduction from Gabriel Cosgrave, General Manager for Xperi in Europe. 

Gabriel spoke about why Xperi is hosting the series of events in conjunction with a broad range of companies across many sectors: the need for industry to have a voice on how the act could affect innovation, cost and job creation. He stated that “Parts of the drafted AI Act and surrounding information can portray a negative connotation towards artificial intelligence, where in fact, AI when used for the right purpose, for example, as safety improvements, can help provide more enhanced solutions through advanced technology to our own people’s benefit.”

Gabriel called out the need to differentiate between Edge AI and Cloud AI (i.e. offline and online AI) and the need for sectoral nuance when defining any framework, regulation or risk categorisation. He also highlighted the need to look carefully at the compliance regimes to ensure they are fit for purpose for businesses of all sizes. Gabriel closed with a call to lawmakers not to make the regulation overly broad – something that risks slowing down innovation in Europe and allowing other jurisdictions, both east and west, to become the AI powerhouses of the globe.

Following on from Gabriel, Barry welcomed Déirdre Clune, MEP for Ireland South and the European Parliament sponsor of this event, to make her opening comments. Ms. Clune spoke about how she is looking forward to working across the Parliament and with the Commission in developing the Parliament’s response and position on the proposed regulation. Ms. Clune herself will lead the representation from the European People’s Party.

Blog Image 2 - Déirdre Clune

Ms. Clune went on to state that there is a recognition of the potential that AI has for everyday lives, for consumer experiences and for introducing efficiencies in the economies and democracies of Europe. She stated that “Artificial intelligence can really work in our best interest, and we want Europe to be at the forefront in setting rules and global standards in this area. It is the first and we’re proud of the fact that Europe will be the first in leading in this area. We want a human-centric, risk-based, balanced legal framework with high ethical standards, appropriate liability rules, legal certainty for developers and users. I think that’s important. I think it’s important that we wouldn’t have a fragmented approach across Europe, that we would have a single standard.”

Ms. Clune continued by posing a number of questions around the definition of AI, ensuring the correct classification of risk, the need for workable compliance overseen by regulators who are sufficiently qualified, and the need to be mindful of overlap and unintended consequences with existing legislation, for instance around medical devices, financial services or product safety. Ms. Clune closed by stating that, as the Parliament prepares for its work this autumn, this webinar would be the ideal opportunity to hear the panel’s position, and she was looking forward to further discussions on the topic.

Next up, Barry welcomed Salvatore Scalzo – Policy and Legal Officer in the field of AI at the European Commission – to give an update on the status of the current draft of the AI Act.

Salvatore started his presentation with a quote from Ursula von der Leyen, the President of the European Commission, who said during her 2020 State of the Union address: “Whether it’s precision farming in agriculture, more accurate medical diagnosis or safe autonomous driving – artificial intelligence will open up new worlds for us. But this world also needs rules.” Salvatore went on to explain the key regulatory concepts which underpin the proposed framework. The Commission is relying on an existing regulatory framework that has been very successful over the past few decades: the classic internal market rules for placing products on the market and into service – in this case the product being AI-based systems.

Whilst Salvatore confirmed that any activities by the military are out of scope of the act due to their enhanced complexities, the Commission is seeking to take a proportionate, risk-based approach that provides a level playing field for both EU and non-EU players whose products and services are available within the European Union. He presented the Commission’s definition of AI as “a software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. The definition is inspired by the OECD (Organisation for Economic Co-operation and Development) definition of AI and is kept as neutral as possible so as not to rule out future developments, whilst trying to cover all areas of AI and provide legal certainty by listing recognised techniques and approaches.

Salvatore then explained the four levels of risk – ranging from ‘unacceptable’ to ‘minimal or no risk’, the latter being described as the area the Commission expects a very large majority of systems to fall into. The core of the regulation itself is aimed at addressing the middle two layers, described as ‘high risk’ and ‘transparent risk’. He stated that for ‘unacceptable risk’, “There are certain practices in article five that we want to be prohibited. We refer to subliminal manipulation or exploitation of vulnerabilities which result in physical or psychological harm, the practice of social scoring by public authorities and the real-time remote biometric identification for law enforcement in publicly accessible spaces.”, although he caveated that for the last example there are some specific exceptions, such as “the search for victims of crime, the threat to life or physical integrity, or terrorism and serious crime, notably in relation to the EU arrest warrant framework, but there will have to be for those exceptions an ex ante authorization by a judicial authority or independent administrative body.”

Blog Image 3 - Salvatore Scalzo

With regards to the ‘high risk’ categorisation, Salvatore explained there are two categorisations being used. Firstly, “the safety components of products which are already regulated under product legislation within the EU and they’re subjected to third-party assessment under the relevant sectorial legislation”, such as medical devices and machinery, and secondly “standalone AI systems which are used in certain areas, which are identified as possibly high-risk areas”. However, even where a system falls within one of the areas outlined in Annex III of the act, it is the actual use case that is important when determining risk. High-risk applications will be permitted subject to compliance with AI requirements and ex-ante conformity assessments.

For what the Commission calls ‘transparent risk’, AI systems will be permitted but subject to information/transparency obligations. In this case Salvatore stated “that basically here we make a very minimal intervention. For us, these systems like chatbots, deep fakes or emotion recognition systems or biometric categorization systems, in our view, they’re not high risk. They just pose transparency risk. In a nutshell, we want that the persons exposed to those systems can be informed they are interacting with an AI system.”

Finally, for the fourth level, determined as ‘minimal or no risk’, the Commission is seeking to establish the possibility of codes of conduct, though these would be adopted on a purely voluntary basis.

Salvatore also spoke about how the regulation is being formulated to support innovation through the establishment of ‘regulatory sandboxes’ and by providing specific support for SMEs and start-ups, with priority access to the sandboxes, considerations towards conformity assessment fees, and harmonised technical standards which are both market-driven and market-orientated.

In closing, Salvatore’s final words were about governance, stating the Commission “… foresee two levels. The day-by-day implementation will be at national level with national competent authorities and among authorities, there would be one national supervisory authority holding a kind of coordination role within each single member state. At the European level, we would envisage to create an artificial intelligence board composed of the authorities at national level, the European data protection supervisor and chaired by the commission services. In a nutshell, the scope of this board would be to harmonize the implementation practices across the EU because we want the rules to be applied in the same manner across all the EU and within all member states to the best possible extent.”

Following on from Salvatore, it was the turn of John Patterson, Associate General Counsel at Xperi, to present his remarks. John started by stating that he understands “the aims of the proposed legislation to protect the European values and promote trustworthy and human-centric AI” and applauds “the commission’s aims to ensure that AI is a force for good and harmonized across EU states.”

John went on to say that while the current draft has drawn criticism from both industry and consumer groups, the Act’s approach to addressing the risks of AI in four categories is commendable. But, in his opinion, “… further work is needed to ensure that the definition of what is high risk is more nuanced.”

John then noted that high-risk AI currently includes products or safety components of products that are already covered by harmonised EU legislation – examples being cars, lifts/elevators and aircraft – and that this legislation already ensures the products themselves are safe. John stated, “If AI is to be included in an already regulated safety product, then arguably to impose all the high-risk requirements of the act may be overkill especially where some of the requirements, such as human oversight or keeping records of use, may be inappropriate, for example, in a real-time AI system used for in-car monitoring.”, highlighting the anomaly that “The exclusion of safety components from the AI Act is under contemplation under article two, but then it’s reintroduced in article six if third-party conformity testing is required.” and saying “How these two articles interact with each other, in my opinion, could be made a little clearer.”

Other areas that John spoke about were the cost of compliance and the potentially huge fines being proposed for breaches of the regulations. He pointed out the disparity between the fines for commercial vis-à-vis governmental organisations: commercial organisations could face fines of up to 6% of worldwide turnover or €30 million, while government organisations face fines of up to €500,000.

In summing up, John made some suggestions for change: “Number one, the definition of high risk should be based on a more nuanced view of the actual risk of AI, taking into account more factors than the areas listed in annex three of the Act. Number two, products that are covered by current safety legislation should be left out of the scope of the act. Number three, the cost and requirements of compliance should better reflect the risks of the AI in question. Lastly, the potential fines should be reduced so as not to inhibit good faith innovation using AI in the EU, perhaps in line with the EU’s half a million euros figure for honest breaches with the headline huge amounts being reserved for wilful and gross acts of non-compliance.”

Following John’s introduction, Barry invited the other members of the panel to give their positioning statements. First up was Dr. Petronel (Petro) Bigioi, Product CTO at Xperi, who gave some insight into his background working in the field of AI and expressed his hope that the regulation would put Europe at the forefront of artificial intelligence, not at its tail end. Petro stated that it has been “… demonstrated time after time after time that artificial intelligence used as an engineering tool to solve problems, it’s actually better than traditional methods and statistical methods developed more recently.”

Blog Image 4 - Panel Participants

Petro then went on to explain that “not all AIs are the same.”: there are implementations on the server, implementations on the device side, implementations that are trained offline prior to release and never change after they are released, and then there are self-learning, self-adapting AIs that continuously learn and update themselves independently. Talking about AI that is used solely in product development, Petro posed the question of where the risk lies if the product itself complies with the safety regulations that are already in place.

When it came to the cost of compliance, Petro stated that most of the work that Xperi currently does in the driver monitoring space would be categorised as ‘high risk’ development, despite it being discretely contained within the vehicle with no ability to learn on the fly. He gave the example that “I asked a couple of my project managers to actually run some simulations, figure out what is the cost should they redevelop the technology with all of these regulations in mind. One of them came back with 70% increase and the other one actually 100%, 2X increase in the R&D cost basically to put the same technology on the market in the new conditions.” He asked how this is sustainable for the small companies who are the engine of competition in the sector.

Finally, Petro echoed John Patterson’s comments about the cost of non-compliance, the need to differentiate between wilful and accidental incidents, and the need to ensure the regulation is fair, protects the future and doesn’t seek to change the past.

Following Petro, Dr. David Moloney, Chief Scientist at Ubotica, spoke. David, a veteran in the field of AI, co-founded Movidius, a company that pioneered the use of acceleration for Edge AI in semiconductor devices and was eventually acquired by Intel in 2016. David started by expressing his concern that the legislation appears to assume that AI is something that is fixed in time. In reality it is evolving all the time, and “it seems more appropriate that one should regulate according to the tier of risk, according to intent rather than the implementation of the system.”

David went on to ask the question “Would this legislation cover bio-inspired AI systems such as neuromorphic systems? There are companies… like Opteran, the UK-based company that has developed AI systems based on the honeybee brain that effectively don’t require large amounts of training data and do inference on the Edge – the output is similar, but the way in which you arrive at that output is completely different.” He then argued that centralising data in order to verify compliance was flawed, and potentially more expensive even than the cost of compliance itself.

To sum up, David warned that we need to be practical and ensure we avoid unintended consequences, saying “We have a backdrop now where strategic independence is increasingly important in semiconductors and other areas. If we don’t want to kill the goose that lays the golden egg by regulation, we need to have some counterbalances and to have proportionate regulation and proportionate penalties according to the level of risk… Otherwise, you could effectively turn the EU into a wasteland in terms of people doing cutting edge AI research because they say, ‘If we develop this stuff here, it’s affected potentially by EU regulations and it may be just better to move everything off shore.’” And when that happens you “… have essentially no visibility into what’s happening there.”

The final industry intervention on the panel came from Dr. Patricia Scanlon, the Founder and Executive Chair of SoapBox Labs, a Dublin-based SME that powers voice experiences for children of all ages, dialects and accents using its own proprietary AI-powered speech recognition technology. Patricia echoed the comments of Petro and David and highlighted that, as a start-up, they have the ability to be nimble. One of the factors enabling this has been access to deep learning computational power via the cloud, which has helped them attract private, EU and Irish government investment.

However, Patricia expressed “the concern here as a start-up is what is this regulation and risk going to do to our ability to raise capital? What will that do to our competitors who are based in the US or in Asia? Do they now get a competitive edge over us? Would our board start forcing us to look outside of Europe for our clients and avoid the European space completely due to this?”

Patricia then went on to say that in terms of regulation, the example of GDPR has been positive for them, as prior to it “data collection for children was like the Wild West” and “GDPR has given everybody a very good framework.” But she countered that “We’re onboard with regulation where it’s needed, but we’d just like to ask that people consider very carefully the areas, the risk categories as well, the cost of compliance for a company like us and, as was already said, that if you inadvertently cause an issue or one of your employees doesn’t do something correct that you aren’t overly penalized, even though the intent was not there to do that. It’s really important when we talk about the guys that have already talked about offline, where you built it once, deploy it, never touch it again.”

When talking about online AI, Patricia highlighted that there is a need for balance if start-ups and SMEs are to remain competitive on a global playing field, saying “Some AI systems deploy live unsupervised updates of models while out in the real world; as the system is exposed to new data in the space, the models evolve. Without adequate evaluations, such live updates can introduce risk. Separately, state-of-the-art requires systems and models to be periodically updated. When we perform such updates they happen safely and under the supervision of the company and are evaluated each time they are updated. We always test and use good evaluation data when updating. We always make sure that our models haven’t made anything worse, but we need to ensure that remaining compliant while performing such regular updates still allows us to have that nimbleness and not to be overly regulated and have that huge cost to compliance every time we want to bring out a new update, system or a new feature.”
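Patricia’s description amounts to an evaluation gate on every model update: a candidate model is rolled out only if it does not regress against held-out evaluation data. As a minimal, purely illustrative sketch in Python (the function names, data shapes and threshold are our assumptions, not SoapBox Labs’ actual process):

# Hypothetical sketch of an evaluation gate for model updates, in the spirit
# of Patricia's remarks. All names and thresholds are invented for illustration.

def evaluate(model, eval_set):
    # Fraction of labelled evaluation examples the model predicts correctly.
    correct = sum(1 for example, label in eval_set if model(example) == label)
    return correct / len(eval_set)

def gated_update(current_model, candidate_model, eval_set, min_gain=0.0):
    # Deploy the candidate only if it scores no worse than the current model
    # on the held-out evaluation data; otherwise keep the current model.
    if evaluate(candidate_model, eval_set) >= evaluate(current_model, eval_set) + min_gain:
        return candidate_model
    return current_model

The point of such a gate is exactly the nimbleness Patricia describes: routine updates stay cheap and supervised, with an auditable check that nothing got worse.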

Following the interventions from all the panellists, Barry invited Salvatore back to the virtual lectern to respond to some of the points raised by the industry representatives. Salvatore began by reminding everyone that this is a proposal, and the Commission has a “very open spirit to improve things that maybe may not work perfectly.” He acknowledged that this is the first attempt worldwide to do this, so finding the right balance is very difficult, but there is also a need to weigh the cost of not doing anything and having total fragmentation of regulation as member states seek to regulate independently of a central EU framework.

Salvatore responded to some of the comments around the interlinking with existing sectoral legislation, saying “… at the moment, you have essential requirements in sectorial legislation which are at a very high level, but they’re not AI specific. What we are trying to do is to create just some reference terminologies and issues with regard to essential requirements without changing the enforcement instruments. What we are saying is that we want that you comply with a few specific AI requirements which maybe you are complying with already today under the general essential requirement. You do that within the current conformity assessment. We don’t want to change the enforcement instrument.”

To conclude his responses, Salvatore also addressed the point about penalties, noting that the system being proposed is based on the one in place for GDPR, albeit with different percentages, and that the figures are in fact the maximum penalties applicable. However, in reality each member state would administer its own system, taking “… into account many, many different considerations nuancing their penalty systems.”, so “… it’s very unlikely that very high amounts are applied for situations that would not require that, so also in that context, I would like to point out the issue of how those provisions are de facto enforced. It’s also based on already existing legislation.”

Salvatore’s final point was around the need for nuancing, saying “… in reality, the nuancing is often also made in the context of the harmonized standards. If you think, sometimes there are the harmonized standards which have to operationalize very high-level essential requirements. They describe different measures depending on the level of risk, even within the high-risk category. It’s not that the technical solutions and the related compliance costs will be the same for all, but there will usually also be a nuanced approach taken when drafting the actual harmonized standard.”

To close the webinar, Barry invited Petro back to make a comment about alternatives to certification and give his final thoughts. Petro responded by suggesting a possible solution using a matrix that would help to really see where there were risks to safety or human rights. As his final point, Petro went on to say “The final thing I would like to point out is AI is not a market. It’s actually a tool to be deployed in products that are acting in different kinds of markets, in automotive, in whatever, medical, etc. I have a feeling that this current legislation treats it as an end market and applies regulations of a market and not regarding it as a tool, as it should be. I think taking a step back and looking again at the framework is what needs to be done really.”
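Petro did not elaborate on the structure of such a matrix, but one hedged reading is a grid that scores a concrete use case on its safety impact and its human-rights impact and derives the regulatory tier from that combination, rather than from the broad application area alone. A purely hypothetical Python sketch (all scores and tiers are invented for illustration, not taken from the webinar):

# Hypothetical use-case risk matrix: the regulatory tier follows from the
# combined safety and human-rights impact of a specific use case. All values
# below are invented for illustration; they are not from the webinar.
TIERS = [
    # rights impact:   low             medium          high
    ["minimal",       "transparency", "high"],          # safety impact: low
    ["transparency",  "high",         "high"],          # safety impact: medium
    ["high",          "high",         "unacceptable"],  # safety impact: high
]

def risk_tier(safety_impact: int, rights_impact: int) -> str:
    # Impact scores: 0 = low, 1 = medium, 2 = high.
    return TIERS[safety_impact][rights_impact]

# Example: a use case with medium safety impact and low rights impact
# would land in the 'transparency' tier under this sketch.
print(risk_tier(1, 0))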

Barry then brought the discussion to a close, summing up the main points before inviting Gabriel Cosgrave and Déirdre Clune to make their final comments.

Blog Image 5 - Gabriel Cosgrave

Gabriel took the opportunity to thank all the panellists before noting that the day’s comments had highlighted industry’s concerns: the “… commission have obviously done a lot of good work to date and the AI regulation is generally a good thing and with good intentions. The industry ask, if you like, is to get down into the next level of details before we finalize this on how applications work, some of the differences you heard about being offline and online, Edge versus cloud, thinking about the start-ups is very important.” He also offered up Xperi in an advisory capacity to the Commission and Parliament as the discussion continues over the next few months.

Finally, Ms. Clune summed up by saying that the webinar had produced “a lot of food for thought” and that “… there’s a strong desire that we would encourage innovation and provide that type of ecosystem, but I hear what people are saying that they have concerns that that may not be the right approach for them. But as Salvatore has said… This is a proposal”, and this “will give us something to chew on and to look forward to your next seminar, 13th of October, to engage further.”

With that, Barry brought the webinar to a close and reminded everyone to attend the next seminar in the series, ‘Edge-AI and secure data protection and privacy – an Industry Perspective’, on 13 October.
