AI security group publishes first findings

The ETSI Securing Artificial Intelligence (SAI) Industry Specification Group has released its first report, an overview of the 'problem statement' for AI security.
According to the organisation, the report focuses in particular on machine learning, as well as "confidentiality, integrity and availability". It also highlights what it calls the "broader challenges" of AI, such as bias and ethical issues.

Discussing the report, as well as the group’s methodology, a spokesperson said: “To identify the issues involved in securing AI, the first step was to define AI itself. For the group, artificial intelligence is the ability of a system to handle representations – both explicit and implicit – and procedures in order to perform tasks that would be considered ‘intelligent’ if performed by a human.

“This definition represents a broad spectrum of possibilities. However, a limited set of technologies are now becoming feasible, largely driven by the evolution of machine learning and deep-learning techniques. [They are also being driven by] the wide availability of the data and processing power required to train and implement such technologies.”

Alex Leadbeater, chair of the ETSI SAI ISG, said: "There are a lot of discussions around AI ethics but none on standards around securing AI. Yet, they are becoming critical to ensure security of AI-based automated networks.

“This first ETSI report is meant to come up with a comprehensive definition of the challenges faced when securing artificial intelligence. In parallel, we are working on a threat ontology, on how to secure an AI data supply chain, and how to test it.”

The report has been designated ETSI GR SAI 004. The ETSI SAI ISG is the first standardisation initiative dedicated to securing artificial intelligence technology.
