Cybersecurity of AI and standardization

The European Union Agency for Cybersecurity (ENISA) believes that commonly used standards (such as ISO/IEC 27001, ISO/IEC 27002 and ISO 9001) can help mitigate many of the cybersecurity risks faced by AI.

However, there are two questions:

1) The extent to which commonly used standards need to be adapted to the specific AI context for a given threat. Although AI has some specific characteristics, it is essentially software; therefore, what applies to software can largely be applied to AI.

However, gaps remain in the clarification of AI terms and concepts:
• A shared definition of AI terminology and the associated reliability concepts (the definition of AI itself is not used consistently)
• Guidance on how software cybersecurity standards should be applied to AI-specific threats such as data poisoning and data manipulation (see the sketch below)
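
To make that gap concrete, here is a minimal, purely illustrative Python sketch (not taken from the ENISA report) of a label-flipping data-poisoning attack: corrupting a small fraction of training labels measurably degrades a simple classifier, a threat that conventional software-security controls do not explicitly address. The synthetic dataset and model are assumptions chosen only for the illustration.

# Hypothetical illustration of label-flipping data poisoning: a small
# fraction of corrupted training labels degrades a simple classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    # Train on labels where `flip_fraction` of them have been flipped.
    rng = np.random.default_rng(0)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the selected labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return accuracy_score(y_test, model.predict(X_test))

for fraction in (0.0, 0.1, 0.3):
    print(f"{fraction:.0%} poisoned labels -> test accuracy {accuracy_with_poisoning(fraction):.3f}")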

2) Whether existing standards are sufficient to address the cybersecurity of AI, or whether they need to be supplemented.

There are concerns that knowledge of how to apply existing techniques to the threats and vulnerabilities arising from AI is still inadequate:
• The concept of AI can encompass technical and organizational elements beyond software, such as hardware and infrastructure, which also require specific guidance.
• The application of best practices for quality assurance in software may be hindered by the opacity of some AI models.
• Compliance with ISO 9001 and ISO 27001 takes place at the organizational level, not at the system level, and determining appropriate security measures depends on a system-specific analysis.
• The support that standards can provide to secure AI is limited by the maturity of technological development.
• The traceability and lineage of both data and AI components are not fully addressed (a lineage-recording sketch follows this list).
• The inherent characteristics of ML are not fully reflected in existing standards.
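
As one illustration of what such traceability could look like in practice, the following sketch records a hash of the training data together with its source and the resulting model version, so the chain from data to deployed component can later be audited. The workflow, file name and version string are assumptions for the example, not requirements of any standard.

# Minimal, assumed data/model lineage record: hash the training data and
# link it to the model version that was trained on it.
import hashlib
import json
from datetime import datetime, timezone

def sha256_of_file(path: str) -> str:
    # Compute the SHA-256 digest of a file, reading it in chunks.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lineage_record(dataset_path: str, dataset_source: str, model_version: str) -> dict:
    # Build a provenance record linking a dataset to a trained model version.
    return {
        "dataset_path": dataset_path,
        "dataset_sha256": sha256_of_file(dataset_path),
        "dataset_source": dataset_source,  # e.g. upstream supplier or export job
        "model_version": model_version,
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    # "train.csv" and the model version are hypothetical names for the example.
    record = lineage_record("train.csv", "internal data lake export", "fraud-model-1.4.2")
    print(json.dumps(record, indent=2))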

Finally, these are the most prominent aspects that should be considered in existing or new standards:
• AI/ML components can be combined with hardware or other software components to mitigate the risk of functional failures; doing so changes the cybersecurity risks associated with the resulting configuration.
• Reliable statistics can help a potential user detect a malfunction.
• Testing procedures during the development process can demonstrate that defined levels of accuracy or precision are reached, as sketched below.
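
The last point can be read as a development-time acceptance test. The sketch below is a hypothetical example of such a check: the model is accepted only if it reaches a declared accuracy threshold on held-out data. The threshold, dataset and model are invented for the illustration.

# Hypothetical acceptance test: the model must reach a declared accuracy
# threshold on held-out data before it is accepted.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

REQUIRED_ACCURACY = 0.90  # assumed acceptance criterion for this example

def test_model_meets_accuracy_threshold():
    X, y = make_classification(n_samples=2000, n_features=20, class_sep=2.0, random_state=1)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= REQUIRED_ACCURACY, f"accuracy {accuracy:.3f} is below the threshold"

if __name__ == "__main__":
    test_model_meets_accuracy_threshold()
    print("model meets the declared accuracy threshold")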
