AI security: This project aims to spot attacks against critical systems before they happen

Microsoft has unveiled a new open-source "matrix" that aims to catalogue the known attacks threatening the security of machine-learning applications.


Microsoft and non-profit research organization MITRE have joined forces to accelerate the development of cybersecurity's next chapter: protecting applications that are based on machine learning and are exposed to a new class of adversarial threats.


The two organizations, in collaboration with academic institutions and other big tech players such as IBM and Nvidia, have released a new open-source tool called the Adversarial Machine Learning Threat Matrix. The framework is designed to organize and catalogue known techniques for attacks against machine-learning systems, to inform security analysts and provide them with strategies to detect, respond to, and remediate threats.


The matrix classifies attacks by stage of the threat lifecycle, including initial access, execution, exfiltration, and impact. To curate the framework, Microsoft's and MITRE's teams analyzed real-world attacks carried out on existing applications and vetted each technique for effectiveness against AI systems.
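
The tactic categories above lend themselves to a simple catalogue structure. The short Python sketch below shows one way an analyst might represent techniques under those tactics; the dataclasses and the example technique entries are illustrative assumptions, not the matrix's official schema.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Technique:
    name: str         # technique label, e.g. "Model Stealing"
    description: str  # how the adversary behavior plays out

@dataclass
class Tactic:
    name: str  # lifecycle stage from the matrix, e.g. "Exfiltration"
    techniques: List[Technique] = field(default_factory=list)

# Tactic names come from the article; the techniques are hypothetical examples.
matrix = [
    Tactic("Initial Access", [
        Technique("Valid Accounts", "Reuse stolen credentials to reach the ML service."),
    ]),
    Tactic("Execution", [
        Technique("Unsafe ML Model", "Ship a model artifact that executes code when loaded."),
    ]),
    Tactic("Exfiltration", [
        Technique("Model Stealing", "Reconstruct the model by querying its public API."),
    ]),
    Tactic("Impact", [
        Technique("Evasion", "Perturb inputs so the deployed model misclassifies them."),
    ]),
]

# Walk the catalogue the way an analyst might scan it: tactic by tactic.
for tactic in matrix:
    for tech in tactic.techniques:
        print(f"{tactic.name}: {tech.name} -- {tech.description}")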


"If you just try to imagine the universe of potential challenges and vulnerabilities, you'll never get anywhere," said Mikel Rodriguez, who oversees MITRE's decision science research programs. "Instead, with this threat matrix, security analysts will be able to work with threat models that are grounded in real-world incidents that emulate adversary behavior with machine learning," 



