IBM introduces open-source library for protecting AI systems

IBM's new AI toolbox secures AI from adversarial attacks

IBM released an open-source software library meant to help developers and researchers protect AI systems, including Deep Neural Networks (DNNs), against adversarial attacks. DNNs are complex machine learning models loosely inspired by the interconnected neurons of the human brain.

The "Adversarial Robustness Toolbox" (ART) is a platform-agnostic artificial intelligence (AI) toolbox created by IBM that features attacks, defenses, and benchmarks to protect AI systems.

Current AI methods like recognizing objects in images, annotating videos, converting speech to text, or translating between different languages are based on DNNs. According to IBM, while DNNs are usually very accurate, they are vulnerable to adversarial attacks, which can cause them to misclassify inputs or make incorrect predictions in ways that benefit an attacker.

"Adversarial attacks pose a real threat to the deployment of AI systems in security critical applications. Virtually undetectable alterations of images, video, speech, and other data have been crafted to confuse AI systems. Such alterations can be crafted even if the attacker doesn't have exact knowledge of the architecture of the DNN or access to its parameters. Even more worrisome, adversarial attacks can be launched in the physical world: instead of manipulating the pixels of a digital image, adversaries could evade face recognition systems by wearing specially designed glasses, or defeat visual recognition systems in autonomous vehicles by sticking patches to traffic signs," IBM wrote in a blog post.

According to the researchers, such attacks pose a serious problem for security-critical applications because they extend beyond digital inputs into the physical world, from evading facial recognition systems to defeating the visual recognition used by autonomous vehicles. IBM's Python-based Adversarial Robustness Toolbox aims to help protect AI systems against these types of threats.

"The Adversarial Robustness Toolbox is designed to support researchers and developers in creating novel defense techniques, as well as in deploying practical defenses of real-world AI systems. Researchers can use the Adversarial Robustness Toolbox to benchmark novel defenses against the state-of-the-art. For developers, the library provides interfaces which support the composition of comprehensive defense systems using individual methods as building blocks," the researchers wrote.

"With the Adversarial Robustness Toolbox, multiple attacks can be launched against an AI system, and security teams can select the most effective defenses as building blocks for maximum robustness. With each proposed change to the defense of the system, the ART will provide benchmarks for the increase or decrease in efficiency," explained Dr. Sridhar Muppidi, IBM Fellow, VP and CTO IBM Security.
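
As an illustration of that workflow, the following is a minimal sketch of attacking a wrapped model, plugging in a defense as a building block, and benchmarking the result. It assumes a recent release of the library (the adversarial-robustness-toolbox package) together with scikit-learn; the dataset, model, attack parameters, and the feature-squeezing defense are illustrative choices rather than anything prescribed by IBM's announcement, and ART's module paths have changed across versions.

```python
# Illustrative sketch only: dataset, model, and parameters are assumptions,
# and module paths follow recent ART releases, not necessarily the original one.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.svm import SVC

from art.estimators.classification import SklearnClassifier
from art.attacks.evasion import FastGradientMethod
from art.defences.preprocessor import FeatureSqueezing

# Small image dataset with pixel values scaled to [0, 1].
x, y = load_digits(return_X_y=True)
x = x / 16.0
y_onehot = np.eye(10)[y]

# Train an ordinary scikit-learn model and wrap it in an ART classifier.
model = SVC(C=1.0, kernel="linear")
classifier = SklearnClassifier(model=model, clip_values=(0.0, 1.0))
classifier.fit(x, y_onehot)

# Launch an attack: the Fast Gradient Method crafts small perturbations
# designed to flip the model's predictions.
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)

# Plug in a defense as a building block: feature squeezing reduces input
# precision, removing much of the adversarial perturbation.
squeeze = FeatureSqueezing(clip_values=(0.0, 1.0), bit_depth=4)
x_adv_squeezed, _ = squeeze(x_adv)

# Benchmark accuracy on clean, adversarial, and defended inputs.
def accuracy(inputs):
    return np.mean(np.argmax(classifier.predict(inputs), axis=1) == y)

print(f"clean accuracy: {accuracy(x):.2%}")
print(f"under attack:   {accuracy(x_adv):.2%}")
print(f"with defense:   {accuracy(x_adv_squeezed):.2%}")
```

Repeating this loop with different attacks and candidate defenses, and comparing the resulting accuracy numbers, is the kind of benchmarking Muppidi describes.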

This week, IBM also announced the introduction of AI and ML orchestration capabilities to its incident response and threat management products.
