Microsoft open-sources Counterfit, an AI security risk assessment tool




Microsoft today open-sourced Counterfit, a tool designed to help developers test the security of AI and machine learning systems. The company says that Counterfit can enable organizations to conduct assessments to ensure that the algorithms used in their businesses are robust, reliable, and trustworthy.

AI is increasingly being deployed in regulated industries like healthcare, finance, and defense, but organizations are lagging behind in adopting risk mitigation strategies. A Microsoft survey found that 25 out of 28 businesses indicated they don't have the right resources in place to secure their AI systems, and that security professionals are looking for specific guidance in this space.

Microsoft says that Counterfit was born out of the company's need to assess AI systems for vulnerabilities, with the goal of proactively securing AI services. The tool started as a corpus of attack scripts written specifically to target AI models and then morphed into an automation product for benchmarking multiple systems at scale.

Under the hood, Counterfit is a command-line utility that provides a layer for adversarial frameworks, preloaded with algorithms that can be used to evade and steal models. Counterfit seeks to make published attacks accessible to the security community while offering an interface from which to build, manage, and launch those attacks on models.
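To make the idea of a model evasion attack concrete, here is a minimal toy sketch (not Counterfit's actual code, and far simpler than the published attack algorithms it wraps): a small gradient-sign perturbation nudges an input just enough to flip a linear classifier's decision. All names and values are illustrative.

```python
# Toy illustration of an evasion attack: perturb an input along the
# sign of the model's weights so a linear classifier flips its label.
# This is a hand-rolled sketch, not Counterfit's implementation.

def predict(w, b, x):
    """Linear classifier: returns 1 if w.x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def evade(w, b, x, eps):
    """FGSM-style step: push each feature against the decision score."""
    direction = -1 if predict(w, b, x) == 1 else 1
    return [xi + direction * eps * (1 if wi > 0 else -1)
            for xi, wi in zip(x, w)]

w, b = [0.6, -0.4, 0.8], -0.1   # hypothetical model parameters
x = [0.5, 0.2, 0.4]             # original input, classified as 1

x_adv = evade(w, b, x, eps=0.5)
print(predict(w, b, x))         # 1
print(predict(w, b, x_adv))     # 0 -- small perturbation, new label
```

Tools like Counterfit automate this kind of probing at scale, substituting published attack algorithms for the toy perturbation above.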

When conducting penetration testing on an AI system with Counterfit, security teams can opt for the default settings, set random parameters, or customize each parameter for broad vulnerability coverage. Organizations with multiple models can use Counterfit's built-in automation to scan, optionally multiple times, in order to create operational baselines.

Counterfit also provides logging to record the attacks against a target model. As Microsoft notes, telemetry might drive engineering teams to improve their understanding of a failure mode in a system.

The business value of responsible AI

Internally, Microsoft says that it uses Counterfit as part of its AI red team operations and in the AI development phase to catch vulnerabilities before they hit production. The company says it has tested Counterfit with several customers, including aerospace giant Airbus, which is developing an AI platform on Azure AI services. "AI is increasingly used in industry; it is vital to look ahead to securing this technology, particularly to understand where feature space attacks can be realized in the problem space," Matilda Rhode, a senior cybersecurity researcher at Airbus, said in a statement.

The value of tools like Counterfit is quickly becoming apparent. A study by Capgemini found that customers and employees will reward organizations that practice ethical AI with greater loyalty, more business, and even a willingness to advocate for them, and in turn punish those that don't. The study suggests that there's both reputational risk and a direct impact on the bottom line for companies that don't approach the issue thoughtfully.

Basically, consumers want confidence that AI is secure from manipulation. One of the recommendations from Gartner's Top 5 Priorities for Managing AI Risk framework, published in January, is that organizations "[a]dopt specific AI security measures against adversarial attacks to ensure resistance and resilience." The research firm estimates that by 2024, organizations that implement dedicated AI risk management controls will avoid negative AI outcomes twice as often as those that don't.

According to a Gartner report, through 2022, 30% of all AI cyberattacks will leverage training-data poisoning, model theft, or adversarial samples to attack machine learning-powered systems.

Counterfit is part of Microsoft's broader push toward explainable, secure, and fair AI systems. The company's attempts at solutions to those and other challenges include AI bias-detecting tools, an open adversarial AI framework, internal efforts to reduce prejudicial errors, AI ethics checklists, and a committee (Aether) that advises on AI pursuits. Recently, Microsoft debuted WhiteNoise, a toolkit for differential privacy, as well as Fairlearn, which aims to assess AI systems' fairness and mitigate any observed unfairness in algorithms.
