Thursday, November 21, 2024


NIST Unveils Tool for Testing AI Model Risk


The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests technology for the U.S. government, companies, and the broader public, has re-released a testbed designed to measure how malicious attacks, particularly attacks that “poison” AI model training data, might degrade an AI system’s performance.
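To make the idea of a training-data poisoning attack concrete, here is a minimal, hypothetical sketch, not Dioptra itself, showing how flipping the labels of a growing share of a toy classifier’s training set erodes its test accuracy. The dataset, model, and flip fractions are all illustrative assumptions.

```python
# Illustrative sketch only: a generic label-flipping "poisoning" attack on a toy
# classifier. This is not Dioptra's implementation or API; it just shows how
# corrupted training data can degrade a model's measured test accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_poisoning(flip_fraction: float) -> float:
    """Flip the labels of a fraction of the training set, then report test accuracy."""
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_poisoned))
    idx = rng.choice(len(y_poisoned), size=n_flip, replace=False)
    y_poisoned[idx] = 1 - y_poisoned[idx]  # flip the binary labels at the chosen indices
    model = LogisticRegression(max_iter=1000).fit(X_train, y_poisoned)
    return model.score(X_test, y_test)

for frac in (0.0, 0.1, 0.3, 0.45):
    print(f"poisoned fraction {frac:.2f} -> test accuracy {accuracy_with_poisoning(frac):.3f}")
```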

Purpose and Functionality of Dioptra

Called Dioptra (after the classical astronomical and surveying instrument), the modular, open-source, web-based tool, first released in 2022, seeks to help companies training AI models, as well as those using these models, assess, analyze, and track AI risks. NIST says Dioptra can be used to benchmark and research models and to provide a common platform for exposing models to simulated threats in a “red-teaming” environment.

Benefits of Dioptra

“Testing the effects of adversarial attacks on machine learning models is one of the goals of Dioptra,” NIST wrote in a press release. “The open-source software, available for free download, could help the community, including government agencies and small to medium-sized businesses, conduct evaluations to assess AI developers’ claims about their systems’ performance.”
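As a rough illustration of what “testing the effects of adversarial attacks” can mean in practice, the sketch below applies a simple FGSM-style perturbation to a toy PyTorch classifier and compares clean versus adversarial accuracy. It is a generic example built on assumed toy data, not Dioptra’s own interface.

```python
# Illustrative sketch only: measuring how a simple FGSM-style adversarial
# perturbation degrades a toy PyTorch classifier. This is a generic example of
# the kind of adversarial evaluation a red-teaming testbed might run; it does
# not use or depict Dioptra's interfaces.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data and model (assumptions): two Gaussian blobs, one linear classifier.
X = torch.cat([torch.randn(500, 10) + 1.0, torch.randn(500, 10) - 1.0])
y = torch.cat([torch.zeros(500, dtype=torch.long), torch.ones(500, dtype=torch.long)])
model = nn.Linear(10, 2)
opt = torch.optim.Adam(model.parameters(), lr=0.05)
loss_fn = nn.CrossEntropyLoss()

for _ in range(200):  # brief training loop on the clean data
    opt.zero_grad()
    loss_fn(model(X), y).backward()
    opt.step()

def accuracy(inputs: torch.Tensor) -> float:
    return (model(inputs).argmax(dim=1) == y).float().mean().item()

# FGSM: nudge each input in the direction that increases the loss.
X_adv = X.clone().requires_grad_(True)
loss_fn(model(X_adv), y).backward()
epsilon = 0.5  # attack strength, arbitrary for this toy example
X_perturbed = (X_adv + epsilon * X_adv.grad.sign()).detach()

print(f"clean accuracy:       {accuracy(X):.3f}")
print(f"adversarial accuracy: {accuracy(X_perturbed):.3f}")
```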

Context of Release

Dioptra debuted alongside documents from NIST and NIST’s recently created AI Safety Institute that lay out ways to mitigate some of AI’s dangers, like how it can be abused to generate nonconsensual pornography. It follows the launch of the U.K. AI Safety Institute’s Inspect, a toolset that similarly assesses models’ capabilities and overall safety. The U.S. and U.K. have an ongoing partnership to jointly develop advanced AI model testing, announced at the U.K.’s AI Safety Summit in Bletchley Park in November of last year.

Connection to Executive Order on AI

Dioptra is also the product of President Joe Biden’s executive order (EO) on AI, which mandates (among other things) that NIST help with AI system testing. The EO also establishes standards for AI safety and security, including requirements for companies developing models (e.g., Apple) to notify the federal government and share the results of all safety tests before they’re deployed to the public.

Challenges in AI Benchmarking

We’ve written before about how AI benchmarking is hard, not least because the most sophisticated AI models today are black boxes whose infrastructure, training data, and other vital details are kept under wraps by the companies creating them. A report out this month from the Ada Lovelace Institute, a U.K.-based nonprofit research institute that studies AI, found that evaluations alone aren’t sufficient to determine an AI model’s real-world safety, in part because current policies allow AI vendors to selectively choose which evaluations to conduct.

Limitations and Future Potential of Dioptra

NIST doesn’t assert that Dioptra can completely de-risk models. However, the agency does propose that Dioptra can shed light on which sorts of attacks might make an AI system perform less effectively and quantify this impact on performance. As a significant limitation, however, Dioptra only works out-of-the-box on models that can be downloaded and used locally, like Meta’s expanding Llama family. Models gated behind an API, such as OpenAI’s GPT-4, are a no-go for now.
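For context on the “downloadable and used locally” requirement, here is a hedged sketch of loading an open-weight checkpoint with the Hugging Face transformers library; the model id is a placeholder assumption, and the contrast with an API-gated model is noted in the comments.

```python
# Illustrative sketch only: the kind of local, open-weight model a testbed like
# Dioptra can work with out of the box. The model id below is a placeholder
# assumption; any locally downloadable checkpoint you have access to would do
# (gated models such as Llama require accepting the publisher's license first).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/Llama-3.1-8B"  # placeholder assumption

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("A quick robustness probe:", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

# By contrast, an API-only model such as GPT-4 exposes no weights or internals
# to load locally, which is why it falls outside Dioptra's out-of-the-box scope.
```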
