
U.S. NIST Requests Information to Help Develop an AI Risk Management Framework

As a key step in its effort to manage the risks posed by artificial intelligence (AI), the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) is requesting input from the public that will inform the development of AI risk management guidance.

Responses to the Request for Information (RFI), which appeared in the Federal Register, will help NIST draft an Artificial Intelligence Risk Management Framework (AI RMF), a guidance document for voluntary use intended to help technology developers, users and evaluators improve the trustworthiness of AI systems. The draft AI RMF responds to a directive from Congress for NIST to develop the framework, and it also forms part of NIST’s response to the Executive Order on Maintaining American Leadership in AI.

The AI RMF could make a critical difference in whether new AI technologies are competitive in the marketplace. Each day it becomes more apparent that artificial intelligence brings a wide range of innovations and new capabilities that can advance the economy, security and quality of life.

Being mindful and equipped to manage the risks that AI technologies introduce along with their benefits is critical. This AI Risk Management Framework will help designers, developers and users of AI take all of these factors into account — and thereby improve U.S. capabilities in a very competitive global AI market.

AI has the potential to benefit nearly all aspects of society, but the development and use of new AI-based technologies, products and services bring technical and societal challenges and risks. NIST is soliciting input to understand how organisations and individuals involved with developing and using AI systems might be able to address the full scope of AI risk and how a framework for managing these risks might be constructed. The RFI mentions specific topics including:

  • The greatest challenges in improving the management of AI-related risks.
  • How organisations currently define and manage characteristics of AI trustworthiness.
  • The extent to which AI risks are incorporated into organisations’ overarching risk management, such as the management of risks related to cybersecurity, privacy and safety.

The AI Risk Management Framework will meet a major need in advancing trustworthy approaches to AI to serve all people in responsible, equitable and beneficial ways. AI researchers and developers need and want to consider risks before, during and after the development of AI technologies, and this framework will inform and guide their efforts.

For AI to reach its full potential as a benefit to society, it must be a trustworthy technology. While it may be impossible to eliminate the risks inherent in AI, NIST is developing this guidance framework through a consensus-driven, collaborative process that it hopes will encourage wide adoption, thereby minimising these risks.

Responses to the RFI are due on Aug. 19, 2021. NIST also plans to hold a workshop in September where attendees can help develop the outline for the draft AI RMF. After releasing the initial draft AI RMF, NIST will continue to develop it over several iterations, including additional opportunities for public feedback.

Besides AI risks related to cybersecurity, privacy and safety, AI could also potentially be biased against a particular group. As reported by OpenGov Asia, to counter the negative effect of biases in AI that can damage people’s lives and public trust in AI, NIST is advancing an approach for identifying and managing these biases. NIST outlines the approach in a publication titled “A Proposal for Identifying and Managing Bias in Artificial Intelligence”.

The outline forms part of the agency’s broader effort to support the development of trustworthy and responsible AI. This series of events is intended to engage the stakeholder community and allow them to provide feedback and recommendations for mitigating the risk of bias in AI.
