Generative AI and Government Regulations:

The National Institute of Standards and Technology has developed an Artificial Intelligence Risk Management Framework. This 43-page document includes the following points:

  • Responsible AI practices emphasize human-centeredness and sustainability. They help to align decisions with values. Proper risk management builds public trust.
  • The AI Risk Management Framework (RMF) is a resource meant to help organizations manage the many risks associated with the development and use of AI systems, and to promote the responsible and trustworthy development and use of AI. 
  • The document discusses issues such as the risks involved in the use of AI, including computational costs, privacy, and cybersecurity.

The document covers “How AI Risks Differ from Traditional Software Risks”, including the following points:

  • The data used for building an AI system may not be a true or appropriate representation of the context or intended use of the AI system. The ground truth may either not exist or not be available. Training data may not be representative of the ground truth, or may harbor bias, affecting the AI system's trustworthiness.
  • AI systems depend on larger and more complex data sets than most traditional software systems.
  • Data sets may become outdated, and changes to the training data set during training may corrupt the system.
  • Results are not always easily reproducible, given that such systems have an element of randomness (that is, the same prompt may yield two different outputs).
  • Difficulty in explaining results and correcting errors due to such systems being emergent, stochastic, and large scale.
  • The computational cost of developing and running AI systems, and its impact on the environment, as the energy consumed by such applications may over time significantly affect the climate.
  • Difficulty in predicting the side effects of such systems beyond statistical measures.
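The non-reproducibility point above can be illustrated with a minimal Python sketch. This is a toy stand-in for a model's sampling step (the function and its completions are invented for illustration, not part of the NIST document): repeated calls with the same prompt may return different outputs, while fixing a random seed restores reproducibility.

```python
import random

def sample_output(prompt: str, seed=None) -> str:
    """Toy stand-in for a stochastic AI model: picks one of several
    plausible completions at random, as a sampling step might."""
    rng = random.Random(seed)
    completions = ["answer A", "answer B", "answer C"]
    return rng.choice(completions)

# Unseeded calls may return different outputs for the same prompt,
# which is why results are not always easily reproducible.
outputs = {sample_output("same prompt") for _ in range(20)}

# Fixing the seed makes the sampling reproducible for testing.
seeded_a = sample_output("same prompt", seed=42)
seeded_b = sample_output("same prompt", seed=42)
```

Real AI systems add further sources of variation (hardware nondeterminism, floating-point ordering), so seeding alone does not guarantee bit-identical results at scale.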

Part One elucidates how organizations can frame the risks related to AI. Part Two discusses the core framework, describing four specific functions to help organizations address the risks of AI systems: GOVERN, MAP, MEASURE, and MANAGE.

