#risk_management #AI

Related: [[AI Guardrails for Summarizing Digital Phenotyping Data in Clinical Support]]

---

Source: https://nvlpubs.nist.gov/nistpubs/ai/NIST.AI.100-1.pdf . National Institute of Standards and Technology (NIST), US Department of Commerce.

The NIST AI RMF is meant for voluntary use: to improve trustworthiness in the design, development, use, and evaluation of AI products, services, and systems.

# why it matters for clinical AI safety / rollout (my context)

1. This document provides useful guidance on building a risk management framework for the development and deployment of AI systems/services.
2. It is comprehensive and very idealised. In real life, we have to pick and choose based on our context.
3. AI risk management is an ongoing cycle within a Govern culture: Map, Measure, Manage.
4. There will be tradeoffs to balance among the different criteria to increase safety and trustworthiness.

# how you'd explain it to a busy clinician (30 seconds)

We can use the Map, Measure, Manage framework to list the potential risks associated with deploying an AI system/service. At different stages of the AI lifecycle, different AI actors will have different perspectives on what counts as a risk. AI risk management is an ongoing exercise.

# so what I will do differently (a concrete behaviour / artifact)

Map out the risks I can think of from my perspective as an AI deployer in a hospital setting. Accept that this will differ from the AI developer's perspective. I can't tackle all the risks, so I need to prioritize.

---

# NOTES

The data that AI is trained on changes over time. AI systems can amplify, perpetuate, or exacerbate inequitable or undesirable outcomes.

## Part 1 - Consider Risk

- Risk to people, organisations, and the ecosystem.
- What are the challenges for AI risk management?
	- If risks are not well defined or understood, they are hard to measure quantitatively or qualitatively, and therefore hard to manage.
	- What are reliable metrics?
	- Risk levels and perspectives differ at different stages of the AI lifecycle, e.g. from an AI developer's vs. an AI deployer's perspective.
	- Risk measured in a controlled/research phase may differ from risk in real-world settings.
	- AI systems are inscrutable (hard to explain, opaque), which complicates risk measurement.
	- If an AI is designed to replace or augment human activity, we need a baseline to compare Human vs. Human+AI.
	- What is the risk tolerance of the organisation or the AI actors?
	- Prioritization of risk: which risks need to be managed first?

> "The OECD has developed a framework for classifying AI lifecycle activities according to five key socio-technical dimensions, each with properties relevant for AI policy and governance, including risk management [OECD (2022) OECD Framework for the Classification of AI systems — OECD Digital Economy Papers]." -- what is this paper?

### AI Risks and Trustworthiness

- To be trustworthy, AI needs to be valid and reliable; safe; secure and resilient; accountable and transparent; explainable and interpretable; privacy-enhanced; and fair, with harmful bias managed. ![[Screenshot 2026-03-14 at 12.32.13 PM.png]]
- There will be tradeoffs when we try to balance all these different criteria.
- > "Trustworthiness characteristics explained in this document influence each other. Highly secure but unfair systems, accurate but opaque and uninterpretable systems, and inaccurate but secure, privacy-enhanced, and transparent systems are all undesirable. A comprehensive approach to risk management calls for balancing tradeoffs among the trustworthiness characteristics. It is the joint responsibility of all AI actors to determine whether AI technology is an appropriate or necessary tool for a given context or purpose, and how to use it responsibly. The decision to commission or deploy an AI system should be based on a contextual assessment of trustworthiness characteristics and the relative risks, impacts, costs, and benefits, and informed by a broad set of interested parties."
- **Trust** requires **Accountability** (knowing who is responsible if things go wrong).
- **Accountability** requires **Transparency** (being able to see how the system works).

## Part 2 - Core and Profiles

![[Screenshot 2026-03-14 at 12.41.26 PM.png]]

This is an ongoing cycle of mapping, measuring, and managing risks. It is a very detailed and comprehensive list of "what we should do".
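To make the "map, then prioritize" artifact concrete, here is a minimal sketch of a risk register. This is my own illustration, not anything prescribed by the NIST AI RMF: the field names, the `actor_perspective` tag, and the severity × likelihood scoring heuristic are all assumptions I'm making for the hospital-deployer context.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    """One mapped risk. Fields and scoring are my own assumptions,
    not part of the NIST AI RMF."""
    description: str
    actor_perspective: str   # e.g. "deployer" vs "developer" view
    severity: int            # 1 (low) .. 5 (high)
    likelihood: int          # 1 (rare) .. 5 (frequent)

    @property
    def priority(self) -> int:
        # Simple heuristic: higher score = manage first.
        return self.severity * self.likelihood

def prioritize(risks: list[Risk]) -> list[Risk]:
    """Order mapped risks so the highest-priority ones surface first,
    since we can't tackle everything at once."""
    return sorted(risks, key=lambda r: r.priority, reverse=True)

# Example entries mapped from a hospital deployer's perspective
# (contents are illustrative, drawn from the notes above).
register = [
    Risk("Model drift as patient data changes over time", "deployer", 4, 4),
    Risk("Opaque outputs clinicians cannot interpret", "deployer", 3, 5),
    Risk("Training-data bias amplifying inequitable outcomes", "developer", 5, 3),
]

for r in prioritize(register):
    print(r.priority, r.description)
```

Keeping `actor_perspective` as an explicit field is the point: the same register can hold developer-mapped risks alongside mine, making the differing lifecycle perspectives visible instead of implicit.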