# Singapore's Model AI Governance Framework (2nd Edition)

Source: https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf

Related: #AI #Conceptual_Framework
- [[NIST -Artificial Intelligence Risk Management Framework (AI RMF 1.0)]]
- [[How to think]]

---

- Developed by Singapore; the Model AI Governance Framework was released in 2019 at the World Economic Forum in Davos
- Human-centric approach
- Four broad areas:
	- Internal Governance Structures and Measures
	- Human Involvement in AI-Augmented Decision-Making
	- Operations Management
	- Stakeholder Interaction and Communication
- This Model Framework already seeks to combine other existing and common AI ethical principles

# Introduction

- Also a voluntary model framework; it is up to the implementing organisation to adopt it. Meant to be flexible.
- Model Framework
- ISAGO -- helps organisations assess the alignment of their AI governance practices and processes with the Model Framework

### Guiding Principles

- Decision-making processes should be explainable, transparent, and fair
- AI solutions should be human-centric
- Organisations should detail a set of ethical principles when they embark on deploying AI at scale within their processes or to empower their products and/or services
- Refer to Annex A

## The Model Framework

### Internal Governance Structures and Measures

- The organisation should set up, or adapt existing, internal governance structures to ensure oversight over the organisation's use of AI
	- Whether governance should be centralised or decentralised
- Define clear roles and responsibilities for the ethical deployment of AI
- Personnel should be aware of their roles and responsibilities, and be trained and guided to discharge their duties
- Risk management framework -- such as [[NIST -Artificial Intelligence Risk Management Framework (AI RMF 1.0)]]
- Decide on the level of human involvement
- Manage the AI model training and selection process
- Maintenance, monitoring, and documentation of deployed AI models
	- Training, evaluation
- Again, a list of shoulds

### Human Involvement in AI-Augmented Decision-Making

- Define the objectives of using AI, then weigh them against the risks
- Consider cultural aspects, social norms, and values (if the AI is deployed in multiple countries)
- Risk impact assessments -- reminds me of the continuous cycle in [[NIST -Artificial Intelligence Risk Management Framework (AI RMF 1.0)]]

#### Three Broad Approaches to Human Involvement in AI-Augmented Decision-Making

- Human-in-the-loop -- [[AI Guardrails for Summarizing Digital Phenotyping Data in Clinical Support#^fdc67b]]
- Human-out-of-the-loop -- there is no human oversight over the execution of decisions; the AI has full control without the option of human override
- Human-over-the-loop -- a human monitors, supervises, and oversees what the AI is doing, and can intervene and take over decisions

The Model Framework proposes a design framework, a metric for deciding whether we need human in, out of, or over the loop.

![[Screenshot 2026-03-14 at 3.01.01 PM.png]]

### Operations Management

What are the responsible measures in the operations aspects of AI adoption?

![[Screenshot 2026-03-14 at 3.02.49 PM.png]]
![[Screenshot 2026-03-14 at 3.03.03 PM.png]]

- Datasets -- are the datasets representative, accurate, and objective (non-biased)?
- Who/which department is involved in the design and in decisions on which model to deploy? Who is accountable for this?
- Understand the lineage of data -- where the data came from, and how it was curated and moved
- Data provenance record
- Ensure data quality
- Minimise inherent bias
	- Selection bias
	- Measurement bias
- Periodic review and updating of datasets

==Re-read the section (pages 46-51)==

### Stakeholder Interaction and Communication

- General disclosure -- the Model Framework encourages disclosing the use of AI to consumers and explaining how AI is being deployed
- Policy for explanation -- organisations are encouraged to develop a policy on what explanations to provide to individuals and when to provide them
	- Explaining and being transparent builds trust. Continuously review whether the communication strategies are effective.
- Option to opt out -- 3.52: Organisations may wish to consider carefully when deciding whether to provide individuals with the option to opt out from the use of the AI product or service, and whether this option should be offered by default or only upon request

## Annex A - Ethical Principles

- https://www.pdpc.gov.sg/-/media/files/pdpc/pdf-files/resource-for-organisation/ai/sgmodelaigovframework2.pdf
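The data provenance record under Operations Management (tracing where data came from and who handled it) can be sketched as a simple append-only log. This is a minimal illustration of the idea, not anything prescribed by the Model Framework; all class and field names here are my own assumptions.

```python
from dataclasses import dataclass, field


@dataclass
class ProvenanceEntry:
    """One step in a dataset's lineage: the source, what was done, and who did it."""
    source: str     # origin of the data, e.g. "core banking system"
    operation: str  # what happened, e.g. "collected", "cleaned", "merged"
    actor: str      # person or department accountable for this step


@dataclass
class ProvenanceRecord:
    """Append-only lineage log for one dataset, supporting audits and periodic reviews."""
    dataset_name: str
    entries: list[ProvenanceEntry] = field(default_factory=list)

    def log(self, source: str, operation: str, actor: str) -> None:
        self.entries.append(ProvenanceEntry(source, operation, actor))

    def lineage(self) -> list[str]:
        """Human-readable trail of every step the data went through."""
        return [f"{e.operation} from {e.source} by {e.actor}" for e in self.entries]


# Usage: trace a (hypothetical) training dataset and who handled each step
record = ProvenanceRecord("loan_applications_2024")
record.log("core banking system", "collected", "Data Engineering")
record.log("core banking system", "de-identified", "Privacy Office")
print(record.lineage())
```

Keeping the log append-only (entries are only ever added, never edited) is what makes it usable for the periodic dataset reviews the framework asks for: each review can walk the full trail and check that an accountable actor is named at every step.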