Governance is an essential component of effective AI usage, especially within large organizations or when a product's use of AI has greater potential to cause harm. Applications of AI need to be evaluated based on their risk of doing harm and their ability to be used ethically.
AI Governance¶
AI governance ensures ethical, safe, and responsible development and deployment of artificial intelligence technologies. It encompasses a set of rules, standards, and processes that guide AI research and applications, aiming to protect human rights and promote fairness, accountability, and transparency.
Governance helps to ensure AI systems are:
- Ethical: consistent with individual, company, and societal principles
- Value-producing: delivering successful results that benefit customers and businesses
- Compliant: adherent to local, regional, national, and international laws
Effective governance at appropriate institutional levels will improve results while minimizing risks to customers and businesses. The challenge is in understanding what is right for your business.
Why govern?¶
To have the greatest potential positive impact in your use of AI, governance is essential. The larger the organization, the greater the importance of governance in minimizing needlessly duplicated internal systems and efforts. Even for smaller organizations, effective governance from the beginning will enable your organization to more readily create and deliver effective and responsible AI-enabled solutions.
How to Govern¶
- Establish an appropriate body of leadership and a surrounding community that supports the development of AI that is both responsible and effective.
- Create or adopt a set of AI principles that align with your company's values.
- Create or adopt a set of procedures for creating, evaluating, and managing your AI systems.
- Create, license, or otherwise adopt AI/ML ops and observability platforms or tools to implement and maintain AI-enabled projects in a way that is consistent with your procedures and principles (see the sketch after this list).
- Transparently communicate the development and status of your AI-enabled systems to internal and regulatory bodies.
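As a concrete illustration of the last two points, here is a minimal sketch of recording governance-relevant metadata for an AI system to an append-only audit log. The field names, file format, and example values are illustrative assumptions, not a standard schema; dedicated ML ops and observability platforms provide much richer versions of this.

```python
# Minimal sketch: record governance-relevant metadata for an AI system so it can
# be audited against your principles and procedures. Field names are illustrative.
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
from typing import Optional

@dataclass
class GovernanceRecord:
    system_id: str                     # unique ID in your AI system repository
    model_version: str                 # version under review
    intended_use: str                  # documented use case
    risk_level: str                    # e.g. "low", "medium", "high", "critical"
    evaluations: dict = field(default_factory=dict)   # metric name -> result
    approved_by: Optional[str] = None  # accountable owner who signed off
    recorded_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def append_to_audit_log(record: GovernanceRecord, path: str = "audit_log.jsonl") -> None:
    """Append one record to a JSON-lines audit log for internal or regulatory review."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_to_audit_log(GovernanceRecord(
    system_id="support-chatbot-001",                  # hypothetical system
    model_version="2024-03-rc1",
    intended_use="Answer customer billing questions",
    risk_level="medium",
    evaluations={"toxicity_rate": 0.002, "answer_accuracy": 0.91},
    approved_by="ai.system.owner@example.com",
))
```

Keeping such records machine-readable and append-only makes it easier to show internal reviewers and regulators what was evaluated, what was deployed, and under whose approval.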
Preparedness¶
It is possible, if not likely, that more powerful generative and general AI will come about. Consequently, it is essential to prepare for it in a way that scientifically and effectively mitigates potential risks, including catastrophic risks. To this end, OpenAI has established a preparedness framework that it is working with; other companies may wish to follow suit. In summary, this framework considers three things:
1. The categories and classes of risk.
2. A scorecard model that indicates the level and class of risk.
3. Governance to minimize risks and enable effective action when risks emerge or are identified.
Categories and classes of risks¶
The classes of risk are the following:
- Low
- Medium
- High
- Critical
The meaning of these classes depends on the category and is thoroughly described in the framework.
The categories are partitioned into the following:
- Cybersecurity
- Chemical, biological, radiological and nuclear (CBRN)
- Persuasion
- Model Autonomy
- Unknown unknowns
Scorecards¶
These describe the risk level in each category before and after risk mitigation.
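As an illustrative sketch of the shape such a scorecard could take (the risk classes and categories come from the summary above; the rule of taking the worst category as the overall score is an assumption for illustration):

```python
# Illustrative scorecard: each tracked category gets a pre- and post-mitigation
# risk class; the exact scoring rules live in the framework itself.
from dataclasses import dataclass
from enum import IntEnum

class RiskClass(IntEnum):
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3

@dataclass
class CategoryScore:
    category: str             # e.g. "Cybersecurity", "CBRN", "Persuasion", "Model Autonomy"
    pre_mitigation: RiskClass
    post_mitigation: RiskClass

def overall(scores: list, post: bool = True) -> RiskClass:
    """Assumed rule for illustration: the worst (highest) category class wins."""
    return max(s.post_mitigation if post else s.pre_mitigation for s in scores)

scorecard = [
    CategoryScore("Cybersecurity", RiskClass.HIGH, RiskClass.MEDIUM),
    CategoryScore("CBRN", RiskClass.LOW, RiskClass.LOW),
    CategoryScore("Persuasion", RiskClass.MEDIUM, RiskClass.MEDIUM),
    CategoryScore("Model Autonomy", RiskClass.LOW, RiskClass.LOW),
]
print(overall(scorecard, post=False).name)  # HIGH
print(overall(scorecard).name)              # MEDIUM
```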
Governance¶
Governance consists of:
**Safety baselines**:
- Asset Protection
- Deployment restrictions
- Development restrictions
**Operations**:
An operational structure that coordinates the actions and activities of a Preparedness team, a Safety Advisory Group (SAG), OpenAI leadership, and the OpenAI Board of Directors.
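Below is a hedged, self-contained sketch of how safety baselines such as deployment and development restrictions could be tied to a system's post-mitigation risk class. The specific thresholds are assumptions for illustration; the framework you adopt defines the authoritative rules.

```python
# Illustration only: gate deployment and further development on the post-mitigation
# risk class. The thresholds below are assumptions, not the framework's actual rules.
RISK_ORDER = ["low", "medium", "high", "critical"]

def at_or_below(level: str, threshold: str) -> bool:
    return RISK_ORDER.index(level) <= RISK_ORDER.index(threshold)

def may_deploy(post_mitigation: str) -> bool:
    # e.g. deploy only systems at or below "medium" post-mitigation risk
    return at_or_below(post_mitigation, "medium")

def may_continue_development(post_mitigation: str) -> bool:
    # e.g. pause for additional safeguards once risk reaches "critical"
    return at_or_below(post_mitigation, "high")

print(may_deploy("high"), may_continue_development("high"))  # False True
```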
Common Elements in AI Governance¶
Ethics: Principles to aim towards¶
Responsible Development and Monitoring¶
Risk identification and Mitigation¶
Risk severity table from here
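The referenced table is external to this page. As a generic stand-in for the common pattern behind such tables, severity is often derived from a likelihood rating and an impact rating, for example:

```python
# Generic illustration of a risk severity lookup keyed by likelihood and impact.
# The ratings and resulting classes are placeholders, not the referenced table.
SEVERITY = {
    ("rare", "minor"): "low",        ("rare", "major"): "medium",
    ("likely", "minor"): "medium",   ("likely", "major"): "high",
    ("frequent", "minor"): "medium", ("frequent", "major"): "critical",
}

def severity(likelihood: str, impact: str) -> str:
    return SEVERITY.get((likelihood, impact), "unassessed")

print(severity("likely", "major"))  # high
```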
Lifecycle Maintenance¶
Observability¶
Feedback¶
What Governance looks like¶
There are a number of resources around the internet that may facilitate understanding of what should be done. One example is the 'Hourglass Model' of AI governance, which gives organizations a way to structure their AI governance work.
The different components of the model have associated tasks, which we take from here; they help to identify the work that should be done throughout the lifecycle of AI products.
These actions are described here; a small sketch of tracking them programmatically follows the list.
AI Governance To Do List
## A. AI System
T1. AI system repository and ID
T2. AI system pre-design
T3. AI system use case
T4. AI system user
T5. AI system operating environment
T6. AI system architecture
T7. AI system deployment metrics
T8. AI system operational metrics
T9. AI system version control design
T10. AI system performance monitoring design
T11. AI system health check design
T12. AI system verification and validation
T13. AI system approval
T14. AI system version control
T15. AI system performance monitoring
T16. AI system health checks
## B. Algorithms
T17. Algorithm ID
T18. Algorithm pre-design
T19. Algorithm use case design
T20. Algorithm technical environment design
T21. Algorithm deployment metrics design
T22. Algorithm operational metrics design
T23. Algorithm version control design
T24. Algorithm performance monitoring design
T25. Algorithm health checks design
T26. Algorithm verification and validation
T27. Algorithm approval
T28. Algorithm version control
T29. Algorithm performance monitoring
T30. Algorithm health checks
## C. Data operations
T31. Data sourcing
T32. Data ontologies, inferences, and proxies
T33. Data pre-processing
T34. Data quality assurance
T35. Data quality metrics
T36. Data quality monitoring design
T37. Data health check design
T38. Data quality monitoring
T39. Data health checks
## D. Risk and impacts
T40. AI system harms and impacts pre-assessment
T41. Algorithm risk assessment
T42. AI system health, safety and fundamental rights impact assessment
T43. AI system non-discrimination assurance
T44. AI system impact minimization
T45. AI system impact metrics design
T46. AI system impact monitoring design
T47. AI system impact monitoring
T48. AI system impact health check
## E. Transparency, explainability and contestability (TEC)
T49. TEC expectation canvassing
T50. TEC design
T51. TEC monitoring design
T52. TEC monitoring
T53. TEC health checks
## F. Accountability and ownership
T54. Head of AI
T55. AI system owner
T56. Algorithm owner
## G. Development and operations
T57. AI development
T58. AI operations
T59. AI governance integration
## H. Compliance
T60. Regulatory canvassing
T61. Regulatory risks, constraints, and design parameter analysis
T62. Regulatory design review
T63. Compliance monitoring design
T64. Compliance health check design
T65. Compliance assessment
T66. Compliance monitoring
T67. Compliance health checks
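As noted above, here is a small sketch of tracking these tasks programmatically. It assumes a simple per-system checklist keyed by the task IDs in the list; the "required before deployment" subset is purely illustrative.

```python
# Minimal per-system checklist for the governance tasks listed above.
from dataclasses import dataclass, field

@dataclass
class GovernanceChecklist:
    """Per-system record of which governance tasks from the list above are done."""
    system_id: str
    completed: set = field(default_factory=set)   # e.g. {"T1", "T12", ...}

    def mark_done(self, task_id: str) -> None:
        self.completed.add(task_id)

    def outstanding(self, required: set) -> set:
        """Tasks that still need attention before the next lifecycle gate."""
        return required - self.completed

# Illustrative subset of tasks a team might require before deployment approval.
PRE_DEPLOYMENT_TASKS = {"T1", "T3", "T12", "T13", "T42", "T65"}

checklist = GovernanceChecklist("support-chatbot-001")   # hypothetical system
for task in ("T1", "T3", "T12"):
    checklist.mark_done(task)
print(sorted(checklist.outstanding(PRE_DEPLOYMENT_TASKS)))  # ['T13', 'T42', 'T65']
```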
AI Governance Stakeholders¶
There are numerous and varied stakeholders that may be a part of any governance solution. Here is a general list that will necessarily vary depending on business structure:
- C-Suite level:
    - CIO - Chief Information Officer
    - CISO - Chief Information Security Officer
    - CPO - Chief Privacy Officer
    - CDO - Chief Data Officer
- Legal - Ensuring AI Compliance and security
- Communication - Presenting internal and external representations of stances towards AI
- System or application owner(s) - Those building the overall products
- Software Architects and Developers
- AI/ML Engineers and Researchers - Creating AI solutions
- Data Scientists and Domain Experts - Helping to understand and enable data for use in AI systems
- UX - User interface and experience design
- Users - Those who use the AI