
Responsible AI and AI Risk Management Frameworks

The number, types, capabilities, and uses of AI-enabled systems and applications are exploding. Unfortunately, the gap between the power of these systems and their governance is wide and growing daily, posing significant risk to the organizations and employees who use these technologies. Numerous Responsible AI (RAI) and AI Risk Management Frameworks have been proposed to address this gap; their number is growing rapidly, and their structures vary widely. The result is a great deal of confusion and hesitance among organizations wanting to implement RAI programs to ensure the safe and ethical development, deployment, and use of AI systems. This paper proposes a solution that helps them get moving confidently and quickly: a living, searchable database of RAI frameworks, paired with a Framework Profiler that can query the database and quickly define the elements of the most effective and efficient RAI framework for governing an organization's specific AI systems and applications.

Background and Challenges

It seems as though every week brings another breathtaking announcement about a new AI technology. Unfortunately, the gap between the capabilities of these systems and their responsible governance is wide and growing by the day. Here are some observations supporting this claim:

  • In 2023, funding for generative AI surged nearly eightfold from the previous year, reaching $25.2 billion.[1]

  • In June 2024, a Right to Warn letter posted online and endorsed by Yoshua Bengio, Geoffrey Hinton, and Stuart Russell warned, “AI companies have strong financial incentives to avoid effective oversight, and we do not believe bespoke structures of corporate governance are sufficient to change this.”[2]

  • In its 2024 Artificial Intelligence Index Report, Stanford University’s Human-Centered Artificial Intelligence (HAI) group warned: “Robust and standardized evaluations for LLM responsibility are seriously lacking.”

  • An Emerging Technology Observatory survey of AI-related articles published between 2017 and 2022 discovered that only 2% of them focused on AI safety.[3]

  • Geoffrey Hinton, considered one of the godfathers of AI, recently said that he was proud that his former student, Ilya Sutskever, fired OpenAI’s Sam Altman, presumably over AI safety concerns at the company. Altman has since returned to OpenAI, and Sutskever’s superalignment AI safety team has been disbanded.

  • Other examples of the elimination or reduction of AI safety teams include, but are not limited to, cuts at Stanford’s Internet Observatory, Meta, Amazon, Alphabet, and Twitter.[4]

The gap between AI system capabilities and governance poses significant risks, and we have only recently begun to understand the breadth and depth of the problem. Consider the following examples:

  • MIT recently released its AI Risk Repository, a database that catalogs 777 AI-related risks.[5] This is a significant (and growing) number.

  • The U.S. Assistant Secretary of the Army for Acquisition, Logistics, and Technology (ALT) recently issued a Request for Information (RFI) to enhance the Army’s understanding of best practices, industry capabilities, and potential sources for implementing a multi-tiered AI Layered Defense Framework (LDF).[6] The Army’s database contains well over 1,000 pairings of mitigations and risks.

  • The University of Stuttgart recently published a list of 379 risks identified during a review of scholarly papers from 2021 through 2023.[7]

  • In June 2024, the AIR 2024 risk taxonomy, derived from the policies of 8 governments and 16 companies, was published; it identifies 314 risk types.[8]

The number and diversity of risks involved with AI systems are arguably greater than in any other discipline. Not only do responsible AI practitioners need to be concerned with cybersecurity and other technical risks, but they must also manage a wide variety of risks in the economic, social, ethical, and even existential realms.

The good news is that a wave of AI governance and risk management frameworks has been published in the last several years to help organizations structure their governance efforts. The bad news is that these new frameworks have created a hard-to-navigate patchwork of requirements, each with its own jargon, taxonomy, and way of visualizing the challenge. According to the Chief Innovation Officer at a leading cybersecurity assurance firm, we are “drowning in guidance,” which he described as “overwhelming” for regular organizations.[9]

The combination of these and older standards and frameworks has led to a good deal of confusion and hesitance on the part of organizations needing to implement Responsible AI (RAI) programs. Consider the following small sample of the categories of sources producing important frameworks and related standards, research, directives, guidelines, taxonomies, and databases:

U.S. Government & DoD

Government-Led Frameworks (Non-U.S.)

Corporate and Industry

Academia

Standards Bodies

Non-Profits, Think Tanks, Research Institutions, etc.

International Organizations

Risk Databases

Incident Databases

Research for this paper quickly uncovered well over 100 frameworks and related documents. Various organizations have identified far more. For example, The AI Standards Hub at The Alan Turing Institute[10] contains nearly 300 AI-related standards. While the database is searchable and quite impressive, its content appears limited to that generated by standards organizations like ISO, IEEE, and NIST. A search of the database does not return frameworks from private sector companies like Microsoft, research organizations like CSET, universities like MIT, or frontier AI developers like OpenAI.

Another example is the database of national AI policies and strategies maintained by OECD/GPAI.[11] At the time of this writing, the live repository contained over 1,000 AI policy initiatives from 69 countries. This is a valuable albeit overwhelming amount of material.

Conclusion and Next Steps

This work has uncovered a need and an appetite for a database focused on RAI frameworks and related guidance: one that is freely available online, kept up to date, well researched, searchable, and uniquely queryable. This paper proposes developing such a database in the following four phases.

  • Phase 1 summarizes the findings of this market research and describes future work. This phase is complete with the publication of this paper.

  • Phase 2 involves the completion of the first version of the frameworks database. While this effort is well underway (as evidenced by the framework categories surveyed above), the database is not yet ready for sharing. The goal is to publish it online within the next 60 days. Consideration is being given to the pros and cons of opening the database to editing by the public or by a select group of individuals.

  • Phase 3 involves the creation of a web-based interface for viewing and searching the database.

  • Phase 4 involves training an AI model on all the frameworks in the database to create what we call a Framework Profiler. The Profiler will allow organizations to enter variables such as the relevant stage(s) of development/deployment of their AI systems and their area of focus (e.g., data, algorithms, compute power, applications…), and receive a recommended framework that is an ideal blend of the many currently available frameworks; a minimal sketch of this kind of profile-to-framework matching follows this list. Some early testing of this concept using Google’s NotebookLM or a similar platform will likely occur.
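
To make the Profiler concept more concrete, the following is a minimal, purely illustrative sketch of how an organization’s inputs might be matched against framework metadata. The record fields, example entries, and overlap-based scoring are assumptions made for illustration only; they do not represent the actual database schema or the trained-model approach planned for Phase 4.

```python
from dataclasses import dataclass, field

# Hypothetical framework record; the field names are illustrative assumptions,
# not the schema of the proposed RAI frameworks database.
@dataclass
class Framework:
    name: str
    source: str                                          # e.g., "Standards Bodies"
    lifecycle_stages: set = field(default_factory=set)   # e.g., {"design", "deployment"}
    focus_areas: set = field(default_factory=set)        # e.g., {"data", "algorithms"}

# Hypothetical organization profile, mirroring the inputs described for Phase 4.
@dataclass
class OrgProfile:
    lifecycle_stages: set
    focus_areas: set

def score(framework: Framework, profile: OrgProfile) -> int:
    """Naive overlap score between an organization's profile and a framework.

    A real Framework Profiler would rely on a model trained on the frameworks
    themselves; this simple set-overlap heuristic is only a placeholder.
    """
    stage_overlap = len(framework.lifecycle_stages & profile.lifecycle_stages)
    focus_overlap = len(framework.focus_areas & profile.focus_areas)
    return stage_overlap + focus_overlap

def recommend(frameworks, profile, top_n=3):
    """Return the top-N frameworks ranked by overlap with the profile."""
    return sorted(frameworks, key=lambda f: score(f, profile), reverse=True)[:top_n]

if __name__ == "__main__":
    # Entirely fictional example entries, for demonstration only.
    catalog = [
        Framework("Example Standard A", "Standards Bodies",
                  {"design", "development"}, {"data", "algorithms"}),
        Framework("Example Guideline B", "Corporate and Industry",
                  {"deployment", "operation"}, {"applications"}),
    ]
    profile = OrgProfile(lifecycle_stages={"deployment"}, focus_areas={"applications"})
    for fw in recommend(catalog, profile):
        print(fw.name, score(fw, profile))
```

In the proposed system, the overlap scoring above would be replaced by the Phase 4 model; the sketch is intended only to illustrate the shape of a Profiler query (organization inputs in, ranked framework recommendations out).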

Partners and funding are currently being sought for Phases 2 through 4. If you would like to discuss participating in the project, please reach out to me at info@c4ai.ai.

FOOTNOTES

[1] HAI_AI-Index-Report-2024.pdf (page 5).

[2] A Right to Warn about Advanced Artificial Intelligence.

[3] AI safety — ETO Research Almanac.

[4] See “Meta, Amazon, Twitter layoffs hit teams fighting hate speech, bullying” and “Closing the Stanford Internet Observatory will edge the US towards the end of democracy,” John Naughton, The Guardian.

[5] The AI Risk Repository.

[6] ai-ldf-rfi-instructions.pdf.

[7] Mapping the Ethics of Generative AI — A Comprehensive Scoping Review.

[8] AI Risk Categorization Decoded (AIR 2024): From Government Regulations to Corporate Policies.

[9] CyberWire Podcast episode 2186, 11/7/24, Canada cuts TikTok ties.

[10] AI Standards Hub — The New Home of the AI Standards Community.

[11] OECD’s live repository of AI strategies & policies — OECD.AI.

Ed Melick