OMB's Guidance on GenAI Implementation

Advancing Responsible AI Governance and the Implications for Government Attorneys

by Gina Jurva

In a landmark move, Vice President Kamala Harris recently announced a significant step forward in the federal government's commitment to responsible artificial intelligence deployment and governance. The issuance of the White House Office of Management and Budget's (OMB) first government-wide policy aims to mitigate risks associated with AI while harnessing its benefits across federal agencies. This policy, a key deliverable under President Biden's Executive Order on AI, sets a new standard for AI governance and innovation in the public sector.

The primary objective of the policy is to advance governance, innovation, and risk management within federal agencies' utilization of AI. 

The guidance, formally delivered as Memorandum M-24-10, Advancing Governance, Innovation, and Risk Management for Agency Use of Artificial Intelligence, arrives amidst the rapid integration of AI technologies within the federal government. Current applications include machine learning for monitoring global volcanic activity, wildfire surveillance, and wildlife tallying via drone imagery, with numerous other applications in development.

Furthermore, the Department of Homeland Security disclosed its intention to broaden the scope of AI utilization. This expansion encompasses the training of immigration officers, safeguarding critical infrastructure, and intensifying efforts in drug and child exploitation investigations.

Many other executive departments and agencies are actively investigating strategies to reap the benefits of generative AI, in a responsible, well-planned manner. (Indeed, the federal government has released a list of over 700 agency use cases that are active or in development.)

As government attorneys navigate the realm of responsible AI governance, they must critically examine the broader societal impacts of AI adoption.

Here's a breakdown of the key provisions and their potential implications for government attorneys:

Addressing Risks

The policy mandates federal agencies to implement concrete safeguards when using AI that could impact Americans' rights or safety. By December 1, 2024, agencies must assess, test, and monitor AI impacts, mitigate algorithmic discrimination, and provide transparency into AI usage. 

Widespread integration of AI technologies in government agencies promises to bolster efficiency and efficacy in service delivery. AI-driven automation holds the potential to streamline bureaucratic processes, alleviate administrative burdens, and enhance government service responsiveness. 

Yet, alongside these advancements, concerns arise regarding equity and accessibility. Government attorneys are tasked with addressing issues of AI bias and discrimination, ensuring that AI systems do not perpetuate or exacerbate existing societal inequalities.

Government attorneys will play a pivotal role in ensuring agencies comply with these safeguards and navigate legal complexities surrounding AI usage.

Enhancing Transparency

Federal agencies are required to enhance public transparency by releasing expanded inventories of AI use cases, reporting metrics about sensitive AI usage, and notifying the public of AI waivers. 

The infusion of AI into decision-making processes within government introduces novel challenges related to accountability and transparency. As AI algorithms influence policy recommendations and resource allocations, it becomes imperative for government attorneys to uphold transparency and accountability standards. This entails crafting legal frameworks for auditing AI systems, establishing mechanisms for public oversight, and safeguarding against potential misuse of AI-generated insights.

Overall, this transparency requirement ensures accountability and may require government attorneys to advise on the legal implications of public disclosures and waivers.

Advancing Responsible AI Innovation

The policy encourages agencies to remove barriers to responsible AI innovation and to address societal challenges using AI technology. The ethical dimensions of AI adoption in government demand thoughtful consideration. As AI systems grow increasingly sophisticated, government attorneys confront complex ethical dilemmas concerning privacy, consent, and individual autonomy. Striking a balance between AI's potential benefits and the safeguarding of individual rights necessitates advocating for robust ethical guidelines and principles.

These guidelines should govern AI deployment in the public sector, ensuring alignment with democratic values and the preservation of human dignity. Government attorneys will likely provide legal guidance on navigating regulatory frameworks, weighing ethical considerations, and promoting innovation while ensuring compliance.

Strengthening AI Governance

Federal agencies are tasked with designating Chief AI Officers and establishing AI Governance Boards to coordinate AI efforts and ensure accountability. Government attorneys will advise on legal and regulatory compliance related to AI governance structures and oversight mechanisms.

Advancing responsible AI governance extends beyond mere legal compliance. Government attorneys will play a critical role in guiding agencies through legal complexities, ensuring compliance with the policy, and promoting ethical AI practices. As the public sector embraces AI technology, government attorneys are at the forefront of shaping AI governance and innovation to serve the public interest effectively.