PROJECT
Ethics of AI in the Public Sector
This project, housed within the Pranava Institute, will develop strategies and tools to address ethical and governance concerns around the development and deployment of AI systems in the public sector. The project is partly supported by a generous grant from the University of Notre Dame and IBM's Tech Ethics Lab.
Project Focus
Large-scale AI systems, particularly multimodal Large Language Models (LLMs), hold immense potential to transform how governments regulate, deliver essential services, and interface with citizens. LLM-powered solutions are likely to play a crucial role in making public services more accessible, enhancing and personalising the delivery of public goods such as education and health, improving hiring and personnel management, and making policy processes more participatory across governments.
Research indicates that the use of AI in public services is fast becoming widespread: the UK is using AI-powered chatbots to make health-related information more accessible, China has deployed AI systems to ensure smoother traffic in cities, the Government Technology Agency of Singapore (GovTech) is using Generative AI 'co-pilots' within government to routinise bureaucratic tasks, and an AI-powered chatbot is helping villagers in India gain access to over 170 government programs, including pension payments and scholarships, in 10 of India's 22 official languages.
However, the deployment of Generative AI in the public sector, whether to improve public service delivery, augment state capacity, or create new public goods, differs in its larger purpose from end-user or enterprise applications and gives rise to substantially different ethical challenges than its application in the private sector. Public deployments of AI need to be citizen-centric and keep the public good at their core by enabling democratic values such as trust, accountability, transparency, and the protection of rights, and by ensuring adequate public oversight mechanisms.
There is a compelling need for an actionable, practitioner-friendly ethical framework for the public deployment of large-scale AI models in government, one that can be used by government stakeholders across specialisations who are responsible for key decisions on the design and deployment of these solutions.
This project seeks to create an ethical framework for the public deployment of Generative AI, translating ethical principles and existing guidelines (such as those developed by Singapore, the OECD, UNESCO, and NIST) into a practical, accessible tool that key decision-makers in government and companies can use as an ethical fitness check to mitigate potential harms before deployment. The framework, which aims to place ethical, democratic and community stakeholdership at the centre of the public deployment of large-scale AI systems, will be developed through an integrative literature review process involving global experts on AI and government representatives, so that the output serves the needs of practitioners. It will support key decision-makers in three ways: keeping key ethical values central to public deployment; ensuring stakeholder engagement throughout the deployment process to secure transparency and accountability; and providing forecasting tools to anticipate key risks of ethical harm and to prepare active prevention and mitigation strategies. With a focus on ease of use and relevance for public agencies, the framework will result in a practitioner's playbook for the ethical deployment of Generative AI in the public sector, presented in a workbook-style, application-focussed format to help practitioners apply it to the use-cases they encounter. This effort seeks to bridge the critical theory-practice gap between Ethical AI guidelines and their application at scale in public systems.
The framework will draw on literature in responsible and ethical AI, data protection and privacy, cybersecurity, human-centred design, and public administration to bridge the gap between ethics, public systems, and Generative AI research. The project methodology will include an integrative literature review process to identify key concepts in the relevant technical and social-scientific literature. The research will also include focus groups to gather practitioner perspectives, especially on the challenges practitioners face in diverse public sector settings, to draw on expertise in applied ethics, and to brainstorm strategies for making an ethical framework for Generative AI in the public sector actionable and accessible. A culminating stakeholder consultation will bring together academics, computer scientists, citizens, policy practitioners, and public officials to gather feedback on a draft structure of the framework from stakeholders and the research community, in order to refine and build the final fitness check.
Research Questions
- How can current guidelines and global best practices be used to build an actionable ethical framework that is sector-agnostic and easy to use, so that companies and government agencies can test public-sector AI applications on ethical grounds?
- What baseline ethical fitness check do government agencies need to perform before the use of Generative AI in public-sector use-cases is approved?
- What ethical risks, potential harms, and mitigation strategies need to be forecast before deployment of Generative AI in the public sector?
- How can the framework act as an ethical safety net, enabling government officials and companies to forecast ethical harms and prepare mitigation strategies?