How can we ensure that AI deployments in the public sector are ethical and responsible?
This project seeks to create an ethical framework for the public deployment of Generative AI, translating ethical principles and existing guidelines into a practical, actionable and accessible tool that key decision-makers in government and industry can use as an ethical fitness check to mitigate potential harms before deployment.
Large-scale AI systems, and multimodal Large Language Models (LLMs) in particular, hold immense potential to transform how governments work: making public services more accessible, personalising the delivery of public goods such as education and health, improving hiring and personnel management, and making policy processes more participatory.
Governments across the world are already using Generative AI for a wide range of use cases. However, the harms and ethical concerns of public deployment of AI technologies are fundamentally different from those of enterprise or end-user solutions. Deployments of AI in health services, in credit and finance, and in facial recognition have previously resulted in harms such as privacy violations and the exclusion of marginalised communities, often without adequate safeguards or human safety nets, posing fundamental challenges to the ethical stewardship that governments owe to citizens. There is therefore a compelling need for an actionable, practitioner-centric ethical framework for the public deployment of large-scale AI models in government, one that can be used by government stakeholders across specialisations who are tasked with making key deployment decisions.
The framework, which centers ethical, democratic and community stakeholdership in the public deployment of large-scale AI systems, will be developed through an integrative literature review involving global experts on AI and government representatives, so that the output serves practitioners' needs. It will support key decision-makers in three ways: actively centering key ethical values in public deployment; ensuring stakeholder engagement across the deployment process for transparency and accountability; and providing forecasting tools to anticipate key risks of ethical harm and prepare active prevention and mitigation strategies. With a focus on ease of use and relevance for public agencies, the project will produce a practitioner's playbook for the ethical deployment of Generative AI in the public sector, presented in an application-focussed manner to help practitioners apply ethical principles across diverse use cases. This effort seeks to bridge the critical theory-practice gap between Ethical AI guidelines and their application at scale in public systems.
Meet the Team
Georgina Curto Rex is an Assistant Research Professor at the University of Notre Dame's Lucy Family Institute for Data & Society. Her research focuses on the design of AI socio-technical systems that provide new insights to counteract inequality, building on her previous positions, including as a consultant for the Barcelona City Council Local Development Agency.
Titiksha Vashist is Co-founder and Lead Researcher at The Pranava Institute. Titiksha drives research projects at Pranava, focussing on emerging technology regulation, social implications of technology and bringing design and ethical perspectives to technology governance. Titiksha also heads strategic partnerships at the Institute, and has worked with global organisations and think tanks including UNESCO, Konrad Adenauer Stiftung, Tactical Tech, Global Innovation Gathering, All Tech is Human, and academic institutions.
Shyam Krishnakumar is Co-Founder and Strategic Advisory Lead at The Pranava Institute. Shyam combines a deep understanding of geopolitics and political dynamics, familiarity with New Delhi's regulatory ecosystem, and a deep technology background to provide risk advisory and foresight across a range of rapidly evolving technology sectors, including data and platform regulation, digital markets and antitrust, electronics and semiconductor manufacturing, telecom and 5G, and Crypto/Web3 regulation.
Lena Shadow graduated from the University of Notre Dame in 2024 with her bachelor's in Sociology. Currently, she is a public policy master's candidate at Duke University. Her primary interest lies in the monitoring and evaluation of gender equality policies and women's empowerment programs. Recent coursework and projects have introduced her to both the harms and benefits of technological systems for monitoring and evaluation.
Dhanyashri is a design researcher at The Pranava Institute. Her work revolves around ethical design practices, consumer behaviour and technology informed by care ethics. An orthodontist by training, Dhanyashri brings a unique perspective to her design and research work, currently focussed on ethics in technology. She holds a Master’s in Human-Centered Design from the Srishti School of Design, Bangalore.
Project Activities
01. Expert interviews with researchers and practitioners
02. Multi-stakeholder engagements and focus groups
03. Workshop for problem mapping and solution-building
04. Facilitating dialogues across stakeholder groups on AI Ethics
05. Build a Fitness Check for Ethical AI in the Public Sector