The Paris Artificial Intelligence Action Summit, in its focus on the public interest, trust, and governance, acknowledges that trust in AI depends on an effective feedback loop among AI developers, regulators, and the broader public, necessitating more participatory approaches to AI development and governance. Scholars and policymakers have emphasised the importance of integrating diverse voices, particularly from marginalised and underrepresented communities, into AI decision-making processes to mitigate bias, inequality, and ethical blind spots. A shift towards a participatory model challenges traditional, top-down, expert-driven approaches to AI governance and raises important questions about power, agency, representation, and accountability. Amid this incipient “participatory turn” in both the development and governance of AI, our goal is to showcase work that demonstrates what participation looks like in practice, how and why to do it, and what can go wrong.
Through an open call, we’ve sourced papers and presentations across three key themes:
We encourage both academic and practitioner contributions, and welcome submissions in three formats. Based on the submissions received, we will put together a series of panels, workshops, and a poster session. For panels, panellists will be encouraged to read and respond to each other's work during the discussion.
We invite short papers, posters, presentations, and case studies for an interactive research symposium to take place on 8th February 2025, ahead of the 2025 AI Action Summit (Paris, 10th/11th February). Submissions should respond to three key themes:
Submissions are also encouraged to consider one or more cross-cutting themes: (a) operationalising theory into practice, (b) evaluation, evidence, and impacts, and/or (c) learning for policy and practice.
We aim to convene researchers and practitioners of participatory methods in AI to build an international community. Interested policy-makers, advocates, and civil society organisations are also invited. By showcasing the current state of work on, and critiques of, participatory AI, this gathering will build a stronger international community of practice and begin the work of formulating a shared vision for policy action.