“If you want to understand the big issues, you need to understand the everyday practices that constitute them” (Suchman, 2019).
This paper motivates the integration of ethnographic methods into participatory design (PD) for AI systems, with a focus on accessing and including difficult-to-reach communities, and establishes the role of an embedded digital ethnographer throughout a system’s lifecycle. Ethnographies have long been recognized as an approach to comprehending the “use-contexts” of designs, documenting real-life settings and practices (Blomberg et al. 2003; Suchman et al. 2002). Whereas PD is oriented toward incorporating users’ input into crafting future designs, ethnography focuses on understanding current “enacted local practices” (Crabtree 1998). In other words, ethnographic methods can complement PD: the former surfaces participants’ lived experiences and incorporates their current practices into design considerations, without obscuring the localized knowledge of even the most marginalized groups.
However, recent advancements in AI technologies and the culture of AI development invite us to rethink and innovate both ethnography and PD. The rapid development of AI technologies calls for methodological innovation: digital ethnographies increasingly consider the full sociotechnical assemblage of AI systems, accounting for the social and material practices of both human and non-human actants (Seaver 2017; Kitchin 2017). More troubling is our lack of understanding of current human practices around AI systems. On the one hand, the proprietary, competitive, first-mover-wins-all culture renders AI developers’ design and development practices opaque and closed-group, obscuring external inspection and excluding community participation. Ethnographers have attempted to account for developer practices via internal methods (e.g., AI Lab Studies in Jaton 2021; Ethnographic Audit Trails in Wu 2024) or external methods (e.g., Ground-Truths Tracings in Kang 2023). On the other hand, a wide variety of advanced AI systems are being deployed, yet we still lack a sufficient understanding of how end users, especially those in non-Western and developing countries, engage with AI systems. Okolo et al. (2024) represent a valuable yet rare attempt to study how the most remote and low-skilled communities interpret and adopt AI-driven tools.
Recognizing these emergent challenges and incorporating existing innovations, this paper presents a methodological framework that positions digital ethnographers alongside participatory designers to make both AI developers’ and marginalized groups’ practices visible and, hence, informative to AI design, development, and deployment. On the one hand, digital ethnographers can outline the theoretical foundations on which developers rely (Ugwudike 2022), encourage reflexivity (Blomberg and Karasti 2012; Langlois et al. 2023), and advocate for the incorporation of user perspectives. On the other, ethnographers can embed themselves in difficult-to-reach communities (as opposed to users who are readily available to be invited to the lab) and capture their overlooked on-the-ground practices and rationales, allowing PD to account for them.
**This paper is an adaptation of our previous paper draft, "Human-centric approach to societal impacts of AI," attached as an additional artifact below.