AI for social good: Reflections after the Google Impact Summit

The first-ever Google.org Impact Summit showcased how civil society is using AI tools for social good.

By Danna Ingleton

On 4 September 2024, I attended the inaugural Google.org Impact Summit. The Summit focused on the positive uses of AI for social change and brought together Google’s AI accelerator projects, other grantees, related donors and many others. In this blog post, I will share some of what I learned at the Summit, reflect on the experience, and ask some questions that still linger with me weeks after the event.

One of my favourite parts of attending this Summit was truly finding myself in a community of peers. You could not turn around without bumping into brilliant people with even more brilliant projects leveraging machine learning and AI tools to solve a problem they are passionate about. Whether it was Climate Policy Radar showing me how they are using AI to parse and search huge quantities of climate policy documents, Tarjimly explaining how they use ML to expedite human translation processes, or Justicia Lab using AI to automatically ‘translate’ immigration legislation into comprehensible instructions for newcomers, they all had one thing in common with us at HURIDOCS: they want to leverage existing and emerging ML and AI tools to make people’s lives better.

Making and keeping these connections, and building on them, is key to the sustainability of all of our work. We do not have the financial or human resources to go it alone, nor to ‘fail forward’ as some of the larger tech giants famously do. Rather, we need to share learning and tools in as open a way as possible in order to keep moving our community forward.

My second reflection from the Summit was that the voices were distinctly, uniformly positive about AI. Yes, everyone had ethics at the top of their list, and ‘keeping the human in the loop’ was the phrase ringing through the halls. But what was missing was a deep dialogue between the civil society actors warning of the perils of AI and those trying to leverage AI for good.

I fear the negative consequences if these conversations stay siloed.

For example, I worry that the very valid and very important messages about the possible dangers of AI and the need for regulation will deter civil society from learning and using these tools. Or, potentially worse, that the ‘good or bad’ debate eclipses the need for education on these tools, just as it did at the outset of the internet, as one speaker at the event pointed out. The digital divide grew then, and it can easily continue to grow now.

I would like to see a space where these two perspectives can come together to unpack how civil society’s experimentation with ML and AI can be a case study in doing this work ethically, and to discuss how we can fight ‘apples with apples’. Take the issues of “plausible deniability” and “reality apathy” that WITNESS recently unpacked in a report about the dangers of AI in conflict zones. Organisations working in those areas are impacted in many ways, including that their evidence of violations and war crimes is no longer believed because of plausible deniability.

If everyone thinks that everything is fake, how can we prove that human rights violations are real? How can human rights-grounded technology developers like HURIDOCS and our partners work on this problem? How can we accompany civil society around the world to use emerging technologies to fight back? How do we fight apathy with education and denial with capacity building? These, and many more, are the questions that we need to collectively answer. 

HURIDOCS is a Google.org AI Impact and AI for the Global Goals grantee, supported to integrate machine learning models and expand Uwazi so that civil society can collect and organise information for safeguarding human rights and fundamental freedoms.
