HURIDOCS Advocates for Digital Literacy in AI and Human Rights Dialogue

HURIDOCS Executive Director Danna Ingleton's participation in a Human Rights Council 56 event exploring the role of digital literacy in navigating the complexities of AI.

By HURIDOCS Team

On June 27, 2024, HURIDOCS Executive Director Danna Ingleton joined the Human Rights Council 56 side-event “Digital, Media and Information Literacy: A Key Element of a Human Rights-based Approach to Artificial Intelligence.” 

Hosted at the Palais des Nations, the event was organised by the Republic of South Korea’s Permanent Mission in Geneva and co-sponsored by the Polish Mission in Geneva and UNESCO. It delved into digital, media, and information literacy (DMIL) in the context of artificial intelligence and the promotion of human rights. The aim was to enhance understanding of AI’s impact on content production and dissemination, adapt DMIL strategies to evolving AI technologies, and foster cooperation among diverse stakeholders for effective AI governance.

The event opened with remarks by Ambassador Seong Deok YUN of the Republic of Korea, who also moderated the panel that followed. Key speakers included Irene Khan, Special Rapporteur on Freedom of Expression; Adeline Hulin, Head of Unit, Media and Information Literacy and Digital Competencies at UNESCO; Piotr Czepulonis, 1st Secretary at the Permanent Mission of Poland to the United Nations Office in Geneva; and Danna Ingleton, Executive Director of HURIDOCS, who joined the discussion remotely. Ingleton emphasised the imperative of collaborative efforts to safeguard and promote human rights in the era of AI, underscoring the integration of DMIL into AI frameworks to maximise its benefits while minimising risks.

In her speech at the event, Ingleton stressed the need to understand the context and risks that civil society faces amidst the fast-growing new technologies. She highlighted the plethora of misinformation, disinformation, deep fakes and generative AI, and the urgent need to empower civil society and encourage experimentation and adaptability to ensure the protection of human rights. Ingleton also spoke about HURIDOCS’ role in supporting documentation efforts through innovative technologies like machine learning, emphasising the need for enhanced critical digital literacy among civil society actors to navigate these complexities.

Read Danna’s full speech:

Thank you very much for the introduction and, Your Excellency, for the invitation to speak on this panel today. Thank you also to all of the panelists for their insightful contributions so far. And of course a huge thank you to all of you, both in the room and joining us remotely. It is such an important topic and it is heartening to see this kind of turnout.

I am honoured to be representing HURIDOCS at this event. HURIDOCS, which stands for Human Rights Information and Documentation Systems, is a 41-year-old global nonprofit that helps human rights defenders all over the world collect, systematise and make sense of their data on human rights violations for the purpose of more effective and efficient justice and accountability.

At HURIDOCS we accompany civil society with documentation strategies, and we also develop our own open-source software; right now our flagship documentation tool is Uwazi, recognised as a UN Digital Public Good.

One of the reasons I was asked to join here today is because we have been developing and deploying machine learning tools in our work. We are experiencing ourselves, and seeing first-hand, the realities of learning to work with these tools in our present context. So I am speaking about the literacy of us civil society actors.

To answer the main question here today about LITERACY, we must first unpack the risks civil society is facing with the rapid, exponential growth and adoption of technology, and the larger context in which all of this is happening.

Not since the start of the Industrial Revolution has the world found itself in such a technological flux and confusion about the future. Exponential technological development is rapidly making hard-won laws obsolete and regulatory regimes inadequate to protect individuals against little understood or ill-defined risks of artificial intelligence (AI).

The persistent media coverage and increased hype surrounding AI and machine learning (ML) have humanity grasping to understand the real-world consequences. This creates fear and uncertainty when so many potential futures remain possible.

And whether AI will eventually spell boon or doom for us, right now we know we are facing certain risks, of which I would like to highlight FOUR.

The FIRST is that the very nature of our relationship with data and information has changed. In the past, one of the biggest issues facing HRDs was finding enough reliable information to document violations and hold those responsible to account. Now, the challenge has flipped: we face a sheer deluge of information comprised not just of different data types but of some factual content, some misinformation, and a lot of downright disinformation, and it is harder and harder to make sense of it all in order to do our work.

This leads to the SECOND issue, which is that developments in technologies such as deep fakes are being used as weapons to discredit civil society. This of course has firm roots in the ever-increasing criminalisation of human rights defence work.

Generative AI is capable of producing so much information so quickly that it makes proving that the information we have collected is correct difficult, if not impossible. This means that civil society is not just fighting misinformation about itself but is once again on the back foot, having not just to COLLECT information but also to PROVE its veracity in a world of fake information and mistrust.

The THIRD is one of the utmost importance for all of us fighting to make civil society fit for purpose going into the future. We MUST EMPOWER, INSPIRE and ENCOURAGE civil society to understand and engage with emerging and frontier technologies so that we can fight apples with apples against our adversaries.

There is this general TECH APATHY that so many of us feel. ‘Technology’ is this amorphous thing that seems so far out of our reach and only lives in the realms of tech giants. But that isn’t true – technology is made by regular people, and we regular people also have power over it, and we need to always be reminded of that!

So when we speak about LITERACY it needs to be from a critical technical perspective – not training people on specific tools or specific technologies or how to identify the work of specific AI tools, but helping them approach technology with the confidence to experiment, to adapt and to be encouraged to try new things, even if they, even if we, fail sometimes along the way.

The FOURTH thing I want to discuss is that we all need to support and take part in cross-sectoral learning and resource sharing. This is at the heart of any sustainable technical literacy question for our sector. We – whether working on human rights, humanitarian or environmental issues – are clearly not as well resourced as other sectors, yet we are in the same fight, we are living in the same world. This highlights once again that we need to work together to share knowledge, resources, tools, learnings, wins, losses and everything in between.

So what does this all mean when the rubber hits the road?

Literacy in a world of AI means stronger data literacy generally. For us at HURIDOCS it specifically means that we are:

1. Supporting the development of adaptable documentation tools for groups to organise information in digestible formats and to store it securely in a way that supports authenticity inquiries;

2. It also means supporting localisation efforts – working to ensure the information in our databases is accessible in multiple languages and formats. This is one area where HURIDOCS is leaning into machine learning: we are developing and deploying automatic translation features for Uwazi.

3. We are also developing and deploying machine learning functions that help civil society actors by significantly reducing the tedious, manual process of sifting through huge amounts of information. For example, together with our partners at UPR Info, we use machine learning to label and track the progress of thousands of recommendations and voluntary pledges made by UN member states during the UPR. What does this mean for them? Updating the database after each UPR cycle has gone from taking 3 months to taking 5 days. That means that activists around the world have faster access to information that is important to their advocacy, at a pace that matches the world around them.

4. And lastly, for us at HURIDOCS, supporting literacy in a world where the digital divide is only growing means that, despite the difficult financial situation for NGOs right now, we try where we can to provide our services pro bono. This is in an effort to ensure it is not just those who are well funded who can access these tools and these opportunities for increased literacy, thereby encouraging more diversity in this space.

This is all very important groundwork for generative AI.

Thank you very much and I look forward to the question period and discussion.

— Danna Ingleton

