As artificial intelligence becomes increasingly popular among civil society organizations (CSOs), new benefits as well as significant risks arise for civil society and its partner organizations. Although international standards have been established by the EU Artificial Intelligence Act and the Council of Europe Framework Convention on Artificial Intelligence, certain uses and risks are not covered in much detail, especially beyond Europe's borders.
To address the most common problems, DSLU developed the “Human-Rights-Compliant Use of Artificial Intelligence Systems: Toolkit for Civil Society” (available here). This document aims to help CSOs prevent and mitigate the risks stemming from the irresponsible use of AI systems, ensuring compliance with all relevant regulatory standards and human rights requirements. The Toolkit covers several key areas crucial to responsible AI use:
- human rights impact assessments and risk assessments (including procedural requirements for organizing such processes);
- protection of privacy and digital security (protection of personal data, responsible interaction with publicly available AI systems, training and adjustment of systems by CSOs, threats of surveillance and persecution);
- ensuring equality (at both the training and deployment stages, considering various vulnerable and marginalized groups, persecution, and systemic discrimination);
- freedom of expression (dangers stemming from disinformation campaigns, issues with content curation and prioritization, content moderation, generative AI, and labeling);
- intellectual property (lawful use of AI systems and their products, responsible design of AI inputs and verification of AI outputs, license agreements);
- human oversight (procedural requirements for various types of systems, staff training).
Additionally, the Toolkit contains a practical Checklist for the responsible selection of AI systems, which CSOs can use as a quick and comprehensive tool to filter out potentially risky AI instruments and mitigate prima facie risks. The Checklist covers the main questions any CSO should answer before deciding to use an AI system.
The Toolkit is available for use via this link.
DSLU aims to support CSOs in achieving compliance with all applicable standards and to help civil society mitigate the potential adverse impacts of the unregulated use of AI systems.
If you have any questions or feedback, please contact our Senior Legal Counsel Tetiana Avdieieva at [email protected].