Running Up That Hill: Artificial Intelligence in Ukrainian Public Sector

When will the Ukrainian state-in-smartphone application Diia have its own Siri? Will the political agreement on the AI Act halt the use of Clearview AI? Can AI help choose the “dream job”?

Our senior lawyer Tetiana Avdieieva tries to provide answers to these and many other questions in her analytical study “Running Up That Hill: Artificial Intelligence in Ukrainian Public Sector.” The document outlines relevant AI regulation initiatives at both the international and national levels, in Ukraine and other states. The study also provides an overview of the AI projects already in practice or in the pipeline for the near future in the Ukrainian public sector, analyzing their safety for human rights and democratic principles.

Based on the results of the research, and taking into account the priority of AI both in the defense sector and in other public spheres, DSLU prepared recommendations for state-level policy-making in the field of AI:

  • In the future, it is necessary to develop a general regulatory framework that adequately incorporates international AI standards at the national level. At the same time, DSLU supports postponing the implementation of the AI Act in order to determine how Ukraine can adapt to the requirements of the document without having acceded to European institutions, and to understand how the regulation should be applied in practice. With this in mind, DSLU welcomes the introduction of ‘soft’ regulation, including the development of general and sectoral recommendations and the encouragement of self-regulatory institutions.
  • In addition to general AI regulation, thematic and targeted amendments to legislation have to be designed and implemented to provide citizens with adequate safeguards against abuse. This is especially relevant for the justice, law enforcement, and military sectors.
  • It is necessary to adapt the legal framework on public procurement to AI systems — for example, to develop by-laws that set minimum standards for developers of AI systems designed for the public sector, requirements for conducting tenders, and other guarantees. Notably, DOZORRO already uses a machine learning model to identify high-risk procurement. However, the procurement modalities of AI systems themselves also need regulation, especially considering that many developments involve public-private partnerships.
  • It is necessary to develop an acceptable model of the regulator in the field of AI and adequately fit it into the functioning legal system. In particular, existing and planned thematic regulators, such as the National Council for Television and Radio Broadcasting and the future regulator in the field of personal data protection, should be considered. Designing the regulator’s model also involves designing the complaints system: defining who may file complaints, on what subjects, and which body will consider them.
  • It is necessary to get rid of AI systems that run on Russian, Belarusian, Chinese, or other software originating from countries with a high index of human rights violations. The government must ensure that, once any connection between AI systems and such countries is discovered, those systems are no longer used — at a minimum in the public sector. Notably, this means at least reacting to the Schemes investigation into the TRASSIR system, and ideally banning such systems on the territory of Ukraine even for private actors. The government must also ensure that such software does not enter the Ukrainian market — and in particular the public sector — as a foreign product imported through third countries.
  • The government needs to develop in advance a strategy for the transition period between martial law and peacetime after the Ukrainian victory, providing legal and practical mechanisms for terminating the use of technologies intended exclusively for martial law. Notably, foreign companies that provide wartime AI services should be given a smooth and clear procedure for terminating the use of certain services, or for applying entirely different standards to them, such as a more restrictive approach to assessing the proportionality of interference with human rights.
  • The government should avoid using technologies that by default violate human rights standards, such as international standards on personal data protection and anti-discrimination, or that create risks of persecution. For example, alternatives to Clearview AI should be explored, especially given that the AI Act bans untargeted data scraping, and considering other issues related to Ukraine’s European integration processes.
  • The government should develop a system of technical standards that sets minimum requirements for system developers, and assess the prospects of creating a body responsible for the certification of AI systems or of assigning such duties to an existing body.
  • The government should assess the risks of AI systems used in the public sector at all stages of those systems’ lifecycle. Various methodologies may be used as guidance, including HUDERIA, a methodology in whose test phase Ukraine is already participating. Impact assessment should take into account the possibility of a chilling effect on human rights, and the government should aim to minimize it and develop safeguards against abuse.
  • The government should develop a unified system of labelling AI-generated content to ensure greater transparency about how the systems function, as well as to prevent the irresponsible dissemination of false information.
  • When providing public services with the help of AI systems, the government must ensure that users can easily and clearly reach a human operator. In particular, situations where the automated system loops endlessly or where getting past the bot takes too much time should be avoided. Examples of such systems in the private sector include the popular banking chatbots described in the study and telephone operator systems whose highly inconvenient algorithms prevent the exercise of the right not to be subject to automated systems.
  • The government should open the source code of AI systems used in the public sector, except for technologies in the field of security and defense, to enable bug tracking by independent experts and effective public oversight of the type and features of the applicable systems and their impact on human rights.
  • After announcing an initiative to introduce an AI system in the public sector, the government should regularly update information on the results of test periods, problems encountered in the application of the system, the stage of project implementation, etc., to ensure transparency and allow effective public oversight of the projects’ implementation, reliability, and safety.
  • The government should continue international cooperation, both through collaboration with companies and within the framework of regulatory initiatives such as the Council of Europe’s Committee on Artificial Intelligence. This will make it possible to monitor cutting-edge trends and to shape the international agenda, taking into account the Ukrainian experience in applying AI.
  • The government should ensure the participation of civil society in discussing proposals to introduce AI systems in the public sector, and take the results of such discussions into account when making political and legislative decisions and when developing visions and strategies.
  • The government should invest efforts in strengthening the digital literacy of public servants. Notably, training projects within Diia.Education serve as good practice in this field. However, more targeted training is also encouraged, especially for public servants who regularly deal with complex high-risk AI systems.
  • When creating training programs for AI developers, the government should include courses on ethics and international human rights standards in the curriculum, to ensure that the AI systems developed are ethical and compliant with human rights and democratic principles by default.

You can read the full text of the study and a detailed analysis of all initiatives via this link.

Email address for comments and questions: [email protected].


This analytical study was compiled with the support of the European Union and the International Renaissance Foundation within the framework of the “European Renaissance of Ukraine” project. Its content is the exclusive responsibility of the authors and does not necessarily reflect the views of the European Union and the International Renaissance Foundation.