June 1, 2025

Ukraine Signs AI Convention – What Does It Mean for Business? A Lawyer’s Analysis

On May 15, Ukraine’s Ministry of Digital Transformation announced the country had signed the Council of Europe’s Framework Convention on Artificial Intelligence and Human Rights. This document outlines the principles that must guide national legislation and the use of AI in the public sector. But what does this mean for the business community? Petro Bilyk, Partner and Head of the AI Practice at Juscutum, explains in a column for Scroll.media.

The Council of Europe Convention on Artificial Intelligence (CETS 225) recently became part of international law. It has already been signed by over 15 countries, including the United States, Canada, the United Kingdom, Japan, Israel, and EU member states. On May 15, 2025, Ukraine joined the list. This move is a continuation of Ukraine’s commitment to international AI safety standards, in line with the Bletchley Declaration.

The Convention aims to ensure that innovation is grounded in fundamental freedoms, human dignity, and the rule of law. For the first time, it sets mandatory standards for the entire AI lifecycle — from development to deployment and operation — focusing on transparency, non-discrimination, privacy protection, reliability, and safety.

Notably, the Convention is expected to enter into force before the EU AI Act becomes fully applicable.

For companies involved in AI development or implementation, this represents not only a new legal obligation but also a chance to revisit strategies and gain a competitive advantage in global markets.

What steps can businesses take to meet new international standards and maximize their benefits? Let’s explore.

What Does the Convention Define?

The Convention defines “AI system” and establishes core ethical and operational standards to be integrated throughout the development, testing, and operation of AI systems:

  1. Human dignity and autonomy – Technology must not override individual choice or dignity.
  2. Equality and non-discrimination – Fair treatment must be ensured regardless of gender, race, age, or social status, with a focus on gender equity and protection of vulnerable groups.
  3. Privacy and data protection – Alignment with international and local data protection standards (Article 11), including transparent data collection, processing, and storage policies.
  4. Transparency and oversight – Labeling of AI-generated content, accessible descriptions of decision-making logic, and provision for audits in accordance with Articles 8 and 15.
  5. Accountability – Clear roles and procedures for accountability when AI negatively affects human rights (Articles 9, 14).
  6. Reliability and safe operation – Ongoing testing of quality, safety, and system resilience at all lifecycle stages (Article 12).
  7. Regulatory sandboxes – Use of regulatory sandboxes for effective testing and improvement of AI within a safe legal framework (Article 13).

All signatory states must implement legislative and administrative measures to uphold these principles, establishing a binding regulatory framework applicable to both public and private entities. Article 3 explicitly extends the Convention’s scope to private actors, not just public authorities.

Does the Convention Apply to Businesses?

Short answer: absolutely.

The Convention explicitly applies to the private sector. It distinguishes between activities performed by companies independently and those conducted under public contracts.

Companies must implement risk management strategies when their AI-related activities may affect human rights or democratic processes. While the Convention does not itself impose direct legal liability, it obligates Ukraine to adopt laws ensuring users of AI systems have access to remedies when their rights are violated.

Moreover, Ukraine already has laws addressing issues related to AI misuse, such as unlawful surveillance, copyright infringement, and improper data processing.

There are exceptions. The Convention does not apply to AI systems used in defense technologies or for national security purposes. Similarly, it does not extend to AI used in purely research contexts, unless such usage impacts human rights.

Still, the Convention creates new opportunities. Businesses that adopt its principles early can gain reputational and operational advantages. Adherence to these standards builds trust with customers, investors, and regulators, and reduces litigation risk.

Early adopters can help shape best practices, influence regulatory processes (Article 23), and contribute to industry standard development. The Convention also encourages innovation through sandbox programs, allowing companies to safely test new AI solutions and accelerate time-to-market.

Thus, companies that begin implementing these recommendations now will gain significant competitive advantages and be prepared for global regulatory changes in the field of AI.

Key AI Obligations for Businesses

  1. Human rights and non-discrimination
    Companies must ensure their AI systems do not infringe on fundamental human rights or lead to discrimination (Articles 4, 10, 17, 18). Emphasis is placed on preventing algorithmic bias and implementing inclusive testing mechanisms.
  2. Democracy and rule of law
    AI must not be used to undermine democratic institutions, interfere in elections, or enable unlawful surveillance (Article 5). Businesses should avoid projects threatening judicial independence or freedom of speech.
  3. Transparency, accountability, and oversight
    Users must be informed when interacting with AI. Detailed documentation must be available to support appeals (Articles 8, 14, 15). Internal audits and regular reporting enhance trust in AI-driven decisions.
  4. Risk management and reliability
    Businesses should adopt systematic approaches for identifying, assessing, mitigating, and monitoring risks to human rights and democracy (Article 16). Ensuring algorithm quality and safety underpins business resilience.
  5. Data protection and privacy
    Companies must comply with data protection laws and local standards, minimize data collection, and limit access (Article 11).
  6. Right to appeal
    Accessible complaint and redress mechanisms should be established, with disclosure of decision logic upon user request (Article 14).
  7. Safe innovation
    Regulatory sandboxes should be used to test innovative projects with regulatory supervision, reducing legal risks and accelerating product launches (Article 13).
  8. Implementation and oversight
    Each country must establish independent oversight bodies to monitor compliance (Article 26) and report results regularly (Article 24). R&D efforts are not covered unless they impact human rights or democracy.
  9. Reliability
    Companies must implement testing and verification systems for quality, safety, and resilience throughout the AI lifecycle (Article 12).
  10. Documentation and reporting
    Businesses must document risk management procedures, impact assessments, and implementation outcomes. This supports internal and external audits and regulatory reporting (Articles 16, 24).

How Will Ukraine Implement the Convention?

The Council of Europe AI Convention allows for flexible implementation mechanisms. Ukraine may pass a dedicated law to enforce the Convention or introduce alternative regulatory and voluntary measures to ensure compliance.

Ukraine is likely to follow a bottom-up approach — issuing guidelines for responsible AI implementation, using the HUDERIA methodology for risk assessment, expanding its regulatory sandbox, and promoting self-regulation.

This approach prepares the country for stricter AI regulation already emerging in the EU, the U.S. (e.g., in California and Colorado), and beyond. In the future, Ukrainian AI legislation will be based on the Convention's principles and binding on businesses.

Ukraine must ratify the Convention through its parliament, the Verkhovna Rada.

How Can Businesses Prepare?

Steps companies should take now:

  1. Audit AI operations using the HUDERIA methodology, especially when handling sensitive data (healthcare, banking, law enforcement, education).
  2. Document processes – Include AI functionality, data sources, risk assessments, and mitigation measures.
  3. Update policies – Internal policies, terms of service, and other documentation should align with the Convention.
  4. Ensure transparency – Deploy technical solutions for auditing, monitoring, and AI-generated content labeling.
  5. Establish response procedures – Create mechanisms for handling complaints and requests to review decisions made with the use of AI.

Conclusion

Compliance with these new international standards is essential for companies that operate in signatory countries, or have business interests in those jurisdictions, since those countries will adopt their own regulations in line with these international obligations.
Companies that are the first to adapt to the new standards will gain a competitive advantage by earning the trust of regulators, customers, and investors.

Author: Petro Bilyk, Partner and Head of AI Practice at Juscutum
