AI Responsibility in Georgia: How Do We Compare to Global Ethical Standards?

As the use of artificial intelligence grows, it’s no longer just about innovation — responsible governance has become just as critical. Countries are now evaluated not only on how they develop AI but also on how they protect human rights, ensure algorithmic transparency, and regulate ethical usage. To assess this, several global indices on responsible AI governance have been developed, examining dimensions such as legal frameworks, government action, civil society involvement, and human rights safeguards.

According to BTU’s 2025 report, Georgia ranks 60th out of 138 countries in the Global Index on Responsible AI (2024 edition), with an overall score of just 17.8 out of 100. While this is relatively low, some components show more promising results: Georgia ranks in the top 29% globally for legal frameworks and in the top 38% for civil society engagement. This suggests that while formal foundations and civic activity are present, implementation capacity and institutional systems remain weak.

When compared with neighboring and peer countries, Georgia performs better than Armenia and Azerbaijan, especially in terms of government involvement and private sector activity. However, the gap remains wide compared to EU countries such as the Netherlands and Germany, which lead the index and have strong performance across all pillars — particularly in embedding ethical oversight into both policy and business practices.

At a glance, Georgia’s position exceeds that of many countries with similar income levels. But the sharp rise in AI interest locally — especially in startups and education — raises the question of how well these emerging systems are being governed.

In practice, responsible AI governance isn’t just about having laws on paper. It requires technical expertise, internal policy mechanisms, and state capacity to assess risks and to act as both an enabler of innovation and a protector of the public interest. This is where Georgia’s weakest components lie: human rights safeguards and capabilities for responsible AI, both of which place the country in the lower 50% globally.

BTU’s report also highlights that Georgia’s private sector is increasingly engaged in AI development, particularly startups working on consumer platforms and data-driven services. However, ethical considerations are often overlooked: the report notes an almost complete absence of specialized roles such as AI ethics specialists or AI output reviewers, meaning that standards are usually left to individual developers’ discretion.

At the government level, Georgia has drafted general digital strategies, but no dedicated AI oversight body or ethics authority exists to monitor AI systems. In contrast, several EU countries have established AI offices that oversee public and private AI deployments — a practice Georgia has yet to adopt.

Overall, Georgia’s ethical readiness for AI sits in the middle ground: legal frameworks and civil participation are in place, but institutional capacity, independent oversight, and human rights protections still require significant development. This is crucial not only for protecting users but also for building long-term trust in the country’s growing AI sector — especially as these systems increasingly influence recommendations, content, and decisions in daily life.

BTU’s full study — “AI Sector in Georgia: Current Trends and Future Potential” — is available at the following link.