The Trust Gap: A Quiet AI Paradox in Georgia

A recent simulation conducted by BTUAI (Business and Technology University, Tbilisi) and Pollitics (Station F, France) reveals a striking contradiction at the center of Georgia’s emerging AI landscape. According to the modeled results, 17.8% of Georgians trust artificial intelligence systems but refuse to accept AI making important decisions about them. This is the second-largest attitude group in the simulation—larger than both full enthusiasts and outright rejecters. In other words, nearly one in five citizens occupies a carefully balanced middle ground.

The study was built using a constraint-based synthetic population grounded in official Georgian demographic statistics. Instead of surveying real respondents, researchers generated a statistically consistent virtual population reflecting age, gender, education, and regional distributions. Binary questions about trust and decision-making authority were posed to these personas. Each received contextual information, and a large language model produced probability-based responses that were aggregated to simulate national patterns. The approach does not replace empirical polling, but it offers a structured way to explore emerging social tensions where reliable data are still scarce.
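The pipeline described above can be sketched in a few lines. This is a minimal illustration, not the researchers' actual code: the demographic marginals and the per-persona response probabilities below are invented for demonstration, and the function standing in for the language model (`response_probs`) simply maps persona attributes to "yes" probabilities for the two binary questions.

```python
import random
from collections import Counter

random.seed(7)  # reproducible demo run

# Hypothetical marginal distributions (illustrative, not the study's data).
AGE_GROUPS = {"18-29": 0.22, "30-44": 0.28, "45-59": 0.25, "60+": 0.25}
REGIONS = {"Tbilisi": 0.32, "Imereti": 0.13, "Adjara": 0.09, "Other": 0.46}

def sample(dist):
    # Draw one category according to its marginal probability.
    r, acc = random.random(), 0.0
    for key, p in dist.items():
        acc += p
        if r < acc:
            return key
    return key  # guard against floating-point rounding

def make_persona():
    # A synthetic respondent consistent with the chosen marginals.
    return {"age": sample(AGE_GROUPS), "region": sample(REGIONS)}

def response_probs(persona):
    # Stand-in for the LLM step: map a persona to probabilities of
    # answering "yes" to the two binary questions. Numbers are invented.
    p_trust = 0.55 if persona["region"] == "Tbilisi" else 0.45
    p_accept = 0.35 if persona["age"] == "30-44" else 0.45
    return p_trust, p_accept

def simulate(n=10_000):
    # Aggregate sampled answers into shares of the four attitude groups.
    counts = Counter()
    for _ in range(n):
        p_trust, p_accept = response_probs(make_persona())
        trust = random.random() < p_trust
        accept = random.random() < p_accept
        counts[(trust, accept)] += 1
    return {k: v / n for k, v in counts.items()}

shares = simulate()
# shares[(True, False)] is the "trust but don't accept" group's share.
```

Under this sketch, the group the article focuses on corresponds to personas who answer "yes" to trust but "no" to decision-making authority; its simulated share depends entirely on the assumed probabilities.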

The 17.8% group is not digitally excluded or technologically anxious. They are predominantly urban professionals aged 30–44, slightly more male (54%), and concentrated in Tbilisi, Imereti, and Adjara. Most have at least secondary education, many hold university degrees, and 41% use e-commerce. They interact daily with algorithmic systems—online banking, recommendation engines, logistics platforms, fraud detection tools. They understand AI’s functional value.

Yet they draw a boundary. This position mirrors patterns observed globally. International surveys show that while many people believe AI can improve efficiency and innovation, acceptance drops sharply when algorithms are used for hiring decisions, credit approvals, medical diagnoses, or judicial assessments. High-profile cases of algorithmic bias—such as discriminatory hiring tools or predictive policing controversies—have shaped public awareness worldwide. Concerns about transparency, fairness, and accountability are not abstract; they are grounded in documented experiences.

Georgia’s socio-political context adds another layer. The country has undergone rapid institutional reform over the past three decades, and public trust in institutions has fluctuated over time. In such environments, delegating authority—especially automated authority—requires more than technical proof of accuracy. It requires legitimacy. For many in this demographic, AI is acceptable as a tool that supports human decision-makers, but not as a final arbiter.

The trust-without-acceptance stance therefore reflects conscious boundary-setting. It suggests that citizens differentiate between capability and authority. They may trust AI to analyze data faster than humans, detect patterns, or optimize systems. But when outcomes directly affect employment, finances, or legal standing, they prefer human oversight and the possibility of appeal.

For policymakers and businesses, the implication is clear. Expanding AI adoption in Georgia will depend not only on infrastructure and digital skills but also on governance frameworks that ensure transparency and responsibility. Hybrid models—where AI supports but does not replace human judgment—may align more closely with prevailing attitudes.

The simulation conducted by BTUAI and Pollitics does not predict the future, but it highlights an emerging tension. Georgia’s AI development may not be shaped by extremes of enthusiasm or rejection. It may instead be defined by citizens who trust the technology—yet insist that the final decision remains human.
