
Albania’s AI Minister: Visionary Political Reform or Blind Flight?

When Albania announced in the summer of 2025 that “Diella” would become the world’s first artificial intelligence to hold ministerial rank, a murmur swept through the political landscape.

But Diella is not just another chatbot. It is a political experiment with global implications for us all.

The idea sounds enticing: decisions that might otherwise be influenced by personal networks, deals, or pressure could be standardized and made transparent through algorithms. If an AI evaluates tenders, logs exclusion criteria in a traceable way, and documents results in machine-readable form, corruption could indeed be made significantly more difficult. 
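A minimal sketch of what such rule-based, traceable screening could look like in practice — all rule names, fields, and thresholds here are invented for illustration and are not Diella's actual criteria:

```python
import json
from datetime import datetime, timezone

# Illustrative exclusion rules -- assumptions for this sketch,
# not the real evaluation criteria of any procurement system.
EXCLUSION_RULES = {
    "missing_tax_clearance": lambda bid: not bid.get("tax_clearance", False),
    "below_min_experience": lambda bid: bid.get("years_experience", 0) < 3,
    "price_above_ceiling": lambda bid: bid.get("price", 0) > bid.get("ceiling", float("inf")),
}

def screen_bid(bid: dict) -> dict:
    """Apply every rule and record the outcome of each check,
    so the decision is reproducible and machine-readable."""
    failed = [name for name, rule in EXCLUSION_RULES.items() if rule(bid)]
    return {
        "bidder": bid["bidder"],
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "checks_run": sorted(EXCLUSION_RULES),
        "failed_checks": failed,
        "admitted": not failed,
    }

bid = {"bidder": "ACME Ltd", "tax_clearance": True,
       "years_experience": 5, "price": 90_000, "ceiling": 100_000}
print(json.dumps(screen_bid(bid), indent=2))
```

The point of the sketch is not the rules themselves but the output format: every check that was run, and every check that failed, is written into a machine-readable record that an auditor can replay.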

 

Such use of technology promises efficiency, clear rules, and possibly even greater fairness for smaller companies that previously had little chance in the shadow of powerful networks. Enter: Diella, the new AI “Minister for Public Procurement” from Albania.

 

Who is Diella, really?

 

Diella (meaning "Sun" in Albanian) is an artificial intelligence system developed by the Albanian National Agency for Information Society (AKSHI) in cooperation with Microsoft. It was first introduced on 19 January 2025 as a virtual assistant on the e-Albania platform. In that role, it helps citizens access a wide range of online public services – through both text and voice interaction – including the issuance of digitally stamped documents.

 

The avatar representing Diella is depicted wearing traditional Albanian costume; the likeness and voice are provided by actress Anila Bisha (in use until at least December 2025). By September 2025, Diella had assisted in issuing more than 36,600 digital documents and in delivering nearly 1,000 distinct services.

 

The telling fact about the Albanian political culture behind Diella: she was appointed “Minister for Public Procurement” – precisely the area where corruption and abuse of power have traditionally been most rampant. With Diella, Albanian Prime Minister Edi Rama promises nothing less than “100 percent corruption-free tenders” (AP News, Reuters). But like every technology, Diella has a dark side – and this one is politically explosive.

 

Legality, Legitimacy, Accountability?

 

The Albanian opposition calls the move unconstitutional. Even supporters raise the key question: Who is accountable if the AI makes mistakes? The official answer is that the Prime Minister or the supervising authority remains responsible. But without clear rules, that is little more than a claim. For an AI minister to function under the rule of law, legislation would have to clearly designate a human officeholder as accountable, establish defined appeal mechanisms, mandate independent audits of code and data, and ensure transparent logging of every decision. These elements are still missing. Albania’s experiment shows: the leap forward came faster than the legal and institutional groundwork.

 

Opportunities and Risks in Global Comparison

 

Albania is not an isolated case. AI has already crept into political processes worldwide: in India, chatbots and deepfakes steer election campaigns; in the U.S., AI is used for personalized propaganda and disinformation; and across Europe, administrations already deploy algorithms to assess social benefits, manage customs risks, or issue traffic fines. Everywhere the same pattern emerges: technology delivers efficiency, while simultaneously raising new uncertainty about fairness, transparency, and democratic control.

 

If Diella proves effective, it is easy to imagine other areas of government following suit. Especially in domains dominated by data and rules, a “digital minister” could soon take over – from tax auditing to subsidy allocation to traffic enforcement. A fully AI-run cabinet may sound like science fiction, but the idea of gradually transferring more and more decision-making to machines is not far-fetched.

 

The Scenario: Governments Run Entirely by AI

 

Now imagine Albania’s experiment is not an exception but the start of a global development. More and more countries automate single ministries – treasury oversight, customs, subsidies, traffic enforcement. At some point, governments no longer consist of people at all, but of algorithms.

 

In such a world, laws would no longer be debated but computed from data and probabilities. Political programs would no longer be negotiated but optimized mathematically. Elections would lose their meaning, as no one could be elected or voted out of office. Citizen complaints would be captured in real time but processed only statistically. Empathy, responsibility, political stance – all of it would vanish.

 

The consequences would be profound. Democracies would lose their legitimacy, because no human representatives would remain to embody values, compromises, or accountability. Politics would shrink into technocratic administration, while power would lie with those who design the algorithms and control the data. The risk of a “data oligarchy” would be real: a handful of corporations or state agencies could hold de facto political power without ever being elected. And for citizens? Politics would be dehumanized, decisions delivered by a black box – efficient, yes, but faceless and voiceless.

 

Democracy on the Brink?

 

Seen in a positive light, Diella could demonstrate that government processes become cleaner, faster, and more transparent. Especially in corruption-prone countries, such a step could restore trust. Yet the downside is dangerous: if people feel that machines make decisions about their lives without anyone taking responsibility, mistrust will grow, not trust. Democracy would not be strengthened, but hollowed out.

 

Diella is not a neat tech experiment, but a stress test for democracy itself. It forces us to ask how much responsibility we are willing to transfer to machines – and where the line must be drawn.

 

Perhaps the true message is not that AI will revolutionize politics, but that we must confront a deeper question: Have we come to mistrust ourselves so much that we would rather hand responsibility over to AI?

 

 

FAQs - What you always wanted to know

 

Can an AI truly prevent corruption?

AI can raise barriers: standardized rules, automated logging, less room for personal influence. But corruption can shift to other stages – for example, in how tender requirements are written or how results are interpreted.
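One concrete way automated logging can raise that barrier is a tamper-evident, hash-chained audit trail: each decision record includes a hash of the previous one, so any later alteration breaks the chain. The sketch below assumes decisions are simple dictionaries; the field names are invented for illustration:

```python
import hashlib
import json

def append_record(log: list, record: dict) -> list:
    """Append a decision record whose hash covers the previous entry,
    making retroactive edits detectable."""
    prev_hash = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    return log + [{"record": record, "prev_hash": prev_hash, "hash": entry_hash}]

def verify_chain(log: list) -> bool:
    """Recompute every hash in order; returns False if any record
    or link in the chain was tampered with."""
    prev_hash = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        if entry["prev_hash"] != prev_hash:
            return False
        if entry["hash"] != hashlib.sha256((prev_hash + payload).encode()).hexdigest():
            return False
        prev_hash = entry["hash"]
    return True

log = []
log = append_record(log, {"tender": "T-001", "decision": "admitted"})
log = append_record(log, {"tender": "T-002", "decision": "excluded"})
print(verify_chain(log))  # an untampered chain verifies; an altered record would not
```

Note what this does and does not protect: it makes silent after-the-fact edits visible, but it cannot stop corruption that happens before the record is written – for instance, in how the tender requirements themselves are drafted.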

 

Who is accountable if the AI makes a mistake?

In legal terms, always a human authority – such as the Prime Minister or a designated civil servant. In practice, accountability can become blurred, leading to complex disputes about responsibility and liability.

 

What’s the biggest risk of AI in politics?

Efficiency may come at the expense of legitimacy. If citizens feel that anonymous algorithms govern their lives, trust in democracy could erode. The challenge is to keep AI as a tool of governance – not as its replacement.
