image source, Getty Images

image caption, Governments are exploring whether AI can provide reliable advice.

  • Peter Garcia
  • Technology Reporter
  • Reporting from Lisbon

Long before the advent of ChatGPT, governments wanted to use chatbots to automate their services and advice.

Those early chatbots were “simplistic with limited conversational capabilities,” says Colin van Noordt, a Netherlands-based researcher on the use of AI in government.

But the emergence of generative AI in the past two years has revived the vision of a more efficient public service, in which human-like advisers can work around the clock, answering questions about benefits, taxes and the other areas where government interacts with the public.

Generative AI is sophisticated enough to provide human-like answers, and if trained on enough quality data, it could in theory handle all sorts of questions about government services.

But generative AI has also become notorious for making mistakes, or even giving nonsensical answers – so-called hallucinations.

In the UK, the Government Digital Service (GDS) has conducted tests on a ChatGPT-based chatbot called GOV.UK Chat, designed to answer citizens’ questions on a range of issues related to government services.

However, in “a few” cases the system generated false information and presented it as fact.

A GDS blog post on the findings also expressed concern that people might place misplaced confidence in a system that is sometimes wrong.

“Overall, the responses did not reach the high level of accuracy required for a site like GOV.UK, where factual accuracy is critical. We are rapidly iterating the experiment to address accuracy and reliability issues.”

image source, Getty Images

image caption, Portugal is testing an AI-powered chatbot.

Other countries are also experimenting with systems based on generative AI.

Portugal released the Justice Practical Guide in 2023, a chatbot designed to answer basic questions on simple topics like marriage and divorce. The chatbot has been developed with funds from the EU’s Recovery and Resilience Facility (RRF).

The €1.3m ($1.4m; £1.1m) project is based on OpenAI’s GPT-4 language model. Besides covering marriage and divorce, it also provides information on setting up a company.

According to figures from the Portuguese Ministry of Justice, 28,608 questions were put to the guide in the project’s first 14 months.

When I asked it the basic question, “How do I start a company?”, it performed well.

But when I asked something more difficult: “Can I set up a company if I’m under 18, but married?”, it apologized for not having the information to answer that question.

A ministry source admits that confidence in the system is still an issue, even though wrong answers are rare.

“We hope to overcome these limitations with a decisive increase in the confidence level of the responses,” the source told me.

image source, Colin van Noordt

image caption, Colin van Noordt says chatbots should not replace civil servants.

Flaws like these mean many experts are advising caution – including Colin van Noordt. “This goes wrong when chatbots are deployed as a way to replace people and reduce costs.”

It would be a more sensible approach, he added, if they were seen as “an additional service, a faster way to find information”.

Sven Nyholm, Professor of Artificial Intelligence Ethics at the Ludwig Maximilians University of Munich, highlights the issue of accountability.

“A chatbot is not interchangeable with a civil servant,” he says. “A human being can be accountable and morally responsible for his actions.

“AI chatbots cannot be held accountable for what they do. Public administration needs accountability, and that’s why it needs humans.”

Mr. Nyholm also highlighted the issue of reliability.

“Newer types of chatbots give the illusion of being intelligent and creative in a way that older types of chatbots never did.

“Every now and then these new and more impressive forms of chatbots make silly and stupid mistakes – sometimes it can be hilarious, but it can also be potentially dangerous if people rely on their recommendations.”

image source, Getty Images

image caption, The Estonian government is at the forefront of using chatbots.

If ChatGPT and other large language models (LLMs) are not yet ready to offer substantive advice, perhaps we can look to Estonia for an alternative.

Estonia has been one of the leaders when it comes to digitizing public services. It has been building digital services since the early 1990s, and in 2002 introduced a digital identity card that allows citizens to access state services.

So it’s not surprising that Estonia is at the forefront of introducing chatbots.

The nation is currently developing a suite of chatbots for government services called Bürokratt.

However, Estonia’s chatbots are not based on large language models (LLMs) like ChatGPT or Google’s Gemini.

Instead they use natural language processing (NLP), a technology that predates the latest wave of AI.

Estonia’s NLP algorithms break down a request into smaller segments, identify keywords, and predict what the user wants.

At Bürokratt, departments use their own data to train chatbots and check their responses.

“If Bürokratt doesn’t know the answer, the chat will be handed over to a customer support agent, who will handle the chat and answer manually,” says Kai Kalas, head of the personal services department at Estonia’s Information Systems Authority.
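The pipeline described above – break a request into tokens, match keywords to a known intent, and hand over to a human when nothing matches – can be sketched in a few lines of Python. This is a minimal illustration, not Bürokratt's actual implementation; the intents and keywords here are invented for the example.

```python
# Minimal sketch of keyword-based intent detection with a human fallback.
# The intent names and keyword sets are hypothetical examples.
import re

INTENTS = {
    "passport_renewal": {"passport", "renew", "renewal"},
    "tax_declaration": {"tax", "taxes", "declaration", "income"},
}

def tokenize(request: str) -> set[str]:
    """Lowercase the request and split it into word tokens."""
    return set(re.findall(r"[a-z]+", request.lower()))

def route(request: str) -> str:
    """Return the best-matching intent, or escalate to a human agent."""
    tokens = tokenize(request)
    # Score each intent by how many of its keywords appear in the request.
    scores = {name: len(tokens & keywords) for name, keywords in INTENTS.items()}
    best = max(scores, key=scores.get)
    if scores[best] == 0:
        return "handover_to_human"  # no keyword matched: answer manually
    return best

print(route("How do I renew my passport?"))  # passport_renewal
print(route("Can I marry at 17?"))           # handover_to_human
```

The key design property is the one van Noordt and Kalas describe: the system only ever answers from a fixed set of vetted intents, so it cannot hallucinate, and anything it does not recognise goes to a person.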

The result is a system with more limited capabilities than a ChatGPT-based chatbot, as NLP models are restricted in their ability to mimic human speech and to detect nuances in language.

However, they are unlikely to give wrong or misleading answers.

“Some early chatbots forced citizens to choose from preset options for their questions. At the same time, this allowed more control and transparency over how the chatbot works and what answers it gives,” explains Colin van Noordt.

“LLM-based chatbots often have a much higher quality of interaction and can provide more substantive responses.

“However, this comes at the cost of less control of the system, and it can also provide different answers to the same question,” he adds.
