All civil servants in England and Wales to get AI training
Source: Guardian
All civil servants in England and Wales will get practical training in how to use artificial intelligence to speed up their work from this autumn, the Guardian has learned.
More than 400,000 civil servants will be informed of the training on Monday afternoon, which is part of a drive by the chancellor of the duchy of Lancaster, Pat McFadden, to overhaul the civil service and improve its productivity.
At the same time, the size of the civil service is being reduced by tens of thousands of roles through voluntary redundancy and not replacing leavers. The government said officials would be tasked with figuring out how they could use AI technology to streamline their own work wherever possible.
Officials are already piloting a package of AI tools called Humphrey, named after the senior civil servant Sir Humphrey Appleby from the 1980s TV sitcom Yes, Minister.
-snip-
Read more: https://www.theguardian.com/technology/2025/jun/09/all-civil-servants-in-england-and-wales-to-get-ai-training
That character, Humphrey, was Machiavellian. And the AI that will be used is an LLM, or large language model - generative AI. Very fallible, like the AI overviews we've heard so much about for the mistakes they make and the nonsense they spout.
From the UK business magazine Raconteur in February:
https://www.raconteur.net/technology/can-the-civil-service-get-to-grips-with-humphreys-genai-bs
The suite was named in homage to the Machiavellian permanent secretary Humphrey Appleby from the classic sitcom Yes Minister. While the tongue-in-cheek naming has raised some eyebrows, it may be more appropriate than it first appears.
A powerful aid to understanding how LLMs function is outlined in an article for the academic journal Ethics and Information Technology titled ChatGPT is Bullshit. The author compares the output of LLMs to, well, bullshit, hereafter referred to as BS.
-snip-
Should we have any concerns about these tasks being performed with a certain amount of BS? What is the potential harm, for example, if a summary of laws includes a plausible-looking citation that turns out to be entirely fictional?
-snip-
It may be beneficial for those using Humphrey to pose the question: do I have a task that'd be most suitable for an excellent BS artist?
One of the things the UK government is planning to use Humphrey for - in this case a tool named Consult in the Humphrey AI suite - is analyzing what they call consultations, or responses from the public to policy proposals. Consult has already been used in a trial by the Scottish Parliament to summarize the feedback they received.
https://www.computerweekly.com/news/366623956/Humphrey-AI-tool-powers-Scottish-Parliament-consultation
A lot of people like to use genAI for summaries, thinking it'll save them time while still giving them the gist of what's being said. I've tried it with articles a couple of times, and the summaries were so bland as to be almost meaningless: they omitted important points and could have fit a hundred other articles. They struck me as little better than hashtags in sentence form.
And summaries can include hallucinations. I posted an article here a year or so ago about businesses being reluctant to use one heavily promoted AI tool to summarize meetings because the AI would invent people who weren't in the meeting and subjects that weren't discussed.
At least businesses were catching that sort of thing, because results were reviewed. In that Scottish trial of the Consult AI tool to analyze feedback from the public, humans also checked the output; Consult's results weren't identical to the human analysis, but the reviewers decided the differences were "negligible."
The problem is that with generative AI, getting results that are only slightly off one time doesn't mean the results won't be completely off the next time. And according to the article that last link goes to, the government will not be reviewing Consult's summaries very carefully in the future. So Consult might be wildly off in telling lawmakers what the public thinks.
The UK government also hopes to use Humphrey to answer, for instance, "the 100,000 calls that the tax authorities get daily" - https://techcrunch.com/2025/01/20/uk-to-unveil-humphrey-assistant-for-civil-servants-with-other-ai-plans-to-cut-bureaucracy/ - because of course that's exactly what you'd want a hallucinating chatbot for.
As I posted the other day, the UK government appears to be working for the AI companies instead of the UK public - https://www.democraticunderground.com/100220374273 - and they're basically going full speed ahead with something like the automation of the US government that DOGE is attempting, but with AI-brainwashed ministers instead of the baby tech lords Musk drafted to help wreck the government here.
