All civil servants in England and Wales to get AI training
Source: Guardian
All civil servants in England and Wales will get practical training in how to use artificial intelligence to speed up their work from this autumn, the Guardian has learned.
More than 400,000 civil servants will be informed of the training on Monday afternoon, which is part of a drive by the chancellor of the duchy of Lancaster, Pat McFadden, to overhaul the civil service and improve its productivity.
At the same time, the size of the civil service is being reduced by tens of thousands of roles through voluntary redundancy and not replacing leavers. The government said officials would be tasked with figuring out how they could use AI technology to streamline their own work wherever possible.
Officials are already piloting a package of AI tools called Humphrey, named after the senior civil servant Sir Humphrey Appleby from the 1980s TV sitcom Yes, Minister.
-snip-
Read more: https://www.theguardian.com/technology/2025/jun/09/all-civil-servants-in-england-and-wales-to-get-ai-training
That character, Humphrey, was Machiavellian. And the AI that will be used is an LLM, or large language model: generative AI. Very fallible, like the AI overviews we've heard about so often for the mistakes they make and the nonsense they spout.
From the UK business magazine Raconteur in February:
https://www.raconteur.net/technology/can-the-civil-service-get-to-grips-with-humphreys-genai-bs
The suite was named in homage to the Machiavellian permanent secretary Humphrey Appleby from the classic sitcom Yes Minister. While the tongue-in-cheek naming has raised some eyebrows, it may be more appropriate than it first appears.
A powerful aid to understanding how LLMs function is outlined in an article for the academic journal Ethics and Information Technology titled "ChatGPT is Bullshit". The author compares the output of LLMs to, well, bullshit (hereafter referred to as BS).
-snip-
Should we have any concerns about these tasks being performed with a certain amount of BS? What is the potential harm, for example, if a summary of laws includes a plausible-looking citation that turns out to be entirely fictional?
-snip-
It may be beneficial for those using Humphrey to pose the question: do I have a task that'd be most suitable for an excellent BS artist?
One of the things the UK government is planning to use Humphrey for - in this case a tool named Consult in the Humphrey AI suite - is analyzing what they call consultations, or responses from the public to policy proposals. Consult has already been used in a trial by the Scottish Parliament to summarize the feedback they received.
https://www.computerweekly.com/news/366623956/Humphrey-AI-tool-powers-Scottish-Parliament-consultation
A lot of people like to use genAI for summaries, thinking it'll save them time and they'll still understand what's being said. I've tried it with articles a couple of times, and the summaries were so bland as to be almost meaningless and could have fit a hundred other articles, since the summary omitted important points. They struck me as little better than hashtags in sentence form.
And summaries can include hallucinations. I posted an article here a year or so ago about businesses being reluctant to use one heavily promoted AI tool to summarize meetings because the AI would invent people who weren't in the meeting and subjects that weren't discussed.
At least businesses were catching that sort of thing, because results were reviewed. In the Scottish trial of the Consult tool, humans also checked the output: Consult's results weren't identical to the human analysis, but they decided the differences were "negligible."
The problem is that with generative AI, getting results that are only slightly off one time doesn't mean the results won't be completely off the next time. And according to the article that last link goes to, the government will not be reviewing Consult's summaries very carefully in the future. So Consult might be wildly off in telling lawmakers what the public thinks.
The UK government also hopes to use Humphrey to answer, for instance, "the 100,000 calls that the tax authorities get daily" - https://techcrunch.com/2025/01/20/uk-to-unveil-humphrey-assistant-for-civil-servants-with-other-ai-plans-to-cut-bureaucracy/ - because of course that's exactly what you'd want a hallucinating chatbot for.
As I posted the other day, the UK government appears to be working for the AI companies instead of the UK public - https://www.democraticunderground.com/100220374273 - and they're basically going full speed ahead with something like the automation of the US government that DOGE is attempting, but with AI-brainwashed ministers instead of the baby tech lords Musk drafted to help wreck the government here.

LudwigPastorius
(12,627 posts)
Bullshit. They are being made to train their replacements.
Nigrum Cattus
(607 posts)
Just like when companies were moving to other countries and requiring the employees to train their replacements overseas. They don't understand that they are destroying customers for whatever they produce. A.I. doesn't buy food, clothes, cars, etc. HUMANS DO!
cab67
(3,383 posts)
Mine isn't one of them. But if it changes its mind, they can expect a brief email from me explaining that AI will remain forbidden in my classes.
highplainsdem
(56,533 posts)
On top of being unethical, they're illegally-trained tools controlled by robber barons.
The few AI-loving teachers I've met are mostly paid shills, who either are getting money directly from AI companies, or who have found a niche teaching AI use and seem to think there's more job security there than in traditional teaching. But even those sometimes express real qualms about the negative effect AI is having on education.
ChatGPT was released only 2-1/2 years ago. In about another year we'll be seeing college kids graduate whose passing grades are almost entirely due to cheating with AI.
I saw an AI company CEO on Twitter advise students a couple of years ago that they SHOULD use AI to cheat - "And after you graduate, AI will be your superpower!" Despicable.
cab67
(3,383 posts)
AI seems to be competent at writing intro sections to papers. That's actually something my colleagues are doing, though they're approaching the results very cautiously.
I've only had a couple of students use AI to answer questions on homework assignments. In both cases, it was for an upper-level course. And in both cases, the answers were so far off from accurate that I laughed out loud. I didn't know they were AI-based until the students themselves asked why their answers were marked wrong.
At this point, AI can produce something that reads like it's in English, but if there's any scientific content, it can lead a student very, very astray.