Research by Harry Booty suggests that a mix of cultural and pragmatic considerations is at play
It is now slightly more than three years since large language models (LLMs) in their current form burst into our collective consciousness and began to remake the information world.
Since then, we’ve heard much about how significant AI will be for the future of work, how far it will change society, how the accuracy of AI output (or lack of it) raises ethical concerns, and how LLMs will require huge investment in training if they are to run at scale.
However, after two years of working with and studying LLM tools, I am interested in a more practical question: how can we use these tools ethically and effectively to be better civil servants?
As Sir Anthony Finkelstein discussed in the Sixth Edition of Heywood Quarterly, the effective use of AI at scale depends on a digital infrastructure that provides the foundation for efficient use of the latest technology and software. But there are multiple other behavioural, social, organisational and professional factors that can foster or frustrate the adoption of AI by civil servants, quite apart from the Government’s technical ability to fully utilise this technology.
AI and government comms
Perhaps I say this because I am not a digital professional. Instead, like roughly 7,000 other civil servants, I am a member of the Government Communications Service (GCS).
Our profession, covering areas such as media management, strategy, crisis management and social media, is arguably amongst those most at risk from LLMs. Generative AI’s ability to create text, audio and video cuts right to the core of our function: generating content to explain and defend government activity. At the same time, these tools turn up the dial on mis-, dis- and mal-information, making it harder for effective communications to cut through.
Faced with that, why would a communicator adopt AI? And how might an organisation adopt it well? Those are the specific questions I have lately sought to understand.
Research aims
As well as being a civil servant for thirteen years, I recently spent two years studying part-time at the University of Birmingham.
Here I undertook independent research on the motivational, organisational and behavioural factors affecting adoption of AI by UK civil service communicators. Between January and August 2025, I conducted primary and secondary research to understand this topic and break those big questions down into academic and practical insights about what is helping (or hindering) the UK Government communications profession’s engagement with AI.
For my primary research phase I undertook both quantitative and qualitative work (ninety-three survey responses, fifteen one-hour interviews and one six-person focus group), asking serving civil service communicators what they thought about AI, how they were using it and the factors underpinning their views and use of LLM tools.
For reference, these tools were predominantly Copilot as well as the dedicated ‘Assist’ tool that was created by GCS as a tailored LLM tool for civil service communicators.
The findings
Perhaps unsurprisingly, there was near-universal uncertainty about the full impact of AI on communications work, with all interview respondents expressing excitement balanced by trepidation when it came to future employability and job satisfaction.
AI use was shown to be widespread, with 69% of survey respondents reporting weekly (49%) or daily (20%) use. However, nearly all of my research sample was approaching AI through what I termed ‘pragmatic adoption’ – many felt it made their jobs easier now and wanted to stay up to date on these tools to safeguard their future employability. Whilst 42% were positive or very positive about the impact of AI on the civil service, almost a third (32.5%) were neutral – emphasising the high degree of ambiguity in this area.
The professional identity of communicators, meanwhile, appeared to be a significant influence on respondents’ view of AI tools. 62% of survey respondents felt that they were saving at least 30 minutes a week by using AI – and AI was viewed favourably when it was framed as a tool to aid rapid task delivery. This same efficiency, though, was perceived by some at the interview stage as a threat, given the possible need for fewer human communicators in government in future. So far, so predictable; skilled workers have worried about the wholesale introduction of automation for at least 215 years.
More interestingly, however, I found essentially zero correlation between tenure in role and attitude to AI – meaning that length of service did not affect attitudes to AI in any meaningful way. The clichéd image of an older worker holding on to their typewriter, landline or even BlackBerry in the face of technological change does not seem to hold true here. I suspect this may be down to the way LLMs are used – because they accept natural language, you can ‘talk’ to them like a human rather than code them like a software program.
Building on that, I also gained valuable insight into which tasks civil service communicators felt were appropriate for AI and which ones they felt needed a human touch. I termed this the ‘authenticity-efficiency paradigm’: respondents informally and intuitively ranked tasks by whether authenticity or efficiency mattered most, and that judgement correlated with their willingness to subsequently use AI for the task.
Examples where authenticity was prized included responding to ministerial requests, finalising a piece of communications such as a press release prior to external distribution, or acknowledging and actioning challenging feedback. Efficiency was consistently preferred for large scale qualitative analysis (e.g. summarising and discussing themes from multiple think tank reports) or producing lower-level communications at scale (such as creating a social media post series on a specific policy area).
Finally, I found that an informal form of what I called ‘outsourced moral licensing’ was taking place in the way my research respondents chose to adopt AI. By this I mean that the proactive approach to AI by all levels of leadership in the UK Government, as well as dedicated initiatives such as the creation of the Assist tool mentioned above, have created both the permission and the expectation that communicators should be experimenting. This approach has effectively given delegated institutional approval to try these tools despite general concerns such as accuracy and data security.
Maybe this says little more than ‘leadership matters’ – but when one looks back at the academic study of appropriate technology adoption, it could be something more profound. For example, as Ben Green of the University of Michigan wrote in an excellent 2021 article, it is increasingly unrealistic to expect simple human oversight to suffice in an era of ever more complex machine intelligences; instead the times require a system of institutional oversight that creates an organisational framework within which digital tools can be safely and effectively used. My research sample, however small, suggests this institutional framework could be growing from the ground up.

What it means
Taken together, the findings show that LLM adoption among civil service communicators is being shaped less by technical capability and more by a blend of motivation, professional identity and organisational culture.
The communicators I studied tended to adopt AI because it offered immediate practical value, fitted their task-oriented professional identity and came with clear approval (explicit and specific, or informal and systemic) from leadership. At the same time, uncertainties about accuracy, authenticity and long-term professional impact continue to sit just beneath the surface. The overall picture is one of a workforce experimenting with new tools, not out of ideological commitment or generational divides but through a pragmatic balancing of convenience, caution and professional norms.
While I am cautious about generalising too widely from a modest sample within one civil service profession, the themes that emerged – around motivation, professional identity and institutionally-conditioned pragmatism – may well speak to wider patterns across government as we continue to confront what AI means for public sector work today.
Harry Booty is a civil servant and strategic communications professional currently working for the Welsh Government.





