AI and the Future of Work Part 1

Wednesday, 08 October 2025
By Patricia Lustig & Gill Ringland


This is the first of four connected blogs on AI and the future. This one - Part 1 - focuses on what AI is good for and identifies some issues. Part 2 focuses on the context of work in 2040. Part 3 focuses on the future of work itself, and finally Part 4 uses two different professions (IT and Healthcare) to illustrate how AI affects each profession differently.

We begin with the definitions in common use for AI methods and tools. We follow this with thoughts on what the different methods and tools are good for, and on some issues and considerations.

AI Glossary[1]

Expert Systems:

Early successful applications of AI were based on what were called “Rules-Based Expert Systems” technology[2]. These assembled information from experts in a field to inform decision making by people in that domain.

Neural Networks:

A neural network is a machine learning program, or model, that makes decisions in a manner similar to the human brain. An RNN (recurrent neural network) is trained on sequential data to create a model that can make sequential predictions from that input.
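
To make the idea of sequential prediction concrete, here is a minimal sketch of a recurrent cell in plain Python/NumPy. It is our illustration rather than anything from a production system: the sizes, weights and data are arbitrary, and the training step a real RNN would need is omitted.

```python
# A minimal sketch of a recurrent neural network cell, showing how a hidden
# state carries context from earlier items in a sequence into the prediction
# for the next one. Sizes and weights are arbitrary illustrations.
import numpy as np

rng = np.random.default_rng(0)

INPUT_SIZE, HIDDEN_SIZE = 3, 4  # illustrative sizes
W_xh = rng.normal(0, 0.1, (HIDDEN_SIZE, INPUT_SIZE))   # input  -> hidden weights
W_hh = rng.normal(0, 0.1, (HIDDEN_SIZE, HIDDEN_SIZE))  # hidden -> hidden weights
W_hy = rng.normal(0, 0.1, (INPUT_SIZE, HIDDEN_SIZE))   # hidden -> output weights

def rnn_step(x, h):
    """One time step: combine the new input with the previous hidden state."""
    h_new = np.tanh(W_xh @ x + W_hh @ h)
    y = W_hy @ h_new  # prediction for the next item in the sequence
    return y, h_new

sequence = [rng.normal(size=INPUT_SIZE) for _ in range(5)]
h = np.zeros(HIDDEN_SIZE)  # hidden state starts empty
for x in sequence:
    y, h = rnn_step(x, h)  # each prediction depends on everything seen so far
print("prediction after full sequence:", y)
```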

Large Language Models (LLMs):

These excel at processing, understanding, and generating human language. They are trained on a wide range of data sources. The well-known problems of LLMs are data bias, data privacy concerns, and the acquisition methods and quality of the data used for training and deployment.

Generative AI (GenAI):

GenAI tools are built on LLMs to generate new content in response to specific prompts or instructions; chatbots are a familiar example. Their use in the creative industries is causing major disruption and challenging intellectual property regimes.
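
As one concrete illustration of prompt-driven generation, the sketch below calls a chatbot through the OpenAI Python client. The model name and prompt are our illustrative assumptions, not from this blog; any comparable GenAI service follows the same pattern of prompt in, new content out.

```python
# A minimal sketch of prompting a GenAI chatbot through an API.
# Assumes the `openai` package is installed and an OPENAI_API_KEY
# is set in the environment; the model name is an illustrative choice.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "user", "content": "Write a two-line poem about the future of work."}
    ],
)
print(response.choices[0].message.content)  # newly generated content
```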

AI Agents:

An AI agent is a system that can autonomously perform tasks on behalf of a user or another system. Agentic AI is the framework within which AI agents operate - from the algorithms to the architecture.
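
The essence of an agent is often described as a loop: observe, decide, act, repeat, with no human in the loop. Below is a minimal sketch using a hypothetical thermostat agent pursuing a goal delegated by a user; all names are our illustrations, not a real agent framework.

```python
# A minimal sketch of the agent loop (observe -> decide -> act), using a
# hypothetical thermostat agent acting autonomously on a user's behalf.
from dataclasses import dataclass

@dataclass
class ThermostatAgent:
    target_temp: float  # the user's goal, delegated to the agent

    def decide(self, observed_temp: float) -> str:
        """Choose an action from the current observation, with no human in the loop."""
        if observed_temp < self.target_temp - 0.5:
            return "heat_on"
        if observed_temp > self.target_temp + 0.5:
            return "heat_off"
        return "hold"

agent = ThermostatAgent(target_temp=20.0)
for reading in [18.2, 19.9, 21.3]:  # simulated sensor observations
    print(reading, "->", agent.decide(reading))
```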

AGI (Artificial General Intelligence):

Richard Susskind defines AGI[3] as “full-scale, human-level performance by machines.” He suggests that it should more strictly be referred to as the ‘Near-AGI Hypothesis’, since such a machine could match humans on almost all cognitive tasks but fall short on a small number. AGIs do not currently exist and would be a major step change from today's technology.

Automate and Augment


A valuable distinction in considering the best use for AI is whether the AI system replaces a human (automate) or supports human decision making (augment).

Augmentative AI uses the results of an AI system to inform human decision making.

Automative AI replaces human work: the AI system makes the decisions itself.

LLMs, GenAI and AI agents can all be used either to augment or to automate.
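
To make the distinction concrete, the sketch below uses the same (placeholder) model score in both modes: in augment mode the system advises a human, who decides; in automate mode the system decides itself. All names here are our illustrations, not from the blog.

```python
# A minimal sketch of one model output used two ways: to augment (inform a
# human decision) or to automate (decide outright). `risk_score` stands in
# for any AI model; a real system would use a trained classifier.
def risk_score(application: dict) -> float:
    # Placeholder model logic, purely illustrative.
    return 0.9 if application["missing_documents"] else 0.1

def augment(application: dict) -> str:
    score = risk_score(application)
    # The AI informs the decision; a person still makes it.
    return f"Flag for human review (risk {score:.2f})"

def automate(application: dict) -> str:
    score = risk_score(application)
    # The AI makes the decision itself, with no human in the loop.
    return "rejected" if score > 0.5 else "approved"

app = {"missing_documents": True}
print(augment(app))   # augmentation: advice to a human decision maker
print(automate(app))  # automation: the system decides
```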

What AI is good for

Which kind of AI you use depends on the outcome you want.

LLMs are good at automating text-based tasks such as drafting legal advice. They can generate human-like responses, for instance providing 24/7 customer support. They can create content, analyse data, and power intelligent assistants. ChatGPT is based on an LLM; it is not AGI.

Augmentative AI helps a person do part of their job, for instance in healthcare, where AI analysis of images feeds into human decision making. It can help analyse mammograms more quickly than people can: “Artificial intelligence has emerged as a tool to enhance diagnostic accuracy and reduce radiologists' workload in screening programs”.

Structural issues

AI systems are known to include code errors, give wrong advice, encode bias, and even wipe out databases. GenAI characteristics include “hallucination” - giving credible-sounding information that is wrong or misleading, which can cause people to make poor choices.

AI is implemented in software, and so it has the same error, vulnerability and failure characteristics as software in other domains. If AI lacks the data it needs, it cannot work.

Practical issues

Implementation costs range from relatively inexpensive, for low-usage open-source LLMs, to rapidly escalating prices for enterprise-grade use. There are ongoing costs as needs grow and requirements change: you may need to consider energy costs, systems maintenance, and model training and updates.

You may need to hire experienced staff and/or upskill your own. As systems evolve, this will need to continue. Scalability and rollout need extensive planning.

Ethical considerations


Who is responsible for meeting the costs if an AI causes system failure? It is likely to be the organisation providing the AI service. That organisation may try to recoup costs from the software supplier.

Operational systems using AI are subject to new standards, e.g. the EU AI Act. This assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to existing legal requirements - civil rights, human rights - and to more specific legislation. Other applications are deemed low risk and are largely left unregulated. Like the EU’s General Data Protection Regulation (GDPR) in 2018, the EU AI Act could become a global standard.

Finally (Caveat)

AI systems do not think or reason as humans do: the desired outcomes will be achieved differently. In How to Think About AI, Richard Susskind explores the possibility of AI-based judgement, empathy and creativity. He suggests that AI can handle uncertainty, using a judgement-like process based on huge bodies of data to remove it, leading to what he terms “quasi-judgement”.

AI systems can appear to be as empathetic as humans, but “…it seems, cannot put themselves in the shoes of humans and vicariously experience their emotions”.

AI is clearly creative today, but in a different way to humans: the outcomes may not be the same.

Part 2 focuses on the context for work in 2040 from three aspects: the geopolitical, the organisational, and the social or personal. We share one plausible scenario – not a prediction, but a backdrop to explore what work in 2040 might look like. We are at the beginning of a new technological revolution: AI will change the way we work and will create new ways to work.

[1] Based on searches using Perplexity https://www.perplexity.ai/

[2] Susskind, R., How to Think About AI: A Guide for the Perplexed, Oxford University Press, 2025.

[3] Ibid
