This briefing note captures key points from a roundtable discussion on 24 July 2025 at Z/Yen's London office. Eleven practitioners from consulting, financial services, human resources, technology, and academia explored the current drivers, challenges, exemplars, and needs of AI governance. Organised by Henry Price and Martin Ho (SI Units) and chaired by Professor Michael Mainelli (Z/Yen), this unique roundtable contrasted AI disruptors with professional services adopting AI, leveraging SI Units' academic research on AI scientometrics and Z/Yen's thought leadership in international AI enterprise management standards.
"Boring AI" delivers the highest value: In the medium term, narrow, well-defined applications such as data entry automation deliver more consistent returns than cutting-edge implementations.
The data pollution crisis is real: AI-generated content flooding the internet creates a cycle of declining quality that compounds errors and amplifies biases. Practitioners need to invest resources in high-quality data.
No single governance mechanism suffices: The Swiss Cheese Model might be best for now - multiple overlapping safeguards and multi-stakeholder coordination are essential for effective AI risk management.
The costs of human oversight are underestimated: The reality of human-in-the-loop requirements, especially in mission-critical AI applications, will undermine promised cost savings and require workflow redesigns.
"Free brains... if we can just get free brains to do jobs"
Previous industrial revolutions disrupted manual labour while leaving knowledge workers largely untouched. The current AI revolution inverts this pattern, threatening to be highly disruptive to knowledge workers across all sectors. As companies and governments face pressure to adopt AI, our discussion provided breathing space to ask: Are we embracing AI appropriately and cost-effectively? What challenges make adoption difficult? How can we improve?
Our discussants framed the challenges and opportunities of AI applications in terms of Yin and Yang: understanding how organisations create value (yang) while mitigating risks (yin) in their AI journey. This balance represents not a destination but a continuous discipline of adjustment between opposing yet complementary forces. In this briefing, we consider AI as the broad capability of computational systems to perform tasks typically associated with human intelligence, recognising that most discussions focus on large language models (LLMs) and their implications for governance.
Participants use AI diversely: automating administrative processes (structured documents, timesheets, spreadsheets), generating ideas (synthetic debates), conducting research (text analysis, tracing East India Company vessels), providing professional services (AI-based recruitment, mortgage automation), learning (AI-assisted lectures), and coding (ML support vector machines, vibe coding). The comments in this discussion stemmed from these experiences.
Four factors enable recent AI breakthroughs: algorithms, connectivity, semiconductor chips, and data. Deeper examination revealed surprising insights. Algorithms: fundamental approaches haven't progressed as much as people think; the innovation is in application. Connectivity: the most underappreciated enabler, transforming isolated experiments into global platforms. Chips: an accidental architecture that somehow became the standard. Data: the true battleground, where the real action is happening. The shift from clean datasets to an internet increasingly polluted with AI-generated content poses existential challenges. Looking ahead, energy may become a crucial fifth factor.
The workforce needs specialist training to realise AI value: Most knowledge workers will be AI 'disruptees' rather than disruptors. Z/Yen's Lord Mayor's Ethical AI Initiative course has trained 20,000 people from 1,000 firms across 60 countries, offered through professional institutes - the Chartered Institute for Securities & Investment, the British Computer Society, the Law Society, ACCA, the Institute & Faculty of Actuaries, and soon the Royal Institution of Chartered Surveyors.
Investment patterns have shifted dramatically: When ARPANET was created, over half of US R&D came from the public sector; today it is less than 25%, with most cutting-edge AI applications realised through private firms. This divide reinforces governance needs, given potential conflicts between public good and profit motives.
AI adoption promises economies (more revenue), efficiency (more outputs), effectiveness (better outputs), and innovation. Yet the reality proves complex. Expectations that AI will simply find patterns are disconnected from the data-science heavy lifting required. Domain-specific fine-tuning delivers better performance at lower cost than general models. Predictive maintenance reduces downtime; quality inspection exceeds human accuracy. Despite the potential savings, most cost-benefit analyses fail initially because human-in-the-loop costs systematically exceed projections.
"Boring AI" is where value is realised: What succeeds surprises everyone. Highest returns come from "boring AI"—invoice processing, email categorisation, data entry automation deliver consistent value precisely because they're narrow, well-defined, reliable. Process optimisation over flashy demos captures this reality. While professionals anticipate AGI, practical applications quietly optimise processes without fanfare. [show this as normal text not a text block]
KEY INSIGHT: The highest AI returns come not from cutting-edge applications but from "boring" implementations—narrow, well-defined tasks that deliver consistent value precisely because they avoid complexity.
Organisations face an impossible choice: pause to develop governance while competitors race ahead, or deploy systems without adequate safeguards. Our participants noted a prisoners’ dilemma scenario where one cannot afford to stop and wait because competitors will advance regardless. This creates systemic challenges where innovation pressure outpaces governance ability, leaving organisations deploying systems they do not fully understand or control.
When AI trains on AI-generated content, quality degrades cumulatively: errors compound, biases amplify, and diversity compresses into synthetic patterns. The human resources sector illustrates this: LLMs generate job descriptions, other LLMs produce resumes, and job-screening LLMs evaluate them. The "AI sludge flooding the internet" creates recursive degradation. Semantic incompatibility transcends technical challenges. Model decay requires continuous care. Monoculture risk from dominant models creates systemic vulnerabilities. Legacy integration consumes unexpected resources. Data quality issues multiply as organisations discover their data isn't clean, complete, or compatible.
"People are flooding the internet with AI materials and polluting the lake"
One provocative prediction: the traditional internet will be effectively unusable within six months to two years. As AI-generated content overwhelms human-created material, finding authentic information becomes computationally prohibitive. Users need AI intermediaries to navigate AI-polluted landscapes, creating recursive loops of artificial mediation—fundamentally changing how we interact with digital information.
The human cost emerged centrally. One participant's dark humour exposed the dehumanising lens: businesses see AI as "free brains... if we can just get free brains to do jobs." Another noted exhaustion from constantly justifying human relevance: "Everything is like we're adding value... speaking like some drone." AI discourse itself becomes dehumanising, reducing workers to economic units competing against "free" machine labour. Continuous upskilling creates cognitive arms races. Change fatigue grows. Knowledge workers who championed previous automation now face displacement, creating resistance technical solutions cannot address.
The Trust Crisis
One participant observed an AI system confidently taking opposite sides of the same argument. If machines can persuasively defend any position, how do we determine what is true? This uncertainty spreads through entire organisations: legal teams struggle to assign liability, compliance cannot verify requirements, and customer service fails to explain outcomes. In the end, could the damage from unethical AI practices harm a major technology company more than any technical failure ever would?
"Should we adopt a Turing Police or a Butlerian Jihad, i.e. a ban on thinking machines? "
Legal and Regulatory Uncertainties
Participants contrasted fictional metaphors (Gibson's Turing Police versus the ban on thinking machines that follows the Butlerian Jihad in Herbert's Dune), reflecting the divergence between binding regulation and self-governance. Deregulation approaches contrast with strict frameworks like the EU's AI Act and with principles-based middle paths such as UK regulatory sandboxes. One practitioner likened the situation to using maritime law for aerospace: applying frameworks designed for one reality to something entirely different. Compliance uncertainty, liability black holes, procurement challenges, and IP gaps all constrain adoption.
KEY INSIGHT: AI adoption and governance challenges are multifaceted and intertwined. Frontier AI companies face increasing data scarcity as high-quality pre-AI internet content becomes exhausted. CTOs must plan for comprehensive AI lifecycle impacts across all departments and employee levels. Corporate leaders need to balance innovation with precautionary principles and determine appropriate self-regulation levels.
Theme #3: Coordinate governance mechanisms for value protection
The Swiss Cheese Model shows that no single mechanism governs AI completely, and no single defence suffices against AI risks. Bias propagation at scale, privacy erosion, labour displacement anxiety, and tension between corporate and social responsibility create a complex ethical landscape. Firms must reflect on their unique challenges, tailor their governance mechanism configurations, and "stack the cheeses" together.
"Every safeguard mechanism is like cheese with holes in it."
Treating ethics as mere compliance proves commercially dangerous. Ethical frameworks need iterative, continuous feedback loops, not static checklists. The principlist approach requires organisations to evidence consideration of multiple factors rather than reaching predetermined "correct" answers. A major technology company's ethical failure caused more business damage than any technical failure could have. This positions ethical AI not as a constraint but as a competitive advantage: thoughtfully integrated ethics builds sustainable, trustworthy systems.
Financial Levers for Value Creation
Measure reality, not promises: Roadmap and calculate the true lifecycle costs of AI adoption. Traditional IT cost models often overlook human-in-the-loop costs, model decay rates, data-quality audits for pollution, and the hidden costs of integration, continuous retraining, drift monitoring, and human oversight that fall outside original software budgets. Adopt usage-based pricing models replacing traditional licensing. Outcome-based contracts align vendor and customer incentives. Understand full lifecycle costs beyond initial development. Value-tracking mechanisms identify where AI creates versus destroys value.
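As a hedged illustration of why naive budgets understate lifecycle costs, the sketch below compares a licence-only estimate with one that includes the overlooked items above; every figure is hypothetical.

```python
# Hypothetical three-year cost comparison. All figures are invented
# to illustrate the shape of the problem, not real benchmarks.
YEARS = 3

licences = 120_000             # per year; the line item naive budgets count
integration = 80_000           # one-off legacy-integration work
human_in_the_loop = 150_000    # reviewer time per year
retraining_and_drift = 60_000  # monitoring and retraining per year
audit_and_governance = 30_000  # data-quality audits, compliance per year

naive_total = licences * YEARS
full_total = integration + (licences + human_in_the_loop
                            + retraining_and_drift
                            + audit_and_governance) * YEARS

print(f"naive 3-year budget:   £{naive_total:,}")   # £360,000
print(f"lifecycle 3-year cost: £{full_total:,}")    # £1,160,000
print(f"understated by a factor of {full_total / naive_total:.1f}")
```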
Technical and Data Architecture Measures
Track data provenance: "Knowing where thy data comes from" becomes essential, as quality determines everything downstream. Continuous drift monitoring catches decay early. Green computing metrics track carbon per inference. Decentralised governance experiments explore alternatives. Standard management protocols (ISO 42001), lifecycle practices, and blockchain-based verification create infrastructure for responsible scaling. Build semantic layers that bridge integration gaps, isolation zones that protect clean data from pollution, and version control that extends beyond code to encompass data, models, and configurations.
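One concrete form continuous drift monitoring can take is the Population Stability Index (PSI), a common heuristic that compares a live feature distribution against its baseline. The sketch below is a minimal standard-library implementation; the thresholds quoted are conventional rules of thumb, not standards, and the data is synthetic.

```python
import math
import random

def psi(expected, actual, bins=10, lo=0.0, hi=1.0):
    """Population Stability Index: a common drift heuristic comparing
    a live sample against a baseline. Rule of thumb (convention, not
    a standard): < 0.1 stable, 0.1-0.25 drifting, > 0.25 investigate."""
    def hist(xs):
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / (hi - lo) * bins), bins - 1)
            counts[max(i, 0)] += 1
        # small floor avoids log(0) on empty bins
        return [max(c / len(xs), 1e-4) for c in counts]
    e, a = hist(expected), hist(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

# Example: baseline model scores versus a later batch that has
# shifted toward higher values (synthetic data for illustration).
random.seed(0)
baseline = [random.betavariate(2, 5) for _ in range(5000)]
live = [random.betavariate(3, 4) for _ in range(5000)]
print(f"PSI = {psi(baseline, live):.3f}")  # compare against thresholds above
```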
KEY INSIGHT: The Swiss Cheese Model reveals that effective AI governance requires multiple overlapping safeguards—no single mechanism can address all risks, making coordinated defence layers essential for sustainable AI deployment.
Roundtable discussions revealed areas warranting deeper exploration: examining practical applications beyond hype cycles, exploring AI's reshaping of professional services and recruitment, investigating recursive challenges of AI systems governing themselves. The tension between science fiction metaphors and business reality — from Asimov's Laws to Gibson's Turing Police — suggests rich territory for understanding how cultural narratives shape governance approaches.
"Vibe coding" - developers relying on intuition rather than formal testing - raises quality concerns as AI-generated code proliferates. Predator-prey dynamics between AI systems attempting to game each other create computational arms races with no benefit. Energy consumption emerged as under-discussed: "the environmental cost of AI is conveniently ignored in most ROI calculations."
Shifting from traditional software licensing to AI-specific models represents uncharted territory. Organisations experiment with usage-based pricing, outcome-based contracts, hybrid models accounting for continuous retraining costs. "We're trying to price something that degrades like produce but sells like software—existing financial models simply don't work."
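One way to reason about the shift is a simple break-even: below some usage volume, paying per call beats a flat licence. A sketch with invented figures:

```python
# Hypothetical break-even between a flat annual licence and
# usage-based pricing. All figures are invented for illustration.
flat_licence_per_year = 50_000
price_per_call = 0.02

breakeven = flat_licence_per_year / price_per_call
print(f"usage-based pricing wins below {breakeven:,.0f} calls/year")
# Outcome-based contracts go a step further: payment is tied to a
# measured result, shifting decay and drift risk onto the vendor.
```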
The Human Factor
Most concerning: junior roles disappearing before seniors gain AI-augmented productivity. "We're creating a generation that can use AI tools but doesn't understand fundamentals—what happens when tools fail?" This skills gap, combined with communication degradation where professionals find themselves "speaking like drones," suggests profound cultural shifts ahead.
KEY INSIGHT: The disappearance of junior roles before seniors achieve AI-augmented productivity creates a generational skills crisis: we are building a workforce that can use AI tools but lacks the fundamental understanding to cope when those tools fail.

Join the Conversation
This briefing note begins an ongoing dialogue about AI adoption and governance. SI Units and Z/Yen plan to continue this roundtable series, bringing together practitioners from diverse industries to share experiences and shape best practices.
"SI Units" is a specialist network science company. We are a high-energy group of researchers, specialising in bespoke projects connecting interdisciplinary expertise to produce robust intelligence for innovation translation, technology governance, technology roadmapping, and venture capital - www.siunits.co.uk.
Z/Yen is a commercial think-tank promoting societal advance through better finance and technology. Founded in 1994, Z/Yen specialises in research, development, and thought leadership in AI governance, financial services innovation, and distributed systems. The firm has been instrumental in developing international AI enterprise management standards, including ISO 42001, and partners with leading professional institutes to deliver AI education globally - www.zyen.com.
The views and opinions expressed in this article reflect the thoughts and opinions of the individual participants and are not necessarily those of SI Units or Z/Yen corporately.