I'm Afraid I Can't Do That, Dave - Why The Current Hysteria Over Artificial Intelligence?

Tuesday, 25 April 2023
By Bob McDowall


The current discussions about artificial intelligence (AI) are characterised by hyperbole and hysteria - such is the fashion surrounding the technology.

So-called prominent and successful “thinkers” and “thought leaders” conjecture that AI will either solve society’s problems or destroy society. The media increasingly report, with little evidence, that AI will threaten jobs, raise inequality and even pose an existential threat to mankind. The more hysterical propose a moratorium on AI research, which would stall significant progress in the technology’s development.

The hyperbole surrounding AI derives in part from the hucksterism of technology promoters, who have adopted an evangelical stance akin to religious belief (much as happened with crypto-currencies).

Inevitably, investors in AI are keen, out of self-interest, to see early and rapid adoption of the technology. The more excitable go as far as to claim that AI will solve many of the fundamental problems of society, and even that we may see a “singularity” in which humans merge with machines.

While automation of routine processes continues apace in both manufacturing and service industries, mass unemployment has not occurred. Labour productivity is not growing rapidly. Enterprise productivity is not growing even among logistics firms, which are supposedly at the forefront of productivity efficiencies.

The reasons for this lack of productivity growth are varied:

  • Most AI is embedded in vision systems rather than in wide-scale general applications.
  • Moore’s Law - the observation that the number of transistors on a microchip doubles roughly every two years - may well be reaching the end of its life cycle; investment in computing power is experiencing diminishing returns.
  • Many AI applications just aren't that innovative. AI has principally been deployed to fine-tune and refine existing products and services, not to introduce new ones.
  • Consumer demand growth has slowed, certainly in the Western Hemisphere, which reduces the impact of AI services.
  • AI services are built by learning from large amounts of data. Generating enough data to make the algorithms effective - or simply affording to hire data analysts - is not a simple exercise.
  • Much AI that appears to interact with customers online actually has human beings acting as puppet masters behind the scenes.

AI's small impact to date does not, of course, rule out larger impacts in the future. Unexpected and unanticipated developments, particularly around Artificial General Intelligence (AGI), may lead to the “rule by robots” envisaged by some soothsayers. But AGI is very different from contemporary AI, which is “reactive” and “limited memory”: based on algorithmic responses, data and machine learning.

Reactive and limited-memory AIs try to identify patterns in data in order to generate predictions. AI cannot currently replicate human intelligence and certainly has no “theory of mind”. Computers cannot copy human thought processes or emotions, and are highly unlikely to do so in the future; scientists and developers do not have sufficient insight into the workings of human thought. It is highly uncertain that we will build machines that can think like humans any time soon.
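To make that concrete, here is a minimal sketch (in Python, with an invented toy data sequence) of what “identifying patterns in data to generate predictions” amounts to in its simplest form: fitting a line to past observations and extrapolating it one step ahead.

    # A toy illustration of "limited memory" pattern-finding: fit a simple
    # linear model to past observations and extrapolate one step ahead.
    # The data sequence is invented purely for this example.
    import numpy as np

    history = np.array([2.0, 4.1, 5.9, 8.2, 10.1])  # past observations
    steps = np.arange(len(history))                  # time steps 0..4

    # Least-squares fit: the straight line that best matches the pattern
    slope, intercept = np.polyfit(steps, history, deg=1)

    # "Prediction" is simply an extrapolation of the learned pattern
    next_step = len(history)
    print(f"predicted next value: {slope * next_step + intercept:.2f}")

Everything such a system “knows” is the fitted pattern; there is no understanding, intention or theory of mind behind the output.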

So why are governments trying to produce national strategies for AI development while at the same time, led by the European Commission, potentially stifling that development by building a framework of premature regulation, rather than starting with “industry standards” which can be adapted, and even enshrined in regulation, once the risks are evaluated? Much of this harks back to Cold War notions of “Big Brother is watching you”, enshrined in George Orwell’s “1984” and in Franz Kafka’s novels of people’s lack of power and control over their lives and destinies.

Governments, in their haste to produce national AI strategies, have tried to balance these against regulatory controls, because applications based on algorithms that violate human rights (as defined by some jurisdictions) are already being developed. Now may be the time to talk, and to put in place standards and regulations that mitigate the risk of a surveillance society and other doomsday scenarios.

The USA and the EU, which historically have broadly shared principles regarding the rule of law and democracy, have taken the lead on AI regulation, but with differing objectives and guiding principles, which deliver different practical rules:

  • The USA focuses on procedural fairness, transparency and non-discrimination.
  • The EU focuses on data privacy and fundamental rights.

Common rules for digital services operating across continents will not be achieved, let alone a global consensus on the AI regulatory landscape.

Global regulation will be a challenge because different jurisdictions apply different ethical values. Under the EU’s approach, applications are broken down into three risk categories: systems that pose an “unacceptable risk”, such as the Chinese social credit application; “high risk” applications, like resume-scanning tools, which must adhere to legal requirements designed to prevent discrimination; and systems considered neither high nor unacceptable risk, which are left unregulated.
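Expressed schematically - a hypothetical sketch in which the tiers follow the EU’s draft approach and the example systems are illustrative only - the taxonomy looks like this:

    # A hypothetical sketch of the three-tier risk taxonomy described above.
    # The tiers mirror the EU's draft approach; the example systems are
    # illustrative only, not an official classification.
    from enum import Enum

    class RiskTier(Enum):
        UNACCEPTABLE = "banned outright"
        HIGH = "permitted, subject to legal requirements"
        MINIMAL = "largely unregulated"

    EXAMPLES = {
        "social credit scoring": RiskTier.UNACCEPTABLE,
        "resume-scanning tool": RiskTier.HIGH,
        "spam filter": RiskTier.MINIMAL,
    }

    for system, tier in EXAMPLES.items():
        print(f"{system}: {tier.name} ({tier.value})")

The point of contention is not the structure itself but where each jurisdiction draws the lines between tiers.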

Pragmatism - building empirical AI values and standards, reinforced by appropriate regulations - seems to be losing the battle against hysteria. Short-term fixes to mitigate the scenarios conjured by that hysteria have clouded judgement in this increasingly important area of technology development.

Bob McDowall
