The concept of immersive realities has been embedded in fiction for decades. Star Trek's holodeck is one well-known example, and The Matrix films are among the blockbusters imagining these future worlds.
For decades now, technological developments in Extended Reality (XR) have shown us some aspects of these "futuristic" worlds. An obvious example of a large industry that has grown with these technologies is gaming. Increasing computing power has seen the videogame move from simple animations to increasingly lifelike, and also fantastical, worlds. The move from screen-based single-player games to massive multiplayer worlds using augmented reality (AR) and virtual reality (VR) continues apace. For an explanation of the terms within XR, see here.
Nearly 30 years ago, I first saw a 2D virtual community where you could interact with others using an avatar of yourself. It worked on dial-up! That was WorldsAway. It certainly was worlds away from what is possible now. The concept of gamification has become more embedded in tech developments beyond the videogame, in areas such as education. Today, the big, bold vision for the future of the internet is the Metaverse. The term itself is 30 years old. Its supporters dream of us living and working in virtual realities without commuting. I have seen some fascinating demonstrations of its use in a wide variety of applications, including psychology, safety training, medicine and remote monitoring. I confess that I am enthusiastic about the potential in these and other areas.
However, as readers of my earlier posts in this series will realise, this comes with a "but", or a "however". When I read yet another report, as I do most weeks, about the take-off of the Metaverse and the value it will unlock this decade, I urge caution. One such example is here: a valuation of $800bn.
So, why am I sceptical? First, we do not seem to learn from past dead ends. 3D TV was going to be a big thing. I loved it and had one at home, much used. However, it failed to gain consumer traction. Perhaps the big daddy on the road to the Metaverse was Google Glass. I sat in on a Dragons' Den-style pitching session where two organisations presented their ideas for building businesses on the back of the Glass technology. Both envisaged millions of downloads within a five-year time frame. What happened? Many concerns were raised, including privacy, inappropriate use and safety, and wearers even became known as "Glassholes". Niche uses of this and related technologies have been found, but the big impact proposed still feels a long way off.
Nearly 20 years ago, Second Life was launched, and it is still around. It was a step forward in being able to show off the potential of immersive digital realities. At one conference, I remember a speaker outlining research showing that commerce in Second Life, "vCommerce", would be around $100bn inside five years. Spoiler: it hasn't happened.
Now, developments in the hardware and software of VR helmets since the launch of Second Life have been stunning, and their use in the games world is now well established. In particular, the developments since the launch of Oculus, now owned by Meta, in 2012 have been extraordinary. Even if you are close to the technologies, the pace of change at the technological level has felt exponential.
So what are the barriers to realising the dreams of alternative immersive realities in the world of work, rest and play? First, the challenge is not the technology but the consumer and business proposition. Put simply, do we want to live and work wearing an immersive helmet? How would such a world impact productivity?
There is a parallel here to concerns over the games world. For more than two decades there have been numerous reports suggesting that games do, or do not, affect personal health, sociability, educational standards and safety, or promote aggression or sexually inappropriate behaviours. I could take a report from 2005 on games, substitute "Metaverse", and both sides of the argument could be replicated easily. Gaming technologies have flourished against this backdrop. Could the same happen here, or are there other factors that might tip the balance of acceptability one way or the other?
The costs of developing high-quality content are still high, as they are with games. The structure of the games market, with a few blockbusters a year, casual games and much middle-market experimentation, may be the outcome here too. I don't claim to know. However, there are some issues that need to be understood to see where the limits may be. The first is human senses. Various attempts have been made to incorporate smell in media, such as Smell-O-Vision. It's fair to say that none have gone mainstream. So, how many potential applications need to go beyond sight, sound and some touch technologies?
On the latter, I tried on a dataglove 20 years ago. It worked, and it was fascinating to experience virtual touch. I'm still waiting for it to move beyond niche applications. Perhaps not surprisingly, the most common application suggested was porn!
The ability of software to get closer and closer to reality does raise some serious challenges. There is the concept of the uncanny valley: the notion that there is a point at which a near-real experience of a digital human provokes a cold reaction. I have former colleagues who have experienced such reactions and didn't like it. In an earlier blog post I mentioned Eddie Obeng's Qube environment. Eddie has deliberately chosen not to make the avatars more realistic as the technology has developed, and it is clear that this works in his field.
There are two aspects of human sight that I think may be important in understanding the rate at which we can converge on the Metaverse deliverables, and what the limits may be. The first is that the human eye has very fast but fuzzy peripheral vision, and central vision which is slower but more detailed. Throwing digital pixels at the problem only gets you so far. In the same way as some people experience travel sickness, nausea or disorientation in the real world, such experiences are still reported by some people after immersion in digital worlds. I don't know if there is an overlap here. Consider this: how many hours can a safety worker work in a virtual environment? What if, say, 5% get an adverse reaction very quickly? These human factors need to be better understood and researched if the commercial potential is to be delivered.
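To see why "throwing digital pixels at the problem" is so hard, a back-of-envelope calculation helps. The figures below are rough, commonly cited approximations (about 60 pixels per degree for foveal acuity, a field of view of roughly 200° by 130°, and a high-acuity foveal window of about 5°), not precise specifications, and the "foveated" case assumes an arbitrary coarse resolution for the periphery:

```python
# Back-of-envelope: pixels needed to match foveal acuity everywhere,
# versus rendering full detail only where the eye is actually looking
# (so-called foveated rendering). All figures are rough approximations.

FOVEAL_ACUITY_PPD = 60           # pixels per degree the fovea can resolve
FOV_H_DEG, FOV_V_DEG = 200, 130  # approximate human field of view
FOVEA_DEG = 5                    # high-acuity central window, ~5 degrees
PERIPHERY_PPD = 10               # assumed coarse resolution for the periphery

# Uniform rendering: foveal detail across the entire field of view.
uniform_pixels = (FOV_H_DEG * FOVEAL_ACUITY_PPD) * (FOV_V_DEG * FOVEAL_ACUITY_PPD)

# Foveated rendering: full detail in the small foveal window only,
# coarse detail everywhere else.
foveal_pixels = (FOVEA_DEG * FOVEAL_ACUITY_PPD) ** 2
peripheral_pixels = (FOV_H_DEG * PERIPHERY_PPD) * (FOV_V_DEG * PERIPHERY_PPD)
foveated_pixels = foveal_pixels + peripheral_pixels

print(f"Uniform:  {uniform_pixels / 1e6:.0f} Mpixels per eye")   # ~94 Mpixels
print(f"Foveated: {foveated_pixels / 1e6:.1f} Mpixels per eye")  # ~2.7 Mpixels
print(f"Saving:   ~{uniform_pixels // foveated_pixels}x fewer pixels")
```

The gap of well over an order of magnitude is why rendering only what the fovea sees is so attractive; but exploiting it depends on fast, accurate eye-tracking, which brings its own limits, as discussed below.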
One technology that has been developing over these past decades, and that may be part of some solutions, offers another limit: eye-tracking. It is simply not true that if you know in which direction someone is looking, you know what they are seeing. There is a famous experiment that demonstrates this inattentional blindness in an amusing way: did you see the gorilla? If you don't know it, you should. One example I came across was in a report on the remote driving of a semi-autonomous vehicle somewhere in Asia. I'm not sure that a driver 30 miles away, even with a high-quality image, can process what a local driver can on a real road, quickly and safely.
So, while I am enthusiastic about the potential of these families of advanced technologies, the age-old business questions still apply:
I’ll go and put my VR helmet back on and see a world where the £ is rising and the sunlit uplands are everywhere to be seen. Bon voyage.
Read the other articles in this series here: