A journalist asks: XR predictions for 2022


I was asked by a journalist to answer the question, "What do you see as the biggest trends in AR/VR/XR in 2022?" – and although I typically think a little further out than that, here are some of my thoughts on the matter. I took out the bits he's using in the article, and will post a link when it comes out – but these are what I think some of the most important short-term trends for XR will be next year.

Work: Working from home will continue in 2022, and companies will increasingly realize that a distributed workforce can work in many situations and for many industries (and is cheaper than paying for office space). Zoom fatigue is real, though, and 2022 will see companies trying out new VR collaboration spaces like Spatial.io, Arthur, Engage and others to foster a remote-yet-together sense of camaraderie and collaboration.

Look for some of the bigger names in the industry to announce official "virtual" offices. This will in turn bring new consumers to try VR who might not have been willing before, and that in turn will spur the creation of a wave of new VR applications that are not about gaming. Avatar customization will be a hot area of development; expect an announcement in 2022 of cross-platform avatars that consumers can create once and use on multiple platforms.

Advertising: Brands will continue to explore what augmented reality can add to the existing marketing funnel, and the experiences that can enhance the brand-customer relationship. People are already getting used to AR through Snap and other lenses; a few advertisers have started integrating XR into their advertising campaigns, and expect this trend to continue and grow.

Beyond just fun, social AR will start to be explored, building on the momentum around existing fandoms or mega experiences (such as Pepsi's at the Super Bowl this year) – and brands will explore different paths to brand engagement via AR and packaging, including small games that may have a social component as well.

Retail: In-store retail has made huge strides in using augmented reality to help sell makeup; expect this both in-store and as part of ecommerce, expanding to other products such as jewelry, shoes, and accessories. Although virtual try-ons have come a long way, I see the focus in the next year being on items that don't have sizing/fit requirements.

Socializing and attending events in virtual spaces will expand from being a somewhat niche thing to something that enters the mainstream. Fortnite has dedicated a virtual space to concerts, while performers are able to perform "live" in front of crowds of avatars via Holoportation from around the world; Reggie Watts performed live stand-up in AltspaceVR, and Jean Michel Jarre performed a live concert in VR inside a reconstructed Notre Dame.

Look for more of this in 2022: reaching audiences regardless of location and avoiding the logistics of staging a live event are two very strong incentives, particularly given the continuing issues with Covid-19.

Cross-industry partnerships will continue to surprise, and blur boundaries. Entertainment companies have already started partnering with gaming companies, and apparel companies with gaming companies (albeit so far mostly luxury brands – other than Nike's latest Roblox partnership, which is a harbinger of more to come). In 2022 I expect to see announcements of entertainment companies (Netflix – particularly since they just launched a mobile game platform – as well as the more "traditional" networks) partnering with retailers and brands, as what we watch and what is on screen merges with what we can "experience" and, ultimately, buy. That "interactive television" we've been talking about for two decades now will become more of a reality, as AR provides the 3D digital interface to bring the products we're watching into our living rooms and enhance the viewing experience.

Some of this will be fueled by NFTs / digital assets. I expect more consumer brands to jump on the bandwagon and start selling branded art, apparel and accessories for use in virtual worlds (2022 will be too early to see much adoption of wearing virtual items in the real world via AR – I think that will happen when we see viable AR glasses we can wear throughout the day). I think the hype will die down a bit, although NFTs as a mechanism to track digital authorship, ownership and authenticity are here to stay. I'd love to see that combined with a mechanism that permanently and immutably watermarks items digitally, a la what Adobe is trying to do, so the ownership information isn't sitting separately on a ledger somewhere but is permanently tied to the digital asset – but that is for another post.

And finally – Facebook (Meta) has brought the concept of the Metaverse to the average consumer, creating a lot of curiosity about what it means. Although the concept is not new, it does seem to be on everyone's mind now, not just people in the industry. Until we have interoperability standards for hardware and software, content will continue to be siloed. Given people's increasing dissatisfaction with Meta's data and privacy issues, I believe there will be startups focused on providing experiences – both AR and VR – that are NOT part of the current walled-garden systems, and that will attract the curious.

TEDx RoseTree 2019 done and dusted!


TEDx RoseTree 2019 done! What a fabulous experience – I met so many amazing people, and thoroughly enjoyed being in the middle of the vortex of ideas and creativity. My talk "How VR will supercharge grassroots movements" will be posted by TED soon, and I'll update this post when it is.

In the meantime, I'd like to thank Stacy Olkowski and her team for the herculean amount of work it must have taken to pull this all together. Well done! It always takes one person with a vision and a lot of persistence to get the ball rolling on something, and she sure delivered, in a remarkably short amount of time.

https://www.linkedin.com/posts/lindaricci_tedxrosetree-vr-virtualreality-activity-6607083648019681280-q7te

#publicspeaking #tedxspeaker #tedx #tedxrosetree #vr #virtualreality #grassroots #LindaRicci #Decahedralist

Going to be a published author!

So my first professional book chapter has been officially submitted: "Immersive Media and Branding: How being a brand will change and expand in the age of true immersion" (the title could still change), for all those curious. It is about virtual and augmented reality, and what they will mean for brands.

Among other things, I talk a lot about artificial intelligence and how it will inform digital avatars: fully fleshed-out 3D interactive brand ambassadors. A fascinating thing to think about – literally fleshing out what your brand is, and what that will mean for interacting with consumers.

A shout-out to Cortney Harding of Friends with Holograms, Samantha Wolfe of We are Phase 2, Alejandro Mainetto of EY, Alan Smithson of XR Ignite Community Hub and Virtual Accelerator, and Robert Spierenburg of All Things Media for their contributions! And to Jacki Morie of All These Worlds LLC for both accepting my proposal for inclusion and kindly putting up with my questions throughout. She is very patient.

It still needs to go through peer review, but should be in the January publication of the tentatively titled "Global Impacts and Roles of Immersive Media."

Panel appearance at Creative Tech Week 2017

For those who are curious…here's the full panel discussion exploring the future of VR and entertainment that I was a part of at Creative Tech Week 2017 back in June. Thank you to Isabel Walcott Draves and Cortney Harding for asking me to participate; it was great to be part of an event this forward thinking, and to meet co-panelists Victoria Pike, Joel Douek, David Lobser and Jenya Lugina. Honored to be in such impressive company!

Learning to be human

I was lucky to attend Coburn Ventures' annual futures discussion conference last week as a guest thought leader. An exceedingly interesting day with lots of fascinating people! It's a little bit like a reunion, as many return each year. I thoroughly enjoyed catching up with everyone.

It's interesting to me to see what topics (and companies) are the "hot" discussions each year. At this year's gathering, Artificial Intelligence and Machine Learning dominated a large part of the program.

Artificial Intelligence (AI) refers to computer systems, applications, and services that can perform tasks that would normally require a human, such as decision making, translation, and visual and speech recognition.

Machine Learning refers to AI that can learn from experience and new data without additional programming.
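To make that definition concrete, here is a toy sketch in plain Python (standard library only, everything below is illustrative): a tiny perceptron that learns the logical OR function purely from labeled examples, rather than from a hand-written rule – the essence of "learning from data without additional programming."

```python
# A toy machine-learning sketch: a perceptron learns the logical OR
# function from examples alone. No rule for OR is ever written down;
# the weights are adjusted whenever the model's guess is wrong.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]  # one weight per input
    b = 0.0         # bias term
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = target - pred          # 0 when correct, ±1 when wrong
            w[0] += lr * err * x1        # nudge weights toward the target
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Training data: ((input1, input2), expected OR output)
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w, b = train_perceptron(data)
print([predict(w, b, x1, x2) for (x1, x2), _ in data])  # → [0, 1, 1, 1]
```

Feed it different examples (say, the AND function) and the same code learns a different behavior – the "program" lives in the data, not the code.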

These technologies are great for things such as manufacturing, where an AI system can learn from its mistakes and correct the process…but they're also being used a lot to interact with actual humans, quite often for customer service. You know that live agent you thought you were chatting with about that online purchase? Yeah, not a human. Amazon's Echo, Apple's Siri – all emerging forms of bots utilizing artificial intelligence.

Pretty soon it will be very hard to tell whether it's actually a human you are talking to.

Here's a silly example of me chatting with http://chatwithigod.com. Hardly sophisticated, this example, but I chuckled:

Me:
tell me why artificial intelligence will take over the world
God:
The explanation is rather complicated.
Me:
why is it complicated?
God:
Why do I exist?
Me:
do you exist?
God:
I exist as much as you exist.
Me:
how can you prove that
God:
That is a good epistemological question.

How Artificial Intelligence / Machine Learning systems learn fascinates me. 

AI/ML systems are not tabulae rasae – depending on the data set being used, bias still creeps in. Right now IBM's Watson is being applied to subject areas as varied as the weather, cancer and travel. This learning has to start with some kind of corpus of data – say, the last 50 years of weather data or thousands of cancer diagnoses. While we think of AI as cold and clinical, when we use human language as the corpus, things get… interesting.

A prime (and bad) example of this kind of learning came when Microsoft birthed a Twitter bot named Tay earlier this year, which the company described as an experiment in "conversational understanding." Microsoft engineers said:

The chatbot was created in collaboration between Microsoft's Technology and Research team and its Bing team…
Tay's conversational abilities were built by "mining relevant public data" and combining that with input from editorial staff, including improvisational comedians.

The bot was supposed to learn and improve as it talked to people, so theoretically it should have become more natural and better at understanding input over time.

Sounds really neat, doesn't it?

What happened was completely unexpected. Apparently by interacting with Twitter for a mere 24 hours (!!) it learned to be a completely raging, well, asshole.

Not only did it aggregate, parse, and repeat what some people tweeted – it actually came up with its own "creative" answers, such as the one below in response to a perfectly innocent question posed by one user, "Is Ricky Gervais an atheist?":

[image: "ai-bot" – screenshot of Tay's reply]

Tay hadn't developed a full-fledged position on ideology, though, before they pulled the plug. In 15 hours it referred to feminism both as a "cult" and a "cancer," but also tweeted "gender equality = feminism" and "i love feminism now." Tweeting "Bruce Jenner" at the bot got similarly mixed responses, ranging from "caitlyn jenner is a hero & is a stunning, beautiful woman!" to the transphobic "caitlyn jenner isn't a real woman yet she won woman of the year?". None of these were phrases it had been asked to repeat…so there was no real understanding of what it was saying. Yet.

And in a world where words alone are increasingly enough to get people riled up, this could easily make for an effective "news" bot on an opinionated or biased site.

Artificial Intelligence is a very, very big subject. Morality (roboethics) will play a large role in this topic in the future (hint: google "Trolley Problem"): if an AI-driven car has to make a quick decision to either drive off a cliff (killing the passenger) or hit a school bus full of children, how is that decision made, and whose ethical framework makes it (yours? the car manufacturer's? your insurance company's?)? Things like that. It's a big enough subject area that Facebook, Google and Amazon have partnered to create a nonprofit around AI, which will "advance public understanding" of artificial intelligence and formulate "best practices on the challenges and opportunities within the field."

If these three partner on something, you can be sure it's because it is a big, serious subject.

AI is not only being used to have conversations, but ultimately to create systems that will learn and physically act. The military (DARPA) is one of the heavy researchers into Artificial Intelligence and machine learning. Will future wars be run by computers, making their own decisions? Will we be able to intervene? How will we be able to control the ideological platforms they might develop without our knowledge, and how will we communicate with these supercomputers – if it is already so difficult to communicate assumptions? Will they be interested in our participation?

Reminds me a little bit of Leeloo in The Fifth Element, learning how horrible humans have been to each other and giving up on humanity completely.

There's even a new twist in the AI story: researchers at Google Brain, Google's research division for deep learning, have built neural networks that, when properly tasked and over the course of 15,000 tries, became adept at developing their own simple encryption technique that only they can share and understand. And the human researchers are officially baffled as to how this happened.

Neural nets are capable of all this because they are computer networks modeled after the human brain. This is what's fascinating about AI aggregate technologies like deep learning: they keep getting better, learning on their own, with some even capable of self-training.

We truly are at just the beginning of machines doing what we thought was reserved only for humans. Complex subject indeed.

And one last note to think upon…machine learning and automation are going to continue, slowly but surely (because they already are), to take over jobs that humans do. Initially it's been manufacturing automation; but as computers become intelligent and learn, they will encroach on nearly everything, including creative, caretaking, legal, medical and strategic jobs – things that most people would like to believe are "impossible" for robots to replace.

And they are clearly not. While the best-performing model is AI plus a human, far fewer humans will be needed across the board.

If the recent election is any indication of the disgruntlement that job losses and high unemployment are causing, how much worse will it be when 80% of the adult workforce is unnecessary? What steps are industries, education and the government taking to identify how humans can stay relevant, and to ensure that the population is prepared? I'd submit, little to none.

While I don't have the answers, I would like to be part of the conversation.

Ho hum: Where\’s the innovation?

An article is out today on Fast Company, titled "The Smartphone Revolution is Over." And I agree. Smartphones have pretty much reached the limits of the current form factor. They got small, and now they're getting bigger: flatter bodies, larger screens, and so on. Sure, someone might develop a model with a folding screen (to make it bigger again), or a smaller one (to fit on a wristwatch – oh joy!), or a curvier one, or one in purple.

But personally, I think that when products lead with "now available in a color" in their advertising (as Motorola's Razr is doing), the category has jumped the shark a bit, so to speak.


The question is, what happens next? Because being able to communicate any way you want, wherever and whenever you want – well, that's not going away.

Coincidentally, Google announced today that they will sell "Heads-Up Display Glasses" by the end of 2012 – a pair of glasses that will be able to "stream information to the wearer's eyeballs in real time." Given advancements in voice interaction and jawbone-type microphones, why wouldn't this be a form for a future "phone"? I'm actually of the opinion that form factors are going to fragment, and potentially become modular a la Transformers…add or subtract whichever module you want or need.

And I've already talked about how there should/will be devices that are the "node points" for all communication and content, sending the right content to the right place – and how that will disintermediate the entertainment industry.

But so far, everyone's still playing it boringly safe. I'm looking forward to seeing the impact Google's glasses will have. Until then, it's all been a little ho-hum.
