Reality, what is it really? Exploring Augmented Reality

\"\"I love the concept of augmented reality. I mean, isn’t watching Avatar in 3D Imax so much better than the gray reality when you come home to look at your walls?

Don’t you love the colors! And can’t you feel your muscles twitching as you mentally jump from psychedelically colored palm frond to palm frond along with the Na’vi? When I got home after the movie, all I could do was stare at my (boring) walls and wonder, “where are my white floating squids?” Uch. Reality is tough, gray, cold – well, “real”.

But seriously, I think augmented reality has the potential to be the next mass (and I mean, MASS) addiction after social networking.

Currently, every discussion around it seems to focus on the information it will bring. As interesting as it would be to have directions overlaid onto my wanderings (directly onto my retina, or indeed the optic nerve at some point), I think another obvious application is more akin to gaming in nature.

\"\"
Your neighbors, the Smiths.

Imagine you’re just in one of those moods, and instead of having to look at all the “regular” faces you pass on the street (gray, dour) you could instead decide that today is “sea monkey day”. Seriously, you’re in the mood for sea monkeys. So you program your “sea monkey setting” into your yet-to-be-determined data input module and voila! Everyone has a sea monkey head.

It’s the ultimate version of beer goggles.

The program could generate facial differences by interpolating from real faces, or by pulling data from various public profiles: sentiment analysis of your current Facebook status infers “bad mood”, and an unhappy (but potentially comical) sea monkey face is projected. Etc. etc. You get the picture.
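To make the idea concrete, here is a toy sketch in Python. The `sentiment_score` stands in for the output of whatever sentiment-analysis step runs on the status update; the score range, thresholds, and face labels are all invented for illustration.

```python
# Toy sketch: map a (hypothetical) sentiment score from a status
# update to an avatar overlay expression. Thresholds are made up.

def sea_monkey_face(sentiment_score):
    """Map a sentiment score in [-1.0, 1.0] to an overlay expression."""
    if sentiment_score < -0.3:
        return "unhappy (but comical) sea monkey"
    if sentiment_score > 0.3:
        return "grinning sea monkey"
    return "neutral sea monkey"

print(sea_monkey_face(-0.8))  # unhappy (but comical) sea monkey
```

A real system would of course source the score from an actual classifier and render the face, but the mapping step itself is about this simple.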

Another idea: being able to set your own markers so that AR programs interpret your data in a certain way that day. In a flirty mood / want to chat? Advertise with a certain color (how about green face = available). We could color-code the world and communicate without any words at all. After all, if our information from a wide variety of sources is going to be broadcast anyway (ref: http://lindaricci.com/01/04/not-just-a-pretty-face), why not control what we put out there in this way?

This could be seriously addictive. And seriously lucrative from an entertainment merchandising standpoint. Think about it: now I don’t have to just leave the Na’vi behind when I get home; I can superimpose licensed Na’vi images on my whole day. All I need is some giant fronds (why not my office chair??)

It makes sense as part of the “personalization” trend: everyone wants (information) how they want it, in the way they want it. How difficult is it to imagine that this will also include superimposing our own desires for what “reality” will look like that day?

Once it happens, would you ever go back to just seeing things the way they “are”? I don’t think so.

Not just a pretty face in the crowd: The future of Visual Search

\"\"I\’m fascinated with the potential for visual search a la Google Goggles. It\’s one of the newest ways to search and at the forefront of the next generation: it allows you to search from your cell phone by snapping a picture, and returns information about the building, object, business, etc. (true augmented reality). I first used it when passing a historic building, and was curious about it. My friend pulled out his phone, snapped a picture, and voila! – information about what it was, the architect, date and style, etc. So neat that I think I actually squealed.

I’ve since used it again, with varying success. It’s definitely still an emerging technology, but over time the database of images and its capabilities will improve. Want more info on a product? Take a picture. Need info about a business? Photograph the storefront. Put simply, this thing packs some serious power, and its capabilities stretch far.

I personally think that when it does improve (along with voice-interactive software), it will become as indispensable to everyday life as cell phones, texting, and search engines have become.

But then I started thinking about eventual convergences, and the inevitable trajectory it will take: integration with facial recognition software and other data points.

\"\"

Facial recognition software has seen a huge influx of cash and interest since 9/11, for security reasons. It’s here, it’s improving, and pretty soon anonymity will be completely obsolete, if it isn’t already – at least to the companies that scan airport passengers, to law enforcement, and to others who have made it a goal.

We live in an era where an overwhelming amount of data exists on each of us, from our social networking connections and comments, to our shopping habits at the supermarket. Cell phone usage, online searches, cookies on sites visited, credit card purchases – all of these create data which builds a picture of who we are.

But currently, these are still siloed. The grocery store isn’t matching your checkout purchases with your Pandora list, identifying friends of yours on Twitter who are most likely to share your taste, and then using the data to target them with advertising.

One of these days, though, facial recognition software will be one of the links connecting the dots between who you are and other data points such as your Facebook profile and your Pandora list. At that point, if someone wants to know who that cute guy sitting at the next table is, all they will have to do is take his picture – and know who he is, what music he likes, his address (courtesy of whitepages.com), books he’s bought (thanks to amazon.com), his house value (zillow.com), online subscriptions, health risks based on his grocery purchases, etc. etc. Spokeo.com and a few others are baby steps towards data aggregation – crude, often incorrect, and using imprecise identifiers – but it is the next logical step in data mining: analysis cutting across collection points, as opposed to little ponds.
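The “connecting the dots” step is, mechanically, just a join on a shared identifier. A minimal sketch in Python – every silo, name, and value below is invented, and the identifier stands in for whatever a face-recognition match would return:

```python
# Three made-up data "silos", each keyed by the same identifier
# (the name a hypothetical face-recognition match returns).

profiles = {"john_doe": {"music": ["jazz", "indie"]}}
purchases = {"john_doe": {"groceries": ["kale", "energy drinks"]}}
listings = {"john_doe": {"home_value": 420000}}

def aggregate(person_id, *silos):
    """Merge each silo's record for one person into a single dossier."""
    dossier = {}
    for silo in silos:
        dossier.update(silo.get(person_id, {}))
    return dossier

print(aggregate("john_doe", profiles, purchases, listings))
# {'music': ['jazz', 'indie'], 'groceries': ['kale', 'energy drinks'], 'home_value': 420000}
```

The hard part in practice isn’t the merge – it’s resolving that all these records refer to the same person, which is exactly what facial recognition would supply.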

This scenario – inevitable as it is – obviously has many potential pitfalls. It’s great for companies (I’d advise anyone with a talent for numbers to consider a career in data modeling!), but it is a mixed bag for consumers. The privacy issues are obvious, but those aside, the personalization that the market is increasingly demanding is impossible without data mining and good predictive capabilities. On the one hand, people are uncomfortable with their data being gathered (not that this wasn’t always happening – it’s just more extensive now); on the other, good data mining will ensure that people are targeted with offers and services that are interesting and relevant to them.

It’s a teetering tightrope walk. As a business strategist/consultant, I work with clients to develop strategies that take advantage of all that is legal and effective – and (personally) I always try to do so with integrity. As consumers, we should be trying to influence privacy legislation, to ensure that this future not only makes our lives easier, but does so safely. The challenge is that data knows no national boundaries, so what effect will legislation be able to have? I don’t have the answer; I only want to add to the discussion.

Disruptive cataclysms? The impact of rapidly changing technology

\"\"

Technology – and the “rapid changes” everyone is talking about – is being hailed as a disruptive force. Most recently, Mark Zuckerberg used the term to describe the future business landscape, and how Facebook (or rather, erm, “social networking”) was at the forefront of the next generation of businesses.

But there are two levels where “disruption” is happening: not only at the business level, but also at the consumer level. I’m going to stick to consumers in this discussion, since I’m constantly hearing about people adapting to the “rate of change” – or rather, about the (perceived) difficulties this is bringing.

Technology has brought neat things to the average person’s life at a dizzying rate – the ability to chat 24 hours a day with “friends”, to communicate instantly in a few different ways – and it has rendered getting lost obsolete.

It’s brought geographically dispersed people with niche interests together (You knit clothes for your pet goat?? Me too!), brought us exotic food all year long, extended our lives, and for the most part – kept us healthy. The world has become infinitely smaller. We can walk and talk and bank and read and chat and pat our heads while rubbing our tummies and drinking our coffee to go…

But it’s also (among other things) made us work around the clock (well, in the US anyway), and created whole new areas of interaction etiquette that are, as yet, still being defined. And don’t get me started on online dating.

\"\"I suppose to many people it does indeed feel like it is moving too rapidly (with the resulting frankensteinish stories on the news, today it\’s \”PASTOR SAYS FACEBOOK IS THE GATEWAY TO SIN!!! – crikey), but I keep returning to my core assertion, though, which is less flamboyantly sexy than many other who are predicting all sorts of new societies and seismic level cultural shifts as a result: technology only enables and enhances what we already do. So while I don\’t subscribe to the dystopian future where our computer overlords rule us through our dependency on them, I also don\’t believe that some huge shift in basic humanity is going to happen as a result.

I see one of two potential paths. Either:

  1. The impact of perceived rapid changes in culture will create a pendulum swing back to the uber-conservative, as people retreat to comfort zones; I mean a serious Luddite movement, complete with agricultural faith-based communities and prairie dresses (god help us, and excuse the pun). Rejection of modern life in full flower.
     …or…
  2. People will embrace technological changes as they become an increasingly invisible driver of their everyday experiences, not forcing any cataclysmic reaction whatsoever. And in a generation or so, the “fast pace” (ubiquitous, instant connectivity) will be all they’ve ever known – eliminating the desire to “return to a simpler life”.

My guess is some will go one way, some the other. There’s never one recipe for all personalities. Those who crave routine and tradition and fear change will retreat. The others will continue to embrace the double-edged benefits of our brave new world.

You can’t force people to accept new technology, though, or the changes to their lives that come with it, unless they want it. I’m a true believer in “you can lead a horse to water, but you can’t make it drink,” so to speak. If a technology that’s introduced is not adopted, it will fail – regulating the “speed” of change naturally. It can’t be forced on the unwilling. People are flocking to smart phones because they speak to a basic human need: to communicate, and increasingly, instantly.

While I’m on a roll, though, I’m actually going to challenge the entire assumption: that change is happening “so rapidly”.

I think the major shifts have already emerged:

  • Social networks becoming the personal authorities (requiring brands to figure out how to communicate and relate, versus messaging “to”)
  • Ubiquitous/instant communication (which will require cross- and trans-platform technologies and infrastructure)
  • Personalized information on demand (requiring good data and effective predictive algorithms)

Businesses are incrementally improving on all of these (it’s still in its infancy), figuring out how to seamlessly integrate them and how to gather, track, and correlate data properly to best “serve” the customer (maximize profit). But I don’t believe there will be any great “leaps” above and beyond these – no major paradigm shifts that leave these concepts in the dust – because these speak, at a DNA level, to the most basic human needs: affiliation with a group (love), and the powerful human ego.

So disruptive? For the business forced to figure out how to compete and survive in an era of decreasing product life cycles, definitely.

But to the consumer, who is ultimately holding the reins, it only feels disruptive right now because it’s still all so disjointed – and visible – and confusing. As it all starts to work better and becomes more invisible and seamless, not so much. So the future money will be earned by the companies that can make the experience as close to “breathing” as possible – ideally consumers won’t even notice it’s there, they’ll just have the experience they want.

So that great human revolution won\’t be necessary; we\’ll all be too busy catering to our egos: chatting, opining,  connecting, and – *sigh* – blogging.

My way? Branding in a personalized world

I follow comments on articles and posts with not-so-always-as-unattached-as-it-should-be bemusement; quite often the article/post is more of a catalyst than an actual source of information.

I’m struck by a thought tonight though, after a particularly vitriolic back-and-forth session on a Daily Show post: what will “authenticity” look like in the future, and how will we recognize it?

There used to be “trusted” authoritative figures – Cronkite, Brokaw, those types. But with the advent of “social media”, our trusted advisers are friends, or others in our community (digital or otherwise). Fine. But as the noise goes up digitally (increasingly, everyone has a loud opinion), will it perversely create closer “real life” ties as a “safe” refuge from the melee?

And if brands are currently scrambling to take advantage of the current channels/technologies and create relationships with customers, instead of pushing messages (a paradigm shift that very few have managed to successfully understand yet), how will they deal with this change?

Some say “branding” will be more important than ever, and in the short term perhaps they are right. But there is a whole generation of people who will not have brand relationships as we did for the first part of our lives; they are developing their own “digital communities” from the beginning, growing up swimming in a sea of constant, instant communication. How will they find and become loyal to brands if the communities are formed – and distrustful of “outsiders” – from the start?

For that matter, how will we be exposed to alternative ideas & philosophies, something critical to maturity and intellectual growth?

I was thinking about this a while back, because of services like Pandora. So neat, really: just start it with a few artists/songs you like, and then (theoretically) never have to hear another song you don’t like. Personalization at its best.

The problem with that is, there are whole genres of music I’ve never even heard of, and end up liking when someone makes me aware of them. How is that going to happen if, from the beginning of my life, music has only been served up “my way”? How will I know what “my way” is? Particularly if I only interact with groups (virtual and otherwise) I already know and “trust”.
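In recommender-system terms, this is the exploitation/exploration trade-off: a service that only “exploits” known tastes can never surface a new genre, while even a small exploration rate occasionally does. A toy Python sketch (the genres, tastes, and rates below are all invented; I’m not claiming this is how Pandora actually works):

```python
import random

# A recommender that, with probability explore_rate, suggests a genre
# outside the listener's known likes; otherwise it serves "my way".

def recommend(known_likes, all_genres, explore_rate, rng=random):
    """Pick a genre, occasionally exploring outside known likes."""
    unknown = [g for g in all_genres if g not in known_likes]
    if unknown and rng.random() < explore_rate:
        return rng.choice(unknown)   # serendipity: something new
    return rng.choice(known_likes)   # "my way": more of the same

genres = ["indie", "jazz", "fado", "gamelan"]

# Pure "my way" never leaves the bubble:
bubble = {recommend(["indie", "jazz"], genres, explore_rate=0.0) for _ in range(100)}
# A little exploration eventually surfaces fado or gamelan:
open_ears = {recommend(["indie", "jazz"], genres, explore_rate=0.3) for _ in range(300)}
```

With `explore_rate=0.0`, `bubble` can only ever contain indie and jazz – which is exactly the worry: you never learn that “my way” might include fado.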

A lot to ponder.

Just a thought.

Update 11/16/10: Ted Koppel wrote an article for the Washington Post today titled “Olbermann, O’Reilly and the death of real news”, in which he discusses the lack of “trusted authority” in a fragmented, 24-hour media world. Here’s my favorite quote:

Broadcast news has been outflanked and will soon be overtaken by scores of other media options. The need for clear, objective reporting in a world of rising religious fundamentalism, economic interdependence and global ecological problems is probably greater than it has ever been. But we are no longer a national audience receiving news from a handful of trusted gatekeepers; we\’re now a million or more clusters of consumers, harvesting information from like-minded providers.

I love it when famous people agree with me 😉

Disintermediating the entertainment industry

I’ve been thinking a lot about “entertainment content”, people’s increasing demands for what they want / when they want it, and the proliferating host of gadgets on the market. I mean, we have a “phone”, a “TV”, an “iPad”, etc. etc. There have been fits and starts towards true convergence for years now. I wasn’t sure if the convergence drivers were going to be the computer people (the Origami micro PC was an attempt a few years ago), the phone people, or the television people – as it turns out, the “phones” are where convergence has come from.

At any rate, and despite all the convergence gadgets, entertainment content is still being delivered in a really channeled manner. I pay for TV, for my Internet-enabled phone (where I can stream TV shows), for Internet access, and then for Netflix and its streaming entertainment – and for the most part these four (TV, phone, Internet, Netflix) are four access points to the same content. This is obviously not efficient.

I’m waiting for the day when I pay for one access point – and I think it will be the phone. As soon as what we now call a “phone” can act as the funnel point for my entertainment needs and send the information to whatever output device is set up to interact with that data, the need for all the others will vanish. So: I will choose what I want to watch (when I want to watch it), tell the phone to stream it, and output it to the large screen on my wall. Or I will tell it to connect to a keyboard and an external screen, and work on a Word document.

I understand that this all has challenges: besides the obvious current bandwidth issues of the “phone” device (which can be solved), there are the challenges that the entertainment content people (20th Century Fox, etc.) face in their current agreements with the existing/legacy distribution channels. The entire industry will be turned upside down, and every tier of the chain is madly scrambling to figure out how to manage what’s happening. But it will happen, because the people who have the most to gain – the phone companies – will push for it, with the fledgling support of the consumers who are flocking to smart phones to back up their push.

I’ll talk about how cloud storage is also going to enable these developments in another post… as well as how consumer demand for instant gratification is one of the biggest drivers behind all of this.

And also the ramification for brands and advertising. Which is huge.

Update 1/8/2011: At CES, Motorola unveiled the Atrix Superphone, which has docking capabilities that allow you to use it with a mouse and a keyboard as if it were a normal computer (with 4G capabilities and a dual-core Tegra 2 processor – yee haw!). A phone can now run Word, Excel, etc. AND communicate AND surf the internet AND stream entertainment – the only things keeping this from happening were interface and processing power. With cloud storage, local memory won’t be needed (you stream it from your virtual memory on demand). Watch out, laptop manufacturers – this is going to make you as obsolete as you made traditional computer towers.
