Augmented Reality: The Next Inflection Point in Computing

Ben Fathi
Feb 27, 2018

Once in a while, a new technology arrives on the scene that profoundly changes the way we interact with computers. The graphical user interface and the mouse were the inflection points that brought about the personal computing era. Touch played the same role for the smartphone era, later assisted by voice recognition and natural language processing.

I claim that augmented vision will do the same in the next few years, powering a new generation of human-machine interaction that will bring forth the next era of computing for consumers.

Augmented Reality, delivered through eyeglasses and powered by image recognition software, will enable an entirely new and compelling set of social and commercial experiences. Google Glass was not wrong; it was just slightly ahead of its time: the “Apple Newton” of its generation. It, and its ilk, will be back, hopefully in a less dorky guise.

Even before we get to truly “smart” AR glasses, though, we’ll get used to products like Facebook’s new Ray-Ban glasses as an on-ramp to the social media world. You don’t need much more than a tiny camera and an internet connection (available through the phone in your pocket) to take selfies and document spring break shenanigans. Whether from Oakley or Gucci, cool brand-name glasses like these will become commonplace for a new generation, privacy be damned. The old folks holding their iPhones out in front of them to film their kid’s school play, or wielding a selfie stick (yuck!), will look quaint, if not annoying, by comparison.

Imagine shopping in a store while having instant access to relevant data: This item received 3.5 stars on Amazon and is available from two vendors online at the following prices. Snap your fingers once to purchase from Amazon, twice to buy from Walmart.

Narration is a poor substitute for the same data superimposed visually on your retina, but you get the idea. I worry about how brick-and-mortar businesses can compete effectively in such a world.

This is no longer a science fiction scenario. AR prototypes already exist on smartphones — assuming, that is, you’re willing to walk around the store holding the phone out in front of you like a nerd. The glasses, assisted by the phone in your pocket for compute, storage, and networking, will solve the form factor problem and will be the hot Christmas item one of these years.

Imagine walking through a neighborhood: This house is for sale at $850,000. It’s been on the market for 23 days, and its price was reduced by $50k last week. That house next door sold for $825,000 last month. Here are some interior photos.

Imagine bringing home a piece of furniture from IKEA, laying the components out on the floor, and then being led through the assembly process as your AR glasses recognize the parts and point out every step. Google Glass 2.0 is already offering similar scenarios for manufacturing, medical, and other professional fields. It’s a small step to see how the same technology could be used for consumer scenarios.

Imagine an option that allows you to walk around your hotel room in Paris before you ever step on the plane. Imagine a visit to the Louvre where your glasses identify each painting and offer relevant commentary.

To be clear, all this data is available on the web today. I am merely suggesting that it would be a giant leap in human-machine interaction if and when it’s presented to us right through our glasses as we walk around in the real world.

Each and every object or person you see through the glasses can, and eventually will, present itself as a first-class entity in this new augmented reality world. That’s a huge monetization opportunity as well as a disruptive ecosystem play.

Even more significantly, it’s an opportunity for us to revisit and fix the privacy mistakes we made in previous generations of computing — an opportunity that comes along no more often than once a decade. Facial recognition algorithms running on your phone/glasses will be limited to your contacts and will require cloud assistance for strangers. Those cloud services, in turn, can offer opt-out mechanisms for those of us who don’t want to be recognized. Worries about abuse of facial recognition won’t stop AR from happening; besides, plenty of interesting scenarios are possible for objects and experiences even without that capability.

It would be a mistake, however, to see a pair of glasses as simply a “heads-up” display slaved to your phone. Crucially, AR glasses pivot the user interface paradigm away from applications and toward objects and people — both real and imagined. What is needed is an operating system that similarly pivots the user experience to this new augmented world and relegates the phone in your pocket to a supporting role.

Virtual-Reality-based games (in the sense of fully synthesized environments) will continue to be important, just as games were for pushing the performance envelope in the PC era. But Augmented Reality, I predict, will eventually drive a much larger ecosystem by offering more compelling and relevant experiences and creating a larger opportunity for monetization. If you don’t believe me, all I have to do is remind you that Pokémon Go was downloaded 750 million times and generated $1.2 billion of revenue in less than a year.

I can’t wait to attend my first augmented reality concert or, better yet, to spend an hour with my grandson when he virtually visits our home.


Ben Fathi

Former {CTO at VMware, VP at Microsoft, SVP at Cisco, Head of Eng & Cloud Ops at Cloudflare}. Recovering distance runner, avid cyclist, newly minted grandpa.