Enterprises Are Leveraging Smart Glasses To Be More Efficient

Jay Kim, CTO, APX Labs ©Robert Wright/LDV Vision Summit

This panel discussion is an excerpt from our LDV Vision Book 2015.

Jay Kim, CTO, APX Labs

Microsoft and Magic Leap and the baby... I don’t know. These are three really, really tough acts to follow. I’m going to do my best. A lot of what’s been presented, I think, really deals with awesome content and awesome stuff. What I’m here to talk about is what my company, APX Labs, is doing in the enterprise AR space, specifically drilling down into a form factor of devices called smart glasses. To start, I’d like to show a really short clip of how one of our customers is using Google Glass and AR in their wire harness assembly operations.

Video Voiceover: Okay, Glass, start a wire bundle. Number 2-0-1.

This is a very high-level example of a real-life use case of how these things are being used today. Obviously, with that dark screen at the top corner of the user’s eye, this is far from Minority Report. This is far from Terminator vision. But as Microsoft and Magic Leap are also showing in the form of videos, it’s not too unreasonable to think that there’s a lot just around the corner from where we are today. Right now, where businesses are finding the most value in smart glasses, and more broadly within the AR context, is in delivering the information they already have in the systems that they have spent billions and billions of dollars and decades of time building. In the case of Boeing, imagine how much workflow actually exists within their databases. Getting that to the people where the work is being done, so they can access it in a heads-up and hands-free fashion, is a really, really powerful concept.
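
To make the pattern in that clip concrete, here is a minimal sketch, not APX Labs’ actual software, of serving existing work-instruction data as short, glanceable cards a worker can step through hands-free. The bundle number echoes the clip; the step text and data layout are invented for illustration.

```python
# Minimal sketch: existing work-instruction data rendered as terse cards
# suited to a small near-eye display. Invented data, hypothetical layout.

WORK_INSTRUCTIONS = {
    # "201" echoes the "wire bundle 2-0-1" called up by voice in the clip;
    # the steps themselves are made up.
    "201": [
        "Route red 18AWG wire from pin A3 to pin B7",
        "Crimp terminal and verify with pull test",
        "Tie bundle at 40 mm intervals",
    ],
}

def next_card(bundle_id: str, completed_steps: int) -> dict:
    """Return the next instruction as a one-glance card: a short title line
    with progress, and a single body line."""
    steps = WORK_INSTRUCTIONS[bundle_id]
    if completed_steps >= len(steps):
        return {"title": f"Bundle {bundle_id}", "body": "All steps complete"}
    return {
        "title": f"Bundle {bundle_id}  step {completed_steps + 1}/{len(steps)}",
        "body": steps[completed_steps],
    }

# Simulates "start a wire bundle 2-0-1", then stepping through hands-free.
for done in range(4):
    print(next_card("201", done))
```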

And from a market opportunity perspective—and these are numbers just within the US across four representative industries; obviously there are a lot more industries this can scale up to—we are talking about 12 million people who can access technology like this. Even in the crude and rudimentary way that I just showed you in the previous short clip, we’re able to deliver information based on the user’s context, which is where the vision piece comes in. Vision plays a role in driving enhanced knowledge of user context, and in delivering things like next-generation user interfaces and heads-up, hands-free access to information.
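
A rough sketch of what “vision driving user context” can mean in practice: a recognizer turns the camera frame into an asset identifier, and that identifier selects which record to surface. Here `recognize_asset` is a hypothetical stand-in for a real barcode decoder or object detector, and the asset IDs and records are invented.

```python
# Sketch: camera frame -> asset ID -> context-relevant record.
# recognize_asset() is a hypothetical stub; a real system would run barcode
# decoding or a trained detector on the glasses' camera feed.

ASSET_RECORDS = {
    "TURBINE-07": {"status": "vibration alert", "last_service": "2015-03-02"},
    "PALLET-4410": {"status": "ready to pick", "destination": "Dock 3"},
}

def recognize_asset(frame: bytes) -> str:
    """Hypothetical recognizer; hard-coded so the sketch runs end to end."""
    return "TURBINE-07"

def context_card(frame: bytes) -> str:
    """Look up the record for whatever asset the wearer is looking at."""
    asset_id = recognize_asset(frame)
    record = ASSET_RECORDS.get(asset_id)
    if record is None:
        return "No record for asset in view"
    fields = ", ".join(f"{k}: {v}" for k, v in record.items())
    return f"{asset_id} | {fields}"

print(context_card(b"fake-camera-frame"))
```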

You can do that in logistics settings—for example, a picker in a warehouse. Similarly, in a field-service type environment: if I’m out there servicing wind turbines, I don’t necessarily want to have to go up to each of the different panels and systems to access data. I can now do that by looking at the different kinds of devices that are out there. And then, of course, in health care, the upside is obvious: you could be saving lives. Automotive manufacturing—complex assemblies and things like that—is where we as a company have seen the most traction, because the return on investment associated with this kind of technology can be most easily quantified. If you are saving seconds off of a simple task, or reducing the error rate of a complex assembly so you don’t have to go and get rework done, there is a very obvious dollar amount attached to all of this.
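
Since the return-on-investment argument is essentially arithmetic, a back-of-the-envelope calculation makes it concrete. Every number below is invented for illustration; only the structure, time saved per task plus rework avoided, comes from the talk.

```python
# Back-of-the-envelope ROI sketch for the two savings named above.
# All inputs are illustrative assumptions, not figures from the talk.

def annual_savings_usd(tasks_per_day: float, seconds_saved_per_task: float,
                       labor_rate_per_hr: float, error_rate_drop: float,
                       rework_cost_usd: float, work_days: int = 250) -> float:
    """Time savings from faster tasks plus the cost of rework avoided."""
    tasks_per_year = tasks_per_day * work_days
    time_savings = tasks_per_year * (seconds_saved_per_task / 3600) * labor_rate_per_hr
    rework_savings = tasks_per_year * error_rate_drop * rework_cost_usd
    return time_savings + rework_savings

# e.g. 120 assemblies/day, 20 s saved each, $45/hr loaded labor,
# error rate down 0.5 percentage points, $80 average rework cost:
print(f"${annual_savings_usd(120, 20, 45, 0.005, 80):,.0f} per worker per year")
# -> $19,500 per worker per year under these assumed inputs
```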

We are big fans of HoloLens, as far as the devices that have been announced go, and I was fortunate enough to recently try it on at Build. From a sensor technology perspective, this is one of the most powerful offerings, spanning hardware, software, and the integration of both into a wearable form factor. I can’t stress that last part enough. It’s really, really impressive what Microsoft has done: essentially jamming in a couple of Kinects’ worth of sensors along with advanced cameras, IMUs, and other kinds of radios and processors, and actually making it wearable. That singularly is the biggest challenge that a lot of the industry players who have tried to have a product offering in the smart glasses space have faced. It is really, really hard to cram in the requisite set of sensors, to gather the proper user-level context, and then to have it be somewhat comfortable. Maybe this isn’t a mainstream consumer device just yet, but certainly within the context that we play in, which is industrial applications, there is a lot of appetite for this specific form factor and the capability that it offers. This is tremendously exciting for us. We basically consider this, as far as state-of-the-art goes today, the most advanced device that’s out there. Look at the number of cameras that are there. It is impressive.

From an optics perspective, optics have been a little bit of a chicken-and-egg problem, in the sense that some of these things that I’m about to show you have existed—it’s just been really hard to drive the price and scale to a point where these can be deployed en masse. So, today what we have is really simple prisms, like you find in Google Glass, where light bounces off a reflector and then a polarizing beam splitter basically just mixes that external light with the light being driven from the projector.

This is the most common and probably the cheapest form that you can get to—a heads-up display or an AR kind of device. But obviously you are getting Coke-bottle kinds of glasses. You probably don’t want that in front of your eyes. And then you’ve got a product like Epson’s, for example, which uses a reflective waveguide. Still somewhat thick, but at least you are able to collapse the lens and effectively drive the field of view to get something that sits a little bit more over your eyes. If you think about these optics as something that’s going to have a 13-to-about-23-degree field of view, that’s still not a compelling user experience any way you look at it.
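
To see why a 13-to-23-degree field of view feels limiting, it helps to translate it into an equivalent screen size: a virtual image with horizontal field of view θ behaves like a screen of width 2·d·tan(θ/2) viewed at distance d. The viewing distances and the monitor comparison below are illustrative assumptions.

```python
# Translating field of view into an equivalent screen width at arm's length.
import math

def apparent_width_cm(fov_deg: float, distance_cm: float = 50.0) -> float:
    """Width of a flat screen at distance_cm subtending fov_deg horizontally."""
    return 2 * distance_cm * math.tan(math.radians(fov_deg) / 2)

for fov in (13, 23):
    print(f"{fov:>2} deg ~ a {apparent_width_cm(fov):.0f} cm wide screen at 50 cm")
# 13 deg ~ 11 cm, 23 deg ~ 20 cm: roughly a phone to a small tablet held out.

# For comparison, a 24-inch monitor (~53 cm wide) at 60 cm subtends about:
print(f"monitor ~ {math.degrees(2 * math.atan(53 / 2 / 60)):.0f} deg")  # ~48 deg
```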

So when we get to HoloLens, you’ve now got stereoscopic displays with the right kinds of sensors that are able to do accurate overlays over a somewhat limited environment. Of course, the specifications around the optics and their parameters are not public yet. The overall user experience being driven, coupled with vision and optics technology, is a generational leap over what we have seen to date. Then, of course, you’ve got Magic Leap and the very neat fiber-based modulation they are doing, where they are able to portray depth.

This is where the technology is going, and really, at the end of the day, the goal is to drive a lot of these optics and collapse them into something that is not too unlike the pair of glasses that I’m wearing.

Let’s talk about what this all means, more broadly. Smart glasses, from our perspective, are basically just a way to add a human element back into industry buzzwords like “Internet of Things” and “big data analytics.” IoT means a glut of connected sensors spewing out real-time data at rates orders of magnitude higher than what we are dealing with today. Of course, the analytics systems are going to have to keep up to be able to make sense out of all of this.

Where we see AR and smart glasses writ large coming in is in providing an interface and a mechanism for users to interact with those objects, and to do that using the most natural user interface. Fundamentally, there is value in the enterprise today, around the form factors that exist today and the workflows that exist today. Not too far from now, we will also see form factors from large companies that get closer to the look of ordinary glasses. There is no question in our mind that consumer adoption of this technology is just around the corner. It’s the start of a very, very exciting market.

[LDV Capital is an investor in APX Labs/Upskill. In 2021, Upskill was acquired by German remote connectivity software company TeamViewer.]