Yesterday I was lucky enough to get a chance to try out the much-vaunted HoloLens, a completely new device from Microsoft that provides “Augmented Reality”, as opposed to the virtual reality offered by devices like the Oculus Rift. The difference in wording may be subtle, but the difference in experience is quite large: augmented reality projects objects into the actual room you are in, rather than moving you into an entirely different world the way VR does.

HoloLens is quite the device. It can track the physical space around you very well, and unlike VR there is no requirement for markers or extra cameras to track movement in the room. It is completely self-contained, and that may be one of its biggest wins of all.

The device on hand was near-final hardware, and it looked exactly like what has been promised for some time. Although we did not get a chance to see the preview device in January, that unit was apparently nothing like what was presented at Build this week.

However, just like in January, we were not allowed to take any photos of the demonstration units, and all interaction with the device required us to lock our belongings in a locker before we could enter the room. They did, however, have a unit on display under glass for photo opportunities.

Let’s start with what they got right. Interacting with HoloLens is very easy. Only a couple of commands are needed, and gestures like the air tap were very simple to use; not once did I get a missed reading. That is extremely impressive considering the device is just watching my finger move in free space. When you want to interact with something, there is effectively a cursor in the center of your field of view that you rest on an object. The object is then highlighted, and there is no mistaking which object you are about to interact with.
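
To make that gaze-and-tap model a little more concrete, here is a minimal sketch of how a center-of-view cursor could pick an object and commit a selection. This is purely illustrative Python with made-up object names and a stand-in gesture check, not Microsoft’s actual API:

    from dataclasses import dataclass

    @dataclass
    class Sphere:
        name: str
        center: tuple   # (x, y, z) position in meters, relative to the room
        radius: float

    def air_tap_detected():
        """Stand-in for the real gesture recognizer; always reports a tap here."""
        return True

    def ray_hits_sphere(origin, direction, sphere):
        """Distance along the unit-length gaze ray to the sphere, or None on a miss."""
        v = tuple(sphere.center[i] - origin[i] for i in range(3))
        t = sum(v[i] * direction[i] for i in range(3))   # projection onto the gaze ray
        if t < 0:
            return None                                  # sphere is behind the viewer
        closest_sq = sum(c * c for c in v) - t * t       # squared miss distance
        return t if closest_sq <= sphere.radius ** 2 else None

    def gaze_target(head_position, gaze_direction, objects):
        """Return the nearest object under the center-of-view cursor, if any."""
        hits = []
        for obj in objects:
            d = ray_hits_sphere(head_position, gaze_direction, obj)
            if d is not None:
                hits.append((d, obj))
        return min(hits, key=lambda h: h[0])[1] if hits else None

    # One frame of the interaction loop: highlight whatever the cursor rests on,
    # then commit the selection when an air tap is recognized.
    scene = [Sphere("wall_note", (0.0, 0.0, 2.0), 0.3),
             Sphere("model_wall", (1.5, 0.2, 2.0), 0.5)]
    focused = gaze_target((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), scene)
    if focused is not None:
        print("highlight:", focused.name)   # unmistakable visual feedback
        if air_tap_detected():
            print("selected:", focused.name)

On the real hardware the ray comes from the head-tracking data and the tap from the hand-gesture recognizer, but the loop is the same idea described above: focus, highlight, then confirm with a tap.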

Another interaction method was the mouse: when looking at a PC, you can simply drag the cursor off the display and it moves into free space. In my demo, which was based on an architecture theme, this allowed me to interact with the model, move walls around, and change the design.

Another cool feature was the ability to leave virtual notes. Looking at a wall, I could see that someone had left me a note, and with a simple air tap I was able to hear what they had left. Then I could leave a note of my own on the wall for that person to see later.

Another win was the device itself. You put it on somewhat like a welding mask, and you tighten the band at the back of your head with a wheel. Hopefully the devices are fairly durable, because we were given quite a bit of help getting the device on and off, though that makes sense given the volume of people coming through the demo.

So what did it not deliver? The actual holograms had a very limited field of view. In the demos we had seen during the keynote, you could see holograms all around you, but the actual experience was nothing like that. Directly in front of you is a small box, and you can only see things inside that box, which means a lot of head turning to see what’s going on. In the construction demo I was given, I was supposed to look at a virtual “Richard,” and I was asked if I could see him. I could not. There was a bug with Richard: he was lying on the floor, stuck through a wall. I understand these demos can have bugs, but with the limited field of view it was very hard to find where he was.

[Image caption: This demo is almost nothing like what you actually see in the device]

The holograms themselves were very good, but they were so limited in scope that I can only hope more work is done there before the device goes on sale. There is a tremendous opportunity here, and it would be awful for it to be spoiled by poor hardware. Although I didn’t get a chance to see the January demo, I’m told by several people who did that the field of view was much better on those units.

So my expectations were not met, and I can attribute that to the demos that were provided online and during the keynote. What was shown on stage was amazing, but the actual experience was almost nothing like it.

One thing I wanted to know was what kind of hardware is inside, but there are no answers to that right now. The device itself looked good, it felt good, and the audio was good, but the main attraction still leaves a lot to be desired.

Comments

  • kyuu - Friday, May 1, 2015

    The limited FOV is definitely something that needs to be addressed before they release this.
  • Gunbuster - Friday, May 1, 2015

    I wonder if the limited field of view is really from a GPU constraint. More FOV = higher resolution = GPU that is not going to fit on a head mounted unit with a limited thermal envelope.
  • maecenas - Friday, May 1, 2015

    I suspect this problem will take several years / generations to address. You really have a worst case scenario here - anything interacting with your eyes in the real world is going to need some serious GPU power (human vision is very high resolution compared to computer graphics) and heat is going to be an issue on something strapped around your head, much more so than a phone or tablet.
  • uhuznaa - Friday, May 1, 2015

    I think you could either trade lower resolution for a wider field of view (depending on what is more important to you) or even render at a higher resolution in the center of your field of view and at a lower one outside of that. Our eyes work in a very similar way.

    Anyway, it seems that Microsoft fell to the temptation to over-promise and under-deliver. Getting people all giddy and then offering something that is rather bland isn't clever. MS really got me suspicious with the unrealistic demos; they were too good to be true.

    Still, this is cool technology, and if you manage not to compare it to the demos there will be some uses for it. As long as it can be used for games (and porn) nobody will complain very loudly. It's novel and potentially more practical than full-immersive VR (due to avoiding disorientation and nausea). Things like a wider field of view are easily addressed with better technology in a fairly linear way sooner or later.
  • Alexvrb - Friday, May 1, 2015

    First-gen Oculus hardware wasn't that great either, and it's tethered to big powerful machines that do all the heavy lifting.

    For a standalone battery-powered rig with very limited power and thermal budgets, I think the HoloLens hardware is surprisingly far along. These things take time, but that doesn't mean they should hide the technology under a rug and work in the dark. The more people they get testing (and eventually using) these and building software for them, the better future-gen units will be.
  • Krysto - Thursday, May 7, 2015

    At least a decade is my guess. Microsoft is rushing this tech to market because it doesn't want people to think "it's falling behind VR". But AR was ALWAYS going to be at least 10 years behind VR. It's just much harder to do AR RIGHT.
  • Zoomer - Friday, May 1, 2015

    I would think it's a power / integration constraint. Assuming the display tech is LCD, how much LCD can they practically wrap around the goggles?

    I imagine resolution can be mitigated by rendering peripheral vision at a lower res, perhaps at a lower color depth or even monochrome.
  • jjj - Friday, May 1, 2015

    The available pico projectors are pretty low resolution.
    Himax or Omnivision are likely at 720p max for compact-size LCoS.
    Sony, using Microvision tech, has a 1920 x 720 laser pico projector, but that's likely too big for this, even more so when having to use two.
    So a bigger image wouldn't really mean more pixels, since the display tech doesn't allow it. You can't even go with more than two projectors, since size, cost, weight and especially power would go crazy.
  • jjj - Friday, May 1, 2015

    Maybe using optics and a laser projector they could display a higher res in the center and lower res towards the sides. A laser doesn't need focus, so that would allow such a trick to be implemented. Not sure how the human eye would react to it, or if it would be a viable way to widen the field of view, but maybe it's worth testing. I guess brightness uniformity would be a big problem too, so they would need to adjust it on the fly as the projector paints the image.
  • MrSpadge - Friday, May 1, 2015

    I don't think so. It would be dumb to let this thing render every pixel on the display. Most areas will be unchanged by HoloLens and simply show reality. They only need to generate whatever is overlaid. Tile-based renderers can handle this very well.

    So I suspect it's more a question of the projection technology and fitting everything into a device that's not too bulky.
