Last year at BUILD I got my first chance to try Hololens. That experience was very interesting, not only because of the potential of Augmented Reality, but because of the entire circus surrounding the device. Last year's Hololens sessions were at a different location, and the groups brought over had to lock up everything electronic. We could only take photos of a unit in a display case. Naturally, when Microsoft announced yesterday that Hololens would start shipping to developers, this year's experience could never be as secret.

So when we got to the demo location, and were given keys for a locker, I was a bit taken aback. But it wasn’t anything as sinister this time, only a way to make sure there were no backpacks on the floor as tripping hazards, because this year’s untethered experience was really untethered.

That comes a bit later though. This year’s demo involved building and deploying a 3D app using Unity and Visual Studio, and each person doing the demo also got a coach to help solve any issues on the way. The Hololens unit was slightly different this year, but looking at it, it was remarkably similar to last year’s demo version. The one big change this year was very welcome: instead of having a person physically measure your inter-pupillary distance (the distance between your pupils), this is now handled through software when you first put the headset on. There is a quick calibration that you can run, and it sets your eye position based on some air tap gestures. It was very quick and easy, and the headset walks you through everything required with voice and visual cues.

Afterwards we sat down to build our apps. Since this was a demo for press, all of the coding was done ahead of time and we just had to walk through adding scripts in Unity to set up the demo. We would then build the apps and deploy them to the Hololens as a remote machine, using the device's IP address.
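For reference, deploying over Wi-Fi like this can be done from Visual Studio by choosing "Remote Machine" and entering the device's IP, or from the command line with the Windows 10 SDK's WinAppDeployCmd tool. A rough sketch of the command-line route (the IP address, package name, and PIN below are placeholders, not values from the demo):

```shell
# List devices discoverable on the local network
WinAppDeployCmd devices

# Install a freshly built app package on the Hololens at a given IP.
# The PIN comes from the device's pairing dialog under
# Settings > Update > For developers.
WinAppDeployCmd install -file "AppPackages\EnergyBall_1.0.0.0_x86.appx" -ip 10.0.0.5 -pin ABC123
```

In practice the coaches had us use Visual Studio's Remote Machine target, which wraps the same deployment step.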

The demo app itself was of an energy ball which, when locked to a location in space, would open up and show some nifty effects. The experience was very basic compared to what I would expect of the retail apps, but this was a simple demo and it worked well.

The very interesting bit was later on, when we linked our Hololens units with the other people in our six-person pods. This way all six people could interact with a single energy ball. People also got to choose an avatar which would float over their heads. That experience was pretty amazing. With very little setup, the holograms were truly linked to a single point that all people could see.

As part of this demo, my coach suggested I walk around the (very large) room and then look back. This was probably the most amazing part of the demo. After walking a hundred feet or more away, and around some tables and pillars, I looked back and the hologram was still floating exactly where I left it. The ability to truly lock things to a location is the one part that needs to be perfect for this experience to work, and Microsoft nailed it. In addition, my pod mates were all around the room with avatars floating over their heads.

After a night to think about it, I want to summarize my thoughts after using both the previous and latest incarnations of the Hololens. The field of view issue is still there, and is clearly not something they were able to address before they shipped to developers. I would describe it as something like a mid-sized television, in the 27-inch range, sitting a few feet away from you. My experience was better this time because there were fewer software issues, but the small field of view can certainly take some getting used to.

The hardware itself was very easy to put on and adjust, and it was fairly well balanced in that I never felt like the unit was heavier on the front where the lenses are. The adjustment is done with a wheel on the back, much like a welding helmet if you’ve ever seen one of those. The right side has buttons for volume, and the left side has buttons for brightness. I had to crank up the audio quite a bit because of the loud room we were in, and although the audio was spatial, it was hard to get a sense of that with the commotion going on during the demos. Meanwhile, although I don’t wear glasses, it looked like there would be no issues wearing glasses with the Hololens, and several of the other attendees seemed to have no issues putting the device on and using it with them.

The experience of AR is much different than VR. Because you are interacting with things in real space, you can easily move around without fear of tripping or walking into a wall. VR is able to offer much more complex graphics and immersion right now, but you are largely bound to a single location. The use cases for AR seem, to me, to be not necessarily the same as VR and both should easily be able to co-exist.

While going through my demo session, I asked my coach how to close the app we were running, and he showed me a “bloom” gesture which closes it. Once I did that, I was in another mode for the Hololens where I could see how it mapped out the physical world with polygons by tapping my finger in a direction. This was amazing, and the Hololens did a great job of picking up everything in my area, including the people, with no issues.

Another bloom then put me back at the start screen. On the demo units, this was pretty sparse, but I was able to go into the settings and play around. I didn’t see anything special in there, but otherwise the process of interacting with the menus was very simple and easy to get used to. From a UI perspective, the Hololens did very well.

Towards the end of the demo session we did some shooting of orbs which opened up a hole in the floor. Peering down into it, it really felt like this was something you didn’t want to step into. The holograms tend to be a bit translucent, but this one in particular was much more solid. There’s a lot of untapped potential here, and I hope to get a chance to do some of the other demos to get a better feel for that. The headset itself seemed to be near the edge of its processing power on the final demo though, which had a lot of fairly simple polygons moving around and six people interacting. There were a lot of things to keep track of, as well as quite a few holograms flying around.

Afterwards, Microsoft told us that all of the code that we used in the demo, and all of the code used on the demos last year, is now available on GitHub to allow devs quicker access to code.

I think the Hololens is still one of the most interesting pieces of tech I’ve used in a long time. There is a lot of potential here for education, training, and even tasks like painting your house and trying different color samples. There are quite a few applications where this would work very well.

The hardware though still needs a bit of work. It is a bit bulky, and the lenses would not stay anchored to the spot in front of me where I set them, so I had to readjust. The field of view is also not very large, and this could be because the headset's processing power doesn't match that of the tethered VR experiences.

I look forward to seeing where this goes in the future. A lot of the pieces are already well done and on the software side, the experience is very good. With a bit better hardware, which will almost certainly come with time, this is going to be a very powerful tool from Microsoft.


23 Comments


  • Reflex - Thursday, March 31, 2016 - link

    With a tether the Hololens loses most of its capability. If you think about its potential uses outside of gaming, if you have to carry a PC around with it, it basically becomes pointless. From a professional point of view, if I am an architect trying to show a client layout options in a building, I need to be able to put a headset on them, not force them to carry around a PC. If this gets used as HUD in a car or other vehicle, again I can't have it dependent on a PC.

    One of its greatest features is its ability to have holograms follow you or stay pinned to a location; again, this feature is useless if it's a tethered device. At that point you may as well just go VR.
  • DPOverLord - Thursday, March 31, 2016 - link

    What would make more sense is to have the option to use it like the Shield, where the GPU/CPU processing happens on your computer and is wirelessly streamed to the headset. Now I am not trying to say the above is EXACTLY what the Shield does. However, approaching this pitfall in this fashion could really fix a lot of problems. I see a time where the Internet is fast enough that most of our work won't even be done from our main computers. We'll have the option for that, or we can pay a "fee" and the majority of our work is done in the cloud and we can just "go".

    Interesting Sci Fi fantasy or reality?
  • Murloc - Thursday, March 31, 2016 - link

    that stuff is not sci-fi but it's not established or mature even in traditional gaming (streaming from the main computer maybe works but is not used much at all; over the internet, I don't think so), plus there's nothing stopping the gen2 of the hololens from supporting said technology as long as it has wi-fi, IF they pan out, IF internet speeds grow.

    In my experience video over wi-fi sucks BIG TIME even when you put a laptop very close to the transmitter so I don't think we're going to see this any time soon.
  • Sushisamurai - Thursday, March 31, 2016 - link

    Streaming wouldn't work. Playing Halo 5 tethered to my PC via Ethernet from Xbox One still gives me 50ms of input lag. That's enough to give me a disadvantage. Imagine if you went wireless stream/processing and got 100-250ms of input lag. It would be far too noticeable and would be a terrible experience.
  • Murloc - Thursday, March 31, 2016 - link

    if you're going to sit at a desk you might as well use VR with hand sensors or something so that your hands show up in the view.

    This is useful to show architectural renders to decision makers (finally renders that aren't misleading?), showing instructions and visual cues to people who have to identify and repair stuff, playing games in the real world with other people (which is what really differentiates it from VR in this sector), and thus has to be untethered otherwise it has no reason to exist vs VR sets.

    I mean, you could shoot at holograms with holographic projectiles in the real world, but if you're tethered then it's a problem if you want to move around.
    Also if you start off tethered the software developed will become useless in the first untethered generation due to the drop in processing power.
  • Zizy - Friday, April 1, 2016 - link

    Tethered to desk is quite useless - you might as well have VR or even a simple 3D screen. Point of AR is to have holograms at least appear in the real world if not even interact with it (obviously one way - you could throw that virtual ball, but virtual ball couldn't break your face. Yet)
    Tethered to cloud wouldn't work far too often. Latency kills you even if bandwidth is fine enough.
    Tethered to your existing laptop wouldn't work, as that laptop doesn't have HPU to generate those holograms and probably isn't powerful enough without that bit.
    Mixed laptop/HPU on the glasses would work though, but you probably wouldn't gain a lot, a lot of processing would be still done by the headset.

    But I could see a "backpack" tethered version aka headset has just IO parts and ALL processing (as well as batteries) are in the special unit in the backpack. So, not all that much different than the laptop version, just with specialized laptop that couldn't be used for other purposes.
    Tradeoff is mainly the need to carry a backpack in return for FOV and battery. This would be an excellent tradeoff in many circumstances, but a completely pointless one in most others. Battery part is mostly fixed even without the backpack if you can save your current setting and load it on the other headset and don't mind a brief interrupt in the experience.
    Where backpack tradeoff makes sense is mainly games - those require FOV and details, as well as the need/desire of several hours of uninterrupted fun. But why bother making this special 3k device for the audience served by 5 times cheaper VR stuff?
    But it just isn't needed for most of that boring real life. You don't need better device to explore human body, make a skype call, see or design parts of car, see how will the new house fit among the others, select which color would be better for walls or where to put kitchen elements etc etc.
  • MrSpadge - Tuesday, April 19, 2016 - link

    Tether to a mobile phone, so that the hardware power in that thing can finally be put to some good use (and relieve battery drain on the headset). (I'm not really implying mobile phones would never be used properly. But those relatively powerful GPUs are mostly underutilized for sure)
  • Murloc - Thursday, March 31, 2016 - link

    is the FOV limited in the sense that you see black on your peripheral vision, or just that the holograms don't show up in the peripheral vision?
  • bji - Thursday, March 31, 2016 - link

    It's the latter. The holograms are cropped to a small region in the center of your field of view. Everything outside of that small region is clear plastic that does not impede your vision.
  • bji - Thursday, March 31, 2016 - link

    "The field of view issue is still very small, and clearly not something they were not able to address before they shipped to developers"

    That is a very awkward, confusing, and just plain incorrect sentence!
