More 3D than 3D: Stereoscopic Defined

Let's start with reality: we live in a world where things occupy a finite volume of space at any given moment in time... Alright, maybe that's not a good way to explain this. Let me try again. Stuff we see in real life has some width, some height and some depth. Living in a 3D world with two eyes gives us the ability to quickly and easily judge the position and dimensions of objects. 3D video games try to approximate this by drawing a 2D image that has many of the same "depth cues" we use to judge position and shape in reality.

A 2D picture of something can help us perceive some of the depth we would have seen had we stood at the camera's location: stuff that's further away appears relatively smaller than the foreground. Shadows and lighting, as they fall on objects, give us a feel for dimensions. In video, parallax comes into play, making objects closer to the viewer appear to move faster than objects further away. Experience tells us to expect certain constants in our reality, and we pick up on those to judge things that look similar to reality. Video games exploit all of these cues to tell our brains that there is depth in that monitor. Or maybe we're looking at a video of something that was reality. Either way, there is something major (aside from actual depth) missing.

Though we can judge three dimensions to a certain extent from depth cues alone, having two eyes see objects from two slightly different positions is what really tells our brain that something has depth. The combination of these two slightly different images in our brain delivers a wealth of depth information. Trying to play catch with one eye is tough. Just ask your neighborhood pirate.

Seeing two different images with your two eyes, or rather, presenting each eye its own image of the same scene from a slightly different position, is what stereoscopic 3D is. It's right there in the word ... ya know ... stereo ... and scopic. Alright, moving on.
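In code terms, "slightly different positions" just means offsetting one camera into two. Here's a minimal sketch of the idea; the function name and the 65mm interocular default are our own illustrative assumptions, not anything from NVIDIA's driver.

```python
# Hypothetical sketch: derive left- and right-eye camera positions from a
# single "center" camera, the way a stereoscopic renderer conceptually does.
# The 0.065 m (roughly average human eye separation) default is an assumption.

def stereo_eye_positions(center, right_vec, eye_separation=0.065):
    """Offset the center camera along its right vector by half the
    interocular distance to get the two eye positions (meters)."""
    half = eye_separation / 2.0
    left = tuple(c - half * r for c, r in zip(center, right_vec))
    right = tuple(c + half * r for c, r in zip(center, right_vec))
    return left, right

# Camera at eye height, looking down -Z, with +X as its right vector:
left, right = stereo_eye_positions((0.0, 1.7, 0.0), (1.0, 0.0, 0.0))
print(left)   # (-0.0325, 1.7, 0.0)
print(right)  # (0.0325, 1.7, 0.0)
```

The scene is then rendered once from each position and the two frames are shown to the matching eyes; everything else about the render stays the same.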

If you've ever tried looking at those "magic eye" pictures, you know what impact stereoscopic information alone can have. For those who don't know, a magic eye image is a seemingly random pattern that, when viewed with your eyes focused "through" the image, reveals a hidden 3D picture. There is absolutely no other depth information in the picture: no lighting or shadows, no perspective projection, nothing but basic shapes that each eye picks up when you focus through the image. Yet the 3D effect is pronounced and looks "deeper" than any 3D game out there.

This is not a sailboat.
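The trick behind those magic eye images (single-image random-dot stereograms) can be sketched in a few lines. This is a simplified, per-scanline version of the classic algorithm, written as an illustration rather than how any particular image was produced: each pixel copies a pixel to its left, and depth shortens the copy distance, which your fused eyes read as a nearer surface.

```python
import random

def sirds_row(depth_row, period=20):
    """One scanline of a single-image random-dot stereogram.

    Each pixel copies the pixel (period - depth) positions to its left,
    so flat regions repeat every `period` pixels while raised regions
    repeat slightly sooner; the shorter repeat fuses as "closer."
    """
    row = []
    for x, depth in enumerate(depth_row):
        link = x - (period - depth)
        # No pixel to copy yet at the left edge: seed with random dots.
        row.append(row[link] if link >= 0 else random.choice("#."))
    return "".join(row)

# A flat scanline just repeats every 20 pixels; a depth bump would
# shorten the repeat in that region.
print(sirds_row([0] * 60))
```

With depth values of zero everywhere the output is pure wallpaper; feed in a depth map (one small integer per pixel, well below `period`) and the hidden shape appears when you view the repeats cross-eyed or wall-eyed.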

Combining stereoscopic information with all the other depth cues makes for a dramatic effect when done properly. Correct rendering and presentation of left and right eye images, with proper 3D projection and lighting, simply looks real enough to touch. A game properly rendered for stereoscopic viewing can range from feeling like a shoe box diorama or a popup book to looking through a window into the next room.

Hollywood tried stereoscopic 3D with anaglyphs (those red and blue images you need the red and blue glasses for), but it never really took off except as a sort of lame gimmick. In the late 90s and early this century, the computer industry tested the waters with active shutter glasses, which worked quite a bit better. Rather than displaying a single image with both eye views superimposed and relying on color filtering, shutter glasses cover one eye while the entire screen displays an image rendered for the other eye. That eye is then covered while the first is uncovered to see its own full resolution, full color image. Done right, this produces amazing effects.

There are a couple of catches, though. This process needs to happen very fast and very accurately. Anyone who has spent (or spends) hours staring at sub-60Hz CRTs knows that slow flicker can cause problems ranging from eye strain to migraines. So we need at least 60Hz for each eye for a passable experience. We also need to make absolutely certain that one eye never sees any of the image intended for the other. Thus, when building active shutter glasses, a lot of work goes into making both lenses turn on and off very quickly and very accurately, and we need a display that can deliver 120 frames per second in order to achieve 60 for each eye.
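The arithmetic above is simple but worth making explicit, since it drives the whole 120Hz display requirement. A quick sketch (the function name is ours, just for illustration):

```python
# Back-of-the-envelope timing for active shutter glasses, following the
# article's numbers: each eye needs at least ~60 Hz to avoid flicker, and
# frames alternate L,R,L,R..., so the display must run at double that rate.

def shutter_timing(per_eye_hz=60):
    display_hz = per_eye_hz * 2        # two alternating eye views per cycle
    frame_ms = 1000.0 / display_hz     # how long each eye's image is shown
    return display_hz, frame_ms

hz, ms = shutter_timing(60)
print(hz)            # 120
print(round(ms, 2))  # 8.33
```

So each lens has roughly 8ms to go fully opaque or fully clear, in lockstep with the display, which is why both the glasses and the panel have to be engineered for speed and accuracy.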

Early shutter glasses and applications could work too slowly, delivering the effect with a side of eye strain, and getting really good results required a CRT that could handle 120Hz and glasses that could match pace. It also required an application built for stereoscopic viewing, or a wrapper driver that could make the application render two alternating images every frame. Rendering an extra image per "frame" also required the realtime 3D software itself to be very fast. These and other technical limitations kept stereoscopic 3D on the desktop from taking off.

There is still a market today for active shutter glasses and stereoscopic viewing, though there has been a sort of lull between the end of CRT production and the availability of 120Hz LCD panels. And while LCDs that can accept and display a 120Hz signal are just starting to hit the market, it's still a little early for a resurgence of the technology. But for those early adopters out there, NVIDIA hopes to be the option of choice. So what's the big deal about NVIDIA's solution? Let's check it out.



Comments

  • Matt Campbell - Thursday, January 8, 2009 - link

    One of my roommates in college had a VR helmet he used to play Descent, and was interning at a company designing (then) state-of-the-art updates to it. It was pretty wild to try, and hysterical to watch the person in the chair dodging and moving as things flew at them. It was really dodgy on support though, and gave most of us a headache after about 10 minutes. Now it's over 10 years later, and it doesn't sound like much has changed.
  • crimson117 - Thursday, January 8, 2009 - link

    VR helmets were more about making your real head's position guide your avatar's head's position than about providing stereoscopic 3D.
  • Holly - Thursday, January 8, 2009 - link

    They did both. It had a tiny screen for each eye...

    ... reminds me of the lovely days of System Shock :'(
  • Dfere - Thursday, January 8, 2009 - link

    So. Mediocre equipment with mediocre drivers. Gee, why would anyone want us to buy it?

    Am I the only one getting a feeling this is a start of something designed to suck up more GPU power and/or sell SLI as a mainstream requirement? After all, resolutions and FPS increases can't alone fuel the growth Nvidia and ATI would like.
  • PrinceGaz - Thursday, January 8, 2009 - link

    I think you are being hopelessly negative about why nVidia would be doing this.

    What advantage do they gain by a move towards stereoscopic 3D glasses? Okay, increased 3D rendering power is needed as each frame has to be rendered twice to maintain the same framerate, but GPU power is increasing so quickly that it's almost a non-issue, so SLI is irrelevant... NOT.

    The main problem with stereoscopic rendering is that each consecutive frame has to be rendered from a different perspective, and only every second frame is directly related to the one before it. That seems to be so nicely connected to what SLI AFR mode provides that it is too good to be true. One card does the left eye in SLI AFR, the other the right eye, and with suitably designed drivers, you get all the normal effects which rely on access to the previous frame (motion blur etc.), but in a "3D graphics system" that sells twice as many cards, since one card is doing each eye. They're not daft: stereoscopic display is going to make dual GPU cards not just a luxury for the high-end gamer, but a necessity for normal gamers who want a satisfactory 3D experience.
  • Gannon - Thursday, January 8, 2009 - link

    ... for nvidia to co-operate with monitor manufacturers and implement 3D in the display itself instead of these half-baked attempts at depth. Nobody really wants to wear special glasses so they can have 3D depth perception on their computer.

    The only way you are going to standardize something like this (because people are lazy and ignorant, let's face it) is to do it at the point where everybody gets it: everyone needs a monitor with their computer, so it would make sense to work towards displays that either:

    1) Are natively 3D or
    2) Build the costly stereoscopy into the monitor itself, thereby reducing costs through economies of scale.

    I really think current shutter based stereoscopic 3D is a hold-over until we start to get real 3D displays. If I were nvidia I'd want to do it on the monitor end, not as an after-market part targeted towards gamers at a $200 price point.
  • nubie - Friday, January 9, 2009 - link

    Try passive glasses: weight is next to nothing, no moving parts, no batteries.

    Just polarization that works off of native LCD tech.

    nVidia dropped support for this, boo/hiss.
  • rcr - Thursday, January 8, 2009 - link

    Is there a possibility of just using an SLI system to get rid of these visual quality problems? Would it be possible to let each graphics card do the calculations for one eye, so you could get the same quality as on a single card?
  • wh3resmycar - Thursday, January 8, 2009 - link

    What do you guys think? How about ViiBii?
  • JonnyDough - Thursday, January 8, 2009 - link

    No, actually the picture says "AT" just in case anyone couldn't see it. :-)
