NVIDIA Ansel, Simultaneous Multi-Projection, & VR Funhouse Status Updates

Along with today’s news about the GeForce GTX 1060 launch, NVIDIA is also offering updates on a few of their technologies and related software projects.

We’ll start with Ansel, NVIDIA’s 360-degree high-resolution screenshot composition and capture technology. After initially announcing it alongside the GTX 1080 as part of their Pascal technology briefing, the company is now announcing that it will finally be shipping in select games this month, with the first of those shipping today. The first two games to get Ansel support will be DICE’s Mirror’s Edge: Catalyst and CD Projekt Red’s The Witcher 3. Ansel support for Mirror’s Edge is launching today (or as NVIDIA’s press release puts it, “immediate availability”), while The Witcher 3 will get support added later this month.

As the tech requires developers to integrate it into games and game engines on a case-by-case basis, this is a gradual rollout, but one NVIDIA is hoping to accelerate over time. The company has already lined up a half-dozen additional games that will support the technology, including Unreal Tournament and No Man’s Sky, but they are not announcing availability dates for those at this time.

Meanwhile, in a more general status update on their Simultaneous Multi-Projection technology, NVIDIA is announcing that they have lined up both Unity and Epic Games to add support for the technology to their respective Unity and Unreal Engine 4 game engines. To that end, the company is also confirming that over 30 games now in development will implement the technology, including Epic’s Unreal Tournament.

Besides being a marquee feature of the Pascal architecture, simultaneous multi-projection is seen by NVIDIA as a key element in establishing a lead in the VR market. Though the full benefits of the technology remain to be seen, any performance advantage it delivers would work in NVIDIA’s favor, and we should expect to see it significantly promoted alongside the GTX 1060, which will be NVIDIA’s entry-level VR card. Of course, developers need to implement the technology first, which is why it is so important for NVIDIA to get developers on board and to make sure potential customers are aware of it.

Finally, speaking of VR, NVIDIA is also announcing that their big tech demo for Pascal, VR Funhouse, will be shipping this month. Unveiled alongside Ansel and SMP at the Pascal launch, VR Funhouse is built on Unreal Engine 4 and is meant to serve as a testbed for NVIDIA’s latest GameWorks/VRWorks technologies, including SMP and VRWorks Audio. The tech demo will be released on Steam later this month and will support the GTX 1060 and above. Pascal owners will want to take note, however, that as this is a VR demo, it requires a VR headset – specifically, the HTC Vive.

Meanwhile, NVIDIA has also confirmed that the source code for VR Funhouse will be opened up to developers. Though the primary goal here is to allow developers to add additional attractions/modules to the tech demo, more broadly speaking it’s another means of encouraging developer adoption of GameWorks/VRWorks, giving developers a starting point for using the various technologies in NVIDIA’s libraries.

NVIDIA Announces GeForce GTX 1060: Starting at $249, Available July 19th

  • eddman - Thursday, July 7, 2016 - link

    The higher prices probably have to do with the 1080/1070's high demand and low supply. A while later they will definitely arrive at their MSRPs. This has happened in the past too.

    I don't care whether there have been cases of damage or not. I'm not buying non-standard cards. I'd only consider a 480 with two 6-pin connectors, or a single 8-pin.

    Of course it's physics. I know how hot my room can or cannot get. Even a single extra 60-watt lamp is enough to make it hotter than it should be for me, let alone “a couple”, which might end up being the difference between the 1060 and the 480. I don't want a card that consumes more power for the same performance. Period.

    The point is that there are people who care about such stuff, which you dismissed.
  • eddman - Thursday, July 7, 2016 - link

    *No editing function.

    Custom 1060s are supposed to launch at the same time. They would probably still cost more than $250 but that would change in time.
  • DrKlahn - Thursday, July 7, 2016 - link

    I didn't dismiss it. I was just looking at the most common situation, in which a ~40W difference is not likely to make any meaningful difference. Yours is not a common situation. There will always be outliers.

    Fine, grab one of the upcoming 480 AIB cards with an 8-pin if you are worried. There's no evidence to suggest you would have any issues, but there are certainly products that will put you at ease. AMD is supposed to release a driver today to reallocate power routing and alleviate the concerns from the tech press. Their solution may also put you at ease with a reference card, although I suspect you will buy a 1060 regardless.

    Yes, as yields improve and supply increases, I expect all Pascal cards to finally see price drops. But the immediate situation for the 1060 is likely to follow the 1070/1080. If it does, the cheapest AIBs will likely be around $280, and I don't really expect them to go much less than $299.

    In the next few months I'm guessing the low-supply issue with the 1080 and 1070 will in all likelihood greatly impact the 1060. Nvidia has a contract with TSMC for X amount of wafers (and they are not TSMC's only customer). As long as they are unable to meet demand for their high-margin parts, they are not likely to allocate much to lower-margin parts. That's simply business. The only other possibility is that the 1060 is a cut-down 1070 and not its own die (which is rumored). That would give Nvidia a salvage-part situation, which would help short-term availability but be very bad for the long-term affordability of the part (bad margin).

    I think the 1060 will be a good-performing part. I think it will have limited availability for at least a few months. I think its main intent is to keep the faithful from jumping to the 480, even though it's likely to cost more and there will be a wait to get it. If its performance is meaningfully better, Nvidia will probably be successful in swaying a lot of folks away from the 480.
  • eddman - Thursday, July 7, 2016 - link

    I agree with most of that, except this:

    "I think its main intent is to keep the faithful from jumping to the 480"

    If someone does not buy a 480 now, he/she is an nvidia faithful? A smart consumer would wait for the competition in order to have a better view of the products before making a decision.

    Yes, I might end up buying a 1060 in the end, but if Pascal proves to be inherently deficient in DX12 titles going forward, I might go with the 480, even though I don't like power-inefficient cards. I don't care about VR at this point, so that's one thing off the 1060's pro list.

    Also, 480 seems to be in short supply too, so it's not like I can buy one now at the suggested $240 price anyway. AMD is more or less in the same situation.
  • eddman - Thursday, July 7, 2016 - link

    *A custom 480, that is, with a better cooler and an 8-pin or 2x6-pin connector.
  • DrKlahn - Thursday, July 7, 2016 - link

    So you don't believe rushing this release, while releasing no meaningful figures, had anything to do with keeping people from going to their competitor? Nvidia has been paper launching or teasing specs for years to accomplish this exact goal. And to their credit, it does work. This is plainly an attempt to stop consumers from making a purchase and get them to consider Nvidia's product instead. I'm not saying that considering all angles isn't being a smart consumer. I'm saying there is a reason this was rushed and nebulous performance promises were given. If the 1060 had kept to its original release schedule, there would certainly have been people on the fence grabbing a 480. This is obviously meant to combat that.

    Pascal in its current form shows very little progress in handling asynchronous tasks. In fact, all the evidence I see points to it being a mildly modified optical shrink of Maxwell, which may or may not affect it in future titles. Nvidia has tremendous developer clout and will throw money at steering games to favor their architecture. So my gut tells me that asynchronous compute functions will be sidelined as much as they can be, to hurt their competition and spotlight their products. For the consumer that will certainly stall progress and hurt the overall experience, but such is business. The red herring will be the consoles: if engines are ported between platforms and asynchronous compute can't be gutted easily, Nvidia may be in a bad position. Time will tell.

    As far as today's titles that use DX12 go, the 480 is producing some good figures that I don't expect the 1060 to match. If you look at linear scaling based on shader counts vs. the 1070 (the 1060 has 67% of the 1070's shader capacity), that puts the 1060 well below the 480 in the majority of Guru3D's DX12 titles (RotTR being the only one it should win). And that doesn't account for the 1060's lower memory bandwidth, which should hurt how well it scales somewhat. In DX11 in the same review, using the same estimate, it comes out slightly faster than or tied with the 480. Again, I expect the lower memory bandwidth to hurt scaling more than these guesses suggest. (A rough sketch of this scaling arithmetic is at the end of this comment.)

    The 480 is an efficient card; it's simply not as efficient as its competitor is likely to be (or as Nvidia's currently released Pascal cards are). That makes it inefficient by comparison, but a card that is approaching 390X/980 performance (and in some cases exceeding them) at ~150W is not inefficient. Nvidia will likely continue to have an edge going forward here, but again, for folks with anything better than a decent 500W power supply and a normal room, the differences aren't going to amount to anything tangible.

    Also, Tom's Hardware Germany has tested the 16.7.1 driver and found that it does mitigate the PCI-E power concerns. Performance appears to have gone up as well due to driver optimizations (though this is just one source so far).
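
    To illustrate, here is a minimal back-of-the-envelope sketch of that linear-scaling guess in Python. The CUDA core counts are the published figures for each card; the 1070 frame rate is a made-up placeholder rather than a benchmark result, so treat the output as a rough upper bound only:

        # Naive linear-scaling estimate: assume performance scales with shader count.
        # GTX 1070: 1920 CUDA cores; GTX 1060: 1280 CUDA cores (1280/1920 ~ 67%).
        GTX_1070_CORES = 1920
        GTX_1060_CORES = 1280

        def estimate_1060_fps(gtx_1070_fps):
            """Upper-bound estimate; the 1060's lower memory bandwidth means
            real-world scaling should come in somewhat below this figure."""
            return gtx_1070_fps * (GTX_1060_CORES / GTX_1070_CORES)

        hypothetical_1070_fps = 90.0  # placeholder value for illustration only
        print(round(estimate_1060_fps(hypothetical_1070_fps), 1))  # -> 60.0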
  • eddman - Thursday, July 7, 2016 - link

    Not saying that it wasn't nvidia's intention, it was, but IMO a lot of buyers would've still waited to see how the 1060 panned out, regardless. Not because they are all nvidia fans, but because some of them know better than to jump in immediately.

    That's why I'm waiting for more DX12 titles. The current DX12 games are mostly half-assed hack jobs. Pascal already performs better than Maxwell in DX12, so something IS different. Perhaps they've implemented a different method of tackling async compared with AMD's. I need to wait a while longer, I suppose.

    Of course inefficiency is a relative thing. There is no other way to determine it, and 480 is inefficient for the performance that it provides. It's obvious.

    Yes, I read the PCPerspective article. Good job on AMD's part to pull it off. If it were nvidia, they might've tried to bury the whole thing. It still doesn't fully stay within the power standards, though. I would still go for an 8-pin or 2x6-pin variant.
  • DrKlahn - Thursday, July 7, 2016 - link

    A lot fewer people would have waited if the 1060 were still months away. This is a calculated move.

    Pascal performs better because it has more units and higher clock speeds to do the work. If you normalize for clock speed and the number of CUDA cores, the scaling between it and Maxwell is very close (a rough sketch of what I mean is at the end of this comment). Yes, there are small improvements even after normalization, but nothing that points to a major change. Don't get me wrong, Pascal is fast at DX12, but it's a brute-force approach.

    If you're worried about power post-fix, I can only conclude you have an agenda. The likelihood of it causing problems before was minimal, and it's pretty much a non-issue at this point. I could see waiting for an 8-pin connector and a better cooler for headroom, but doing so because of the power "issue" is nonsense at this point.

    No card exists in a vacuum, and it is less efficient than its competitor. But again, you would need a very specific case where a 120W card is going to make any sort of tangible difference vs. a 150W one. For the vast majority it simply won't matter, except maybe as a bragging point.
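
    As a rough sketch of what I mean by normalizing, in Python (the frame rates below are made-up placeholders, not benchmark results; the point is only the ratio you get after dividing out core counts and clocks):

        # Per-core, per-clock throughput: divide out shader count and clock speed.
        def perf_per_core_per_clock(fps, cores, clock_mhz):
            return fps / (cores * clock_mhz)

        # Hypothetical example: a Pascal card vs. a Maxwell card in the same title.
        pascal  = perf_per_core_per_clock(fps=90.0, cores=1920, clock_mhz=1700.0)
        maxwell = perf_per_core_per_clock(fps=60.0, cores=2048, clock_mhz=1100.0)

        # A ratio near 1.0 means per-unit efficiency barely changed, i.e. the raw
        # gains come from more units and higher clocks rather than a new approach.
        print(round(pascal / maxwell, 2))  # roughly 1.04 with these placeholder numbers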
  • eddman - Thursday, July 7, 2016 - link

    I have an agenda simply because I don't want a card that breaks the 150W standard power limit?! Maybe it's not an issue, but I don't want it. That's just me. I stick to standards. Extra power connectors are there for a reason. Everyone else is free to do as they wish.

    I don't know what you mean by normalization exactly. Hardwarecanucks' separate performance summary charts show Pascal doing quite a bit better than Maxwell in DX12.
    I don't know what method nvidia used, but it's there. How are you so sure it's just brute force? It could be the new preemption feature. Even if it's brute-forcing, so what? It still does better than Maxwell. It isn't as fast as GCN in the DX12 titles that are out so far, though.
  • ACE76 - Monday, July 11, 2016 - link

    If standards are that important to you, you should probably just buy ready-made PCs....just about every person that builds a custom PC tweaks it for performance one way or the other....even doing something as minor as changing memory timings puts it out of standard...adding a little clock speed to your CPU or video card puts it out of standard...it's pointless to even care this much about "standards" unless there was a genuine worry...there was very little worry before AMD's fix, but now, after the fix, there's literally no worry....a $49 PSU could take that load on the 6-pin with no issue.
