02:12PM EDT - And we're done here

02:11PM EDT - Wrap-up

02:10PM EDT - Elon and Jen-Hsun are wrapping up

02:08PM EDT - Multiple levels of security. Infotainment and actual driving controls are separated

02:07PM EDT - Now discussing car hacking

02:00PM EDT - Fast and slow is easy. Mid-speed is hard

02:00PM EDT - Discussing how various speeds are easier or harder

01:55PM EDT - Even if self-driving cars become a viable product, it will take 20 years to replace the car fleet

01:54PM EDT - Jen-Hsun and Elon are discussing the potential and benefits of self-driving cars

01:53PM EDT - Musk responds that he's more worried about "strong" AI, and not specialty AI like self-driving cars

01:52PM EDT - Jen-Hsun is grilling Musk on some of his previous comments on the danger of artificial intelligence

01:51PM EDT - Tesla meets Tesla

01:50PM EDT - Now on stage: Elon Musk

01:49PM EDT - Drive PX dev kit available in May, $10,000

01:46PM EDT - Jen-Hsun has a Drive PX in hand

01:42PM EDT - The key to training cars is not to teach them low-level physics, but to teach them the high-level aspects of how the world works and how humans behave

01:42PM EDT - How do you make a car analogy when you're already talking about cars? With a baby analogy

01:39PM EDT - I for one am all for more Freespace

01:39PM EDT - Jen-Hsun talking about how important it is for a self-driving car to understand free space

01:36PM EDT - Recapping the Drive PX system

01:36PM EDT - NVIDIA's vision: to augment ADAS with GPU deep learning

01:35PM EDT - Final subject of the day: self-driving cars

01:34PM EDT - And we're done with roadmaps. Looks like no updates on the Tegra roadmap

01:32PM EDT - 28 TFLOPS FP32
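
(Quick sanity check on that number: assuming NVIDIA's quoted ~7 TFLOPS of FP32 per GTX Titan X, 4 cards x 7 TFLOPS = 28 TFLOPS)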

01:32PM EDT - DIGITS DevBox: 1300W
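
(For reference: a standard 15A/120V household circuit supplies at most 15A x 120V = 1800W, so a 1300W box is about as much as a single outlet can comfortably feed)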

01:30PM EDT - Now Jen-Hsun is explaining why he thinks Pascal will offer 10x the performance of Maxwell, by combining the FP16 performance gains with other improvements in memory bandwidth and better interconnect performance through NVLink

01:29PM EDT - FP16 is quite imprecise, but NV seems convinced it can still be useful for neural network convolution
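
(To illustrate the tradeoff, here's a quick toy example of FP16's limits in Python/numpy. This is our own sketch, not anything NVIDIA showed; FP16 carries only a 10-bit mantissa, good for roughly 3 decimal digits:)

    import numpy as np

    # FP16 rounding error is visible even on simple constants
    print(float(np.float16(0.1)))            # 0.0999755859375

    # At magnitude 2048 the spacing between adjacent FP16 values is 2,
    # so adding 1 is absorbed entirely
    print(np.float16(2048) + np.float16(1))  # 2048.0

    # Neural network weights and activations are typically small and noisy,
    # so rounding at this scale is often tolerable in practice
    w = np.random.randn(4, 4).astype(np.float16)
    a = np.random.randn(4).astype(np.float16)
    print(w @ a)  # a matrix-vector step (the core of convolution) in FP16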

01:28PM EDT - Pascal has 3x the memory bandwidth of Maxwell. This would put memory bandwidth at 1TB/sec
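
(That math checks out against a GTX Titan X baseline: 3 x 336.5GB/sec is roughly 1TB/sec)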

01:28PM EDT - It sounds like Pascal will come with Tegra X1's FP16 performance improvements

01:27PM EDT - Mixed precision: discussing the use of FP16

01:27PM EDT - 4x the mixed precision performance of Maxwell

01:26PM EDT - Pascal offers 2x the perf per watt of Maxwell

01:26PM EDT - Also reiterating the fact that Pascal will be the first NVIDIA GPU with 3D memory and NVLink

01:25PM EDT - New feature: mixed precision

01:25PM EDT - Roadmap update on Pascal and Volta

01:24PM EDT - Now: roadmaps

01:23PM EDT - NVIDIA is building them one at a time

01:23PM EDT - $15,000, available May 2015

01:22PM EDT - They hope not to sell a whole lot of these, but rather to have these bootstrap further use of DIGITS

01:22PM EDT - Jen-Hsun is stressing that this is a box for developers, not for end-users

01:21PM EDT - Specialty box for running the DIGITS middleware

01:21PM EDT - 4 cards in a complete system. As much compute performance as you can get out of a single electrical outlet

01:20PM EDT - New product: the DIGITS DevBox

01:19PM EDT - Web-based UI

01:19PM EDT - DIGITS: A deep GPU training system for data scientists

01:18PM EDT - NVIDIA now discussing new frameworks for deep learning

01:16PM EDT - It's surprisingly accurate

01:16PM EDT - Demonstrating how he is able to use neural networks to train computers to describe scenes they're seeing

01:12PM EDT - "Automated image captioning with convnets and recurrent nets"

01:12PM EDT - Now on stage: Andrej Karpathy of Stanford

01:09PM EDT - Deep learning can also be applied to medical research. Using networks to identify patterns in drugs and biology

01:07PM EDT - Discussing how quickly companies have adopted deep learning, both major players and start-ups

01:03PM EDT - Continuing to discuss the training process, and how networks get more accurate over time

12:59PM EDT - Now discussing AlexNet in depth

12:49PM EDT - The moral of the story: GPUs are a good match for image recognition via neural networks

12:48PM EDT - Now discussing AlexNet, a GPU-powered neural network, one of the first of its kind and significantly better than its CPU-based counterparts

12:47PM EDT - Recounting a particular story about neural networks and handwriting recognition in 1999

12:44PM EDT - Discussing the history of neural networks for image recognition

12:42PM EDT - Now shifting gears from GTX Titan X to a deeper focus on deep learning

12:41PM EDT - While GTX Titan X lacks FP64 performance, NVIDIA still believes it will be very useful for compute users who need high FP32 performance, such as for neural networks and other deep learning workloads

12:40PM EDT - (The crowd takes a second to start applauding)

12:40PM EDT - GTX Titan X: $999

12:39PM EDT - Discussing how NVIDIA's cuDNN neural network middleware has allowed them to significantly reduce neural network training times. Down to 3 days on GTX Titan X

12:37PM EDT - Now: how GTX Titan X ties into deep learning

12:32PM EDT - Now running Epic's kite demo in real time on GTX Titan X

12:31PM EDT - Big Maxwell does not have Big Kepler's FP64 performance. Titan X offers high FP32, but only minimal FP64

12:30PM EDT - Titan X: 8 billion transistors. Riva 128 was 4 million
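
(That works out to 8x10^9 / 4x10^6 = 2,000x the transistor count in roughly 18 years)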

12:30PM EDT - Now rolling a teaser video

12:29PM EDT - Previously teased at GDC 2015: http://www.anandtech.com/show/9049/nvidia-announces-geforce-gtx-titan-x

12:29PM EDT - New GPU: GTX Titan X

12:29PM EDT - Jen-Hsun is continuing to address the developers, talking about how NVIDIA has created a top-to-bottom GPU compute ecosystem

12:26PM EDT - (Not clear if that's cumulative, or in the last year)

12:26PM EDT - NVIDIA has sold over 450K Tesla GPUs at this point

12:25PM EDT - Now recapping the first year of GTC, and the first year of NVIDIA's Tesla initiative

12:23PM EDT - But NVIDIA couldn't do this without their audience, the developers

12:22PM EDT - Tesla continues to do well for the company, with NVIDIA racking up even more supercomputer wins. It has taken longer than NVIDIA originally projected, but it looks like Tesla is finally taking off

12:21PM EDT - Pro Visualization: Another good year, with GRID being a big driver

12:20PM EDT - Automotive revenue has doubled year-over-year

12:19PM EDT - Cars: a subject near and dear to Jen-Hsun's heart. Automotive has been a bright spot for NVIDIA's Tegra business

12:18PM EDT - Also recapping the SHIELD Console announcement from GDC 2015

12:17PM EDT - Year in review: GeForce

12:17PM EDT - Jen-Hsun thinks we'll be talking about deep learning for the next decade, and thinks it will be very important to NVIDIA's future

12:16PM EDT - This continues NVIDIA's earlier advocacy of this field that was started with their CES 2015 Tegra X1 presentation

12:15PM EDT - Today's theme: deep learning

12:15PM EDT - 4 announcements: A new GPU, a very fast box, a roadmap reveal, and self-driving cars

12:14PM EDT - Jen-Hsun has taken the stage

12:14PM EDT - We're told to expect several surprises today. Meanwhile Tesla CEO Elon Musk is already confirmed to be one of today's guests, so that should be interesting

12:13PM EDT - Run time is expected to be 2 hours, with NVIDIA CEO Jen-Hsun Huang doing much of the talking

12:13PM EDT - We're here at the keynote presentation for NVIDIA's annual conference, the GPU Technology Conference

Comments

  • psyq321 - Tuesday, March 17, 2015

    I am quite sure NVIDIA will follow up with a full FP64 SKU for their Tesla line of products. I doubt they would design "big Maxwell" just for the highest-end enthusiast market. The size of that market is practically nothing compared to the scientific / HPC / finance markets, which have no problem shelling out $5K per card in batches of dozens or hundreds per order.

    I do hope AMD gains market share, though - simply because more competition is good for the consumer. If NVIDIA remains the Intel of the GPGPU market, it is going to become the same kind of market as the mature x86 market, with mobile/entry desktop on one extreme and the four-digit enterprise range on the other (soon five digits, probably, for the highest-end Xeon EX 8xxx v5 or whatever).
  • ExarKun333 - Tuesday, March 17, 2015

    Interestingly enough, NV's own slides don't show Tesla getting updated until Pascal. It doesn't seem that Maxwell is getting DP at all. Not set in stone, but that was the word just last month. Not surprising this Titan doesn't have that...
  • psyq321 - Tuesday, March 17, 2015

    I think it is simply down to the process - GM200 is simply exhausting what the TSMC 28nm process can offer. Even without FP64, the die is >huge<.

    Maybe NVIDIA will retire Tesla as a brand (though this would sound to me like a strange decision, taking into account Tesla's brand recognition in the HPC field), but they would be crazy to abandon the HPC and enterprise markets which require FP64.

    I would rather suspect that GM200 is, simply, a short-term answer to AMD's R390X - a mere necessity due to the market situation. Without that, I doubt NVIDIA would even release "Big Maxwell" as a consumer product before doing a process shrink and launching it as a professional product first. This "high end enthusiast" market is very small - this is the same reason why Intel simply wraps Xeon EP rejects as "-E" HEDT "enthusiast" parts; they are merely a by-product of the R&D done for the professional markets, where the margins and volumes are.

    After the process shrink, I am almost 100% sure NVIDIA will release GM210 or whatever, which will have proper FP64 support and feature in the Tesla products. GM210 might even be exclusively reserved for the pro/enterprise market in the same way GK210 was.

    The Kepler-based K80, with 24 GB of RAM, is still quite competitive if one needs such power and FP64 support, and I suppose NVIDIA launched GK210 late last year precisely because they would need to wait for a process shrink before "Big Maxwell" would be ready for the pro market.

    While this situation with the GM200 launch is clearly suboptimal for NVIDIA, since the pro market commands the highest margins and volumes, I suppose they could not avoid it given the maturity and availability of TSMC's 16nm process.

    I suppose the best that could happen for the consumer (consumer as in the non-pro market) would be some kind of Maxwell version of the 780 Ti - maybe a 980 Ti or whatever, something which would give Titan X performance at the $500-$600 mark. I suppose at this very moment NVIDIA has no reason to release such a product, and the fate of such a part depends firmly on AMD's next-gen performance.
  • testbug00 - Tuesday, March 17, 2015

    50% more CUDA cores and a 50% larger bus shouldn't take ~43% more of the die, given a 600mm^2 die. Either Nvidia didn't max out the die size, Nvidia spread out the transistors to lower power usage, or Nvidia disabled some units.

    I think the latter 2 are the most likely; I hope it's the 2nd, not the 3rd option. While I don't like Nvidia's pricing, being able to get access to high-performance FP64 CUDA for less than 4-5 thousand dollars is a good thing.
  • Yojimbo - Tuesday, March 17, 2015

    There already are Maxwell-based Quadro cards, and many of the Kepler-based Quadros have 1/24 double precision.
  • hammer256 - Tuesday, March 17, 2015

    If AMD is also using 28nm, there isn't a whole lot that can be done, I would imagine. I would also guess that FP64 units take more transistors than their equivalent FP32 units (if both are running at the same IPC). Argh, we need smaller processes.
  • testbug00 - Tuesday, March 17, 2015

    GCN is good at FP64 to start with; you don't need dedicated units. That's what Nvidia used in Kepler GK110 and, well... personally, I assumed they would use them in GM200.

    The 390(x) silicon will have a FP64 rate of 1/2. AMD will probably lock it down to 1/8, like Hawaii.

    NVidia's architectural choices allow for other "advantages" but, personally, I think a huge GPU like this really needs FP64 to sell at high margins and pay for itself. But I assume Nvidia does far more market research than me when it comes to this kind of stuff.
  • Yojimbo - Tuesday, March 17, 2015

    I don't think AMD has the same level of support in terms of compute-oriented development tools as NVIDIA. Also, AMD hasn't released any new architecture, so how will AMD gain market share? The old situation of NVIDIA having the preferable offerings still exists. Plus, market share in HPC shouldn't shift around as quickly as gaming GPU market share: one can drop in a new gaming card from a competitor just as easily as one from the same manufacturer, but if one buys a compute GPU with a different architecture, one must re-optimize one's code for the new architecture. People aren't likely to switch to AMD even if AMD released a compute card in July that was superior to Tesla cards; some 9 months later, NVIDIA would release Pascal and presumably take the lead again, and it probably doesn't make sense to re-optimize all code for 9 months of better performance. Perhaps for new entrants it might make sense, but one is still more likely to choose the ecosystem which has demonstrated ambition and service in the sector.
  • eanazag - Tuesday, March 17, 2015

    This live blog could use some iPhone panorama pictures. Nvidia has a wide format presentation.
  • mdriftmeyer - Tuesday, March 17, 2015

    ``DIGITS DevBox: $15,000, available May 2015''

    Suddenly I'm back to 1996 and SGI is rolling out a new Octane System with a cheap shell.
