This week, Google announced Cloud TPU beta availability on the Google Cloud Platform (GCP), accessible through their Compute Engine infrastructure-as-a-service. Built around the second generation of Google's tensor processing units (TPUs), the standard Cloud TPU configuration consists of four custom ASICs and 64 GB of HBM2 on a single board, intended for accelerating TensorFlow-based machine learning workloads. With a leased Google Compute Engine VM, Cloud TPU resources can be used alongside current Google Cloud Platform CPU and GPU offerings. First revealed at Google I/O 2016, the original TPU was a PCIe-based accelerator designed for inference workloads, and for the most part, the TPUv1 was used internally. This past summer, Google announced the inference- and training-oriented successor, the TPUv2, outlining plans to incorporate it into their cloud...
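From the user's side, the workflow amounts to pointing a TensorFlow program running on the leased Compute Engine VM at the attached Cloud TPU node. The sketch below is a rough illustration only, written against the current TensorFlow distribution APIs (which postdate this announcement), and the TPU name "my-cloud-tpu" is a hypothetical placeholder for whatever node name is actually provisioned:

import tensorflow as tf

# Hypothetical TPU node name; in practice it identifies the Cloud TPU
# provisioned alongside the Compute Engine VM.
resolver = tf.distribute.cluster_resolver.TPUClusterResolver(tpu="my-cloud-tpu")
tf.config.experimental_connect_to_cluster(resolver)
tf.tpu.experimental.initialize_tpu_system(resolver)

# Variables and training steps created under this scope are placed on the
# TPU cores; the host VM only feeds data and drives the training loop.
strategy = tf.distribute.TPUStrategy(resolver)
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(784,)),
        tf.keras.layers.Dense(10),
    ])
    model.compile(
        optimizer="adam",
        loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        metrics=["accuracy"],
    )

Because the TPU board is a network-attached accelerator rather than a local PCIe device, the same VM can also drive conventional GPU or CPU jobs, which is how Cloud TPUs sit alongside GCP's existing offerings.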

The NVIDIA Titan V Preview - Titanomachy: War of the Titans

Today we're taking a preview look at NVIDIA's new compute accelerator and video card, the $3000 NVIDIA Titan V. In Greek mythology, Titanomachy was the war of the Titans...

112 comments | by Ryan Smith & Nate Oh on 12/20/2017

NVIDIA Announces “NVIDIA Titan V” Video Card: GV100 for $3000, On Sale Now

Out of nowhere, NVIDIA has revealed the NVIDIA Titan V today at the 2017 Neural Information Processing Systems conference, with CEO Jen-Hsun Huang flashing the card on stage...

159 comments | by Ryan Smith & Nate Oh on 12/7/2017

Hot Chips: Google TPU Performance Analysis Live Blog (3pm PT, 10pm UTC)

Another Hot Chips talk, this time covering the Google TPU.

30 comments | by Ian Cutress on 8/22/2017

NVIDIA Volta Unveiled: GV100 GPU and Tesla V100 Accelerator Announced

Today at their annual GPU Technology Conference keynote, NVIDIA's CEO Jen-Hsun Huang announced the company's first Volta GPU and Volta products. Taking aim at the very high end of...

179 comments | by Ryan Smith on 5/10/2017
