
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<docs>https://www.rssboard.org/rss-specification</docs>
<atom:link rel="self" type="application/rss+xml" href="https://www.anandtech.com/rss/" />
<title>AnandTech</title>
<description>This channel features the latest computer hardware related articles.</description>
<link>https://www.anandtech.com</link>
<language>en-us</language>
<copyright>Copyright 2023 AnandTech</copyright>
<image>
    <url>https://www.anandtech.com/content/images/rss_logo.png</url>
    <title>AnandTech</title>
    <link>https://www.anandtech.com</link>
</image>

    
<item>
    <title>Silicon Motion Readies PCIe Gen5 SSDs with 3.5W Power Consumption</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/20005/silicon-motion-readies-pcie-gen5-ssds-with-35w-power-consumption"><img src="https://images.anandtech.com/doci/20005/smi-sm2508-678_575px.jpg" alt="" /></a></p><p><p>Virtually all PCIe Gen5 SSDs released to date are relatively power-hungry and require a massive cooling system, effectively preventing their installation into compact desktops and notebooks. But Silicon Motion&#39;s next-generation SM2508 SSD platform promises to change that and enable ultra-high-performance drives with a PCIe 5.0 interface and power consumption as low as 3.5W. The company is showcasing prototypes of its PCIe Gen5 client drives at the Flash Memory Summit 2023.</p>

<p>The Silicon Motion SM2508 SSD controller features eight NAND channels supporting interface speeds of up to 3600 MT/s per channel and is capable of delivering sequential read and write speeds of up to 14 GB/s as well as random read and write speeds of up to 2.5 million IOPS, which is comparable to the capabilities of enterprise-grade SSDs with a PCIe 5.0 x4 interface.&nbsp;</p>

<p>Perhaps the most critical aspect of the SM2508 is its reduced power consumption, which is around 3.5W, according to Silicon Motion. SMI does not disclose whether 3.5W is idle, average, or peak power consumption, but 3.5W seems to be too high for idle, and even if it is average power consumption, it is considerably lower than the average power consumption of PCIe Gen5 SSDs based on the Phison PS5026-E26 controller (around 10W).</p>

<p>The fastest 3D NAND flash memory devices currently feature a 2400 MT/s interface. Using such memory is crucial to fully saturate a PCIe 5.0 x4 interface and deliver sequential read/write performance of 13 &ndash; 14 GB/s. Support for a 3600 MT/s ONFI/Toggle DDR interface will allow the building of ultra-fast SSDs without using many memory devices, which is essential as next-generation 3D TLC devices are expected to have capacities of 1 Tb and larger.</p>

<p>Silicon Motion does not disclose many details about its SM2508, but we know from&nbsp;<a href="https://www.tomshardware.com/news/silicon-motion-readies-7nm-pcie-50-ssd-controller-for-q4-2023">unofficial sources</a>&nbsp;that the chip is made on TSMC&#39;s 12FFC (a 12 nm-class, compact low-power production node) and has been sampling since January 2023. Meanwhile, the company has targeted late 2023 &ndash; early 2024 as the launch timeframe for its consumer PCIe Gen5 SSD platform.</p>

<p>In addition to demonstrating its first client PC-bound SM2508-based SSDs at the FMS 2023, Silicon Motion is showcasing its MonTitan turnkey enterprise PCIe Gen5 SSD solutions based on its&nbsp;<a href="https://www.anandtech.com/show/17512/silicon-motion-sm8366-montitan-ssd-platform">SM8366 controller introduced last year</a>. The SM8366 controller features 16 NAND channels at 2400 MT/s and can enable SSDs with capacities of up to 128 TB that offer up to 14 GB/s sequential read/write performance and up to 3M/2.8M random read/write IOPS. Samples of MonTitan SSDs will be demonstrated at the FMS 2023.</p>

<p>Source:&nbsp;<a href="https://siliconmotiontechnologycorporation.gcs-web.com/news-releases/news-release-details/huirongkejiyufms">Silicon Motion</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/20005/silicon-motion-readies-pcie-gen5-ssds-with-35w-power-consumption</link>
 	<pubDate>Wed, 09 Aug 2023 09:30:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,20005:news</guid>
 	<category><![CDATA[ SSDs]]></category>                               
</item>  
    
    
<item>
    <title>SK Hynix Shows Off 321-Layer 3D TLC NAND Device</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/20004/sk-hynix-shows-off-321layer-3d-tlc-nand-device"><img src="https://images.anandtech.com/doci/20004/SK-hynix_321-Layer-NAND_678_575px.jpg" alt="" /></a></p><p><p>SK Hynix showcased its 321-layer TLC NAND memory at the Flash Memory Summit 2023. The South Korean company is the first NAND maker to publicly demonstrate 3D NAND with over 300 layers.&nbsp;Although&nbsp;such memory is expected in&nbsp;mass production in 2025,&nbsp;the demonstration is meant to&nbsp;showcase SK Hynix&#39;s&nbsp;preparedness for the next wave of&nbsp;non-volatile&nbsp;memory&nbsp;technology.</p>

<p>The showcased 321-layer 3D NAND memory device boasts a 1 Tb (128 GB) capacity with a TLC architecture, but SK Hynix refrained from revealing other details about it, such as interface speed. Meanwhile, the company mentioned that the chip features a 59% improvement in productivity compared to a 512 Gb 238-layer 3D TLC device, highlighting a significant improvement in per-wafer storage density. Whether or not the new production technology significantly reduces the&nbsp;cost-per-bit of 3D NAND is unclear.</p>

<p>SK Hynix using a 1 Tb 3D TLC device to demonstrate the prowess of its 321-layer 3D NAND process technology may be a good sign that the company intends to build high-capacity 3D NAND devices on this node, which would potentially mean a reduced cost-per-bit compared to existing process nodes.&nbsp;This sets the stage for higher-capacity SSDs and other 3D NAND flash-based storage devices.</p>

<p>While SK Hynix has yet to reveal the specifics of building 321 active layers, it is safe to assume that the manufacturer used string stacking technology, just as the industry does for other 200+ layer 3D NAND. However, it is unclear whether SK Hynix stacked two ~160-layer stacks on top of each other or managed to put three stacks of just over 100 layers each together.</p>

<p>SK Hynix&#39;s 321-layer 3D TLC NAND device continues to use the company&#39;s CMOS-under-array architecture, which puts NAND logic below the memory cells to save die space. This is why SK Hynix refers to it as 4D NAND, though that is essentially a marketing term.</p>

<p>&quot;<em>With another breakthrough to address stacking limitations, SK Hynix will open the era of NAND with more than 300 layers and lead the market,</em>&quot; said&nbsp;Jungdal Choi,&nbsp;head of NAND&nbsp;development&nbsp;at SK Hynix, during a keynote speech.&nbsp;&quot;<em>With timely introduction of the high-performance and high-capacity NAND, we will strive to meet the requirements of the AI era and continue to lead innovation.</em>&quot;</p>

<p>Source:&nbsp;<a href="https://finance.yahoo.com/news/sk-hynix-showcases-samples-worlds-203000727.html">SK Hynix</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/20004/sk-hynix-shows-off-321layer-3d-tlc-nand-device</link>
 	<pubDate>Wed, 09 Aug 2023 08:30:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,20004:news</guid>
 	<category><![CDATA[ Storage]]></category>                               
</item>  
    
    
<item>
    <title>Micron&#39;s CZ120 CXL Memory Expansion Modules Unveiled: 128GB and 256GB</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/20003/microns-cz120-cxl-memory-expansion-modules-unveiled-128gb-and-256gb"><img src="https://images.anandtech.com/doci/20003/micron-memory-expansion-module-cxl-678_575px.jpg" alt="" /></a></p><p><p>This week, Micron announced the sample availability of its first CXL 2.0 memory expansion modules for servers that promise easy and cheap DRAM subsystem expansions.&nbsp;</p>

<p>Modern server platforms from AMD and Intel boast formidable 12- and 8-channel DDR5 memory subsystems offering bandwidth of up to 460.8 GB/s and 370.2 GB/s, respectively, and capacities of up to 6 TB and 4 TB per socket. But some applications consume all the DRAM they can get and demand more. To satisfy the needs of such applications, Micron has developed its CZ120 CXL 2.0 memory expansion modules, which carry 128 GB or 256 GB of DRAM and connect to a CPU using&nbsp;a PCIe 5.0 x8 interface.</p>

<p>&quot;<em>Micron is advancing the adoption of CXL memory with this CZ120 sampling milestone to key customers,</em>&quot; said Siva Makineni, vice president of the Micron Advanced Memory Systems Group.</p>

<p>Micron&#39;s CZ120 memory expansion modules pair Microchip&#39;s SMC 2000-series smart memory controller, which supports two 64-bit DDR4/DDR5 channels, with Micron&#39;s DRAM chips made on the company&#39;s 1&alpha; (1-alpha) memory production node. Every CZ120 module delivers bandwidth of up to 36 GB/s (measured by running an MLC workload with a 2:1 read/write ratio on a single module), putting it only slightly behind a DDR5-4800 RDIMM (38.4 GB/s) but orders of magnitude ahead of a NAND-based storage device.</p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/20003/microns-cz120-cxl-memory-expansion-modules-unveiled-128gb-and-256gb"><img alt="" src="https://images.anandtech.com/doci/20003/micron-memory-expansion-module-cxl-1_575px.jpg" /></a></p>

<p>Micron asserts that adding four of its 256 GB CZ120 CXL 2.0 Type 3 expansion modules to a server running twelve 64 GB DDR5 RDIMMs can increase memory bandwidth by 24%, which is significant. Perhaps more significant is that adding an extra 1 TB of memory enables such a server to handle nearly double the number of database queries daily.</p>

<p>Of course, such an expansion means using PCIe lanes and thus reducing the number of SSDs that can be installed into such a machine. But the reward seems quite noticeable, especially if Micron&#39;s CZ120 memory expansion modules are cheaper than actual RDIMMs or have comparable costs.</p>

<p>For now, Micron has announced sample availability, and it is unclear when the company will start to ship its CZ120 memory expansion modules commercially. Micron claims that it has already tested the modules with major server platform developers, so its customers are probably now validating and qualifying them with their machines and workloads, making it reasonable to expect CZ120 deployments as early as 2024.</p>

<p>&quot;<em>We have been developing and testing our CZ120 memory expansion modules utilizing both Intel and AMD platforms capable of supporting the CXL standard,</em>&quot; added Makineni. &quot;<em>Our product innovation coupled with our collaborative efforts with the CXL ecosystem will enable faster acceptance of this new standard, as we work collectively to meet the ever-growing demands of data centers and their memory-intensive workloads.</em>&quot;</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/20003/microns-cz120-cxl-memory-expansion-modules-unveiled-128gb-and-256gb</link>
 	<pubDate>Wed, 09 Aug 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,20003:news</guid>
 	<category><![CDATA[ Memory]]></category>                               
</item>  
    
    
<item>
    <title>NVIDIA Completes ProViz Ada Lovelace Lineup with Three New Graphics Cards</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/20002/nvidia-completes-proviz-ada-lovelace-lineup-with-three-new-graphics-cards"><img src="https://images.anandtech.com/doci/20002/RTX Workstation Image_575px.jpg" alt="" /></a></p><p><p>When NVIDIA began to roll out their Ada Lovelace architecture to the workstation market, the company introduced its new flagship RTX 6000 Ada graphics card meant to offer the highest performance possible as well as its quite spectacular RTX 4000 SFF board that delivers formidable performance in a tiny package. The gap between the two solutions is vast, and on Tuesday, the company finally unveiled new products that fill it.</p>

<p>NVIDIA&#39;s new Ada Lovelace-based RTX-series professional graphics cards &mdash; the workstation-oriented RTX 4000 20GB, RTX 4500 24GB, and RTX 5000 32GB, and the datacenter-bound L40S &mdash; are designed for demanding graphics and artificial intelligence workloads, such as computer-aided design, digital content creation, real-time rendering, and basic simulations that are fine with FP32 precision. The new cards complement NVIDIA&#39;s previously announced Ada Lovelace-based workstation boards: the midrange RTX 4000 SFF and the ultra-high-end RTX 6000 Ada. Meanwhile, NVIDIA&#39;s previous-generation Ampere- and Turing-based offerings will continue to serve entry-level workstations.</p>

<p>Now, let us cover the new graphics boards in more detail.</p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/20002/nvidia-completes-proviz-ada-lovelace-lineup-with-three-new-graphics-cards"><img alt="" src="https://images.anandtech.com/doci/20002/RTX%204000%20Ada%20Generation%20GPU%20Image_575px.jpg" /></a></p>

<p>NVIDIA&#39;s <strong>RTX 4000</strong> 20GB is powered by the AD104 graphics processor with 6,144 CUDA cores and promises a peak performance of 26.7 FP32 TFLOPS, considerably higher than the 19.2 FP32 TFLOPS delivered by the RTX 4000 SFF, which features the same GPU in the same configuration, albeit at lower clocks. Running at higher frequencies, the new card consumes up to 130W; unlike the small form-factor board, it uses a full-height PCB but a single-slot cooling system. The card is slated for a September release with an MSRP of $1,250.</p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/20002/nvidia-completes-proviz-ada-lovelace-lineup-with-three-new-graphics-cards"><img alt="" src="https://images.anandtech.com/doci/20002/RTX%204500%20Ada%20Generation%20GPU%20Image_575px.jpg" /></a></p>

<p>The more powerful Ada Lovelace-based workstation board is called the <strong>RTX 4500</strong>, and it uses the AD104 GPU with 7,680 CUDA cores to deliver a compute performance of up to 39.6 FP32 TFLOPS at up to 210W. The board employs a dual-slot cooling system and will be available for $2,250 sometime in October.</p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/20002/nvidia-completes-proviz-ada-lovelace-lineup-with-three-new-graphics-cards"><img alt="" src="https://images.anandtech.com/doci/20002/RTX%205000%20Ada%20Generation%20GPU%20Image_575px.jpg" /></a></p>

<p>Finally, NVIDIA is introducing its <strong>RTX 5000</strong> professional graphics card, which utilizes the AD102 graphics processor with 12,800 CUDA cores (i.e., a very significant cut-down) to achieve a compute performance of 65.3 FP32 TFLOPS at 250W. This board is available now for $4,000, significantly lower than the $6,800 of NVIDIA&#39;s flagship RTX 6000 Ada product.</p>

<table align="center" border="0" cellpadding="0" cellspacing="1" width="650">
	<tbody>
		<tr class="tgrey">
			<td align="center" colspan="8">NVIDIA Ada Lovelace Professional Graphics Cards</td>
		</tr>
		<tr class="tlblue">
			<td>&nbsp;</td>
			<td>RTX 4000 SFF</td>
			<td>RTX 4000</td>
			<td>RTX 4500</td>
			<td>RTX 5000</td>
			<td>RTX 6000</td>
			<td>L40S Ada</td>
		</tr>
		<tr>
			<td>GPU</td>
			<td style="text-align: center;">AD104</td>
			<td style="text-align: center;">AD104</td>
			<td style="text-align: center;">AD104</td>
			<td style="text-align: center;">AD102</td>
			<td style="text-align: center;">AD102</td>
			<td style="text-align: center;">AD102</td>
		</tr>
		<tr>
			<td>CUDA Cores</td>
			<td style="text-align: center;">6,144</td>
			<td style="text-align: center;">6,144</td>
			<td style="text-align: center;">7,680</td>
			<td style="text-align: center;">1,2800</td>
			<td style="text-align: center;">1,8176</td>
			<td style="text-align: center;">1,8176</td>
		</tr>
		<tr>
			<td>Memory</td>
			<td colspan="2" rowspan="1" style="text-align: center;">20 GB</td>
			<td style="text-align: center;">24 GB</td>
			<td style="text-align: center;">32 GB</td>
			<td colspan="2" rowspan="1" style="text-align: center;">48 GB</td>
		</tr>
		<tr>
			<td>Power</td>
			<td style="text-align: center;">70W</td>
			<td style="text-align: center;">130W</td>
			<td style="text-align: center;">210W</td>
			<td style="text-align: center;">250W</td>
			<td style="text-align: center;">300W</td>
			<td style="text-align: center;">?</td>
		</tr>
		<tr>
			<td>Cooling</td>
			<td style="text-align: center;">dual-slot, blower</td>
			<td style="text-align: center;"><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68); text-align: center; background-color: rgb(238, 238, 238);">single-slot, blower</span></td>
			<td colspan="3" rowspan="1" style="text-align: center;"><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68); text-align: center; background-color: rgb(238, 238, 238);">dual-slot, blower</span></td>
			<td style="text-align: center;">passive</td>
		</tr>
		<tr>
			<td>MSRP</td>
			<td style="text-align: center;">$1,250</td>
			<td style="text-align: center;"><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68); text-align: center; background-color: rgb(238, 238, 238);">$1,250</span></td>
			<td style="text-align: center;">$2,250</td>
			<td style="text-align: center;">$4,000</td>
			<td style="text-align: center;">$6,800</td>
			<td style="text-align: center;">?</td>
		</tr>
	</tbody>
</table>

<p><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68);">NVIDIA&#39;s latest&nbsp;ProViz&nbsp;graphics boards are set to be integrated into the upcoming&nbsp;workstation&nbsp;lineups&nbsp;of renowned companies, including Boxx, Dell, HP, Lambda, and Lenovo. Additionally,&nbsp;the graphics cards&nbsp;will be available for purchase&nbsp;from select graphics card makers like Leadtek, PNY, and Ryoyo, as well as major resellers like Arrow and Ingram. Meanwhile, there will be an Ada Lovelace professional graphics board that will unlikely be available separately.</span></p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/20002/nvidia-completes-proviz-ada-lovelace-lineup-with-three-new-graphics-cards"><img alt="" src="https://images.anandtech.com/doci/20002/L40S%20Image_575px.jpg" /></a></p>

<p>Catering to the needs of professionals using remote workstations, NVIDIA is launching the <strong>L40S Ada</strong> datacenter card. The board carries the AD102 graphics processor with 18,176 active CUDA cores, delivering a staggering 91.6 FP32 TFLOPS of performance. The product is initially set for NVIDIA&#39;s OVX servers, which can be used to enable cloud AI and virtual desktop infrastructure, though it is reasonable to expect other AI and VDI infrastructure makers to adopt the L40S Ada board as well. Interestingly, despite being a datacenter-oriented product with passive cooling, the L40S Ada includes display outputs, making it suitable for workstations given adequate airflow inside the chassis or an attached blower. NVIDIA does not publish the pricing of its OVX machines or the L40S Ada card.</p>

<p>&quot;<em>OVX systems with NVIDIA L40S GPUs accelerate AI, graphics, and video processing workloads and meet the demanding performance requirements of an ever-increasing set of complex and diverse applications,</em>&quot; said Bob Pette, vice president of professional visualization at NVIDIA.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/20002/nvidia-completes-proviz-ada-lovelace-lineup-with-three-new-graphics-cards</link>
 	<pubDate>Tue, 08 Aug 2023 16:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,20002:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>NVIDIA Unveils Updated GH200 &#39;Grace Hopper&#39; Superchip with HBM3e Memory, Shipping in Q2&#39;2024</title>
    <dc:creator>Gavin Bonshor</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/20001/nvidia-unveils-gh200-grace-hopper-gpu-with-hbm3e-memory"><img src="https://images.anandtech.com/doci/20001/Grace Hopper Image jpg NVIDIA_575px.jpg" alt="" /></a></p><p><p>At SIGGRAPH in Los Angeles, NVIDIA unveiled a new variant of their GH200&#39; superchip,&#39; which is set to be the world&#39;s first GPU chip to be equipped with HBM3e memory. Designed to crunch the world&#39;s most complex generative AI workloads, the NVIDIA GH200 platform is designed to push the envelope of accelerated computing. Pooling&nbsp;their strengths in both the GPU space and growing efforts in the CPU space, NVIDIA is looking to deliver a semi-integrated design to conquer the highly competitive and complicated high-performance computing (HPC) market.</p>

<p>Although we&#39;ve covered some of the finer details of&nbsp;<a href="https://www.anandtech.com/show/18877/nvidia-grace-hopper-has-entered-full-production-announcing-dgx-gh200-ai-supercomputer">NVIDIA&#39;s Grace Hopper-related announcements, including their disclosure&nbsp;that GH200 has entered full production</a>, NVIDIA&#39;s latest announcement is that a new GH200 variant with HBM3e memory is coming later, in Q2 of 2024, to be exact. This is&nbsp;in addition to the GH200 with HBM3, which is <a href="https://www.anandtech.com/show/18877/nvidia-grace-hopper-has-entered-full-production-announcing-dgx-gh200-ai-supercomputer">currently in production and due to land later this year</a>. In other words, NVIDIA will have two versions of the same product: the HBM3-based GH200 arriving first, and the HBM3e-based GH200 following next year.</p>

<table align="center" border="1" cellpadding="3" cellspacing="0" width="85%">
	<tbody>
		<tr class="tgrey">
			<td align="center" colspan="5">NVIDIA Grace Hopper Specifications</td>
		</tr>
		<tr class="tlblue">
			<td align="center" class="contentwhite" width="243">&nbsp;</td>
			<td align="center" class="contentwhite" width="389">Grace Hopper (GH200) w/HBM3</td>
			<td align="center" class="contentwhite" width="389">Grace Hopper (GH200) w/HBM3e</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>CPU Cores</strong></td>
			<td align="center">72</td>
			<td align="center">72</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>CPU Architecture</strong></td>
			<td align="center">Arm Neoverse V2</td>
			<td align="center">Arm Neoverse V2</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>CPU Memory Capacity</strong></td>
			<td align="center">&lt;=480GB LPDDR5X (ECC)</td>
			<td align="center">&lt;=480GB LPDDR5X (ECC)</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>CPU Memory Bandwidth</strong></td>
			<td align="center">&lt;=512GB/sec</td>
			<td align="center">&lt;=512GB/sec</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>GPU SMs</strong></td>
			<td align="center">132</td>
			<td align="center">132?</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>GPU Tensor Cores</strong></td>
			<td align="center">528</td>
			<td align="center">528?</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>GPU Architecture</strong></td>
			<td align="center">Hopper</td>
			<td align="center">Hopper</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>GPU Memory Capcity</strong></td>
			<td align="center">96GB (Physical)<br />
			&lt;=96GB (Available)</td>
			<td align="center">144GB (Physical)<br />
			141GB (Available)</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>GPU Memory Bandwidth</strong></td>
			<td align="center">&lt;=4TB/sec</td>
			<td align="center">5TB/sec</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>GPU-to-CPU Interface</strong></td>
			<td align="center">900GB/sec<br />
			NVLink 4</td>
			<td align="center">900GB/sec<br />
			NVLink 4</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>TDP</strong></td>
			<td align="center">450W - 1000W</td>
			<td align="center">450W - 1000W</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>Manufacturing Process</strong></td>
			<td align="center">TSMC 4N</td>
			<td align="center">TSMC 4N</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>Interface</strong></td>
			<td align="center">Superchip</td>
			<td align="center">Superchip</td>
		</tr>
		<tr>
			<td align="left" class="tlgrey"><strong>Available</strong></td>
			<td align="center">H2&#39;2023</td>
			<td align="center">Q2&#39;2024</td>
		</tr>
	</tbody>
</table>

<p>During their keynote at SIGGRAPH 2023, President and CEO of NVIDIA, Jensen Huang, said,&nbsp;&quot;<em>To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs.</em>&quot; Jensen also went on to say, &quot;<em>The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center.</em>&quot;&nbsp;</p>

<p>NVIDIA&#39;s GH200 GPU is set to be the world&#39;s first chip to ship with <a href="https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed">HBM3e memory</a>, an updated version of the high-bandwidth memory with even greater bandwidth and, critically for NVIDIA, higher-capacity 24GB stacks. This will allow NVIDIA to expand its local GPU memory from 96GB per GPU to 144GB (6 x 24GB stacks), a 50% increase that should be especially welcome in the AI market, where top models are massive in size and often memory-capacity bound. In a dual-configuration setup, it will be&nbsp;available with up to 282 GB of HBM3e memory, which NVIDIA states &quot;delivers up to 3.5x more memory capacity and 3x more bandwidth than the current generation offering.&quot;</p>

<p>Perhaps one of the most notable details NVIDIA shares is that the incoming GH200 GPU with HBM3e is &#39;fully&#39; compatible with the already announced NVIDIA MGX server specification, unveiled at Computex. This allows system manufacturers to have over 100 different variations of servers that can be deployed and is designed to offer a quick and cost-effective upgrade method.</p>

<p>NVIDIA claims that the GH200 GPU with HBM3e provides up to 50% faster memory performance than the current HBM3 memory and delivers up to 10 TB/s of bandwidth, with up to 5 TB/s per chip.</p>

<p align="center"><a href="https://www.anandtech.com/show/20001/nvidia-unveils-gh200-grace-hopper-gpu-with-hbm3e-memory"><img alt="" src="https://images.anandtech.com/doci/20001/diagram-topology-nvlink-switch-system-dgx-gh200_575px_575px.png" /></a></p>

<p>We&#39;ve already covered the&nbsp;<a href="https://www.anandtech.com/show/18877/nvidia-grace-hopper-has-entered-full-production-announcing-dgx-gh200-ai-supercomputer">announced DGX GH200 AI Supercomputer</a>&nbsp;built around NVIDIA&#39;s Grace Hopper platform. The DGX GH200 is a 24-rack cluster fully built on NVIDIA&#39;s architecture, with a single DGX GH200 combining 256 chips and offering 120 TB of CPU-attached memory. These&nbsp;are connected using NVIDIA&#39;s NVLink, with up to 96 local L1 switches providing fast communications between GH200 blades.&nbsp;NVLink allows the deployments to work together over a high-speed, coherent interconnect, giving the GH200 full access to CPU memory and allowing access to up to 1.2 TB of memory when&nbsp;in a dual configuration.</p>

<p>NVIDIA states that leading system manufacturers are expected to deliver GH200-based systems with HBM3e memory sometime in Q2 of 2024. It should also be noted that the GH200 with HBM3 memory is currently in full production and is set to launch by the end of this year. We expect to hear more about the GH200 with HBM3e memory from NVIDIA in the coming months.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/20001/nvidia-unveils-gh200-grace-hopper-gpu-with-hbm3e-memory</link>
 	<pubDate>Tue, 08 Aug 2023 14:20:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,20001:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>TSMC Establishes Joint Venture to Build 12nm/16nm Fab in Europe</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/20000/tsmc-establishes-joint-venture-to-build-12nm16nm-fab-in-europe"><img src="https://images.anandtech.com/doci/20000/tsmc_semiconductor_fab14_2_575px.jpg" alt="" /></a></p><p><p>TSMC on Tuesday announced plans to establish a European Semiconductor Manufacturing Company (ESMC)&nbsp;joint venture with its partners Bosch, Infineon, and NXP to build a fab near Dresden, Germany. The new 300-mm fab will produce chips on TSMC&#39;s 28/22 nm and 16/12 nm-class process technologies, primarily for automotive and industrial sectors. As the project is planned under the European Chips Act framework, TSMC is set to get subsidies to build it.</p>

<p>The proposed ESMC fab will be located near Dresden, Germany, and is slated to have a production capacity of 40,000 300-mm wafer starts per month. The fab is set to use TSMC&#39;s 28 nm family of production nodes, which includes several specialty manufacturing technologies and a 22 nm low-power fabrication process with planar transistors, as well as 16 nm and 12 nm production technologies featuring FinFETs. The fab, which TSMC will operate, will employ about 2,000 workers and engineers.</p>

<p>ESMC intends to start fab construction in the latter half of 2024 and to make its first products there by the end of 2027. As planned, the fab will mainly serve automakers based in Germany and Austria, ensuring a steady supply of chips to these companies in the latter half of the decade.&nbsp;</p>

<p>&quot;<em>This investment in Dresden demonstrates TSMC&#39;s commitment to serving our customer&#39;s strategic capacity and technology needs, and we are excited at this opportunity to deepen our long-standing partnership with Bosch, Infineon, and NXP,</em>&quot; said Dr. CC Wei, Chief Executive Officer of TSMC.</p>

<p>Meanwhile, since the fab will only make chips on mature 12/16 nm and 22/28 nm process technologies, automakers will still need to source the advanced processors required for self-driving and sophisticated infotainment systems from TSMC&#39;s fabs in Taiwan and the U.S. Therefore, while companies like Bosch, BMW, Infineon, Mercedes Benz Group, NXP, Stellantis, and Volkswagen Group will be able to get various microcontrollers and sensors from ESMC, the most advanced proprietary components that will define the capabilities of their software-defined vehicles will be built in Taiwan or the USA by TSMC, or in Germany&nbsp;by Intel Foundry Services.&nbsp;</p>

<p>Yet, mature process technologies are required not only for automotive and industrial sectors, but also for various emerging applications that fall under the Internet-of-Things umbrella. These will benefit significantly from TSMC&#39;s low-power 22 nm production node and N12e process technology.</p>

<p>&quot;<em>Infineon will use the new capacity to serve the growing demand particularly of its European customers, especially in automotive and IoT,</em>&quot; said Jochen Hanebeck, CEO of Infineon Technologies. &quot;<em>The advanced capabilities will provide a basis for developing innovative technologies, products and solutions to address the global challenges of decarbonization and digitalisation.</em>&quot;</p>

<p>Financially, the venture is structured such that TSMC will hold a 70% stake, with the three remaining partners each holding a 10% equity stake. The collective investments for this initiative are forecasted to surpass &euro;10 billion. ESMC is expected to get around &euro;5 billion in subsidies under the European Chips Act and from the German government.&nbsp;</p>

<p>Source:&nbsp;<a href="https://pr.tsmc.com/english/news/3049">TSMC</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/20000/tsmc-establishes-joint-venture-to-build-12nm16nm-fab-in-europe</link>
 	<pubDate>Tue, 08 Aug 2023 08:23:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,20000:news</guid>
 	<category><![CDATA[ Semiconductors]]></category>                               
</item>  
    
    
<item>
    <title>Colorful Reveals Mini-ITX GeForce RTX 4060 Ti</title>
    <dc:creator>Anton Shilov</dc:creator>    
<description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19999/colorful-reveals-miniitx-geforce-rtx-4060-ti"><img src="https://images.anandtech.com/doci/19999/colorful-mini-itx-geforce-rtx-4060ti-678_575px.png" alt="" /></a></p><p><p>Colorful has quietly introduced GeForce RTX 4060 Ti graphics cards in the Mini-ITX form factor that combine compact dimensions with the performance of Nvidia&#39;s latest Ada Lovelace GPUs. In fact, with boost performance of 22 FP32 TFLOPS, Colorful&#39;s iGame GeForce RTX 4060 Ti Mini OC is likely&nbsp;the highest-performing Mini-ITX graphics card launched to date.</p>

<p>Looking at the options, Colorful has two iGame GeForce RTX 4060 Ti Mini graphics cards in the Mini-ITX form factor: one with&nbsp;<a href="https://www.colorful.cn/product_show.aspx?mid=102&amp;id=2084">8 GB of GDDR6 memory</a>&nbsp;and another with&nbsp;<a href="https://www.colorful.cn/product_show.aspx?mid=102&amp;id=2085">16 GB of GDDR6 memory</a>. Both boards are based on Nvidia&#39;s AD106 GPU with 4352 CUDA cores running at 2310 MHz &ndash; 2580 MHz, the latter being slightly higher than Nvidia&#39;s recommended 2540 MHz boost clock. The boards have four display outputs (three DisplayPort, one HDMI), just like fully-fledged GeForce RTX 4060 Ti boards.</p>
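As a back-of-the-envelope check on the 22 FP32 TFLOPS figure quoted above, the throughput follows from the core count and factory boost clock. A minimal sketch, assuming the standard convention of two FP32 operations per CUDA core per clock (one fused multiply-add):

```python
# Rough FP32 throughput estimate for the iGame RTX 4060 Ti Mini OC.
# Assumes each CUDA core retires 2 FP32 operations per clock (one FMA).
cuda_cores = 4352
boost_clock_hz = 2580e6  # factory-overclocked boost clock, 2580 MHz

tflops = cuda_cores * boost_clock_hz * 2 / 1e12
print(f"{tflops:.1f} FP32 TFLOPS")  # 22.5 FP32 TFLOPS, matching the ~22 TFLOPS quoted
```

Real-world sustained throughput will be somewhat lower, since the GPU rarely holds its peak boost clock under full load.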

<p>Since the boards have a relatively simple 6+2-phase voltage regulating module (VRM), it is unlikely that the cards were designed with overclocking in mind. Furthermore, they are equipped with a dual-slot single-fan cooling system with four heat pipes &ndash; good enough to dissipate the 160W ~ 165W of heat generated by Colorful&#39;s iGame GeForce RTX 4060 Ti Mini, but unlikely to leave much headroom for significant overclocking.</p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/19999/colorful-reveals-miniitx-geforce-rtx-4060-ti"><img alt="" src="https://images.anandtech.com/doci/19999/colorful-mini-itx-geforce-rtx-4060ti-1_575px.png" /></a></p>

<p>Colorful&#39;s iGame GeForce RTX 4060 Ti Mini graphics cards are said to be 199.5 mm long, which is somewhat longer than a typical Mini-ITX motherboard, so owners will probably have to verify that these cards are compatible with their chassis.</p>

<p>Compact Mini-ITX PCs are rather popular among gamers these days, but the high power consumption of Nvidia&#39;s previous-generation graphics processors did not allow makers of add-in boards to offer Mini-ITX versions of their midrange products. With Ada Lovelace, Nvidia opted to reduce the power consumption of its GeForce RTX 4060-series, which enabled makers of AIBs to experiment with form factors and come up with Mini-ITX versions of the GeForce RTX 4060 Ti.</p>

<p>Unfortunately, Colorful&#39;s graphics cards are rarely found at North American and European retailers, so those interested in the company&#39;s iGame GeForce RTX 4060 Ti Mini graphics cards will probably have to buy directly from Colorful or from retailers like JD.com.</p>

<p>Given that Nvidia&#39;s GeForce RTX 4060 Ti GPU is readily available from the green company, other makers of graphics cards may follow Colorful with their Mini-ITX versions at some point.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19999/colorful-reveals-miniitx-geforce-rtx-4060-ti</link>
 	<pubDate>Tue, 08 Aug 2023 08:15:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19999:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>The Be Quiet! Pure Rock 2 FX CPU Cooler Review: For Quiet Contemplation</title>
    <dc:creator>E. Fylladitakis</dc:creator>    
    <description><![CDATA[ <p>Today we are taking a look at the Pure Rock 2 FX CPU cooler from the aptly-named and acoustics-focused Be Quiet! One of the company&#39;s latest CPU air coolers, the Pure Rock 2 FX is intended to compete in the packed mainstream cooler market as a competitively priced all-rounder. Always a careful balancing act for cooler vendors, the mainstream market lives up to its name by being where the bulk of sales are, but it&#39;s also the most competitive segment of the market, with numerous competing vendors all chasing the same market with their own idea of what a well-balanced cooler should be. So a successful cooler needs to stand out from the crowd in some fashion&nbsp;&ndash; something that&#39;s no easy task when all of them are beholden to the same laws of physics.</p>

<p>So does Be Quiet&#39;s latest cooler have that exceptional factor to make it memorable? We will see where the Pure Rock 2 FX stands in this review.</p>
]]></description>
    <link>https://www.anandtech.com/show/18985/the-be-quiet-pure-rock-2-fx-cpu-cooler-review</link>
 	<pubDate>Tue, 08 Aug 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18985:news</guid>
 	<category><![CDATA[ Cases/Cooling/PSUs]]></category>                               
</item>  
    
    
<item>
    <title>Kioxia&#39;s CD8P SSD Unveiled: Up to 30.72 TB, PCIe 5.0 x4 Interface</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19998/kioxias-cd8p-ssd-unveiled-up-to-3072-tb-pci-50-x4-interface"><img src="https://images.anandtech.com/doci/19998/CD8P U.2 E3 SSD_575px.jpg" alt="" /></a></p><p><p>Hyperscale data centers have very specific requirements for different tiers of storage devices: some tiers need maximum performance, and others demand maximum storage density. Kioxia&#39;s new CD8P drives for data centers seem to combine both high storage capacity of up to 30.72 TB and high sequential read performance of up to 12,000 MB/s and up to 2 million random read IOPS, which somewhat blurs the lines between storage tiers and provides new opportunities.</p>

<p>Kioxia&#39;s CD8P single-port drives are based on the company&#39;s proprietary controller, firmware, and time-proven 112-layer BICS 5 3D TLC NAND memory. The NVMe 2.0-compliant controller and firmware fully support enterprise-grade features like the company&#39;s exclusive flash die failure protection, power loss protection, end-to-end data protection, sanitize instant erase (SIE), and self-encrypting drive (SED). Since the new SSDs are designed for hyperscale data centers, they come in E3.S and U.2 form factors.</p>

<p>Regarding performance, Kioxia&#39;s new CD8P SSDs offer up to 12,000/5,500 MB/s sequential read/write speeds and up to 2,000,000/400,000 random read/write 4K IOPS. Meanwhile, there will be two versions of the CD8P: the CD8P-V for mixed-use applications, rated for up to three drive writes per day (DWPD) with capacities up to 12.8 TB, and the CD8P-R for read-intensive workloads, rated for up to one drive write per day with capacities up to 30.72 TB.</p>
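For readers unfamiliar with the drive-writes-per-day metric, total rated endurance is simply capacity multiplied by DWPD over the warranty period. A quick sketch, assuming a five-year warranty term (typical for data center drives, but not stated by Kioxia):

```python
# Endurance sketch: DWPD x capacity x warranty days = total terabytes written (TBW).
def endurance_tb(capacity_tb: float, dwpd: float, warranty_years: float = 5) -> float:
    """Total terabytes writable over the warranty period at the rated DWPD."""
    return capacity_tb * dwpd * warranty_years * 365

# CD8P-V (mixed use): 12.8 TB at 3 DWPD
print(round(endurance_tb(12.8, 3)))   # 70080 TB written over 5 years
# CD8P-R (read-intensive): 30.72 TB at 1 DWPD
print(round(endurance_tb(30.72, 1)))  # 56064 TB written over 5 years
```

Note how the lower-DWPD read-intensive model still ends up with a comparable total endurance budget, simply because it holds more than twice the capacity.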

<p style="text-align: center;"><a href="https://www.anandtech.com/show/19998/kioxias-cd8p-ssd-unveiled-up-to-3072-tb-pci-50-x4-interface"><img alt="" src="https://images.anandtech.com/doci/19998/kioxia-cd-8p-specifications_575px.png" /></a></p>

<p>An avid reader will undoubtedly notice that the CD8P family of SSDs is not Kioxia&#39;s first lineup of high-capacity drives with a PCIe Gen5 x4 interface, as the company has been shipping its&nbsp;<a href="https://apac.kioxia.com/content/dam/kioxia/shared/business/ssd/enterprise-ssd/asset/productbrief/eSSD-CM7-V-product-brief.pdf">CM7-series SSDs</a>&nbsp;for about a year now. This is, of course, correct, but Kioxia&#39;s CM7 is designed for enterprise environments, which is why they support dual-port for extended availability, FIPS SED capability, and maximized performance (up to 14 GB/s and 2.7 million 4K IOPS). Hyperscalers barely need such functionality, and the lack of it will probably make CD8Ps slightly cheaper than CM7 drives.</p>

<p>Kioxia positions its CD-series SSDs for a&nbsp;broad range of scale-out and cloud&nbsp;workloads, and indeed, these applications will benefit from their extended performance and capacity. In addition, these new drives could be used for various AI-oriented workloads (particularly on the edge) that can take advantage of high storage density, high performance, and prices that promise to be below those of the enterprise-oriented CM7.</p>

<p>Kioxia has not disclosed when it plans to start shipping its CD8P lineup of SSDs. Still, since hyperscalers need some time to validate and qualify new storage devices, these new drives will take some time to reach volume deployments. Meanwhile, companies that will use CD8P drives for things like edge AI deployments may adopt the new SSDs somewhat faster if they find them suitable for their workloads.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19998/kioxias-cd8p-ssd-unveiled-up-to-3072-tb-pci-50-x4-interface</link>
 	<pubDate>Mon, 07 Aug 2023 09:28:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19998:news</guid>
 	<category><![CDATA[ SSDs]]></category>                               
</item>  
    
    
<item>
    <title>Gigabyte Launches Low-Profile GeForce RTX 4060 Graphics Card</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19996/gigabyte-launches-lowprofile-geforce-rtx-4060-graphics-card"><img src="https://images.anandtech.com/doci/19996/03_575px.png" alt="" /></a></p><p><p>The relatively low power consumption of Nvidia&#39;s GeForce RTX 4060 graphics processor allows graphics card makers to experiment with the form factors of their products. We have already seen&nbsp;<a href="https://www.anandtech.com/show/18958/lenovo-develops-geforce-rtx-4060-in-miniitx-formfactor">Mini-ITX GeForce RTX 4060 graphics cards</a>,&nbsp;and late last week, Gigabyte introduced a low-profile GeForce RTX 4060 that can fit into miniature desktops and provide decent performance in games.</p>

<p>The&nbsp;<a href="https://www.gigabyte.com/Graphics-Card/GV-N4060OC-8GL#kf">Gigabyte GeForce RTX 4060 OC Low Profile 8G</a>&nbsp;is based on NVIDIA&#39;s AD107 GPU with 3072 CUDA cores that are paired with 8 GB of 17 GT/s GDDR6 memory using a 128-bit interface. To justify the OC (overclocked) moniker in the product name, Gigabyte clocked the graphics processor at 2475 MHz, which is 15 MHz higher than Nvidia&#39;s recommendation for the RTX 4060 model.</p>

<p>The graphics board requires an eight-pin auxiliary PCIe power connector and is&nbsp;equipped with a dual-slot triple-fan cooling system featuring dozens of thin aluminum fins. We can only guess whether the cooler is quiet and whether it is good enough to enable further overclocking, but at least Gigabyte guarantees a GPU boost clock of up to 2475 MHz.</p>

<p style="text-align: center;"><a href="https://www.anandtech.com/show/19996/gigabyte-launches-lowprofile-geforce-rtx-4060-graphics-card"><img alt="" src="https://images.anandtech.com/doci/19996/08_575px.png" /></a></p>

<p>Touching more on the&nbsp;cooler, it&nbsp;is longer than the printed circuit board itself, making the graphics card 182 mm long, so owners of compact systems should measure their chassis to ensure compatibility. Most low-profile PC cases are pretty long, but there are also tiny chassis that may be too small for this card.</p>

<p>Despite being low profile, Gigabyte&#39;s&nbsp;GV-N4060OC-8GL&nbsp;has four display outputs: two DisplayPort 1.4a (up to 4Kp120 or up to 5Kp60) and two HDMI 2.1a (up to 5Kp60 or up to 8Kp60 with DSC), so it can be used for rather serious PCs with up to four monitors.</p>

<p>Gigabyte has not disclosed the recommended pricing of its&nbsp;GeForce RTX 4060 OC Low Profile 8G graphics card. Considering that prices of most GeForce RTX 4060 products are hovering around the recommended $299 price point, it is unlikely that Gigabyte will attempt to charge a huge premium for the unique form factor of its low-profile GeForce RTX 4060. Still, the&nbsp;compact dimensions are undoubtedly a significant differentiator of this product, and Gigabyte&nbsp;will likely try to earn something extra from it.</p>

<p>Source:&nbsp;<a href="https://www.gigabyte.com/Graphics-Card/GV-N4060OC-8GL">Gigabyte</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19996/gigabyte-launches-lowprofile-geforce-rtx-4060-graphics-card</link>
 	<pubDate>Mon, 07 Aug 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19996:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>Cloud Provider Gets $2.3 Billion Debt Using NVIDIA&#39;s H100 as Collateral</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19995/cloud-provider-gets-23-billion-debt-using-nvidias-h100-as-collateral"><img src="https://images.anandtech.com/doci/19995/nvidia-h100-678_575px.jpg" alt="" /></a></p><p><p>CoreWeave, an NVIDIA-backed cloud service provider specializing in GPU-accelerated services, has secured a debt facility worth $2.3 billion using NVIDIA&#39;s H100-based hardware as collateral. The company intends to use the funds to procure more compute GPUs and systems from NVIDIA, construct new data centers, and hire additional personnel to meet the growing needs for AI and HPC workloads.</p>

<p>CoreWeave has reaped enormous benefits from the rise in generative AI due to its large-scale cloud infrastructure as well as an exclusive relationship with NVIDIA, and its ability to procure the company&#39;s H100 compute GPUs as well as HGX H100 supercomputing platforms amid shortages of AI and HPC hardware. Since many AI and HPC applications used nowadays were developed for NVIDIA&#39;s CUDA platform and API, they require NVIDIA&#39;s GPUs. Therefore, access to H100 gives CoreWeave a competitive edge over traditional CSPs like AWS, Google, and Microsoft.</p>

<p>In addition to offering its customers access to advanced hardware, CoreWeave collaborates with AI startups and major CSPs &mdash; which are essentially its competitors &mdash; to build clusters that power AI workloads. These rivals &mdash; AWS and Google &mdash; have their own processors for AI workloads, and they continue to develop new ones. Still, given the dominance of CUDA, they have to offer NVIDIA-powered instances to their clients and are currently grappling with NVIDIA GPU supply limitations.</p>

<p>CoreWeave&#39;s competitive advantage, facilitated by access to NVIDIA&#39;s latest hardware, is a key factor in the company&#39;s ability to secure such substantial credit lines from companies like Magnetar Capital, Blackstone, Coatue, DigitalBridge, BlackRock, PIMCO, and Carlyle. Meanwhile, CoreWeave has already gotten $421 million from Magnetar at a valuation exceeding $2 billion.</p>

<p>Notably, this is not the first example of an NVIDIA-supported startup reaping substantial benefits from its association with the tech giant. Last month, Inflection AI built a supercomputer worth hundreds of millions of dollars powered by 22,000 NVIDIA H100 compute GPUs.</p>

<p>Meanwhile, this is the first time NVIDIA&#39;s H100-based hardware has been used as collateral, emphasizing these processors&#39; importance in the capital-intensive AI and HPC cloud business. Moreover, this massive loan indicates the growing market for private asset-based financing secured by actual physical assets.</p>

<p>&quot;<em>We negotiated with them to find a schedule for how much collateral to go into it, what the depreciation schedule was going to be versus the payoff schedule,</em>&quot; said Michael Intrator, CoreWeave&#39;s CEO. &quot;<em>For us to go out and to borrow money against the asset base is a very cost-effective way to access the debt markets.</em>&quot;</p>

<p>The company recently announced a $1.6 billion data center in Texas and plans to expand its presence to 14 locations within the U.S. by the end of 2023.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19995/cloud-provider-gets-23-billion-debt-using-nvidias-h100-as-collateral</link>
 	<pubDate>Fri, 04 Aug 2023 08:30:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19995:news</guid>
 	<category><![CDATA[ Supercomputers]]></category>                               
</item>  
    
    
<item>
    <title>AMD to Introduce New Enthusiast-Class Graphics Cards This Quarter</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19994/amd-to-introduce-new-enthusiast-class-graphics-cards-this-quarter"><img src="https://images.anandtech.com/doci/19994/amd-radeon-hero_575px.jpg" alt="" /></a></p><p><p>As part of their quarterly earnings call this week, AMD revealed that the company is getting ready to launch new enthusiast-class Radeon RX 7000-series graphics cards in the coming months. To date, the company has launched cards for the top and bottom portions of their product stack, leaving a noticeable gap for higher performing cards that the company needs to fill to fully flesh out the current card lineup.</p>

<p>&quot;We are on track to further expand our RDNA 3 GPU offerings with the launch of new, enthusiast-class Radeon 7000 series cards in the third quarter,&quot; said Lisa Su, chief executive of AMD, at the company&#39;s earnings call with analysts and investors.</p>

<p>So far, AMD has introduced four RDNA 3-based Radeon RX 7000-series desktop graphics cards aimed at diversified market segments: three Radeon RX 7900-series offerings for enthusiasts who can spend between $650 and $1000 on a graphics card, and the Radeon RX 7600 for mainstream gamers at roughly $270. This has left an empty space for higher-performing cards for cost-conscious enthusiasts that, for the moment, is being filled by NVIDIA&#39;s GeForce RTX 4000-series as well as previous-generation Radeon RX 6000-series boards. In particular, AMD currently lacks a current-generation product to compete with NVIDIA&#39;s modestly well-received GeForce RTX 4070.</p>

<p>AMD is believed to have only one GPU left in its Navi 3x range, Navi 32, which would slot in between the current Navi 31 and Navi 33 parts. Navi 32, in turn, is expected to power both the Radeon RX 7700 and RX 7800 product families. That said, one thing that remains to be seen is whether the company will decide to go after volume first this quarter and start things off with the RX 7700 series, or go after higher margins and reveal its Radeon RX 7800 series first.</p>

<p>AMD&#39;s gaming segment revenue was $1.6 billion in Q2 2023, down 4% year-over-year and 10% sequentially primarily due to lower sales of gaming graphics cards. Unit sales of graphics processors in Q2 are typically lower than their shipments in Q1, so a 10% quarter-over-quarter decrease is not surprising. Meanwhile, a 4% drop YoY indicates that appeal of AMD&#39;s discrete GPUs was lower in Q2 2023 compared to Q2 2022, an indicator that the company needs new products.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19994/amd-to-introduce-new-enthusiast-class-graphics-cards-this-quarter</link>
 	<pubDate>Thu, 03 Aug 2023 18:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19994:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>AMD Announces Radeon Pro W7600 &amp; W7500: Pro RDNA 3 For The Mid-Range</title>
    <dc:creator>Ryan Smith</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19993/amd-announces-radeon-pro-w7600-w7500"><img src="https://images.anandtech.com/doci/19993/Radeon_Pro_W7600_W7500b_575px.jpg" alt="" /></a></p><p><p>As AMD continues to launch their full graphics product stacks based on their latest RDNA 3 architecture GPUs, the company is now preparing their next wave of professional cards under the Radeon Pro lineup. Following the launch of their high-end Radeon Pro W7900 and W7800 graphics cards back in the second quarter of this year, today the company is announcing the low-to-mid-range members of the Radeon Pro W7000 series: the Radeon Pro W7500 and Radeon Pro W7600. Both based on AMD&rsquo;s monolithic Navi 33 silicon, the latest Radeon Pro parts will hit the shelves a bit later this quarter.</p>

<p>The two cards, as a whole, will make up what AMD defines as the mid-range segment of their professional video card market. And like their flagship counterparts, AMD is counting on a combination of RDNA 3&rsquo;s advanced features, including AV1 encoding support, improved compute and ray tracing throughput, and DisplayPort 2.1 outputs to help drive sales of the new video cards. That, and as is tradition, significantly undercutting NVIDIA&rsquo;s competing professional cards.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19993/amd-announces-radeon-pro-w7600-w7500</link>
 	<pubDate>Thu, 03 Aug 2023 09:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19993:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>Intel Plans Massive Expansion in Oregon: D1X and D1A to Be Upgraded</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19992/intel-plans-massive-expansion-in-oregon-d1x-and-d1a-to-be-upgraded"><img src="https://images.anandtech.com/doci/19992/DSC07224_16x9sb_575px.jpg" alt="" /></a></p><p><p>Intel has filed a permit application that outlines&nbsp;significant expansion plans for its&nbsp;campus near&nbsp;Hillsboro, Oregon. According to filings submitted to state&nbsp;regulators, the tech giant&#39;s ambitious proposals include a fourth expansion phase for the&nbsp;<a href="https://www.anandtech.com/show/17258/a-visit-to-intels-d1x-fab-next-generation-euv-process-nodes">D1X&nbsp;research facility</a>&nbsp;and&nbsp;an upgrade of the older D1A&nbsp;fab&nbsp;situated on the same 450-acre property.</p>

<p>The planned enhancements will take place at the company&#39;s Gordon Moore Park&nbsp;(previously known as Ronler Acres)&nbsp;campus, according to a 1,100-page air-quality permit application submitted by Intel to the Oregon Department of Environmental Quality&nbsp;back in July.&nbsp;While the filings highlight Intel&#39;s intention to&nbsp;upgrade its&nbsp;existing facilities and&nbsp;build some additional capacity, they do not contain&nbsp;a specific financial outline for these projects.&nbsp;Furthermore, they indicate Intel&#39;s potential plans, not commitments. Meanwhile,&nbsp;if the scale is comparable to previous&nbsp;Oregon&nbsp;expansions, the total investment could reach billions.</p>

<p>The last upgrade, D1X&#39;s third phase, cost $3 billion and added over one million square feet to the campus. The latest expansion could potentially exceed this, given that Intel plans not only to add a fourth phase to D1X but also to overhaul the 30-year-old D1A factory, add manufacturing support buildings, and implement other&nbsp;upgrades. Intel anticipates that the installation of new equipment could begin as early as 2025, with the completion of additional work slated for 2028.</p>

<p>So far, Intel has not formally announced any plans for its Oregon campus, but in May, its chief executive Pat Gelsinger implied that he wants the site to grow substantially.</p>

<p>&quot;<em>I would be reticent to constrain my dreams for how big it might be in the future,</em>&quot; Gelsinger said.</p>

<p>The&nbsp;Gordon Moore Park&nbsp;site currently houses five fabs: D1X, Intel&#39;s flagship manufacturing process development facility; D1A, Intel&#39;s development fab built in the 1980s; 10nm-capable D1B and D1C fabs; and 7nm-capable D1D fab. Intel is&nbsp;the largest corporate employer&nbsp;in Oregon, with 22,000 workers.</p>

<p>This proposed expansion represents a significant milestone, not just for Intel but for Oregon as well. While the investment may not match the tens of billions earmarked for new&nbsp;campuses&nbsp;in Arizona and Ohio, it would nonetheless constitute one of Oregon&#39;s largest capital projects to date. This would likely result in the addition of hundreds or possibly thousands of new jobs to Intel&#39;s workforce in the state, reaffirming Intel&#39;s commitment to ongoing investment in its Oregon research endeavors.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19992/intel-plans-massive-expansion-in-oregon-d1x-and-d1a-to-be-upgraded</link>
 	<pubDate>Wed, 02 Aug 2023 12:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19992:news</guid>
 	<category><![CDATA[ Semiconductors]]></category>                               
</item>  
    
    
<item>
    <title>PCI-SIG Forms Optical Workgroup - Lighting The Way To PCIe&#39;s Future</title>
    <dc:creator>Ryan Smith</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19990/pcisig-forms-optical-workgroup-lighting-the-way-to-pcies-future"><img src="https://images.anandtech.com/doci/19990/pcisiglogo_575px.png" alt="" /></a></p><p><p>The PCI-Express interconnect standard may be going through some major changes in the coming years, based on a new announcement from the group responsible for the standard. The PCI-SIG is announcing this morning the formation of a PCIe Optical Workgroup, whose remit will be to work on enabling PCIe over optical interfaces. And while the group is still in its earliest of stages, the ramifications for the traditionally copper-bound standard could prove significant, as optical technology would bypass some increasingly stubborn limitations of copper signaling that traditional PCIe is soon approaching.</p>

<p>First released in the year 2000, PCI-Express was initially developed around the use of high-density edge connectors, which are still in use to this day. The PCIe Card Electromechanical (CEM) specification defines the PCIe add-in card form factors in use for the last two decades, ranging from x1 to x16 connections.</p>

<p>But while the PCIe CEM specification has seen very little change over the years &ndash; in large part to ensure backward and forward compatibility &ndash; the signaling standard itself has undergone numerous speed upgrades. Through the latest PCIe 6.0 standard, the speed of a single PCIe lane has increased 32-fold since 2000 &ndash; and the PCI-SIG will double that once more with <a href="https://www.anandtech.com/show/18909/pci-express-70-spec-hits-draft-512gbps-connectivity-on-track-for-2025-release">PCIe 7.0 in 2025</a>. As a result of increasing the amount of data transferred per pin by such a significant amount, the literal frequency band width used by the standard has increased by a similar degree, with PCIe 7.0 set to operate at nearly 32GHz.</p>
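The 32-fold figure follows from per-lane bandwidth roughly doubling with each generation. A quick sketch using nominal per-lane, per-direction rates (real-world figures are slightly lower once encoding and FLIT overhead are accounted for):

```python
# Nominal per-lane, per-direction PCIe throughput, doubling each generation.
# Gen 1 starts at 0.25 GB/s (2.5 GT/s with 8b/10b encoding).
gens = ["1.0", "2.0", "3.0", "4.0", "5.0", "6.0"]
gb_per_s = {gen: 0.25 * 2**i for i, gen in enumerate(gens)}

print(gb_per_s["6.0"])                    # 8.0 GB/s per lane
print(gb_per_s["6.0"] / gb_per_s["1.0"])  # 32.0 -> the "32-fold" increase in the text
```

By the same doubling, PCIe 7.0 would bring a single lane to a nominal 16 GB/s per direction, or 256 GB/s for an x16 slot.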

<p>In developing newer PCIe standards, the PCI-SIG has worked to minimize these issues, such as by employing alternative means of signaling that don&rsquo;t require higher frequencies (e.g. <a href="https://www.anandtech.com/show/17203/pcie-60-specification-finalized-x16-slots-to-reach-128gbps">PCIe 6 with PAM-4</a>), and the use of mid-route retimers along with materials improvements have helped to keep up with the higher frequencies the standard does use. But the frequency limitations of copper traces within a PCB have never been eliminated entirely, which is why in more recent years the PCI-SIG has developed an official standard for PCIe over copper cabling.</p>

<p align="center"><a href="https://www.anandtech.com/show/19990/pcisig-forms-optical-workgroup-lighting-the-way-to-pcies-future"><img alt="" src="https://images.anandtech.com/doci/19990/PCIe-Cabling_575px.png" /></a></p>

<p>Still in the works for late this year, the PCIe 5.0/6.0 cabling standard offers the option of using copper cables to carry PCIe both within a system (internal) and between systems (external). In particular, the relatively thick copper cables have less signal loss than PCB traces, overcoming the immediate drawback of high frequency comms, which is the low channel reach (i.e. short signal propagation distance). And while the cabling standard is designed to be an alternative to the PCIe CEM connector rather than a wholesale replacement, its existence underscores the problem at hand with high frequency signaling over copper, a problem that will only get even more challenging once PCIe 7.0 is made available.</p>

<p align="center"><a href="https://www.anandtech.com/show/19990/pcisig-forms-optical-workgroup-lighting-the-way-to-pcies-future"><img alt="" src="https://images.anandtech.com/doci/19990/Samtec-pcie-signal-loss_575px.png" /></a><br />
<em>PCIe Insertion Loss Budgets Over The Years (<a href="https://blog.samtec.com/wp-content/uploads/2021/04/04_15_2021_successful_PCIe_interconnect_guidelines.pdf">Samtec</a>)</em></p>

<p>And that brings us to the formation of the PCI-SIG Optical Workgroup. Like the Ethernet community, which tends to be at the forefront of high frequency signaling innovation, PCI-SIG is looking towards optical, light-based communication as part of the future for PCIe. As we&rsquo;ve already seen with optical networking technology, optical comms offers the potential for longer ranges and higher data rates owing to the vastly higher frequency of light, as well as a reduction in power consumed versus increasingly power-hungry copper transmission. For these reasons, the PCI-SIG is forming an Optical Workgroup to help develop the standards needed to supply PCIe over optical connections.</p>

<p>Strictly speaking, the creation of a new optical standard isn&rsquo;t necessary to drive PCIe over optical connections. Several vendors already offer proprietary solutions, with a focus on external connectivity. But the creation of an optical standard aims to do just that &ndash; standardize how PCIe over fiber optics would work and behave. As part of the working group announcement, the traditionally consensus-based PCI-SIG is making it clear that they aren&rsquo;t developing a standard for any single optical technology, but rather they are aiming to make it technology-agnostic, allowing the spec to support a wide range of optical technologies.</p>

<p>But the relatively broad announcement from the PCI-SIG doesn&rsquo;t just stop with optical cabling as a replacement for current copper cabling; the group is also looking at &ldquo;potentially developing technology-specific form factors.&rdquo; While the classic CEM connector is unlikely to go away entirely any time soon &ndash; the backwards and forwards compatibility is that important &ndash; the CEM connector is the weakest/most difficult way to deliver PCIe today. So if the PCI-SIG is thinking about new form factors, then it&rsquo;s likely the Optical Workgroup will at least be looking at some kind of optical-based successor to the CEM. And if that were to come to pass, this would easily be the biggest change in the PCIe specification in its 23+ year history.</p>

<p>But, to be sure, if any such change were to happen, it would be years down the line. The new Optical Workgroup has yet to form, let alone set its goals and requirements. With a broad remit to make PCIe more optical-friendly, any impact from the group is several years away &ndash; presumably no sooner than making a cabling standard for PCIe 7.0, if not a more direct impact on a PCIe 8.0 specification. But it shows where PCI-SIG leadership sees the future of the PCIe standard going, assuming they can get a consensus from their members. And, while not explicitly stated in the PCI-SIG&rsquo;s press release, any serious use of optical PCIe in this fashion would seem to be predicated on cheap optical transceivers, i.e. silicon photonics.</p>

<p>In any case, it will be interesting to see what eventually comes out of the PCI-SIG&rsquo;s new Optical Workgroup. As PCIe begins to approach the practical limits of copper, the future of the industry&rsquo;s standard peripheral interconnect may very well be to go towards the light.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19990/pcisig-forms-optical-workgroup-lighting-the-way-to-pcies-future</link>
 	<pubDate>Wed, 02 Aug 2023 11:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19990:news</guid>
 	<category><![CDATA[ CPUs]]></category>                               
</item>  
    
    
<item>
    <title>Intel Quietly Launches New Arc GPUs for Laptops</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19991/intel-quietly-launches-new-arc-gpus-for-laptops"><img src="https://images.anandtech.com/doci/19991/intel-arc-laptops-648_575px.jpg" alt="" /></a></p><p><p>Intel has quietly released two new Arc Alchemist-series graphics processors for laptops. The new&nbsp;<a href="https://ark.intel.com/content/www/us/en/ark/products/232776/intel-arc-a530m-graphics.html">Arc A530M</a>&nbsp;and&nbsp;<a href="https://ark.intel.com/content/www/us/en/ark/products/232777/intel-arc-a570m-graphics.html">Arc A570M</a>&nbsp;target mid-range notebooks designed for light gaming. Perhaps the most intriguing thing about the new mobile GPUs is that they use previously unreleased ACM-G12 silicon.</p>

<p>Intel&#39;s Arc A530M GPU comes with 12 Xe cores and 1536 stream processors operating at 1300 MHz, which clearly distinguishes it from the company&#39;s entry-level Arc A370M GPU that only has eight Xe cores and 1024 stream processors. Meanwhile, the Arc A570M features 16 Xe cores and 2048 stream processors running at 1300 MHz, which makes it clearly faster than the previously released Arc A550M with the same number of SPs at 900 MHz, but does not allow it to challenge the Arc A730M that has 3072 SPs working at 1100 MHz.</p>

<p>One interesting wrinkle about the Arc A530M and Arc A570M is that they seem to be based on Intel&#39;s yet-to-be-confirmed ACM-G12 GPU, according to&nbsp;<a href="https://twitter.com/SquashBionic/status/1686206180153282560">Bionic_Squash</a>. This graphics processor reportedly has 16 Xe clusters, which means that it sits between the ACM-G11 with eight Xe clusters and the ACM-G10 with 32 Xe clusters in total. Intel has yet to formally confirm that it uses its unannounced ACM-G12 silicon for the A530M and A570M parts.</p>

<table align="center" border="0" cellpadding="0" cellspacing="1" width="650">
	<tbody>
		<tr class="tgrey">
			<td align="center" colspan="7">Intel Arc Comparison</td>
		</tr>
		<tr class="tlblue">
			<td>&nbsp;</td>
			<td>Arc&nbsp;A370M</td>
			<td>Arc A530M</td>
			<td>Arc A550M</td>
			<td>Arc A570M</td>
			<td>Arc<br />
			A730M</td>
		</tr>
		<tr>
			<td>Stream Processors</td>
			<td style="text-align: center;">1024</td>
			<td style="text-align: center;">1536</td>
			<td style="text-align: center;">2048</td>
			<td style="text-align: center;">2048</td>
			<td style="text-align: center;">3072</td>
		</tr>
		<tr>
			<td>Xe-cores</td>
			<td style="text-align: center;">8</td>
			<td style="text-align: center;">12</td>
			<td style="text-align: center;">16</td>
			<td style="text-align: center;">16</td>
			<td style="text-align: center;">24</td>
		</tr>
		<tr>
			<td>Render Slices</td>
			<td style="text-align: center;">2</td>
			<td style="text-align: center;">3</td>
			<td style="text-align: center;">4</td>
			<td style="text-align: center;">4</td>
			<td style="text-align: center;">6</td>
		</tr>
		<tr>
			<td>Ray Tracing Units</td>
			<td style="text-align: center;">8</td>
			<td style="text-align: center;">12</td>
			<td style="text-align: center;">16</td>
			<td style="text-align: center;">16</td>
			<td style="text-align: center;">24</td>
		</tr>
		<tr>
			<td>Xe Matrix Extensions (XMX) Engines</td>
			<td style="text-align: center;">128</td>
			<td style="text-align: center;">192</td>
			<td style="text-align: center;">256</td>
			<td style="text-align: center;">256</td>
			<td style="text-align: center;">384</td>
		</tr>
		<tr>
			<td>Xe Vector Engines</td>
			<td style="text-align: center;">128</td>
			<td style="text-align: center;">192</td>
			<td style="text-align: center;">256</td>
			<td style="text-align: center;">256</td>
			<td style="text-align: center;">384</td>
		</tr>
		<tr>
			<td>Graphics Clock</td>
			<td style="text-align: center;">1550 MHz</td>
			<td style="text-align: center;">1300 MHz</td>
			<td style="text-align: center;">900 MHz</td>
			<td style="text-align: center;">1300 MHz</td>
			<td style="text-align: center;">1100 MHz</td>
		</tr>
		<tr>
			<td>TGP</td>
			<td style="text-align: center;">35-50W</td>
			<td style="text-align: center;">65W-95W</td>
			<td style="text-align: center;">60W</td>
			<td style="text-align: center;">75W-95W</td>
			<td style="text-align: center;">80W-120W</td>
		</tr>
		<tr>
			<td>PCI Express&nbsp;</td>
			<td colspan="4" rowspan="1" style="text-align: center;">PCIe 4.0 x8</td>
			<td style="text-align: center;">PCIe 4.0 x16</td>
		</tr>
		<tr>
			<td>Memory Size</td>
			<td style="text-align: center;">4 GB</td>
			<td style="text-align: center;">4 GB<br />
			8 GB</td>
			<td style="text-align: center;">8 GB</td>
			<td style="text-align: center;">8 GB</td>
			<td style="text-align: center;">12 GB</td>
		</tr>
		<tr>
			<td>Memory Type</td>
			<td colspan="5" rowspan="1" style="text-align: center;">GDDR6</td>
		</tr>
		<tr>
			<td>Graphics Memory Interface</td>
			<td style="text-align: center;">64 bit</td>
			<td style="text-align: center;">?</td>
			<td style="text-align: center;">128 bit</td>
			<td style="text-align: center;">?</td>
			<td style="text-align: center;">192 bit</td>
		</tr>
		<tr>
			<td>Graphics Memory Bandwidth</td>
			<td style="text-align: center;">112 GB/s</td>
			<td style="text-align: center;">?</td>
			<td style="text-align: center;">224 GB/s</td>
			<td style="text-align: center;">?</td>
			<td style="text-align: center;">336 GB/s</td>
		</tr>
		<tr>
			<td>Graphics Memory Speed</td>
			<td style="text-align: center;"><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68); text-align: center; background-color: rgb(238, 238, 238);">14 Gbps</span></td>
			<td style="text-align: center;">?</td>
			<td style="text-align: center;"><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68); text-align: center; background-color: rgb(238, 238, 238);">14 Gbps</span></td>
			<td style="text-align: center;">?</td>
			<td style="text-align: center;"><span style="caret-color: rgb(68, 68, 68); color: rgb(68, 68, 68); text-align: center; background-color: rgb(238, 238, 238);">14 Gbps</span></td>
		</tr>
	</tbody>
</table>
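Several of the table's entries are tied together by simple arithmetic: each Alchemist Xe core carries 128 stream processors, and peak memory bandwidth is the bus width times the per-pin data rate. A quick sketch (the helper names are ours, not Intel's) reproduces the known entries:

```python
XE_CORE_WIDTH = 128  # stream processors (vector lanes) per Alchemist Xe core


def stream_processors(xe_cores: int) -> int:
    """Total stream processors for a given Xe core count."""
    return xe_cores * XE_CORE_WIDTH


def gddr6_bandwidth_gbs(bus_width_bits: int, speed_gbps: float) -> float:
    """Peak GDDR6 bandwidth in GB/s: (bus width / 8 bits per byte) * per-pin Gbps."""
    return bus_width_bits / 8 * speed_gbps


# The known table entries line up with these relations:
assert stream_processors(12) == 1536          # Arc A530M
assert stream_processors(16) == 2048          # Arc A550M / A570M
assert gddr6_bandwidth_gbs(128, 14) == 224.0  # Arc A550M
assert gddr6_bandwidth_gbs(192, 14) == 336.0  # Arc A730M
```

Note that the "?" entries for the A530M and A570M cannot be filled in this way, since Intel has not disclosed their bus widths.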

<p>One of the things that strikes the eye about the new mobile GPUs is their total graphics power (TGP) ratings: between 65W and 95W for the Arc A530M, and between 75W and 95W for the Arc A570M. By contrast, the Arc A550M is rated for a 60W TGP, which makes it a considerably better choice than the Arc A530M from both a performance and a battery life point of view.</p>

<p>What remains to be seen is whether Intel uses its ACM-G12 graphics processor for desktop parts too. While the company has formally announced its Arc A580 with 3072 stream processors, this ACM-G10-based part never came to market, possibly because Intel did not want to address the entry-level gaming market segment. It is unclear whether Intel is interested in rolling out a discrete desktop offering that would be positioned even below the unreleased Arc A580.</p>

<p>Intel&#39;s newly released Arc A530M and Arc A570M are already supported by&nbsp;<a href="https://www.intel.com/content/www/us/en/download/726609/intel-arc-iris-xe-graphics-whql-windows.html">Intel&#39;s latest graphics drivers</a>.</p>

<p>Sources: Intel Ark (<a href="https://ark.intel.com/content/www/us/en/ark/products/232776/intel-arc-a530m-graphics.html">1</a>,&nbsp;<a href="https://ark.intel.com/content/www/us/en/ark/products/232777/intel-arc-a570m-graphics.html">2</a>),&nbsp;<a href="https://twitter.com/SquashBionic/status/1686206180153282560">Bionic_Squash</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19991/intel-quietly-launches-new-arc-gpus-for-laptops</link>
 	<pubDate>Wed, 02 Aug 2023 10:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19991:news</guid>
 	<category><![CDATA[ GPUs]]></category>                               
</item>  
    
    
<item>
    <title>ASRock Z790 Taichi Carrara Motherboard Review: ASRock Rocks With White Marble</title>
    <dc:creator>Gavin Bonshor</dc:creator>    
<description><![CDATA[ <p>Building on the success of their hybrid architecture Alder Lake (12th Gen) Core series chips, Intel last year released the upgraded Raptor Lake design, which retains a similar architecture with performance (P) cores and efficiency (E) cores. While we&#39;ve reviewed and put Intel&#39;s 13th Gen Core series chips through their paces, it&#39;s been a while since we tested out the platforms that not only unleash that multi-threaded and single-core IPC performance but add all of the features associated with each chipset. In a series of socket LGA 1700 motherboard reviews, we&#39;re looking at perhaps one of the most interesting models for Intel&#39;s 13th Gen Core series.</p>

<p>ASRock has added &#39;Carrara&#39; to the mix, adding to their already popular Taichi series of motherboards that blend cogwheel-inspired aesthetics with a premium selection of controllers and features. The ASRock Z790 Taichi Carrara edition boasts a design inspired by white Carrara marble, which the Romans sometimes referred to as Luna marble, while retaining all the exact specifications and features of the regular Z790 Taichi. Some of the most prominent features include an advertised 27-phase (24+1+2) power delivery, support for DDR5-7400 memory, as well as dual Thunderbolt 4 Type-C ports on the rear panel.</p>

<p>ASRock loves to be different with its offerings, as its Aqua series of motherboards shows. Still, the Taichi Carrara stands apart in that it celebrates ASRock&#39;s 20th anniversary at the upper echelon of PC components and hardware. We take a closer look to see if the Z790 Taichi Carrara&#39;s premium standing in ASRock&#39;s Z790 line-up represents what we&#39;ve come to like about the Taichi series over the years and, more importantly, how it performs against other LGA 1700 motherboards.</p>
]]></description>
    <link>https://www.anandtech.com/show/18897/asrock-z790-taichi-carrara-motherboard-review</link>
 	<pubDate>Wed, 02 Aug 2023 09:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18897:news</guid>
 	<category><![CDATA[ Motherboards]]></category>                               
</item>  
    
    
<item>
    <title>China Imposes New Export Restrictions on Gallium and Germanium</title>
    <dc:creator>Anton Shilov</dc:creator>    
<description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19989/china-imposes-new-export-restrictions-on-gallium-and-germanium"><img src="https://images.anandtech.com/doci/19989/germanium-mit-wafer-678_575px.jpg" alt="" /></a></p><p><p>China this week formally imposed new export regulations on gallium and germanium, as well as materials incorporating them. This move is broadly seen as a retaliatory act for the limitations recently placed on the Chinese semiconductor industry by the U.S., Japan, and the Netherlands. Over time, these new export regulations risk significantly impacting the semiconductor sector, especially factories based in Japan.</p>

<p>Starting from August 1, 2023, Chinese companies are required to secure an export license to export gallium and germanium metals or any products that include these elements. Given China&#39;s stronghold over the global production of gallium (94%) and germanium (around 60%), its announcement of these restrictions in early July led to nearly a 20% price hike for gallium in the U.S. and Europe. While the rules are said to be in the interests of China&#39;s national security, many see them as retaliation for curbs on China&#39;s high-tech sector.</p>

<p>While the decision to restrict exports of gallium and germanium from China should not significantly impact the production of high-performance logic components like CPUs, GPUs, and memory, it is worth noting that GaN (gallium nitride) and GaAs (gallium arsenide) are integral to power chips, radio frequency amplifiers, LEDs, and numerous other applications.</p>

<p>Although gallium and germanium are not exceptionally rare and are typically acquired as byproducts of other mining operations, China&#39;s dominance in their exports is due to its cheap refinement process, which made extracting these metals in other regions financially unviable. China&#39;s new restrictions could cause an initial increase in prices and potential disruptions in supplies and component production. Yet, over time, these limitations may encourage companies from other countries to mine these metals, possibly threatening China&#39;s market dominance. For example, the Pentagon recently announced plans to recover gallium from waste electronics.</p>

<p>Japanese companies are likely to be the most affected by these new regulations, as Japan is the largest global consumer of gallium, based on data from the Japan Organization for Metals and Energy Security. Around 60% of gallium used in the country is imported, and China contributes 70% of these imports. Consequently, approximately 40% of Japan&#39;s gallium supply is dependent on China.</p>
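The "approximately 40%" figure follows directly from multiplying the two shares; a one-line sketch (the variable names are ours):

```python
imported_share = 0.60  # share of Japan's gallium consumption that is imported
china_share = 0.70     # China's share of those imports

# Japan's effective dependence on Chinese supply is the product of the two shares.
dependence = imported_share * china_share
assert round(dependence, 2) == 0.42  # i.e. roughly 40%, as cited above
```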

<p>Companies like Mitsubishi Chemical Group, which manufacture compound semiconductors and other products, reassure that they have adequate stocks in Japan to prevent any immediate supply issues. Other firms, including Sumitomo Chemical, a producer of gallium nitride substrates, and Nichia Corp., a producer of LEDs, also have plenty of gallium in stock, but are planning to monitor the situation and consider diversifying their suppliers. Meanwhile, to date, the new export rules have not affected Japanese companies&#39; raw material procurement or other business operations.</p>

<p>Despite the new rules, China&#39;s Ministry of Commerce has stated that the export quality and quantity will remain unaffected. As long as exporters comply with national security protocols and other regulations, exports will continue as before. Meanwhile, Wei Jianguo, a former Chinese vice minister of commerce, cautions that the newly imposed export controls on gallium and germanium may only be the initial phase of China&#39;s countermeasures. Looking ahead, China could potentially utilize its powerful position in specific commodity markets as a strategic means for exerting economic and diplomatic influence.</p>

<p>Source: <a href="https://asia.nikkei.com/Economy/Trade/China-tightens-export-restrictions-on-two-chipmaking-materials">Nikkei</a></p>

<p>Image Source: <a href="https://news.mit.edu/2010/first-germanium-laser">MIT</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19989/china-imposes-new-export-restrictions-on-gallium-and-germanium</link>
 	<pubDate>Wed, 02 Aug 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19989:news</guid>
 	<category><![CDATA[ Semiconductors]]></category>                               
</item>  
    
    
<item>
    <title>Western Digital Preps 28 TB UltraSMR Hard Drive</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/19988/western-digital-preps-28-tb-ultrasmr-hard-drive"><img src="https://images.anandtech.com/doci/19988/western-digital-wdc-hdd-678_575px.jpg" alt="" /></a></p><p><p>Western Digital is gearing up to start sampling of its 28 TB nearline hard drive for hyperscalers. The new HDD will use the company&#39;s energy-assisted perpendicular magnetic recording (ePMR) technology with UltraSMR track layouts. Since both technologies are now familiar to hyperscalers, the validation and qualification of this hard drive should be relatively straightforward.</p>

<p>&quot;We are about to begin product sampling of our 28 TB UltraSMR drive,&quot; said David Goeckeler, chief executive of Western Digital, at the company&#39;s most recent earnings call. &quot;This cutting-edge product is built upon the success of our ePMR and UltraSMR technologies with features and reliability trusted by our customers worldwide. We are staging this product for quick qualification and ramp as demand improves.&quot;</p>

<p>Right now, Western Digital is shipping its 26 TB UltraSMR hard drives, introduced over a year ago, to select customers among operators of large cloud datacenters. Since these drives rely on UltraSMR, it took hyperscalers quite a while to qualify them before deployment. But now that Western Digital&#39;s customers know how to use UltraSMR and what to expect from it in terms of performance and behavior, deployment of 28 TB HDDs will likely go more smoothly.</p>

<p>Based on their release timelines, Western Digital&#39;s 28 TB hard drives are expected to compete against Seagate&#39;s 32 TB HDDs based on heat-assisted magnetic recording (HAMR) technology starting in early 2024. Western Digital&#39;s offering will be familiar to clients who already use shingled magnetic recording HDDs in general and UltraSMR drives in particular. Meanwhile, Seagate&#39;s product will deliver higher capacity and predictable performance (and considerably higher performance when it comes to write operations), but will probably need a slightly longer qualification.</p>

<p>Western Digital&#39;s&nbsp;<a href="https://www.tomshardware.com/news/western-digital-shares-roadmap-26tb-today-50tb-tomorrow">UltraSMR</a>&nbsp;set of technologies promises to add around 20% of extra capacity to CMR (conventional magnetic recording) platters.&nbsp;To make UltraSMR possible, Western Digital not only had to increase the number of shingled bands and reduce the number of CMR bands, but employ all of its leading-edge HDD technologies. This includes triple stage actuators with two-dimensional (TDMR) read heads,&nbsp;ePMR write heads,&nbsp;OptiNAND&nbsp;technology, Distributed Sector (DSEC) technology and&nbsp;a proprietary error correcting code (ECC) technology&nbsp;with&nbsp;large block encoding&nbsp;to ensure that increased adjacent track interference (ATI) does not harm data integrity. In fact, the sophisticated ECC capability supported by an HDD controller may be crucial for SMR hard drives in the coming years as well as for CMR drives in the longer-term future.</p>
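The ~20% uplift claim is easy to sanity-check against announced capacities; a minimal back-of-the-envelope sketch (the helper is ours, and the 22 TB CMR base for the shipping 26 TB drive is our assumption, not stated above):

```python
def ultrasmr_capacity_tb(cmr_capacity_tb: float, uplift: float = 0.20) -> float:
    """Projected SMR capacity from a CMR base, assuming a flat fractional uplift."""
    return cmr_capacity_tb * (1 + uplift)


# Announced drive pairs sit close to the ~20% figure:
assert round(ultrasmr_capacity_tb(22), 1) == 26.4  # vs. the shipping 26 TB drive
assert round(ultrasmr_capacity_tb(24), 1) == 28.8  # vs. the 28 TB drive now sampling
```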

<p>One interesting thing about Western Digital&#39;s 28 TB HDD is that it will likely use the company&#39;s 2<sup>nd</sup>&nbsp;generation ePMR since it is based on a 24 TB CMR drive and the latter is meant to rely on the&nbsp;<a href="https://www.anandtech.com/show/18908/western-digital-estimates-hamr-hdds-to-emerge-in-15-years">ePMR 2 technology</a>&nbsp;with advanced head structures, according to Western Digital&#39;s roadmap.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/19988/western-digital-preps-28-tb-ultrasmr-hard-drive</link>
 	<pubDate>Tue, 01 Aug 2023 17:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,19988:news</guid>
 	<category><![CDATA[ Storage]]></category>                               
</item>  
    
    
<item>
    <title>TeamGroup Unveils JEDEC-Spec DDR5-6400 Memory Kits: Faster 1.1V DDR5 On The Way For Future CPUs</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18988/teamgroup-unveils-jedec-spec-ddr5-6400-kits-faster-11v-memory"><img src="https://images.anandtech.com/doci/18988/f1189d80f823cdb608566937dae1cc95-20230731105803_575px.jpg" alt="" /></a></p><p><p>While DDR5 memory has been out and in use for a couple of years now, so far we haven&#39;t seen the memory reach its full potential &ndash; at least, not for rank-and-file standards-compliant DIMMs. The specification allows for <a href="https://www.anandtech.com/show/16143/insights-into-ddr5-subtimings-and-latencies">speeds as high as DDR5-6400</a>, but to date we&#39;ve only seen on-spec kits (and processors) as fast as DDR5-5600. But at last, it looks like things are about to change and DDR5 is set to live up to its full potential, going by a new memory kit announcement from TeamGroup.</p>

<p>The memory kit vendor on Monday introduced its new ElitePlus-series DDR5-6400 memory modules, the first DDR5-6400 kit to be announced as JEDEC specification compliant. This means their new kit not only hits 6400 MT/s with standards-compliant timings, but arguably more importantly, it does so at DDR5&#39;s standard voltage of 1.1V as well. And while there are no platforms on the market at this time that are validated for JEDEC DDR5-6400 speeds, TeamGroup&#39;s product page already lists compatibility with Intel&#39;s yet-to-be-announced &quot;Z790 Refresh&quot; platform &ndash; so suitable processors seem to be due soon.</p>

<p>TeamGroup&#39;s&nbsp;<a href="https://www.teamgroupinc.com/en/product/elite-u-dimm-ddr5">Elite</a>&nbsp;and&nbsp;<a href="https://www.teamgroupinc.com/en/product/elite-plus-u-dimm-ddr5">ElitePlus</a>&nbsp;DDR5-6400 memory modules come in 16 GB and 32 GB capacities (32 GB and 64 GB dual-channel kits) and feature JEDEC-standard&nbsp;CL52&nbsp;52-52-103&nbsp;timings as well as 1.1V voltage, as specified by the organization overseeing DRAM specs. For the moment, at least, TeamGroup&#39;s DDR5-6400 modules are the industry&#39;s fastest UDIMMs that are fully compliant with the JEDEC specifications.</p>

<p>And while DDR5-6400 speeds (and far higher) are available today with factory overclocked XMP/EXPO kits, the announcement of a JEDEC standards-compliant kit is still significant for a few different reasons. Being able to hit DDR5-6400B speeds and timings at 1.1V means DDR5 memory has improved to the point of making higher speeds at low voltages more viable, which has potential payoffs for memory at every speed grade by allowing for improved speeds and reduced power consumption/heat. And for OEM and other warrantied systems that only use JEDEC-compliant RAM, this allows for a straightforward improvement in memory speeds and bandwidth. About the only downside to faster on-spec kits is that they lack XMP or EXPO serial presence detect (SPD) profiles, which makes their configuration slightly more complicated on existing platforms from AMD and Intel, as they don&#39;t officially support DDR5-6400.&nbsp;</p>
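To put the on-spec numbers in perspective: peak bandwidth scales with the transfer rate, while absolute CAS latency stays roughly flat because the cycle count grows along with the clock. A brief sketch (the helper names are ours; DDR5-5600 CL46 is used as the comparison JEDEC bin):

```python
def ddr5_peak_bandwidth_gbs(transfer_rate_mts: int, bus_bytes: int = 8) -> float:
    """Peak bandwidth of a 64-bit (8-byte) DDR5 DIMM in GB/s."""
    return transfer_rate_mts * bus_bytes / 1000


def cas_latency_ns(cl_cycles: int, transfer_rate_mts: int) -> float:
    """Absolute CAS latency in ns; the memory clock is half the transfer rate."""
    return cl_cycles / (transfer_rate_mts / 2) * 1000


assert ddr5_peak_bandwidth_gbs(6400) == 51.2          # vs. 44.8 GB/s for DDR5-5600
assert round(cas_latency_ns(52, 6400), 2) == 16.25    # DDR5-6400 CL52
assert round(cas_latency_ns(46, 5600), 2) == 16.43    # DDR5-5600 CL46: about the same
```

In other words, the jump from DDR5-5600 to on-spec DDR5-6400 buys roughly 14% more bandwidth at essentially unchanged absolute latency.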

<p>Meanwhile, on their <a href="https://www.teamgroupinc.com/en/product/elite-u-dimm-ddr5">product pages</a> TeamGroup notes that the new RAM is compatible with Intel&#39;s &quot;Z790 Refresh&quot; platform, a platform that has yet to be officially announced, but is rumored to go hand-in-hand with Intel &quot;Raptor Lake Refresh&quot; processors. Despite the lack of formal announcements from Intel there, TeamGroup seems to have let the cat out of the bag. So, prospective owners of Z790 Refresh systems can look forward to having access to specs-compliant 1.1V DDR5-6400 memory when that platform launches later this year.</p>

<p>As for the modules at hand, traditionally, TeamGroup&#39;s Elite and ElitePlus memory modules are minimalistic and are aimed both at system integrators and at enthusiasts who are not after fancy heat spreader designs, RGB lighting, and maximum performance. In fact, TeamGroup&#39;s Elite modules do not have heat spreaders at all, whereas ElitePlus modules have a minimalistic heat spreader that will not interfere with large CPU coolers.</p>

<p>TeamGroup says its Elite and ElitePlus DDR5-6400 memory modules will be available separately and in dual-channel kits starting from August in North America and Taiwan. And from that, we&#39;d assume, Raptor Lake Refresh will not be far behind.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18988/teamgroup-unveils-jedec-spec-ddr5-6400-kits-faster-11v-memory</link>
 	<pubDate>Mon, 31 Jul 2023 10:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18988:news</guid>
 	<category><![CDATA[ Memory]]></category>                               
</item>  
    
    
<item>
    <title>GEEKOM AS 6 (ASUS PN53) Review: Ryzen 9 6900HX Packs Punches in a Petite Package</title>
    <dc:creator>Ganesh T S</dc:creator>    
    <description><![CDATA[ <p>The market demand for small form-factor (SFF) PCs was kickstarted by the Intel NUC in the early 2010s. Since then, many vendors have come out with their own take on the Intel NUC using both Intel and AMD processors. In recent years, we have also seen various Asian companies such as Beelink, Chuwi, GEEKOM, GMKtec, MinisForum, etc. emerging with a focus solely on these types of computing systems. Earlier this year, GEEKOM announced a tie-up with ASUS to market specific configurations of the ASUS ExpertCenter PN53 under their own brand as the GEEKOM AS 6. Based on AMD&#39;s Rembrandt line of notebook processors, the GEEKOM AS 6 comes with a choice of Ryzen 9 6900HX, Ryzen 7 6800H, or the Ryzen 7 7735H. Read on for a detailed look at the performance profile and value proposition of the GEEKOM AS 6&#39;s flagship configuration.</p>
]]></description>
    <link>https://www.anandtech.com/show/18964/geekom-as-6-asus-pn53-review-ryzen-9-6900hx-packs-punches-in-a-petite-package</link>
 	<pubDate>Mon, 31 Jul 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18964:news</guid>
 	<category><![CDATA[ Systems]]></category>                               
</item>  
    
    
<item>
    <title>Dozens of Companies Adopt TSMC&#39;s 3nm Process Technology</title>
    <dc:creator>Anton Shilov</dc:creator>    
<description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18986/dozens-of-companies-adopt-tsmcs-3nm-process-technology"><img src="https://images.anandtech.com/doci/18986/wafer-siemens-eda-semiconductor-chip-hero-3_575px.jpg" alt="" /></a></p><p><p>Designing chips for modern, leading-edge manufacturing technologies is an expensive endeavor. Still, dozens of companies have already adopted&nbsp;<a href="https://www.anandtech.com/show/18833/tsmc-details-3nm-evolution-n3e-on-schedule-n3p-n3x-deliver-five-percent-gains">TSMC&#39;s N3 and N3E (3 nm-class) fabrication processes</a>, according to disclosures made by TSMC and Synopsys.</p>

<p>&quot;<em>Synopsys IP for TSMC&#39;s 3nm process has been adopted by dozens of leading companies to accelerate their development time, quickly achieve silicon success and speed their time to market,</em>&quot; said John Koeter, senior vice president of marketing and strategy for IP at Synopsys.</p>

<p>TSMC has been producing chips using its latest N3 (aka N3B) fabrication technology (with up to 25 EUV layers and support for EUV double patterning) since late 2022 and intends to start making products on its simplified N3E manufacturing process (with up to 19 EUV layers and without EUV double patterning) in Q4 2023.&nbsp;</p>

<p>Previously, TSMC disclosed that its N3 nodes had been adopted by designers of&nbsp;<a href="https://www.anandtech.com/show/18970/tsmc-3nm-chips-for-smartphones-and-hpcs-coming-this-year">high-performance computing (HPC) and smartphone SoCs</a>&nbsp;and that the number of adopters was higher compared to N5 early in its lifecycle. Meanwhile, TSMC never mentioned the number of companies that had decided to use its 3 nm-class fabrication processes.</p>

<p>Synopsys is a major IP developer and electronic design automation tools provider, so it means a lot when it says that dozens of companies have licensed its IP for TSMC&#39;s N3 fabrication technologies. But Synopsys is not the only IP designer out there, and companies like Cadence also supplied their N3-compatible IP to other fabless chip developers. It is safe to say that the number of their clients is also significant.</p>

<p>TSMC&#39;s N3 family of process technologies includes the baseline N3 (N3B); the relaxed N3E, with slightly reduced transistor density but a widened process window for better yields; the performance-enhanced N3P, which is IP-compatible with N3E and will be production ready in the second half of 2024; and N3X for extremely high-performance applications, which is due in 2025.&nbsp;</p>

<p>The IP licensed by Synopsys right now can be used for N3, N3E, and N3P production nodes.</p>

<p>Sources:&nbsp;<a href="https://news.synopsys.com/2023-07-20-Synopsys-Accelerates-Advanced-Chip-Design-with-First-Pass-Silicon-Success-of-IP-Portfolio-on-TSMC-3nm-Process">Synopsys</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18986/dozens-of-companies-adopt-tsmcs-3nm-process-technology</link>
 	<pubDate>Fri, 28 Jul 2023 11:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18986:news</guid>
 	<category><![CDATA[ Semiconductors]]></category>                               
</item>  
    
    
<item>
    <title>Samsung Begins to Produce Third 3nm Chip Amid Massive Losses On DRAM &amp; NAND</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18983/samsung-begins-to-produce-third-3nm-chip-amid-massive-losses"><img src="https://images.anandtech.com/doci/18983/samsung-foundry-wafer-semiconductor-678-1_575px.jpg" alt="" /></a></p><p><p>Samsung this week reported their financial results for the second quarter of 2023, closing the book on an especially bleak quarter of the year with a massive $3.4 billion operating loss. The losses, stemming from its semiconductor business, come amid a continued slump in 3D NAND and DRAM sales volumes and prices. Though buried deep in Samsung&#39;s earnings report was a speck of good news, as well: the company has started to produce its third 3nm chip design with stable yield.</p>

<p>In discussing Samsung Foundry&#39;s earnings, the company voiced uncertainty about demand recovery in the second half. &quot;Demand to recover gradually under considerable uncertainty over the intensity of a market recovery in 2H, with consumer sentiment to rebound amid easing inflation and as customers wind down inventory adjustments,&quot; a statement by Samsung reads.</p>

<p>More broadly, Samsung&#39;s revenue dropped sharply, with the company recording a 22% year-over-year decline to $46.915 billion. Earnings of Samsung&#39;s semiconductor divisions &mdash; including memory, SoCs, and foundry operations &mdash; declined to $29.86 billion, a 48% YoY drop. Sales of memory hit $7 billion, a 57% year-over-year decline, though eking out a 1% quarter-over-quarter increase. Overall, Samsung recorded a $3.4 billion loss from its semiconductor operations due to low demand for commodity memory and declining commodity 3D NAND and DRAM prices.</p>

<p>But there were some bright spots in Samsung&#39;s DRAM business, as well. Demand for high-performance high-density premium products like DDR5 modules and HBM memory increased, which helped to partly offset slow sales of commodity memory.</p>

<p>&quot;Bit growth exceeded guidance as we expanded sales of server products while actively responding to rising demand for DDR5 and AI-use HBM,&quot; Samsung said. &quot;Demand for high-density/high-performance products stayed strong, driven by increased investments focusing on AI by major hyperscalers.&quot;</p>

<p>While Samsung expects demand for memory to recover in the second half, the company is expecting to enact additional production cuts to further support memory prices.</p>

<p>&quot;We expect to see a gradual recovery of the memory market in the second half of the year, but the pace of the market rebound is likely to depend on macro variables,&quot; said Jaejune Kim, executive vice president of Samsung&#39;s memory division.</p>

<p>Kim said that Samsung would be making further alterations to the output of some products, including 3D NAND.</p>

<p>&quot;Production cuts across the industry are likely to continue in the second half, and demand is expected to gradually recover as clients continue to destock their (chip) inventory,&quot; a statement by Samsung reads.</p>

<p>Finally, as noted earlier, as part of Samsung&#39;s earnings report the company also revealed that it&#39;s started production on its third 3nm (GAAFET) chip.</p>

<p>&quot;Mass production of our third GAA product is going smoothly thanks to the stabilization of the 3nm process, and we are developing an improved process for 3nm as planned based on mass production experience with GAA,&quot; a <a href="https://images.samsung.com/is/content/samsung/assets/global/ir/docs/2023_2Q_conference_eng.pdf">statement</a> by Samsung reads.</p>

<p>It <a href="https://www.anandtech.com/show/18960/samsung-foundry-s-3nm-and-4nm-yields-are-improving-report">recently transpired</a> that Samsung Foundry has been producing the Whatsminer M56S++ cryptocurrency mining ASIC on its SF3E node (formerly known as 3GAE, 3nm gate-all-around early) for some time. It later emerged that PanSemi, another developer of cryptocurrency mining hardware, also uses Samsung&#39;s SF3E for its mining ASIC. Now, Samsung has confirmed that yet another customer is using its latest production node, though the company isn&#39;t disclosing any further details about the client or their chip.</p>

<p>Producing tiny cryptocurrency mining ASICs is a good way to test a new fabrication process on a commercial application, since even with a relatively high defect density, yields of such small chips will likely be good enough to be viable. Meanwhile, Samsung Foundry&#39;s SF3E process technology promises to increase the performance and cut the power consumption of cryptocurrency mining ASICs (vs. similar chips made on previous-generation nodes), and these are exactly the targets that miners want to hit to boost their earnings.</p>

<p>Sources: <a href="https://images.samsung.com/is/content/samsung/assets/global/ir/docs/2023_2Q_conference_eng.pdf">Samsung</a>, <a href="https://www.reuters.com/technology/samsung-elec-q2-profit-plunges-95-chip-glut-persists-2023-07-27/">Reuters</a>, <a href="https://asia.nikkei.com/Business/Tech/Semiconductors/Samsung-expects-chip-demand-rebound-in-second-half">Nikkei</a>, <a href="https://seekingalpha.com/article/4620349-samsung-electronics-co-ltd-ssnlf-q2-2023-earnings-call-transcript">SeekingAlpha</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18983/samsung-begins-to-produce-third-3nm-chip-amid-massive-losses</link>
 	<pubDate>Fri, 28 Jul 2023 10:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18983:news</guid>
 	<category><![CDATA[ Semiconductors]]></category>                               
</item>  
    
    
<item>
    <title>Seagate Ships First Commercial HAMR Hard Drives</title>
    <dc:creator>Anton Shilov</dc:creator>    
<description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18984/seagate-ships-first-commercial-hamr-hard-drives"><img src="https://images.anandtech.com/doci/18984/HAMR-actuator-head-and-laser-illustration_575px.jpg" alt="" /></a></p><p><p>Seagate announced this week that it had begun the first commercial revenue shipments of its next-generation HAMR hard drives, which are being shipped out as part of Seagate&#39;s latest Corvault storage systems. This commercialization marks an important milestone in the HDD market, as heat-assisted magnetic recording (HAMR) is expected to enable hard drives with capacities of 50 TB and beyond. Meanwhile, HDDs employing perpendicular magnetic recording (PMR) and shingled magnetic recording (SMR) technologies are expected to remain on the market for the foreseeable future.</p>

<p>&quot;<em>We shipped our first HAMR-based&nbsp;Corvault&nbsp;system for revenue as planned during the June quarter,</em>&quot;&nbsp;<a href="https://s24.q4cdn.com/101481333/files/doc_financials/2023/q4/CORRECTED-TRANSCRIPT_-Seagate-Technology-Holdings-Plc-STX-US-Q4-2023-Earnings-Call-26-July-2023-4_30-PM-ET.pdf">said</a>&nbsp;Gianluca Romano, chief financial officer of Seagate, at the company&#39;s earnings call.&nbsp;&quot;<em>We expect broader availability of these CORVAULT systems by the end of calendar 2023.</em>&quot;</p>

<p>Seagate&nbsp;<a href="https://www.anandtech.com/show/18901/big-leap-for-hdds-32-tb-hamr-drive-is-coming-40tb-on-horizon">officially disclosed in early June</a>&nbsp;that its first HAMR-based HDDs feature a 32 TB capacity and use a familiar 10-platter platform. Meanwhile, the company refrained from releasing specific capacity details of the HAMR hard drives used in these revenue Corvault systems.</p>

<p>Beyond Corvault systems, Seagate has also shipped its HAMR-based hard drives to key hyperscaler customers for testing and evaluation. Hyperscalers, due to their extensive storage requirements, are expected to benefit significantly from capacity points exceeding 30 TB. Still, given the newness of the technology, as well as the slightly higher power requirements of HAMR drives compared to standard PMR and SMR hard drives, hyperscalers are playing it safe and thoroughly validating the drives to ensure consistent performance.</p>

<p>Seagate&#39;s initial 32 TB HAMR hard drives will use the company&#39;s 10-platter platform, a system already proven and currently in use. By using an established platform, Seagate mitigates numerous potential points of failure, helping to ensure a predictable production yield. This is a smart move, given that the HAMR hard drives introduce new media and write heads. The same 10-platter platform is expected to be used for 36 TB, 40 TB, and even larger-capacity hard drives in the future, with as few alterations as possible.</p>

<p>&quot;<em>[We are]&nbsp;delivering on our 30+ TB HAMR product development and qualification milestones, with volume ramp on track to begin in early calendar 2024,</em>&quot; said Dave Mosley, chief executive officer of Seagate. &quot;<em>[&hellip;] Initial customer qualifications are progressing well. We are on track to begin volume ramp in early calendar 2024. We are also preparing qualifications with more customers, including testing for lower capacity drives targeting VIA and enterprise OEM workloads.</em>&quot;</p>

<p>Even though high-volume production of HAMR hard drives is slated to begin in roughly half a year, Seagate also reaffirmed its plans for another generation of PMR and SMR hard drives during the call. These HDDs target customers not yet ready to switch to HAMR technology.&nbsp;</p>

<p>According to Seagate, the company plans to introduce 24 TB+ drives in the near future featuring PMR technology with two-dimensional magnetic recording (TDMR) read heads, as well as SMR+TDMR variants.</p>

<p>&quot;<em>Development efforts on what may be our last PMR product are nearing completion and will extend drive capacities into the mid-to-upper 20TB range,</em>&quot; Mosley said.</p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18984/seagate-ships-first-commercial-hamr-hard-drives</link>
 	<pubDate>Fri, 28 Jul 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18984:news</guid>
 	<category><![CDATA[ Storage]]></category>                               
</item>  
    
    
<item>
    <title>AMD Announces Ryzen 9 7945HX3D: Ryzen Mobile Gets 3D V-Cache</title>
    <dc:creator>Gavin Bonshor</dc:creator>    
    <description><![CDATA[ <p>For this year&#39;s ChinaJoy expo, AMD is taking to the show to announce a new and very special mobile CPU for high-end, desktop replacement-class laptops: the Ryzen 9 7945HX3D, AMD&#39;s first V-cache-equipped mobile CPU. Slated to launch on August 22nd, the new chip is set to break new ground for AMD in the mobile space, all the while giving gamers an even more potent CPU for high-end gaming laptops.</p>

<p>Based on AMD&#39;s cutting-edge 3D V-Cache packaging technology, which places an additional slice of L3 cache on top of the existing L3 cache on the core complex die (CCD), the Ryzen 9 7945HX3D marks the first time AMD has brought their extended L3 cache technology to the mobile space. And like the Ryzen desktop parts already featuring this cache, such as the <a href="https://www.anandtech.com/show/18747/the-amd-ryzen-9-7950x3d-review-amd-s-fastest-gaming-processor">Ryzen 9 7950X3D</a>, AMD&#39;s aim is to offer buyers &ndash; and especially gamers &ndash; a top-end part that can offer even better performance in select classes of workloads that can take advantage of the additional cache.</p>

<p>The Ryzen 9 7945HX3D is joining AMD&#39;s current lineup of desktop replacement-class mobile SKUs, the<a href="https://www.anandtech.com/show/18716/amd-announces-ryzen-7045-hx-series-cpus-for-laptops-up-to-16-cores-and-5-4-ghz"> Ryzen 7045HX &#39;Dragon Range&#39; series</a>, as its new flagship mobile part. First introduced earlier this year, the AMD Ryzen 7045HX series is designed to offer desktop-grade hardware and desktop-like performance, marking the first time in the Zen era that AMD has offered its desktop silicon in a mobile chip. The entirety of the 7045HX series is based on repackaged desktop silicon, and the new Ryzen 9 7945HX3D is no exception &ndash; for all practical purposes, we&#39;re essentially looking at a mobilized version of AMD&#39;s flagship desktop part, the Ryzen 9 7950X3D.</p>
]]></description>
    <link>https://www.anandtech.com/show/18978/amd-announces-the-ryzen-9-7945hx3d-ryzen-mobile-gets-3d-v-cache</link>
 	<pubDate>Thu, 27 Jul 2023 21:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18978:news</guid>
 	<category><![CDATA[ CPUs]]></category>                               
</item>  
    
    
<item>
    <title>Micron Publishes Updated DRAM Roadmap: 32 Gb DDR5 DRAMs, GDDR7, HBMNext</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18982/micron-publishes-updated-dram-roadmap-32-gb-ddr5-drams-gddr7-hbmnext"><img src="https://images.anandtech.com/doci/18982/micron-wafer-semiconductor-678_575px.jpg" alt="" /></a></p><p><p>In addition to unveiling its first <a href="https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed">HBM3 memory products yesterday</a>, Micron also published a fresh DRAM roadmap for its AI customers for the coming years. Being one of the world&#39;s largest memory manufacturers, Micron has a lot of interesting things planned, including high-capacity DDR5 memory devices and modules, GDDR7 chips for graphics cards and other bandwidth-hungry devices, as well as HBMNext for artificial intelligence and high-performance computing applications.</p>

<p align="center"><a href="https://www.anandtech.com/show/18982/micron-publishes-updated-dram-roadmap-32-gb-ddr5-drams-gddr7-hbmnext"><img alt="" src="https://images.anandtech.com/doci/18981/HBM3%20Gen2%20Press%20Deck_7_25_2023_10.png" style="width: 678px;" /></a></p>

<h3>32 Gb DDR5 ICs</h3>

<p>We all love inexpensive high-capacity memory modules, and it looks like Micron has us covered. Sometime in the late first half of 2024, the company plans to roll out its first 32 Gb DDR5 memory dies, which will be produced on the company&#39;s 1&beta; (1-beta) manufacturing process. This is Micron&#39;s latest process node; it does not use extreme ultraviolet lithography, instead relying on multipatterning.</p>

<p>32 Gb DRAM dies will enable Micron to build 32 GB DDR5 modules using just eight memory devices on one side of the module. Such modules can be made today with Micron&#39;s current 16 Gb dies, but this requires either placing 16 DRAM packages on both sides of a memory module &ndash; driving up production costs &ndash; or placing two 16 Gb dies within a single DRAM package, which incurs its own costs due to the packaging required. 32 Gb ICs, by comparison, are easier to use, so 32 GB modules based on denser DRAM dies should eventually cost less than today&#39;s 32 GB memory sticks.</p>
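The module-capacity arithmetic above can be sketched as a quick check. This is a hypothetical helper for illustration, not Micron's tooling; the only facts used are the die densities from the text and the 8 Gbit = 1 GB conversion.

```python
# Module capacity = die density x dies per package x packages per module.
# Figures are illustrative; the helper name is our own.
def module_capacity_gb(die_gbit: int, dies_per_package: int, packages: int) -> int:
    """Return module capacity in GB (8 Gbit = 1 GB)."""
    return die_gbit * dies_per_package * packages // 8

# Today: 16 Gb dies need 16 single-die packages (double-sided)
# or 8 dual-die packages to reach 32 GB.
assert module_capacity_gb(16, 1, 16) == 32
assert module_capacity_gb(16, 2, 8) == 32
# With 32 Gb dies, 8 single-die packages on one side suffice.
assert module_capacity_gb(32, 1, 8) == 32
```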

<p>But desktop matters aside, Micron&#39;s initial focus with their higher density dies will be to build even higher capacity data center-class parts, including RDIMMs, MRDIMMs, and CXL modules. Current high-performance AI models tend to be very large and memory constrained, so larger memory pools open the door both to even larger models and to bringing down inference costs by running additional instances on a single server.</p>

<p>For 2024, Micron is planning to release 128GB DDR5 modules based on these new dies. In addition, the company announced plans for 192+ GB and 256+ GB DDR5 modules for 2025, albeit without disclosing which chips these are set to use.</p>

<p>Meanwhile, Micron&#39;s capacity-focused roadmap doesn&#39;t have much to say about bandwidth. While it would be unusual for newer DRAM dies not to clock at least somewhat higher, memory manufacturers as a whole have not offered much guidance about future DDR5 memory speeds. Especially with MRDIMMs in the pipeline, the focus is more on gaining additional speed through parallelism, rather than running individual DRAM cells faster. Though with this roadmap in particular, it&#39;s clear that Micron is more focused on promoting DDR5 capacity than promoting DDR5 performance.</p>

<h3>GDDR7 in 1H 2024</h3>

<p>Micron was the first of the large memory makers to <a href="https://www.anandtech.com/show/18939/micron-expects-to-debut-gddr7-memory-in-2024">announce</a> plans to roll out its GDDR7 memory in the first half of 2024. And following up on that, the new roadmap has the company prepping 16 Gb and 24 Gb GDDR7 chips for late Q2 2024.</p>

<p>As with <a href="https://www.anandtech.com/show/18963/samsung-completes-initial-gddr7-development-first-parts-to-reach-up-to-32gbpspin">Samsung</a>, Micron&#39;s plans for their first generation GDDR7 modules do not have them reaching the spec&#39;s highest transfer rates right away (36 GT/sec), and instead Micron is aiming for a more modest and practical 32 GT/sec. Which is still good enough to enable upwards of 50% greater bandwidth for next-generation graphics processors from AMD, Intel, and NVIDIA. And perhaps especially NVIDIA, since this roadmap also implies that we won&#39;t be seeing a GDDR7X from Micron, meaning that for the first time since 2018, NVIDIA won&#39;t have access to a specialty GDDR DRAM from Micron.</p>
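The "upwards of 50% greater bandwidth" claim is easy to sanity-check with per-pin rates. The 32 GT/s figure comes from the roadmap; the GDDR6/GDDR6X baseline rates below are our own assumptions, not figures from the article.

```python
# Per-pin uplift of first-gen GDDR7 over assumed current-gen baselines.
GDDR7_RATE = 32.0  # GT/s, Micron's first-gen target per the roadmap

baselines = {"GDDR6 (assumed 20 GT/s)": 20.0, "GDDR6X (assumed 21 GT/s)": 21.0}
for name, rate in baselines.items():
    uplift_pct = (GDDR7_RATE / rate - 1) * 100
    print(f"vs {name}: +{uplift_pct:.0f}% per pin")

# A single 32-bit-wide GDDR7 device at 32 GT/s moves 32 * 32 / 8 = 128 GB/s.
assert GDDR7_RATE * 32 / 8 == 128.0
```

Against those assumed baselines, the uplift works out to roughly +52% to +60% per pin, which lines up with the "upwards of 50%" framing.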

<h3>HBMNext in 2026</h3>

<p>In addition to GDDR7, which will be used by graphics cards, game consoles, and lower-end high-bandwidth applications like accelerators and networking equipment, Micron is also working on the forthcoming generations of its HBM memory for heavy-duty artificial intelligence (AI) and high-performance computing (HPC) applications.</p>

<p>Micron expects its HBMNext (HBM4?) to be available in 36 GB and 64 GB capacities, which points to a variety of configurations, such as 12-Hi 24 Gb stacks (36 GB) or 16-Hi 32 Gb stacks (64 GB), though this is pure speculation at this point. As for performance, Micron is touting 1.5 TB/s &ndash; 2+ TB/s of bandwidth per stack, which points to data transfer rates in excess of 11.5 GT/s/pin.</p>
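The speculative stack configurations and the implied pin rate can be verified with a few lines of arithmetic. The helper below is ours; it assumes HBM's usual 1024-bit per-stack interface, which HBMNext is not guaranteed to keep.

```python
# Sanity-checking the (speculative) HBMNext stack configurations and the
# per-pin rate implied by 1.5 TB/s over an assumed 1024-bit interface.
def stack_capacity_gb(die_gbit: int, stack_height: int) -> int:
    # capacity = die density x dies in the stack (8 Gbit = 1 GB)
    return die_gbit * stack_height // 8

assert stack_capacity_gb(24, 12) == 36  # 12-Hi stack of 24 Gb dies
assert stack_capacity_gb(32, 16) == 64  # 16-Hi stack of 32 Gb dies

# 1.5 TB/s spread over 1024 pins, expressed in GT/s per pin:
pin_rate_gts = 1.5e12 * 8 / 1024 / 1e9
assert pin_rate_gts > 11.5  # ~11.7 GT/s, i.e. "in excess of 11.5 GT/s/pin"
```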
</p>]]></description>
    <link>https://www.anandtech.com/show/18982/micron-publishes-updated-dram-roadmap-32-gb-ddr5-drams-gddr7-hbmnext</link>
 	<pubDate>Thu, 27 Jul 2023 09:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18982:news</guid>
 	<category><![CDATA[ Memory]]></category>                               
</item>  
    
    
<item>
    <title>Rapidus Wants to Supply 2nm Chips to Tech Giants, Challenge TSMC</title>
    <dc:creator>Anton Shilov</dc:creator>    
<description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18979/rapidus-wants-to-supply-2nm-chips-to-tech-giants-challenge-tsmc"><img src="https://images.anandtech.com/doci/18979/2_nm_wafer_9b1b1f1e55-ibm-semiconductor_575px.jpg" alt="" /></a></p><p><p>It has been a couple of decades since a Japanese fab has offered a leading-edge chip manufacturing process. Even to this day, none of the Japanese chipmakers have made it as far as adopting FinFETs, something that U.S. and Taiwanese companies did in the early-to-mid-2010s. But Rapidus, a semiconductor consortium backed by the Japanese government and large conglomerates, plans to leapfrog several generations of nodes and start 2nm production in 2027. Interestingly, the company aims to serve the world&#39;s leading tech giants, challenging TSMC, IFS, and Samsung Foundry.</p>

<p>The endeavor is both extremely challenging and tremendously expensive. Modern fabrication technologies are expensive to develop in general. To cut down its R&amp;D costs, Rapidus teamed up with IBM, which has done extensive research in such fields as transistor structures as well as chip materials. But in addition to developing a viable 2nm fabrication process, Rapidus will also have to build a modern semiconductor fabrication facility, which is an expensive venture. Rapidus itself projects that it will need approximately $35 billion to initiate pilot 2nm chip production in 2025, and then bring that to high-volume manufacturing in 2027.</p>

<p>To recover the massive R&amp;D and fab construction costs, Rapidus will need to produce its 2nm chips in very high volumes. As demand from Japanese companies alone may not suffice, Rapidus is looking for orders from international corporations like Apple, Google, and Meta.</p>

<p>&quot;We are looking for a U.S. partner, and we have begun discussions with some GAFAM [Google, Apple, Facebook, Amazon and Microsoft] corporations,&quot; Atsuyoshi Koike, chief executive of Rapidus,&nbsp;told&nbsp;<a href="https://asia.nikkei.com/Editor-s-Picks/Interview/Japan-s-Rapidus-in-talks-to-supply-chips-to-U.S.-tech-giants-CEO">Nikkei</a>. &quot;Specifically, there is demand [for chips] from data centers [and] right now, TSMC is the only company that can make the semiconductors they envision. That is where Rapidus will enter.&quot;</p>

<p>Despite escalating chip design costs, the number of companies opting to develop their own custom system-on-chips for artificial intelligence (AI) and high-performance computing (HPC) applications is growing these days. Hyperscalers like AWS, Google, and Facebook have already developed numerous chips in-house to replace off-the-shelf offerings from companies like AMD, Intel, and NVIDIA with something that suits them better.</p>

<p>These companies typically rely on TSMC since the latter tends to offer competitive nodes, predictable yields, and the ability to re-use IP across various products. So securing orders from a tech giant is challenging for a new kid on the block. But Rapidus&#39; strategy is not completely unfounded, as the number of hyperscalers that need custom silicon is growing, and one or two may opt for Rapidus if the Japanese company can provide competitive technology, high yields, and fair pricing.</p>

<p>With that said, however, Rapidus is also making it clear that the company does not plan to emulate TSMC&#39;s entire business model of serving a wide range of clients. Instead, Rapidus intends to start with about five customers, gradually expand to 10, and then see whether it wants to &ndash; and can &ndash; serve more.</p>

<p>&quot;Our business model is not that of TSMC, which manufactures for every client,&quot; said Koike. &quot;We will start with around five companies at most, then eventually grow to 10 companies, and we&#39;ll see if we&#39;ll increase the number beyond that.&quot;</p>

<p>It is unclear whether such a limited client base can generate enough demand and revenue to recover the massive investment Rapidus needs to kick-start 2nm production by 2027. It will also be a challenge to secure even five significant 2nm orders by 2027, given the limited number of companies ready to invest in chips made on a leading-edge technology and the competition from established players like TSMC, Samsung Foundry, and IFS.</p>

<p>However, from the Japanese government&#39;s perspective, Rapidus is seen as a catalyst for revitalizing Japan&#39;s advanced semiconductor supply chain, rather than as a money-making machine in and of itself. So even if the 2nm project were not an immediate success, it could be justified as a stepping stone towards creating more opportunities for local chip designers.</p>

<p>As for revenue, Koike predicts that quotes for its 2nm chips will be 10 times greater than for chips currently made by Japanese firms, which is of course a significant change for the Japanese chip industry. This is not particularly surprising though, as the most advanced process technology available in Japan today is 45nm, which these days is a very inexpensive node, as it runs in fully depreciated fabs and does not require any new equipment.</p>

<p>Sources:&nbsp;<a href="https://asia.nikkei.com/Editor-s-Picks/Interview/Japan-s-Rapidus-in-talks-to-supply-chips-to-U.S.-tech-giants-CEO">Nikkei</a>,&nbsp;<a href="https://www.digitimes.com/news/a20230724PD211.html">DigiTimes</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18979/rapidus-wants-to-supply-2nm-chips-to-tech-giants-challenge-tsmc</link>
 	<pubDate>Wed, 26 Jul 2023 13:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18979:news</guid>
 	<category><![CDATA[ Semiconductors]]></category>                               
</item>  
    
    
<item>
    <title>Micron Unveils HBM3 Gen2 Memory: 1.2 TB/sec Memory Stacks For HPC and AI Processors</title>
    <dc:creator>Anton Shilov</dc:creator>    
<description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed"><img src="https://images.anandtech.com/doci/18981/Micron_HBM3_Gen2_Stack_575px.jpg" alt="" /></a></p><p><p>Micron today is introducing its first HBM3 memory products, becoming the latest of the major memory manufacturers to start building the high bandwidth memory that&#39;s widely used in server-grade GPUs and other high-end processors. Aiming to make up for lost time against its Korean rivals, Micron intends to essentially skip &quot;vanilla&quot; HBM3 and move straight on to even higher bandwidth versions of the memory they&#39;re dubbing &quot;HBM3 Gen2&quot;, developing 24 GB stacks that run at over 9 GigaTransfers-per-second. These new HBM3 memory stacks from Micron will primarily target AI and HPC datacenter processors, with mass production kicking off for Micron in early 2024.</p>

<p>Micron&#39;s 24 GB HBM3 Gen2 modules are based on stacking eight 24 Gbit memory dies made using the company&#39;s 1&beta; (1-beta) fabrication process. Notably, Micron is the first of the memory vendors to announce plans to build HBM3 memory with these higher-density dies; while <a href="https://www.anandtech.com/show/18823/sk-hynix-now-sampling-24gb-hbm3-stacks-prepping-for-mass-production">SK hynix offers their own 24 GB stacks</a>, that company uses a 12-Hi configuration of 16 Gbit dies. So Micron is on track to be the first vendor to offer 24 GB HBM3 modules in the more typical 8-Hi configuration. And Micron is not going to stop at 8-Hi 24 Gbit-based HBM3 Gen2 modules, either, with the company saying that it plans to introduce even higher capacity, class-leading 36 GB 12-Hi HBM3 Gen2 stacks next year.</p>

<p>Besides taking the lead in density, Micron is also looking to take the lead in speed. The company expects its HBM3 Gen2 parts to hit data rates as high as 9.2 GT/second, 44% higher than the top speed grade of the base HBM3 specification, and 15% faster than the 8 GT/second target for <a href="https://www.anandtech.com/show/18880/sk-hynix-hbm3e-disclosure-8gts-memory-in-2024">SK hynix&#39;s rival HBM3E memory</a>. The increased data transfer rate enables each 24 GB memory module to offer peak bandwidth of 1.2 TB/sec per stack.</p>

<p>Micron says that 24 GB HBM3 Gen2 stacks will enable 4096-bit HBM3 memory subsystems with a bandwidth of 4.8 TB/s and 6144-bit HBM3 memory subsystems with a bandwidth of 7.2 TB/s. To put those numbers into context, NVIDIA&#39;s H100 SXM features a peak memory bandwidth of 3.35 TB/s.</p>
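The subsystem figures follow directly from the per-stack math: bandwidth = pin rate × bus width ÷ 8. The helper below is a sketch of ours; the 9.2 GT/s and 1024-bit-per-stack figures come from the article.

```python
# Reproducing the bandwidth figures: bandwidth (GB/s) = pin rate x bus width / 8.
# The helper name is ours; 9.2 GT/s and 1024 bits per stack are from the text.
def stack_bandwidth_gbs(pin_rate_gt: float, bus_width_bits: int = 1024) -> float:
    return pin_rate_gt * bus_width_bits / 8  # GB/s

per_stack = stack_bandwidth_gbs(9.2)  # 1177.6 GB/s, rounded to "1.2 TB/s"
assert abs(per_stack - 1177.6) < 1e-6

# Four stacks form a 4096-bit subsystem, six stacks a 6144-bit one; the
# headline numbers multiply the rounded 1.2 TB/s per-stack figure.
assert 4 * 1024 == 4096 and 6 * 1024 == 6144
assert round(4 * 1.2, 1) == 4.8 and round(6 * 1.2, 1) == 7.2
```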

<table align="center" border="0" cellpadding="0" cellspacing="1" width="650">
	<tbody>
		<tr class="tgrey">
			<td align="center" colspan="7">HBM Memory Comparison</td>
		</tr>
		<tr class="tlblue">
			<td width="186">&nbsp;</td>
			<td align="center" valign="middle" width="137">&quot;HBM3 Gen2&quot;</td>
			<td align="center" valign="middle" width="137">HBM3</td>
			<td align="center" valign="middle" width="137">HBM2E</td>
			<td align="center" rowspan="1" valign="middle" width="136">HBM2</td>
		</tr>
		<tr>
			<td class="tlgrey">Max Capacity</td>
			<td align="center" valign="middle">24 GB</td>
			<td align="center" valign="middle">24 GB</td>
			<td align="center" valign="middle">16 GB</td>
			<td align="center" valign="middle">8 GB</td>
		</tr>
		<tr>
			<td class="tlgrey">Max Bandwidth Per Pin</td>
			<td align="center" valign="middle">9.2 GT/s</td>
			<td align="center" valign="middle">6.4 GT/s</td>
			<td align="center" valign="middle">3.6 GT/s</td>
			<td align="center" valign="middle">2.0 GT/s</td>
		</tr>
		<tr>
			<td class="tlgrey">Number of DRAM ICs per Stack</td>
			<td align="center" valign="middle">8</td>
			<td align="center" valign="middle">12</td>
			<td align="center" valign="middle">8</td>
			<td align="center" valign="middle">8</td>
		</tr>
		<tr>
			<td class="tlgrey">Effective Bus Width</td>
			<td align="center" colspan="4" rowspan="1" valign="middle">1024-bit</td>
		</tr>
		<tr>
			<td class="tlgrey">Voltage</td>
			<td align="center" valign="middle">1.1 V?</td>
			<td align="center" valign="middle">1.1 V</td>
			<td align="center" valign="middle">1.2 V</td>
			<td align="center" rowspan="1" valign="middle">1.2 V</td>
		</tr>
		<tr>
			<td class="tlgrey">Bandwidth per Stack</td>
			<td align="center" valign="middle">1.2 TB/s</td>
			<td align="center" valign="middle">819.2 GB/s</td>
			<td align="center" valign="middle">460.8 GB/s</td>
			<td align="center" rowspan="1" valign="middle">256 GB/s</td>
		</tr>
	</tbody>
</table>

<p>High frequencies aside, Micron&#39;s HBM3 Gen2 stacks are otherwise drop-in compatible with current HBM3-compliant applications (e.g., compute GPUs, CPUs, FPGAs, accelerators). So device manufacturers will finally have the option of tapping Micron as an HBM3 memory supplier as well, pending the usual qualification checks.</p>

<p>Under the hood, Micron&#39;s goal of jumping into an immediate performance leadership position within the HBM3 market means that it needs to one-up its competition at a technical level. Among other changes and innovations to accomplish that, the company doubled the number of through-silicon vias (TSVs) compared to shipping HBM3 products. In addition, Micron shrunk the distance between DRAM devices in its HBM3 Gen2 stacks. These two packaging changes reduced the thermal impedance of the memory modules and made them easier to cool. Meanwhile, the increased number of TSVs can bring other advantages, too.</p>

<p align="center"><a href="https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed"><img alt="" src="https://images.anandtech.com/doci/18981/HBM3%20Gen2%20Press%20Deck_7_25_2023_06.png" style="width: 678px;" /></a></p>

<p>Given that Micron uses 24 Gb memory devices (rather than 16 Gb memory devices) for its HBM3 Gen2 stacks, it is inevitable that it had to increase the number of TSVs to ensure proper connectivity. Yet, doubling the number of TSVs in an HBM stack can enhance overall bandwidth (and shrink latency), power efficiency, and scalability by facilitating more parallel data transfers. It also improves reliability by mitigating the impact of single TSV failures through data rerouting. However, these benefits come with challenges such as increased manufacturing complexity and increased potential for higher defect rates (already an ongoing concern for HBM), which can translate to higher costs.</p>

<p align="center"><a href="https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed"><img alt="" src="https://images.anandtech.com/doci/18981/HBM3%20Gen2%20Press%20Deck_7_25_2023_09.png" style="width: 678px;" /></a></p>

<p>Just like other HBM3 memory modules, Micron&#39;s HBM3 Gen2 stacks feature Reed-Solomon on-die ECC, soft repair of memory cells, hard-repair of memory cells as well as auto error check and scrub support.</p>

<p>Micron says it will mass produce its 24 GB HBM3 modules starting in Q1 2024, and will start sampling its 12-Hi 36GB HBM3 stacks around this time as well. The latter will enter high volume production in the second half of 2024.</p>

<p>To date, the JEDEC has yet to approve a post-6.4GT/second HBM3 standard. So Micron&#39;s HBM3 Gen2 memory, as well as SK hynix&#39;s rival HBM3E memory, are both off-roadmap standards for the moment. Given the interest in higher bandwidth HBM memory and the need for standardization, we&#39;d be surprised if the group didn&#39;t eventually release an updated version of the HBM3 standard that Micron&#39;s devices will conform to. Though as the group tends to shy away from naming battles (&quot;HBM2E&quot; was never a canonical product name for faster HBM2, despite its wide use), it&#39;s anyone&#39;s guess how this latest kerfuffle over naming will play out.</p>

<p>Beyond their forthcoming HBM3 Gen2 products, Micron is also making it known that the company is already working on HBMNext (HBM4?) memory. That iteration of HBM will provide 1.5 TB/s &ndash; 2+ TB/s of bandwidth per stack, with capacities ranging from 36 GB to 64 GB.</p>

<p align="center"><a href="https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed"><img alt="" src="https://images.anandtech.com/doci/18981/HBM3%20Gen2%20Press%20Deck_7_25_2023_10.png" style="width: 678px;" /></a></p>

<p align="center"><div>Gallery: <a href="https://www.anandtech.com/Gallery/Album/8334" target="_blank">Micron HBM3 Gen2 Press Deck</a><div><a href="https://www.anandtech.com/Gallery/Album/8334#1" target="_blank"><img src="https://images.anandtech.com/galleries/8334/HBM3 Gen2 Press Deck_7_25_2023_01_thumb.png" width="85" height="85" border="0"/></a><a href="https://www.anandtech.com/Gallery/Album/8334#2" target="_blank"><img src="https://images.anandtech.com/galleries/8334/HBM3 Gen2 Press Deck_7_25_2023_02_thumb.png" width="85" height="85" border="0"/></a><a href="https://www.anandtech.com/Gallery/Album/8334#3" target="_blank"><img src="https://images.anandtech.com/galleries/8334/HBM3 Gen2 Press Deck_7_25_2023_03_thumb.png" width="85" height="85" border="0"/></a><a href="https://www.anandtech.com/Gallery/Album/8334#4" target="_blank"><img src="https://images.anandtech.com/galleries/8334/HBM3 Gen2 Press Deck_7_25_2023_04_thumb.png" width="85" height="85" border="0"/></a><a href="https://www.anandtech.com/Gallery/Album/8334#5" target="_blank"><img src="https://images.anandtech.com/galleries/8334/HBM3 Gen2 Press Deck_7_25_2023_05_thumb.png" width="85" height="85" border="0"/></a><a href="https://www.anandtech.com/Gallery/Album/8334#6" target="_blank"><img src="https://images.anandtech.com/galleries/8334/HBM3 Gen2 Press Deck_7_25_2023_06_thumb.png" width="85" height="85" border="0"/></a></div></div></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18981/micron-unveils-hbm3-gen2-12-tbs-per-stack-at-92-gts-speed</link>
 	<pubDate>Wed, 26 Jul 2023 09:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18981:news</guid>
 	<category><![CDATA[ Memory]]></category>                               
</item>  
    
    
<item>
    <title>The Be Quiet! Dark Power Pro 13 1300W ATX 3.0 PSU Review: Flagship Quality, Flagship Price</title>
    <dc:creator>E. Fylladitakis</dc:creator>    
    <description><![CDATA[ <p>Having reviewed and dissected almost a dozen ATX 3.0 power supplies in the last year, thus far we&#39;ve seen an interesting mix in design pedigrees for PSUs targeting the newest power standard. For some manufacturers this has meant bringing up entirely new PSU designs by OEMs new and old, developing fresh platforms to accommodate the new 12VHPWR connector and its up to 600 Watt power limits. Meanwhile for other manufacturers, especially at the high end of the market, their existing PSU designs are so bulletproof that they&#39;ve been able to add everything needed for ATX 3.0 compliance with only very modest changes.</p>

<p>For Be Quiet&#39;s flagship power supply lineup, the Dark Power Pro series, the company falls distinctly into the second group. The pride and joy of Be Quiet!&#39;s lineup has always been the pinnacle of the company&rsquo;s engineering abilities, with the best possible specifications their engineers could muster (and equally prodigious price tags for the consumer). Besides making for long-lived PSUs themselves, that kind of engineering rigor has also allowed for a long-lived platform &ndash; even with the more extreme power delivery requirements brought about by ATX 3.0, Be Quiet has only needed to make a handful of changes to meet the new standard.</p>

<p>The result of those updates is the latest generation of the Dark Power Pro series, the Dark Power Pro 13, which we&#39;re looking at today. The 13th iteration of Be Quiet&#39;s lead PSU series builds upon their already impressive design for the Dark Power Pro 12, adding compliance with Intel&rsquo;s ATX 3.0 design guide while retaining the 80Plus Titanium certification and impressive features of the previous version.</p>
]]></description>
    <link>https://www.anandtech.com/show/18956/the-be-quiet-dark-power-pro-13-1300w-atx-30-psu-review</link>
 	<pubDate>Wed, 26 Jul 2023 08:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18956:news</guid>
 	<category><![CDATA[ Cases/Cooling/PSUs]]></category>                               
</item>  
    
    
<item>
    <title>TACC&#39;s Stampede3 Supercomputer Uses Intel&#39;s Xeon Max with HBM2E and Ponte Vecchio</title>
    <dc:creator>Anton Shilov</dc:creator>    
    <description><![CDATA[ <p align="center"><a href="https://www.anandtech.com/show/18974/taccs-stampede3-uses-intels-xeon-max-with-hbm2e-and-ponte-vecchio"><img src="https://images.anandtech.com/doci/18974/stampede3-pressrelease-banner.jpg__1200x1200_q85_subsampling-2_575px.jpg" alt="" /></a></p><p><p>The Texas Advanced Computing Center (TACC) unveiled its latest <em>Stampede</em> supercomputer for open science research projects, Stampede3. TACC anticipates that Stampede3 will come online this fall and will deliver its full performance in early 2024. The supercomputer will be a crucial component of the U.S. National Science Foundation&rsquo;s (NSF) ACCESS scientific supercomputing ecosystem, and it is projected to serve the open science community from 2024 until 2029.</p>

<p>The third-generation Stampede cluster, which will be built by Dell, will incorporate 560 nodes equipped with Intel&#39;s Sapphire Rapids generation Xeon CPU Max processors, each offering 56 CPU cores and 64GB of on-package HBM2E memory. Surprisingly, TACC is going to be operating these nodes in HBM-only mode, so no additional DRAM will be attached to the CPU nodes&nbsp;&ndash; all of their memory will come from the on-chip HBM stacks.</p>

<p>With these specifications, Stampede3 is expected to have a peak performance of approximately 4 FP64 PetaFLOPS, while offering nearly 63,000 general-purpose cores. In addition, TACC also plans to install 10 Dell PowerEdge XE9640 servers with 40 Intel Data Center GPU Max compute GPUs for artificial intelligence and machine learning workloads.</p>

<p>Given this layout, the bulk of Stampede3&#39;s compute performance will be supplied by CPUs. This makes Stampede3 a bit of a rarity in this day and age, as most high-performance systems are GPU driven, leaving Stampede3 as one of the last supercomputers that relies almost solely on general-purpose CPUs.</p>

<p>And while the current cluster is primarily focused on CPU performance, TACC is also going to use the Intel GPUs in the latest Stampede revamp to investigate how to incorporate larger numbers of GPUs into future versions of the system. For now, most of TACC&#39;s AI tasks are run on its Lone Star systems, which are powered by hundreds of Nvidia A100 compute GPUs. So the organization&#39;s aim is to explore whether a portion of this workload can be transferred to Intel&#39;s Ponte Vecchio.</p>

<p>&quot;We are going to put in a small system with exploratory capability using Intel Ponte Vecchio,&quot; said Dan Stanzione, executive director of TACC. &quot;We are still negotiating exactly how much of that we will have, but I would say a minimum of 40 nodes and a maximum of a hundred or so. [&hellip;] We are just putting a couple of racks of Ponte Vecchio out there to see how people work with it.&quot;</p>

<p>Stampede3 will leverage 400 Gb/s Omni-Path Fabric technology that will enable a backplane bandwidth of 24TB/s. This setup will allow the machine to efficiently scale and minimize latencies, making it well-suited for various applications requiring simulations.</p>

<p>TACC also plans to reincorporate nodes from the previous version, Stampede2, which were based on older-generation Xeon Scalable CPUs. This integration will enhance the capacity of Stampede3 for high-memory applications, high-throughput computing, interactive workloads, and other previous-generation applications. In total, the new supercomputer system will feature 1,858 compute nodes with over 140,000 cores, more than 330 TB of RAM, 13 PB of new storage capacity, and a peak performance close to 10 PetaFLOPS.</p>

<p>Sources: <a href="https://www.tacc.utexas.edu/news/latest-news/2023/07/24/taccs-new-stampede3-advances-nsf-supercomputing-ecosystem/">TACC</a>, <a href="https://www.hpcwire.com/2023/07/24/taccs-new-stampede3-enhances-nsf-supercomputing-ecosystem/">HPCWire</a></p>
</p>]]></description>
    <link>https://www.anandtech.com/show/18974/taccs-stampede3-uses-intels-xeon-max-with-hbm2e-and-ponte-vecchio</link>
 	<pubDate>Tue, 25 Jul 2023 12:00:00 EDT</pubDate>
 	<guid isPermaLink="false">tag:www.anandtech.com,18974:news</guid>
 	<category><![CDATA[ Supercomputers]]></category>                               
</item>  
    
</channel>
</rss>