This morning NVIDIA has taken the wraps off a new video card for laptops, the GeForce MX150. Aimed at the entry-level market for discrete GPUs – that is, laptops that need performance only a bit above an integrated GPU – the MX150 is NVIDIA’s Pascal-based successor to the previous 930M/940M series of laptop adapters that have been in computers over the last couple of years. Today’s reveal is undoubtedly tied to next week’s Computex trade show, so we should expect to see a number of laptops using the new adapter announced in the coming days.

From a technical perspective, details on the GeForce MX150 are very limited. Traditionally NVIDIA does not publish much in the way of details on their low-end laptop parts, and unfortunately the MX150’s launch isn’t any different. We’re still in the process of shaking down NVIDIA for more information, but what usually happens in these cases is that these low-end products don’t have strictly defined specifications. At a minimum, OEMs are allowed to dial in clockspeeds to meet their TDP and performance needs. However in prior generations we’ve also seen NVIDIA and OEMs use multiple GPUs under the same product name – mixing in GM107 and GM108, for example – so there’s also a strong possibility that will happen here as well.

Officially, all NVIDIA says about the new video card is that it uses GDDR5 and that it offers around 33% better performance than the GeForce 940MX, a (typically) GM108-based product. Based on the market segment and NVIDIA’s recent activities in the desktop space, the “baseline” MX150 is without a doubt GP108, NVIDIA’s entry-level GPU that was just recently launched in the GeForce GT 1030 for desktops. Information about this chip is limited, but here’s my best guess for baseline MX150 specifications.

Best Guess: NVIDIA Laptop Video Card Specification Comparison

                        Typical MX150     Typical 940MX
CUDA Cores              384?              384
ROPs                    16                8
Boost Clock             Variable          Variable
Memory Type             GDDR5             GDDR5/DDR3
Memory Bus Width        64-bit?           64-bit
VRAM                    <=2GB             <=2GB
GPU                     GP108?            GM108
Manufacturing Process   TSMC 16nm         TSMC 28nm
Launch Date             05/26/2017        03/2016

The limited 33% performance improvement over the existing 940MX comes as a bit of a surprise, but it makes sense within the context of the specifications. Relative to a GDDR5-equipped 940MX, the MX150 does not have a significant specification advantage, offering the same number of CUDA cores and similar memory bandwidth. The one stand-out here is ROP throughput, which doubles thanks to GP108’s higher ROP count.
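To put “similar memory bandwidth” in rough numbers, here is a back-of-the-envelope sketch. The memory data rates are assumptions on my part – NVIDIA hasn’t published them, and OEMs can vary them – with 6 Gbps being a typical figure for entry-level GDDR5 and 5 Gbps for the GDDR5 940MX.

```python
def bandwidth_gbs(bus_width_bits, data_rate_gbps):
    """Peak memory bandwidth in GB/s: bus width in bytes times data rate."""
    return bus_width_bits / 8 * data_rate_gbps

# Assumed clocks; actual OEM configurations will vary.
mx150_bw = bandwidth_gbs(64, 6.0)        # 48.0 GB/s on a 64-bit bus
gddr5_940mx_bw = bandwidth_gbs(64, 5.0)  # 40.0 GB/s on a 64-bit bus

print(mx150_bw, gddr5_940mx_bw)
```

Even under generous assumptions, both parts land in the same 40-50 GB/s neighborhood, which is why the CUDA core count and memory bus leave clockspeed as the main differentiator.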

Ultimately what this means is that most of the MX150’s performance advantage over the 940MX comes from clockspeed improvements, with a smaller uptick from architectural gains. The counterpoint is that these are entry-level laptop parts that will frequently be paired with 15W Intel U-series CPUs, so vendors are going to play it safe on clockspeeds in order to maximize energy efficiency. NVIDIA does advertise these GPUs as offering multiple times the performance of Intel’s HD 620 iGPU; however, given the higher power consumption of the GPU, I’m more curious how things would compare against Intel’s 28W Iris Plus 650 configurations.

Owing to OEM configurability and general NVIDIA secrecy, NVIDIA does not publish official TDPs for these parts. But it’s interesting to note that while performance has only gone up 33%, NVIDIA is claiming that power efficiency/perf-per-watt has tripled. This strongly implies that NVIDIA’s baseline specifications for the product are favoring TDP over significant clockspeed gains, so I’m very interested to see what the real-world TDPs are going to be like. 940MX was a 20-30W part (depending on who you asked and what GPU they used), so with the jump from 28nm to 16nm, NVIDIA should have a good bit of room for drawing down TDPs. Though ultimately what this may mean is that MX150 is closer to a 930M(X) replacement than a 940M(X) replacement if we’re framing things in terms of power consumption.
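The implied power draw falls out of NVIDIA’s own two claims with a little arithmetic. Both figures are marketing numbers, and the 25W baseline for the 940MX is my own assumption from the middle of its 20-30W range, so treat this as a rough sketch rather than a spec.

```python
# NVIDIA's claims: ~33% more performance, ~3x the perf-per-watt.
perf_ratio = 1.33       # MX150 performance vs. 940MX
efficiency_ratio = 3.0  # MX150 perf-per-watt vs. 940MX

# Power scales as performance divided by efficiency.
power_ratio = perf_ratio / efficiency_ratio  # ~0.44x the 940MX's power
implied_tdp = 25 * power_ratio               # ~11 W on an assumed 25 W baseline

print(round(power_ratio, 2), round(implied_tdp, 1))
```

If the claims hold, the MX150 would be drawing well under half the power of a typical 940MX, which is what drives the 930M(X)-replacement framing below.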

Otherwise, as a GP108 part this is the Pascal architecture we’ve all come to know and love. Relative to NVIDIA’s desktop parts, this is actually a more substantial upgrade, as the previous 930M/940M parts were based on NVIDIA’s Maxwell 1-generation GM108 GPUs, and not the newer Maxwell 2 GM2xx series. The practical difference is that these earlier parts lacked DirectX feature level 12_1 support, HDMI 2.0, and the low-level performance optimizations (e.g. newer color compression) that we better know the Maxwell family for. So while the MX150 isn’t meant for serious gaming laptops, it has a much richer feature set to draw from for both rendering and media tasks. CUDA coders will likely also appreciate that the newer part offers CUDA compute capabilities much closer to NVIDIA’s current-generation server hardware, such as fine-grained preemption.

Finally, like its predecessor, expect to see the GeForce MX150 frequently paired up with Intel’s U-series CPUs in ultrabooks. While this SKU isn’t strictly limited to slim form factors – and someone will probably put it into a larger device for good measure – it’s definitely how NVIDIA is positioning the part, as the GTX 1050 series is for larger devices. Also expect to see most (if not all) MX150 parts running in Optimus mode, which continues to be a strong selling point for encouraging OEMs to include a dGPU.

With Computex kicking off next week, we should see a flurry of laptop announcements. Though not all of the relevant laptop announcements have gone out yet, NVIDIA’s announcement names Acer, Asus, Clevo, MSI, and HP as laptop vendors who will all be shipping MX150-equipped laptops. NVIDIA and their various partners will in turn hit the ground running here, as NVIDIA’s announcement notes that MX150-equipped laptops will begin shipping next month.

Source: NVIDIA


15 Comments


  • StevoLincolnite - Friday, May 26, 2017 - link

    So how does the Geforce MX150 stack up against the Geforce 2 MX200? Benchmarks incoming? ;)
  • creed3020 - Friday, May 26, 2017 - link

    I was thinking down the same lines, right back to my first GPU!
  • Samus - Friday, May 26, 2017 - link

    Can't wait for them to revive the RIVA \ TNT names now... would probably fit into a budget category perfectly.
  • Rankor - Saturday, May 27, 2017 - link

    Brings back memories. My first nVidia board was a (Geforce 2) MX back in the day. This was coming from a Matrox G400.
  • tipoo - Friday, May 26, 2017 - link


    Aw man, I played things at such poor resolutions and framerates on that thing! As a kid it's all I had to game on, so I dealt with Halo and Half Life 2 at like 800x600 or something and 20fps, I could not deal with that today :P
  • nathanddrews - Friday, May 26, 2017 - link

    Wow, 20fps at 800x600? That was a REVELATION compared to the 10fps 320×240 of N64 GoldenEye! LOL
  • Bulat Ziganshin - Friday, May 26, 2017 - link

    we are played dynamic games on programmable calculator with 10-digit display and 104 bytes of program memory. the program was written in secret KGB laboratory, and executed by drunken bears. but you will never believe how hard it was to live in Russia
  • tipoo - Friday, May 26, 2017 - link

    Actually since the N64 was my only point of comparison, yeah it was :P
  • tipoo - Friday, May 26, 2017 - link


    Also curious as to the last generation's very top end that this would match, how far back that would go
  • satai - Friday, May 26, 2017 - link

    Is it any better than top iGPUs?
