Meet The ZOTAC GAMING GeForce GTX 1650 OC

In what's becoming a theme of the GTX 16-series, the GeForce GTX 1650 is once again a pure virtual launch: NVIDIA is not offering any Founders Edition models, leaving all cards to its add-in board partners. For today's review, we take a look at ZOTAC's GeForce GTX 1650 OC, a diminutive 2-slot, single-fan card with the reference base clockspeed and a mildly overclocked boost clock. With a TDP of 75W, the card pulls all of its power from the PCIe slot, which is typical for most GeForce GTX xx50 parts.

GeForce GTX 1650 Card Comparison
                  GTX 1650                      ZOTAC GTX 1650 GAMING OC
                  (Reference Specification)
Base Clock        1485MHz                       1485MHz
Boost Clock       1665MHz                       1695MHz
Memory Clock      8Gbps GDDR5                   8Gbps GDDR5
VRAM              4GB                           4GB
TDP               75W                           75W
Length            N/A                           5.94"
Width             N/A                           2-Slot
Cooler Type       N/A                           Open Air
Price             $149                          $149

At just under 6 inches long, the ZOTAC GTX 1650 OC is compact enough for most builds. As the card pulls power only from the PCIe slot, it's a natural fit for mITX and other SFF builds, or simply a no-fuss drop-in replacement. In turn, the ZOTAC GTX 1650 OC's cooling solution is one the company has used before on its other mini-ITX cards, combining a 90mm fan with a 'sunflower' heatsink. This also provides enough headroom for ZOTAC to apply a modest 30MHz boost clock increase.


The design/shroud and output situation is likewise similar. One DVI port, one HDMI 2.0b port, and one DisplayPort cover all the bases, including potential HTPC use. Of course, partners can always decide on different configurations, but the power- and cost-sensitive entry-level range is essentially standardized. VirtualLink is naturally not included here for several reasons; to put it in perspective, the 30W USB-C controller power budget for VirtualLink alone would be 40% of the card's overall 75W TDP.
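
To put that power-budget point in numbers, here's a quick back-of-the-envelope sketch in Python (figures taken from the paragraph above; the names are purely illustrative):

```python
# Rough illustration of why VirtualLink doesn't fit a slot-powered card.
# Figures from the article: ~30W budget for the VirtualLink USB-C port,
# against the 75W ceiling of a card fed solely by the PCIe x16 slot.
SLOT_POWER_LIMIT_W = 75    # no external power connectors on this card
VIRTUALLINK_BUDGET_W = 30  # power reserved for a VirtualLink port

share = VIRTUALLINK_BUDGET_W / SLOT_POWER_LIMIT_W
print(f"VirtualLink would consume {share:.0%} of the power budget,")
print(f"leaving just {SLOT_POWER_LIMIT_W - VIRTUALLINK_BUDGET_W} W for the GPU itself.")
```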

For overclocking and tweaking, ZOTAC has updated their in-house Firestorm utility for Turing, including support for automatic overclock scanning as part of Turing's GPU Boost 4.0 technology.
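
Firestorm itself is ZOTAC's own Windows tool and its internals aren't public, but for readers who want to sanity-check what an OC scan actually settled on, a minimal sketch using NVIDIA's generic NVML bindings (the pynvml package from nvidia-ml-py; nothing here is Firestorm-specific) can read back the card's current clocks and enforced power limit:

```python
# Minimal sketch: read back current GPU clocks and power limits via NVML.
# This is generic NVIDIA monitoring, not ZOTAC Firestorm's own API; it assumes
# the pynvml bindings (pip install nvidia-ml-py) and an installed NVIDIA driver.
import pynvml

pynvml.nvmlInit()
try:
    gpu = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU in the system
    name = pynvml.nvmlDeviceGetName(gpu)
    if isinstance(name, bytes):                 # older bindings return bytes
        name = name.decode()

    core_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)
    mem_mhz = pynvml.nvmlDeviceGetClockInfo(gpu, pynvml.NVML_CLOCK_MEM)
    max_core = pynvml.nvmlDeviceGetMaxClockInfo(gpu, pynvml.NVML_CLOCK_GRAPHICS)

    power_w = pynvml.nvmlDeviceGetPowerUsage(gpu) / 1000.0          # reported in mW
    limit_w = pynvml.nvmlDeviceGetEnforcedPowerLimit(gpu) / 1000.0  # reported in mW

    print(f"{name}: core {core_mhz} MHz (max {max_core} MHz), memory {mem_mhz} MHz")
    print(f"power draw {power_w:.1f} W of a {limit_w:.0f} W enforced limit")
finally:
    pynvml.nvmlShutdown()
```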

126 Comments

  • onbquo - Friday, May 3, 2019 - link

    Why is nobody talking about the coming 7nm Radeons mopping the floor in the 75W segment?
  • PeachNCream - Friday, May 3, 2019 - link

    Because no one has been able to benchmark said graphics cards so no one knows if something is going to mop floors or just draw polygons. (Personally, I'm in for a GPU that will mop my floors for me. I'd also like one that will mow the yard, wash the dishes, and take care of the laundry.)
  • onbquo - Friday, May 3, 2019 - link

    Good point, but I seriously believe the next-architecture Radeon built on 7nm could perform almost twice as fast as an RX 560 with 1024 CUs. Am I the only one hyped for 7nm graphics cards?
  • guidryp - Friday, May 3, 2019 - link

    You are making a pile of assumptions with no evidence.

    Process bumps aren't the big win that they once were. Radeon VII is 7nm and it didn't get twice as fast. The RTX 2080 outperforms it while using less power.

    7nm is NOT a magic bullet. We need to wait and see what actually happens.
  • Cooe - Friday, May 3, 2019 - link

    More recent benchmarking actually shows the RVII with the performance edge vs the RTX 2080 (AMD just completely botched the launch drivers-wise, which isn't particularly uncommon for them), but you're totally passing over the fact that it uses the exact same Vega architecture as 14nm Vega 10 yet manages to outperform it by around 30% while pulling LESS power than a V64. That's nearly a 40-50% boost in power efficiency per fps, with absolutely no arch changes beyond 2x additional memory controllers. Even if Navi only matches that kind of efficiency bump vs Polaris, it'll still be looking really good as long as they maintain their performance advantage as well.
  • guidryp - Saturday, May 4, 2019 - link

    Better in one or two AMD-favorable games, but not overall. Beating the V64's power draw is necessary, but it still doesn't come close to NVIDIA's power usage.
  • Oxford Guy - Saturday, May 4, 2019 - link

    7nm TSMC isn't nearly as impressive as 5nm TSMC, which brings an 80% increase in density. 7nm is a little bit sad, really. But it saves companies money because it doesn't require nearly as many design rule changes, so porting existing 14nm designs is much easier.
  • PeachNCream - Tuesday, May 7, 2019 - link

    I'm really looking forward to seeing what 7nm GPUs do once they hit the market, but I want to hold back on making judgements before we see what sorts of performance and power numbers emerge. I'm also more interested in mobile than desktop components because I have not put together or purchased a desktop PC in the past 5 years since I find laptops and phones a better fit in my living space and lifestyle.
  • nevcairiel - Saturday, May 4, 2019 - link

    Personally, the only reason I would ever care about a 75W card is for video duties - and AMD's video decoding/encoding is significantly worse than Intel's or NVIDIA's. So there is that.

    I would be excited if they were trying to make a high-end 7nm card that doesn't suck, but apparently it's once again just low-power cards. Same old, same old. I'm bored already.
  • Oxford Guy - Saturday, May 4, 2019 - link

    "Personally, the only reason I would ever care about a 75W card is for video duties "

    Then the lack of B frame support in the encoder is a deal-breaker.
