HTPC Aspects : Miscellaneous Factors

In this section, we cover some miscellaneous HTPC aspects that are too short to warrant separate sections. These include a discussion of the supported display refresh rates, a short look at the hardware encoder (NVENC) in action, and a summary of our thoughts on the GTX 750 Ti as an HTPC GPU.

Refresh Rate Accuracy:

NVIDIA provides an easy way to customize refresh rates. The process remains the same as the one we explained in our review of the GT 640. The 23 Hz setting gives us a refresh rate of 23.971 Hz. With Intel providing rock-solid 23.976 Hz support in Haswell, it is time NVIDIA got its out-of-the-box refresh rates equally accurate.
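To put that error in perspective, here is a quick back-of-the-envelope calculation (a hypothetical Python sketch, not part of our test suite) showing how often a frame must be dropped or repeated when 23.976 fps film content plays on a display refreshing at 23.971 Hz:

    # Estimate frame drop/repeat cadence from a refresh rate mismatch.
    # Figures from this page: NVIDIA's 23 Hz setting measures 23.971 Hz,
    # while film content runs at 24000/1001 ~ 23.976 fps.
    content_fps = 24000 / 1001              # 23.976... fps film cadence
    display_hz = 23.971                     # measured refresh rate

    drift = abs(content_fps - display_hz)   # frames of error per second
    seconds_per_glitch = 1 / drift          # time to accumulate one full frame

    print(f"~1 dropped/repeated frame every {seconds_per_glitch:.0f} s "
          f"({seconds_per_glitch / 60:.1f} minutes)")
    # -> roughly one visible glitch every 3.3 minutes

At these rates, playback hiccups every few minutes, which is exactly why out-of-the-box accuracy matters for long-form film content.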

NVIDIA also allows setting refresh rates not reported as available by the display's EDID. On the Sony KDL46EX720, it allowed us to drive 1080p50 without any issues. The flexibility is definitely appreciated, though it would be nice to have better accuracy without all the tweaking.

Hardware Encoder: NVENC

We used CyberLink MediaEspresso v6.7 to evaluate the hardware encoder block. Our test clip was a 3-minute long 1080p24 H.264 stream at 36 Mbps and the target was a 720p24 H.264 stream at 6 Mbps. The time taken for conversion and the power consumption at the wall during the conversion process are provided in the table below.

GPU Video Encoding Performance

GPU                                                        Conversion Time (m:ss)   Power at Wall
NVIDIA GeForce GTX 750 Ti                                  2:54                     88.97 W
NVIDIA GeForce GT 640                                      0:36                     108.18 W
AMD Radeon HD 7750 (VCE)                                   1:06                     76.84 W
Intel HD 4000 QuickSync (Better Quality/Fast Conversion)   0:24                     63.91 W

It appears as if the 750 Ti is using the CUDA encode path rather than NVENC, while the GT 640 uses NVENC without issue. We had readied ourselves for a quality comparison of the new NVENC using objective metrics; it looks like we will have to wait for this issue to be resolved before proceeding down that path. [Update: NVIDIA got back to us indicating that this is a Maxwell-related driver issue. We are waiting for new drivers.]
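For readers who want to experiment with GPU transcoding outside of MediaEspresso, the snippet below is a hypothetical sketch of an equivalent job (1080p24 at 36 Mbps down to 720p24 at 6 Mbps) using an FFmpeg build with NVENC support; the file names are placeholders, and this is not the workflow we used for the table above:

    # Hypothetical sketch: a transcode similar to our MediaEspresso test,
    # routed through the GPU's NVENC block via FFmpeg's h264_nvenc encoder.
    # Assumes an FFmpeg build compiled with NVENC support.
    import subprocess

    cmd = [
        "ffmpeg",
        "-i", "input_1080p24.mp4",   # placeholder 1080p24 source clip
        "-c:v", "h264_nvenc",        # encode on the GPU's NVENC engine
        "-b:v", "6M",                # 6 Mbps target bitrate
        "-vf", "scale=-2:720",       # downscale to 720p, preserving aspect
        "-r", "24",                  # 24 fps output
        "-c:a", "copy",              # pass the audio stream through untouched
        "output_720p24.mp4",
    ]
    subprocess.run(cmd, check=True)

Scripting the conversion this way also makes it easy to time runs and compare encoder behavior across GPUs.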

HTPC Verdict - Wait and Watch

We have taken a look at the HTPC credentials of the GTX 750 Ti and compared it with the GT 640 and the HD 7750. In terms of power efficiency, it is hard not to recommend the 750 Ti. With a 60W TDP, it is also amenable to passive cooling. However, it comes to the market at a time when the HEVC standard has only just been ratified (preventing it from having a full-blown hardware-accelerated decoder) and HDMI 2.0 with 4Kp60 support is right around the corner. The perfect HTPC GPU would include support for both, but the 750 Ti, unfortunately, is a bit early to the game. More troublesome are the facts that CyberLink's MediaEspresso seems unable to take advantage of the new NVENC and that some of our 1080p60 H.264 clips show decoding artifacts, even though they play perfectly on the GT 640.

We suggest that HTPC enthusiasts adopt a wait-and-watch approach to the GTX 750 Ti, particularly with respect to driver bugs specific to the 750 Ti and the extent of HEVC decode support that will eventually be available. Depending on one's requirements, it might also be prudent to wait for a Maxwell GPU with HDMI 2.0 support.

Comments

  • TheinsanegamerN - Tuesday, February 18, 2014 - link

    Look at Sapphire's 7750. Superior in every way to the 6570, it's single-slot and low-profile, and it overclocks like a champ.
  • dj_aris - Tuesday, February 18, 2014 - link

    Sure, but its cooler is kind of loud. Definitely NOT a silent HTPC choice. Maybe an LP 750 would be better.
  • evilspoons - Tuesday, February 18, 2014 - link

    Thanks for pointing that out. None of my local computer stores sell that, but I took a look on MSI's site and sure enough, there it is. They also seem to have an updated version of the same card being sold as an R7 250, although I'm not sure there's any real difference or if it's just a new sticker on the same GPU. Clock speeds, PCB design, and heat sink are the same, anyway.
  • Sabresiberian - Tuesday, February 18, 2014 - link

    I'm hoping the power efficiency means the video cards at the high end will get a performance boost because NVIDIA can cram more SMMs onto the die than the SMXs used in Kepler solutions. This of course assumes the lower power spec means less heat as well.

    I do think we will see a significant performance increase when the flagship products are released.

    As far as meeting DX11.1/11.2 standards - it would be interesting to hear from game devs how much this affects them. Nvidia has never been all that interested in actually meeting all the requirements for Microsoft to give them official status for DX versions, but that doesn't mean the real-world visual quality is reduced. In the end, what I care about is visual quality; if it causes them to lose out compared to AMD's offerings, I will jump ship in a heartbeat. So far that hasn't been the case, though.
  • Krysto - Tuesday, February 18, 2014 - link

    Yeah, I'm hoping for a 10-teraflop Titan, so I can pair it with my Oculus Rift next year!
  • Kevin G - Tuesday, February 18, 2014 - link

    nVidia has been quite aggressive with major DirectX versions. They heavily pushed DX10 back in the day with the GeForce 8000/9000 series. They do tend to de-emphasize smaller updates like 8.1, 10.1, 11.1, and 11.2. This is partially due to those versions' short life spans on the market before the next major update arrives.

    I do expect this to change, though, as Windows is moving to a rapid release schedule and it'll be increasingly important to adopt these smaller iterations.
  • kwrzesien - Tuesday, February 18, 2014 - link

    Cards on Newegg are showing DirectX 11.2 in the specs list along with OpenGL 4.4. Not that I trust this more than the review - we need to find out more.
  • JDG1980 - Tuesday, February 18, 2014 - link

    The efficiency improvements are quite impressive considering that they're still on 28nm. TDP is low enough that AIBs should be able to develop fanless versions of the 750 Ti.

    The lack of HDMI 2.0 support is disappointing, but understandable, considering that it exists virtually nowhere. (Has the standard even been finalized yet?) But we need to get there eventually. How hard will it be to add this feature to Maxwell in the future? Does it require re-engineering the GPU silicon itself, or just re-designing the PCB with different external components?

    Given the increasing popularity of cryptocoin mining, some benchmarks on that might have been useful. I'd be interested to know if Maxwell is any more competitive in the mining arena than Kepler was. Admittedly, no one is going to be using a GPU this small for mining, but if it is competitive on a per-core basis, it could make a big difference going forward.
  • xenol - Tuesday, February 18, 2014 - link

    I'm only slightly annoyed that NVIDIA released this as a 700 series and not an 800 series.
  • DanNeely - Tuesday, February 18, 2014 - link

    I suspect that's an indicator that we shouldn't expect the rest of the Maxwell line to launch in the immediate future.
