Preventing Espionage at AMD: How The Eyefinity Project Came to Be

There’s one more thing Carrell Killebrew has done for the world: he’s single-handedly responsible for getting Eyefinity included in the Evergreen stack.

It started like this. All GPU vendors go to their customers (OEMs) and ask them for features they’d like to have. The notebook vendors wanted a total of 6 display outputs from the GPU, although they only needed two to be active at the same time. Two paths could be used for LCD panels, two could be used for external outputs (VGA + DVI/HDMI) and two routed to a docking station connector.

Carrell thought it would be a shame to have all of these output pins but not be able to drive all six at the same time. So he came up with a plan to be able to drive at least 3 displays on any Evergreen card. The high end cards would support 6 displays simultaneously.

His desire to do this wasn’t born out of pure lunacy; Carrell has a goal in mind. Within the next 6 years he wants to have a first generation holodeck operational. A first generation holodeck would be composed of a 180 degree hemispherical display with positionally and phase-accurate sound. We’ll also need the pixel pushing power to make it all seem lifelike. That amounts to at least 100 million pixels (7 million pixels for what’s directly in front of you, and the rest for everything else in the scene), or almost 25 times the number of pixels on a single 30” display.
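The arithmetic behind that “almost 25 times” figure checks out if you assume the 30” display is a 2560 x 1600 panel, the common spec of the era (the article doesn’t state the resolution, so that part is my assumption):

```python
# Back-of-the-envelope check of the holodeck pixel math.
# The 7M foveal and ~100M total pixel figures come from the article;
# 2560x1600 is the assumed resolution of a typical 30" panel.
pixels_30in = 2560 * 1600            # 4,096,000 pixels
holodeck_total = 100_000_000         # Carrell's ~100M pixel target
ratio = holodeck_total / pixels_30in
print(f"{ratio:.1f}x a single 30-inch display")  # ~24.4x, i.e. "almost 25 times"
```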

We’re not quite at 2016, so he had to start somewhere. And that somewhere happened to be with enabling a minimum of 3 and a maximum of 6 displays, per card, for all members of the Evergreen family. Today we know the technology as Eyefinity, but internally Carrell called it SunSpot.

Carrell didn’t want anyone knowing about SunSpot, so he kept it off the Cypress PRS. Through some very clever maneuvering he managed to keep it off the radar while engineering hammered out the PRS, and even kept it off the chopping block when the GPU was cut down in size. He knew that if anyone got wind of it, they’d ask him to kill it while the chip was being scaled down. Worse, if anyone outside a trusted few became aware of it, there was the chance that NVIDIA would have time to copy and implement the feature. It became Carrell’s goal to keep SunSpot as quiet as possible.

It began with a list. On this list, kept on an internal website, were the names of everyone who needed to know about SunSpot. If your name wasn’t on the list, not only did you not know about SunSpot, but no one who knew about the project was allowed to talk about it near you.

Along with the list came rules.

As I just mentioned, no one on the list could talk about SunSpot in a place where someone not on the list could overhear. And if you wanted to get someone added to the list, it had to be approved; the final say rested with none other than Carrell Killebrew.

The SunSpot engineers went to work on the feature, bringing in others only when absolutely necessary. The team grew one person at a time and eventually plateaued. The software engineers weren’t made aware of SunSpot until the last minute. Carrell only gave them enough time to enable SunSpot; they didn’t get the luxury of advance knowledge.

Carrell went to David Glenn, head of software engineering at ATI, and asked him for the latest possible date by which someone in software had to start working on the feature. David gave him a date. Carrell asked for the names of the people who needed to know; David gave him three. On that date, the SunSpot team called up those three people and said “we need to tell you something”. Needless to say, no one was happy about Carrell’s secrecy. Some of the higher-ups at ATI knew Carrell had people working on something; they just had no idea what it was.

It's the software that ultimately made Eyefinity

Even in his own cube, Carrell always spoke about SunSpot in code; he called it feature A. Carrell was paranoid, and for good reason: the person who sat on the other side of Carrell’s cube wall left to work for NVIDIA a couple of months into the SunSpot project. In all, three ATI employees left for NVIDIA while SunSpot was underway. Carrell was confident that NVIDIA never knew what was coming.

Other than the obvious, there was one real problem with Carrell’s secrecy: in order for Eyefinity to work, it needed support from external companies. If you’ll remember back to the Radeon HD 5800 series launch, Samsung announced thin-bezel displays to be sold in 1, 3 or 6 panel configurations specifically for Eyefinity setups. There was no way to keep SunSpot a secret while still talking to OEMs like Samsung; it was just too big a risk. The likelihood of someone within ATI leaking SunSpot to NVIDIA was high enough. But a leak from an employee at an OEM that dealt with both companies? That was pretty much guaranteed.

For a feature like SunSpot to go completely unnoticed during the development of a GPU is unheard of. Carrell even developed a rating system. The gold standard was launch: if SunSpot could remain a secret until the launch, that was gold. Silver was keeping it a secret until chips came back from the fab. And the effort would earn a bronze if the secret held up to tape-out; even then, NVIDIA would be at least one full product cycle behind ATI.

Eventually, Rick Bergman, GM of graphics at AMD, committed to keeping SunSpot a secret until bronze, but he told Carrell that when they got to tape-out they were going to have a serious talk about it.

Time went on, SunSpot went on, and Carrell and crew made it to bronze. The chip had taped out and no one outside the list knew about Carrell’s pet project. A little past bronze, Rick asked Carrell to have that talk. There were three customers that would really benefit from hearing about SunSpot, and then the killer: it would also help ATI competitively.

Carrell didn’t want to risk tipping off the competition to SunSpot, but he knew that in order to make it successful he needed OEMs on board. The solution was simply to add those at the OEMs who needed to know about SunSpot to the list. The same rules applied to them, and they were given a separate NDA, distinct from the existing NDAs in place between AMD and the OEM. AMD legal treated SunSpot as proprietary IP; if anyone else within an OEM needed to know about it, they first had to ask permission to discuss it. To make sure that any leaks would be traceable, Carrell called SunSpot by a different name with each of the three OEMs involved.

A few weeks prior to the Cypress launch, the CEO of one of the OEMs saw Eyefinity and asked to show it to someone else. Even the CEO’s request needed to be approved before he could share it. Surprisingly enough, each of the three OEMs abided by its agreement; to Carrell’s knowledge the tech never leaked.

NVIDIA's Surround driven off two cards

While NVIDIA demonstrated its own triple-display technology at this year’s CES, it’s purely a software solution; each GPU is still limited to two display outputs. I asked Carrell what he thought about NVIDIA’s approach, and he was honest as always.

Eyefinity allows for 3 outputs from a single GPU

ATI considered a software-only approach a while ago, but ultimately vetoed it for a couple of reasons. With the software-only solution you need a multi-GPU capable system. That means a more expensive motherboard, a more powerful PSU and a little more hassle configuration-wise. Then there were the performance concerns.

One scenario gives you very noticeable asymmetry: one card drives one display while the other card drives two. This can cause some strange problems. The other scenario has all three displays coming off of a single card, with alternating frames of display data sent from one GPU to the other either over PCIe or a CF/SLI connector. With 6 displays, Carrell was concerned that there wouldn’t be enough bandwidth to do that fast enough.
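To get a feel for Carrell’s concern, here’s a rough sketch of the frame traffic such a scheme would generate; the per-display resolution, refresh rate and pixel format are my assumptions, not ATI’s figures:

```python
# Rough estimate of inter-GPU frame traffic in the alternating-frame scheme,
# where one card drives all displays and the other ships its finished frames
# over. All figures below are assumptions for illustration.
width, height = 1920, 1200     # assumed per-display resolution
displays = 6
refresh_hz = 60
bytes_per_pixel = 4            # 32-bit RGBA

# Each GPU renders alternating frames, so roughly half of all displayed
# frames must cross the link from the second GPU to the first.
bytes_per_sec = width * height * displays * refresh_hz * bytes_per_pixel / 2
print(f"{bytes_per_sec / 1e9:.1f} GB/s of frame traffic")  # ~1.7 GB/s
```

That is a meaningful slice of a PCIe 2.0 x16 link (roughly 8 GB/s raw, shared with textures, commands and vertex data), and far more than a CrossFire bridge was built for, which is presumably why the bandwidth worry was real.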

There were also game compatibility concerns that turned ATI off the software approach, although I was quick to point out that FOV and aspect ratio issues are apparent in many games today with Eyefinity. Carrell agreed, but said that it’s a lot better than they expected, and better than it would have been with a software-only solution.
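The FOV issues come from games that simply stretch a single-display field of view across a triple-wide canvas. The usual fix, often called “Hor+” scaling, widens the view frustum to match the new aspect ratio instead; a minimal sketch (the function name and the sample FOV values are mine, not from the article):

```python
import math

def horplus_fov(base_hfov_deg: float, base_aspect: float, new_aspect: float) -> float:
    """Scale a horizontal FOV from one aspect ratio to another (Hor+ scaling)."""
    half = math.radians(base_hfov_deg) / 2
    # Widen the half-angle of the frustum in proportion to the aspect change.
    new_half = math.atan(math.tan(half) * new_aspect / base_aspect)
    return math.degrees(new_half) * 2

# A 90-degree horizontal FOV on one 16:10 panel becomes ~143 degrees
# across three of them side by side (48:10 combined aspect).
print(round(horplus_fov(90.0, 16 / 10, 48 / 10), 1))
```

Games that don’t do this either letterbox the outer displays or distort badly at the edges, which is the kind of compatibility problem both vendors had to live with.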

Not to belittle the efforts of ATI’s software engineers here. While Carrell was one of three people originally responsible for SunSpot, they weren’t the ones who made it great. In Carrell’s own words: “In the end, I’d say the most key contributions came from our Software engineering team. SunSpot is more a software feature than a hardware one.” ATI’s software team, despite not being clued into the project until it was implemented in hardware, was responsible for taking SunSpot and turning it into Eyefinity.

As for the ridiculous amount of secrecy that surrounded SunSpot? It wasn’t just to keep Carrell entertained. AMD has since incorporated much of Carrell’s brand of information compartmentalization into how it handles other upcoming features. I have to wonder if Carrell somehow managed to derive Apple’s equation for secrecy.



Comments

  • Dudler - Sunday, February 14, 2010 - link

    Your reasoning is wrong. The 57xx is a performance segment down from the 48xx segment. By your reasoning the 5450 should be quicker than the last gen 4870X2. The 5870 should be compared to the 4870, the 5850 to 4850 and so on.

    Regarding price, the article sure covers it: the 40nm process was more expensive than TSMC told AMD, and the yield problems factored in too. Can't blame AMD for that, can we?

    And finally, don't count out that Fermi is absent to the party. Amd can charge higher prices when there is no competition. At the moment, the 5-series has a more or less monopoly in the market. Considering this, I find their prices quite fair. Don't forget nVidia launched their Gtx280 at $637....
  • JimmiG - Friday, February 19, 2010 - link

    "Your reasoning is wrong. The 57xx is a performance segment down from the 48xx segment. "

    Well I compared the cards across generations based on price both at launch and how the price developed over time. The 5850 and 4850 are not in the same price segment of their respective generation. The 4850 launched at $199, the 5850 at $259 but quickly climbed to $299.

    The 5770 launched at $159 and is now at $169, which is about the same as a 1GB 4870, which will perform better in DX9 and DX10. Model numbers are arbitrary and at the very best only useful for comparing cards within the same generation.

    The 5k-series provides a lot of things, but certainly not value. This is the generation to skip unless you badly want to be the first to get DX11 or you're running a really old GPU.
  • just4U - Tuesday, February 16, 2010 - link

    and they are still selling the 275, 285 etc. for a hefty chunk of change. I've often considered purchasing one but the price has never been right, and mail-in rebates are a "PASS" or "NO THANKS" for many of us rather than an incentive.

    I haven't seen the mail-ins for AMD products much; I hope they're reading this and shy away from that sort of sales format. Too many of us get the shaft and never receive our rebates anyway.
  • BelardA - Monday, February 15, 2010 - link

    Also remind people... the current GTX 285 is about $400 and usually slower than the $300 5850. So ATI is NOT ripping off people with their new DX11 products. And looking at the die-size drawings, the RV870 is a bit smaller than the GT200... and we all know that Fermi is going to be another HUGE chip.

    The only disappointment is that the 5750 & 5770 are not faster than the 4850/70 which used to cost about $100~120 when inventory was good. Considering that the 5700 series GPUs are smaller... Hell, even the 4770 is faster than the 5670 and costs less. Hopefully this is just the cost of production.

    But I think once the 5670 is down to $80~90 and the 5700s are $100~125 - they will be more popular.
  • coldpower27 - Monday, February 15, 2010 - link

    nVidia made a conscious decision not to fight the 5800 Series with the GTX 200 in terms of a price war, hence why their prices remain poor value. They won't win using a large GT200b die vs the 5800's smaller die.

    Another note to keep in mind is that the 5700 Series also has the detriment of being higher in price due to ATi moving the pricing scale back up a bit with the 5800 Series.

    I guess a card that draws much less power than the 4800s, and is close to the performance of those cards, is a decent win, just not completely amazing.
  • MonkeyPaw - Sunday, February 14, 2010 - link

    Actually, the 4770 series was meant to be a suitable performance replacement for the 3870. By that scheme, the 5770 should have been comparable to the 4870. I think we just hit some diminishing returns from the 128-bit GDDR5 bus.
  • nafhan - Sunday, February 14, 2010 - link

    It's the shaders, not the bus width. Bandwidth is bandwidth however you accomplish it. The RV8xx shaders are slightly less powerful on a one-to-one basis than the RV7xx shaders are.
  • LtGoonRush - Sunday, February 14, 2010 - link

    The point is that an R5770 has 128-bit GDDR5, compared to an R4870's 256-bit GDDR5. Memory clock speeds can't scale to make up the difference from cutting the memory bus in half, so overall the card is slower, even though it has higher compute performance on the GPU. The GPU just isn't getting data from the memory fast enough.
  • Targon - Monday, February 15, 2010 - link

    But you still have the issue of the 4870 being the high end from its generation, and trying to compare it to a mid-range card in the current generation. It generally takes more than one generation before the mid range of the new generation is able to beat the high end cards from a previous generation in terms of overall performance.

    At this point, I think the 5830 is what competes with the 4870, or is it the 4890? In either case, it will take until the 6000 or 7000 series before we see a $100 card able to beat a 4890.
  • coldpower27 - Monday, February 15, 2010 - link

    Yeah, lately we haven't seen ATi/nVidia make the current-gen mainstream faster than the last-gen high end. Bandwidth issues are finally becoming apparent.

    6600 GT > 5950 Ultra.
    7600 GT > 6800 Ultra.

    But the 8600 GTS was only marginally faster than the 7600 GT and nowhere near the 7900 GTX; it took the 8800 GTS 640 to beat the 7900 GTX completely, and the 8800 GTS 320 beat it in the large majority of scenarios, due to its frame buffer limitation.

    The 4770 slots somewhere between the 4830/4850. However, that card arrived way later than most of the 4000 series, making it a decent bit faster than the 3870. That made more sense, since the jump from 3870 to 4870 was huge; sometimes nearly 2.5x could be seen, given the right conditions.

    The 4670 was the mainstream variant, and it offered most of the performance of the 3800 Series. It doesn't beat it though; it more like trades blows or is slower in general.
