
122 Comments


  • quiksilvr - Tuesday, January 28, 2014 - link

    I am so sorry but I couldn't resist.

    http://cdn0.dailydot.com/uploaded/images/original/...
  • eanazag - Tuesday, January 28, 2014 - link

    I saw part of this a week ago in the AMD center on this site when reading the Kaveri article. I was hoping you could buy these already. Ubuntu has an ARM distro already, but not for any hardware I plan to own. These specs are perfect for what I want to do.

    This wouldn't be a comment about AMD if I didn't gripe, so here it is: how the hell do you release integrated 10 GbE in ARM servers before x86? All AMD needs to do is include 10GbE in the x86 Opteron platform to have some reason to purchase. Intel won't because they're still making too much money on add-on cards. With Netgear selling a 10GbE switch for around $1K, we are now just waiting on AMD and Intel to integrate copper ports.
  • BMNify - Tuesday, January 28, 2014 - link

    It's the perfect reason, at last, to finally buy AMD, IF and only IF they include these 2x integrated 10 GbE as standard in their 64-bit ARM SoC and you can actually build a full ARM PC for the price of their regular current x64 high-end systems...

    I'm not impressed with the usual DDR3 or DDR4 data throughput potential, as AMD have always provided far lower RAM throughput and slower L1/L2 than was needed to maximize the given cores' potential. Is it right that they are using "one" single channel, though, and not at least two 128-bit channels (half the Wide IO 512-bit spec)?
    "The memory interface is 128-bits wide and supports up to 4 SODIMMs, UDIMMs or RDIMMs."

    By the end of 2014/15 I'd expect Samsung to be thinking about using their integrated 512-bit Wide IO to get an easy 4x the potential of the best DDR3, and of course there's its sibling HMC going into production at the end of 2014 for an estimated 7x DDR4 throughput at lower power, etc.
  • Spoelie - Wednesday, January 29, 2014 - link

    128-bit *is* dual channel; one DDR3 DIMM has a 64-bit interface. So it has the same setup as the mainstream Intel/AMD platforms.
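As a rough illustration of what that 128-bit interface buys (my own back-of-envelope figures, not from the article): peak DRAM bandwidth is simply channels x 8 bytes x transfer rate.

```python
# Peak theoretical DRAM bandwidth: channels x 64-bit (8-byte) bus x transfer rate.
# Speed grades below are common DDR3 bins chosen for illustration.
def ddr_bandwidth_gbs(channels, mt_per_s):
    return channels * 8 * mt_per_s / 1000  # 8 bytes per transfer per channel

print(ddr_bandwidth_gbs(2, 1600))  # dual-channel DDR3-1600 -> 25.6 GB/s
print(ddr_bandwidth_gbs(2, 1866))  # dual-channel DDR3-1866 -> ~29.9 GB/s
```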
  • soaringrocks - Wednesday, January 29, 2014 - link

    I would be very interested to know what kind of LAN performance AMD can get with ARM driving the ports in Linux. Two 10G ports can generate a ton of interrupts and, depending on the app/configuration, could consume the lion's share of the CPUs, especially if you need to configure for small packet sizes. It's easier to drive full speed with jumbo frames, but that isn't the best configuration for a lot of uses. I can't wait to see how they do.
  • insanemal - Thursday, January 30, 2014 - link

    Interrupt handling I don't think will be the issue. If the AMD chips are anything like the Calxeda (which were A7 based and had a single 10GbE) the 10GbE will have a lot of 'offload engine' silicon.

    Just to point out something, you can't even saturate a 10GbE on X86_64 if the packet size is too small. If memory serves, you can only get around 2 million packets per second on 10GbE on a powerful X86_64 running linux anyway. I think there have been some multi-threading patches as well as other things to improve this, but I don't think an ARM is going to do too much worse at standard packet sizes than an X86_64 box.

    Also, for some reference:

    http://blog.erratasec.com/2013/10/whats-max-speed-...
  • Eloff - Thursday, January 30, 2014 - link

    That is a limitation of the Linux kernel. You can do 80 million packets per second (64-byte packets) on Linux with a single measly 8-core Xeon E5-2600 @ 2.0 GHz. That's with dual 40Gbps NICs. The way the Linux kernel handles networking is so 1990s.
  • Eloff - Thursday, January 30, 2014 - link

    Sorry, I meant to say you can do that if you bypass the kernel, e.g. with Intel DPDK, which is where those benchmark numbers come from.
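For context on those packet-rate numbers: line rate at minimum frame size is fixed by Ethernet framing overhead. A quick sketch (my own arithmetic, assuming the standard 8-byte preamble and 12-byte inter-frame gap per frame):

```python
# Theoretical maximum packet rate for an Ethernet link at a given frame size.
def max_pps(link_bps, frame_bytes):
    wire_bytes = frame_bytes + 8 + 12  # frame + preamble + inter-frame gap
    return link_bps / (wire_bytes * 8)

print(round(max_pps(10e9, 64) / 1e6, 2))    # ~14.88 Mpps at minimum frame size
print(round(max_pps(10e9, 1500) / 1e6, 2))  # ~0.82 Mpps at full-size frames
```

So the ~2 Mpps figure quoted above for the stock kernel is well short of the ~14.88 Mpps that 10GbE can carry at 64-byte frames, which is the gap kernel-bypass frameworks target.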
  • lwatcdr - Tuesday, January 28, 2014 - link

    I am sure they will get to this. I can so see this as a platform for a NAS or SAN server. With the PCIe slot for a RAID card or JBOD controller, and up to 128 GB of RAM, it is a perfect fit for storage. It is also a great option as a development system for ARM developers, and it would be a good system for VoIP as well. Seems like we will see ARM for storage and possibly front-end web servers, and x86 for VMs and database servers.
    AMD should donate some of these to FreeBSD, FreeNAS, and OpenNAS.
  • Conficio - Wednesday, January 29, 2014 - link

    That is what I thought too. With 8 SATA III ports and some software RAID, that would be good to go.

    I wonder if they would actually make a dual core version for that market. Seems like that would be enough for this purpose.

    You could also use the PCIe lanes to make a decent HTPC + NAS box as well.
  • tuxRoller - Wednesday, January 29, 2014 - link

    The problem with software RAID would be that the parity code is almost certainly written in assembler.
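For what it's worth, the parity operation itself is just a byte-wise XOR across the data blocks; what's architecture-specific is the kernel's tuned (often assembly/SIMD) implementation of it, not the algorithm. A minimal portable illustration, not the kernel's actual code:

```python
# RAID-5 style parity: XOR every data block together, byte by byte.
def xor_parity(blocks):
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"\x0f\xf0", b"\xff\x00", b"\x0f\x0f"]
p = xor_parity(data)
# A lost block is rebuilt by XOR-ing the parity with the surviving blocks:
rebuilt = xor_parity([p, data[1], data[2]])
assert rebuilt == data[0]
```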
  • insanemal - Thursday, January 30, 2014 - link

    Ceph OSDs.
    These would make KILLER OSDs.

    Six spinning-rust drives and two SSDs...
  • hoboville - Wednesday, January 29, 2014 - link

    An indirect answer: adding 10GbE to consumer MBs would increase the price too much. I think what you're not understanding is that the "Southbridge"/PCH/whateverAMDcallstheirs is the secondary controller concentrator on motherboards. The primary is the MCH, which is CPU on-die. Typically, what you see with processor manufacturers like Intel is that they portion away part of their PCI-E lanes into the PCH, so that PCH devices can have enough throughput to run at max speed without being bottlenecked and fighting for bandwidth when the PCI-E slots are full.

    So, for this motherboard, it runs 8x PCIe 3.0 instead of 16x, which is what most consumer MBs use. Why? Because these PCIe slots are typically going to be used for more RAID cards or more network cards, not video cards, and those cards are typically 8x and 4x. So, to answer your question, half of the normal PCIe bandwidth has been allocated to the PCH to run the 8x SATA III ports at full speed, along with the dual 10GbE ports, all while allowing those potential PCIe cards to run at full bore.

    Adding 10GbE to consumer boards would mean adding an expensive controller to mobos (the PCH doesn't actually run the Ethernet controller, so you can have any Ethernet controller you want on a motherboard, including WiFi). Present mobos can support integrated 10GbE fine; the board makers just have to add in the 10GbE controllers and have enough bandwidth to support this design.
  • chizow - Tuesday, January 28, 2014 - link

    Great name for the series, just rolls right off the tongue and meshes great with the rest of their A-series line-up. Some marketing exec definitely earned his salary right there.
  • chizow - Tuesday, January 28, 2014 - link

    All kidding aside, I guess this is the culmination of their SeaMicro acquisition a few years back? It will all depend on how well AMD executes ARM's v8 ISA relative to the rest of the industry, but I do believe Qualcomm and Nvidia (along with IBM and Google via their OpenPower Consortium) are also targeting this market.
  • FwFred - Wednesday, January 29, 2014 - link

    They didn't really execute the v8 ISA since they are taking the same A57 microarchitecture as others. I'm sure AMD's memory hierarchy will differentiate as will the GF process vs. TSMC/Samsung/Intel.

    The question is whether any ARM server vendors will be using 20nm. Intel already seems to have the A1100 beat in perf/W, launching one year earlier. AMD will have to win sockets for other reasons.
  • Frenetic Pony - Wednesday, January 29, 2014 - link

    Apple seems to have bought exclusive use of 20nm for all of 2014. Nothing but Apple seems to be coming on 20nm that I've heard of, and there have been cancellations of 20nm plans from other vendors. E.g. Nvidia's "Maxwell" GPU architecture was announced as 20nm a while ago and is now supposedly 28nm. Their "Denver" CPUs are the same.

    I think it's safe to say that if Intel pulls off 14nm with a performance per watt drop this year, they will officially be 3 years ahead of everyone else rather than just their previous lead of 2.
  • Frenetic Pony - Wednesday, January 29, 2014 - link

    Err, performance per watt increase... was going to do TDP drop but then they could just lower the clockspeed and... anyway.
  • extide - Wednesday, January 29, 2014 - link

    Maxwell will include both 20nm and 28nm chips.
  • fteoath64 - Thursday, January 30, 2014 - link

    Do not forget that Samsung has 14nm in risk production and could pull out a flagship SoC using that tech in limited quantity. The way Sammy uses SoCs in their handsets makes it more difficult for consumers to "pick and choose" what is inside in some cases, i.e. where there is more than one supplier per SoC. While TSMC's capacity is fully booked, they are also close on 14nm and could well shift when the 20nm contracts finish early (maybe?). TSMC has seemed very efficient in their production facilities to date, managing so many customers with different solutions and delivering. Sammy is the one that speaks the least about their fab capacity.
  • Mondozai - Wednesday, January 29, 2014 - link

    The name's fine, at least for those of us with a grasp of English higher than that of a high schooler ;)
  • meacupla - Tuesday, January 28, 2014 - link

    So wait... presumably the airflow is front to back, but they are using RAM and a heatsink that are oriented perpendicular to the airflow?
  • npz - Tuesday, January 28, 2014 - link

    That's just the board for developers, a standard microATX which can be used in a regular case. AMD does not sell any motherboards.
  • nofumble62 - Tuesday, January 28, 2014 - link

    25W chip. Where is the performance per watt or power saving that makes people want to switch?
  • BMNify - Tuesday, January 28, 2014 - link

    It's assumed (I know you shouldn't with AMD PR) that that 25W is the total average amount at full speed, using all 8 cores and including all the internal integrated 10 GbE and the rest of the IP in use.
  • mczak - Tuesday, January 28, 2014 - link

    The comparison to the X2150 is slightly unfair though, because the X2150 includes a GPU (for compute purposes, if you can leverage it). Its power consumption when not using that part is likely lower than the quoted 22W.
    The chips are different enough that you can't really compare them easily, clearly indicating their different focus. The A1100 has more I/O and twice the memory channels - good for the server market - unlike the X2150, which is really a side-product of a chip intended for a different market. (Though I guess it's possible the A1100 might also have a GPU in it without AMD telling anyone just yet, but it seems unlikely.)
  • twotwotwo - Wednesday, January 29, 2014 - link

    I doubt perf/watt will be the deciding thing for buyers, since power for a server isn't much of its cost. (If someone used power efficiency to get great *density*, maybe that'd help; not sure.) Might have a secure niche as a cheap front-end to 128GB of RAM and a pile of disks, though.
  • watersb - Wednesday, January 29, 2014 - link

    Most data centers I've seen recently are limited by their cooling capacity, not necessarily cores per rack. Perf/Watt matters.
  • Krysto - Wednesday, January 29, 2014 - link

    I think I've read somewhere that power consumption is 30 percent of the maintenance cost of a data center. So I think it matters.
  • DarkXale - Wednesday, January 29, 2014 - link

    Power consumption = heat output.

    And it's keeping the heat under control that's challenging most data centres.
  • RoggerRabbit - Thursday, January 30, 2014 - link

    You obviously don't run many servers :) The opex on power alone is double what the capex is for most servers, especially in the 2-CPU arena.
  • dealcorn - Tuesday, January 28, 2014 - link

    The comparison to high-end Xeon sounds fanciful. Low-end Xeon (Avoton/Rangeley) is already at 20/21 watts, which is substantially lower than Seattle's 25 watts. The target market is sensitive to efficiency. Unless AMD can identify specific niches where Seattle's efficiency reigns, this sounds like a pipe dream.
  • npz - Tuesday, January 28, 2014 - link

    Avoton vs Seattle:
    - 64GB DDR3 unbuffered vs 128GB DDR3 or DDR4, unbuffered or registered
    - x16 PCIE 2.0 vs x8 PCIE 3.0 (same total bandwidth, but Seattle will have twice the bandwidth from an add-on PCIE 3.0 x8 card)
    - AES-NI vs dedicated hardware accelerators for encryption and compression
    - 4x GigE vs 2x 10GigE
    - 6 SATA vs 8 SATA
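On the PCIe point above, the "same total bandwidth" claim checks out once line coding is included; a quick sketch (theoretical per-direction maxima, my own arithmetic):

```python
# Usable PCIe bandwidth per direction, accounting for line coding:
# 8b/10b for Gen1/Gen2, 128b/130b for Gen3.
def pcie_gbs(gen, lanes):
    rate_gt, coding = {1: (2.5, 8/10), 2: (5.0, 8/10), 3: (8.0, 128/130)}[gen]
    return rate_gt * coding * lanes / 8  # GB/s

print(round(pcie_gbs(2, 16), 2))  # Avoton:  x16 Gen2 -> 8.0 GB/s
print(round(pcie_gbs(3, 8), 2))   # Seattle: x8  Gen3 -> ~7.88 GB/s
```

So the two configurations are within rounding of each other in aggregate; the difference is how much of it a single x8 Gen3 card can draw.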
  • nofumble62 - Tuesday, January 28, 2014 - link

    Not surprised by these spec numbers: OEMs need at least one year to qualify their platform before they can ship product, which is typical for servers.
  • KenLuskin - Wednesday, January 29, 2014 - link

    SeaMicro is an OEM! AMD does NOT have to rely upon outside OEMs!

    AMD will sell SeaMicro servers with Seattle chips DIRECTLY to Facebook, Amazon, Baidu, MSFT, Verizon, etc..
  • Khato - Tuesday, January 28, 2014 - link

    Ayup, in terms of the rest of the SoC AMD wins the battle against Avoton in almost all areas. (PCI-E is at best a wash - sure Seattle will have twice the bandwidth available to a PCI-E Gen3 card, but it'll have half the total bandwidth of Avoton if using Gen2 cards.)

    But will Seattle be going up against Avoton by the time it's actually available? Or will it be competing against Denverton and whatever updates Intel brings to the table with it? Could easily be that Intel will have matched or bested AMD in all of the above with Denverton, or that it's just a 'straight' die shrink of Avoton with no new features.
  • KenLuskin - Wednesday, January 29, 2014 - link

    AMD has unannounced partnerships with large CLOUD players. The Seattle chip will be customized to the exact needs of each partner.

    Using ARM cores, AMD can quickly iterate different versions of Seattle.

    Intel takes 2 years to build a new chip!

    Even if all things are equal, CLOUD players want alternatives to Intel monopoly!!!!

    Cloud players will support AMD's ARM server chips, because they can get a customized product at a lower price!

  • FwFred - Wednesday, January 29, 2014 - link

    4x 2.5G. You also fail to mention:
    - 20W TDP vs 25W TDP
    - 106 vs. 80 spec_int_rate_2006
    - Launch Q3'2013 vs Q4'2014/Q1'2015?
    - x86 SW compatibility (seems AMD is losing one of their big advantages with ARM)

    As Khato says, I'm not sure the competition will be Avoton.
  • iwod - Wednesday, January 29, 2014 - link

    Exactly my thoughts. These AMD SoCs aren't cheap either. They are priced similar to Intel's Atom while offering little benefit in their target area. And Intel is already lining up Broadwell Xeon SoCs as well as the Denverton server SoC.

    Yes, perf/watt matters A LOT. But in which wattage region? At milliwatts, ARM wins. At 10-20W? Intel still wins, and by quite a large margin.
  • gruffi - Wednesday, January 29, 2014 - link

    That's nonsense. Intel chips are expensive, very expensive. Their 22nm FinFET fabrication is very expensive - not even close to a mainstream 28nm bulk ARM design. Aside from peak performance, and even that might not be the case, AMD seems to have a clear winner. Especially their feature set is vastly superior in the server space. Intel cannot hide that Avoton is based on a client design, just like the Jaguar Opterons, which are only useful in niche markets.
  • FwFred - Wednesday, January 29, 2014 - link

    I challenge you on pricing. Intel has shown in the tablet market that they won't be beat on price. Could be the same in microservers if there was a perceived strategic threat.

    Avoton is no more a client design than Seattle is a phone SoC. 8 cores, 4x GbE, 64GB RAM, no GPU. Come on, can you be any more of a shill?
  • KenLuskin - Wednesday, January 29, 2014 - link

    Fred, Intel has almost ZERO market share in tablets, so they can LOSE money on every chip sold in order to BUY market share.

    Intel has 95% server chip market share. If they discount, they DESTROY their biggest profit center!!!

    Nice try... but you get an F = FAIL!
  • FwFred - Wednesday, January 29, 2014 - link

    If they hit their 40m target for tablets in 2014, it will not be insignificant.

    Do you really think AMD can undercut Intel when AMD and GF both take their margin on a lower volume part? (+ royalties to ARM)

    Intel has competed with AMD for years and kept a 60% margin. I don't see what changes here.
  • extide - Wednesday, January 29, 2014 - link

    The A57 is also a 'client design' here. At least call a spade a spade, man.
  • iwod - Wednesday, January 29, 2014 - link

    Expensive? Yes, on mobile. Didn't you just read that the AMD SoC costs $100+? The Intel Avoton costs less than $150. @Ken: their biggest profit center is Xeon, not Atom servers.

    Again, this would have been great if it came out a year earlier. But with the Xeon SoC and Denverton on the way, AMD's offering doesn't sound too exciting to me.
  • Krysto - Wednesday, January 29, 2014 - link

    Avoton was obsolete from the day they announced it. Intel keeps being behind the ARM competition no matter what they do - even when they are AHEAD in the process node being utilized. No wonder they're considering getting out of the mobile market in 2015. They're already losing money on Atom, trying to enter a market where they have no chance of even being competitive, let alone "winning".
  • factual - Wednesday, January 29, 2014 - link

    That's completely untrue. When Intel's Silvermont core (used in Avoton) came out, it beat virtually every ARM CPU on the market in terms of perf/watt. The 14nm Atom core will widen the gap even further. The reason Intel is having problems in the mobile market has little to do with ARM CPUs, and a lot to do with Intel's ability to put together a well-integrated, world-class SoC (with integrated CPU, GPU, LTE baseband, sensors, etc.).

    I have little doubt that Intel will keep widening the perf/watt gap and destroy ARM CPUs in that regard. What remains to be seen is whether Intel can design a well-integrated SoC (something that only Qualcomm has been able to do so far).
  • ancientarcher - Thursday, January 30, 2014 - link

    Intel will destroy this... Intel will destroy that.
    And how much market share does the bestest of the best mobile chip, Silvermont, have?? It beat all ARM chips but still has a 0.001% market share in smartphones. That's the example you want to trot out? Seriously???

    The next thing that's gonna get destroyed is Intel's margins and market share in servers.
  • factual - Thursday, January 30, 2014 - link

    There is no doubt that Intel is struggling in the mobile market (it has only 3% of the tablet market and practically no presence in the smartphone market). But Intel's problems in the mobile market have little to do with ARM and a lot to do with the lack of a leading mobile solution, i.e. a well-integrated, world-class SoC.

    Intel faces the same barrier that has prevented Nvidia (which uses ARM) from having a significant presence in the mobile market, the same barrier that forced TI (which used ARM) to drop out of the mobile race, and the same barrier that has prevented AMD from entering the mobile market at all. Qualcomm dominates the mobile market because they have the best SoC (with CPU, GPU, cellular baseband, GPS, Wi-Fi, sensor hub, Bluetooth, etc. all on the same die), not the best CPU or even GPU.

    If Intel dropped x86 in favor of ARM today, it wouldn't help its chances of penetrating the mobile market, it would probably hurt them.

    But in the server space, it's only the CPU performance that matters, not the SoC! So yes, Intel's leading perf/watt CPU solution will most likely destroy ARM's chances of gaining any significant market share in the low-power server space.
  • FwFred - Wednesday, January 29, 2014 - link

    Maybe I missed it elsewhere, but by what metric is Avoton obsolete? Avoton seems to be the current perf/W leader in the microserver space.
  • HisDivineOrder - Tuesday, January 28, 2014 - link

    It's interesting to read how AMD thinks ARM is going to win out over x86. Thing is, I think the real argument AMD must make is how AMD themselves are going to win out against other companies also going ARM in much the same way.

    In x86, there's Intel and then there's AMD (and stories tell of a mythical VIA as well), but when you go ARM, you're one of bajillions. AMD doesn't stand out.
  • siliconwars - Tuesday, January 28, 2014 - link

    Yeah, I guess AMD doesn't stand out if you don't actually read the article.
  • davegraham - Tuesday, January 28, 2014 - link

    Relatively ignorant, HisDivineOrder. AMD will be one of SEVERAL in the ARM server space. Remember, Calxeda has actually needed to restructure, and others in the space (think: Nvidia) are relatively unproven when it comes to server technology. AMD has the ability to be rather agile here and can target devices toward highly scaled and integrated systems like Moonshot, SeaMicro, etc. This is a first shot in the battle for scaled/dense systems, which will predominantly rely on SoC-style designs that core x86 manufacturers can't provide.
  • pugster - Wednesday, January 29, 2014 - link

    These CPUs are designed to replace the servers currently used by the datacenter farms of the likes of Google, Yahoo, Amazon and Facebook.
  • name99 - Wednesday, January 29, 2014 - link

    This is the difference between amateurs and professionals. Amateurs think it is all about the specs of the core, professionals know that (for the space of interest) it is all about memory and IO.

    Yes AMD are shipping a standard ARM core, like everyone else in this space. BUT they claim they have superior skills in memory controllers and IO and, so far at least, there is no reason to doubt them.
  • KenLuskin - Wednesday, January 29, 2014 - link

    It's NOT simply ARM, it's ALL the IP server blocks that AMD owns!

    It's the FABRIC for DENSE servers!

    Too bad you are CLUELESS
  • jimjamjamie - Thursday, January 30, 2014 - link

    There is no need to be upset.
  • mczak - Tuesday, January 28, 2014 - link

    Sure about that L2 config?
    My guess would have been 2MB per 4 cores, not 1MB per 2 cores, because this is what a basic cluster containing A57 cores straight from ARM, without any modifications, does.
  • RogerShepherd - Wednesday, January 29, 2014 - link

    Straight from the ARM website: A57 can be 1-4x SMP within a single processor cluster (i.e. 1-4 cores with a shared L2 cache), with multiple coherent SMP processor clusters through AMBA® 5 CHI or AMBA® 4 ACE technology.

    So, it is perfectly possible that AMD has chosen 4 clusters of 2-cores+L2.
  • name99 - Wednesday, January 29, 2014 - link

    How good is ARM's coherent SMP cluster technology (the AMBA stuff)?
    Does it compare favorably with Intel and IBM's equivalents, or is it rather more amateurish (i.e. a lot more coherency traffic, lower frequency, more verbose transactions, not as scalable, etc etc)?
  • mczak - Wednesday, January 29, 2014 - link

    Possible, certainly. But that will require a better interconnect between the clusters (as you have more clusters), and I don't think there's any advantage to a 2-core cluster itself (since it's designed to scale up to 4 cores in the first place).
    Looking through all the news though, I haven't seen any mention of the L2 being shared by just 2 cores elsewhere, and it's not in the presentation itself. I guess we'll see.
  • extide - Wednesday, January 29, 2014 - link

    Yeah, I thought that was an interesting configuration, but if you think about it, it seems like a good config for a server. Especially with the limited external memory bandwidth, this configuration, which allows for more L2 overall, may be superior.
  • futrtrubl - Tuesday, January 28, 2014 - link

    "AMD ensured me that the on-chip fabric" "Ensured" should be "assured". Reply
  • Krysto - Wednesday, January 29, 2014 - link

    It seems like MUCH better performance (3x more) than its Jaguar ones at the same TDP.

    That being said, I wish they made them at 20nm, especially since they're coming out in Q4 this year. Sometimes I feel like AMD has some kind of weird fetish for old, weak-sauce process nodes. It's like even when they COULD hit one out of the park, they're too afraid to do it, and play the conservative/cheap card. It's a shame. I guess they don't feel like they have much greatness in them anymore, and they project that in the market, too.
  • gruffi - Wednesday, January 29, 2014 - link

    Fetish? How can you use 20nm when it's not ready yet? You won't see 20nm from GloFo before 2015. New processes are always very expensive; it's better to use known processes for a new design. It minimizes costs and risks. By the way, 28nm from GloFo is quite good - it has a similar density to Intel's 22nm process. And Kaveri shows that power consumption below 3 GHz is quite good as well.
  • MrSpadge - Wednesday, January 29, 2014 - link

    AMD has volume contracts with GloFo, and they've just switched to 28 nm. Their 20 nm will take quite some more time.
  • iwod - Wednesday, January 29, 2014 - link

    GF won't have 20nm; it will go straight to 16nm.
  • nutjob2 - Friday, January 31, 2014 - link

    I think the fetish is on the Intel side, because they have billions to spend on each node. Just keep in mind who pays for those billions.
  • Klug4Pres - Wednesday, January 29, 2014 - link

    I am confused by the article. Your "What does the Future hold" graphic shows a prediction that ARM (not AMD) will get 25% of the server market by 2019. But you then say this:

    "Predicting 25% of the server market by 2019 may be feasible, but I'm not fond of making predictions for what the world will look like 5 years from now.

    The real question is what architecture(s) AMD plans to use to get to 25% of the server market and a substantial share of the x86 CPU market. We get the first hint with the third bullet above: "smaller more efficient x86 CPUs will be dominant in the x86 segment"."

    Where do you get the bit about AMD planning to capture 25% of the server market?

    My take on what AMD is actually saying:
    1. Total ARM share of server market will be 25%. AMD will be the leading supplier of ARM server chips.
    2. Within the x86 part of the market, smaller and more efficient CPUs will take share from today's bigger less efficient CPUs. AMD will have a substantial share of the x86 segment.
    3. AMD is not making any prediction (e.g. it is not saying 25%) on its total share of the server market.
  • name99 - Wednesday, January 29, 2014 - link

    To me it seems like a dog whistle to Wall Street.
    What it's saying, translated, is "You guys might think Intel has better tech than us, and maybe they do, but Intel's prices are on the wrong side of history and you need to ask yourself what happens to Intel as a BUSINESS as this pricing disparity becomes ever more obvious".
  • coder111 - Wednesday, January 29, 2014 - link

    Interesting times. x86 compatibility doesn't matter that much if you are running Linux, and Linux rules on the server side. I think these CPUs have a chance. Whether they actually succeed will depend on many things: price, performance, features, power usage, software support from commercial vendors, hardware support from OEMs, and availability. I wonder how fast Java is on ARM compared to Java on x86? The JVM on x86 has been very heavily optimized. How about database servers? Language interpreters for PHP, Perl, Python, Ruby? They will all run on ARM, but how fast compared to x86?
  • name99 - Wednesday, January 29, 2014 - link

    That's the whole reason for the existence of Linaro. One package at a time, they are going through the standard LAMP stack and trying to fix whatever issues slow down the ARM compiled version of a program vs the x86 compiled version.

    They've been held back, to some extent, by ARM's taking its sweet time to get to 64-bit, but they have been working at it for a few years now, and you can go to their web site, see the various conference proceedings, and get a feel for the kind of thing they do.
  • sakthibruce - Wednesday, January 29, 2014 - link

    Love to see a BeagleBone running Linux under 100 USD.
  • extide - Wednesday, January 29, 2014 - link

    BeagleBone Black.
  • ddriver - Wednesday, January 29, 2014 - link

    Too bad it has terrible graphics and only up to 720p output. The ODROID-U3 Community Edition makes a bit more sense - a little more expensive at $59, plus about $10 you will have to add for a microcontroller, in case you want GPIO.
  • extide - Wednesday, January 29, 2014 - link

    Depends on what your goals are. Not every app needs graphics.
  • KenLuskin - Wednesday, January 29, 2014 - link

    Graphics is a whole other area that AMD is now DOMINATING because it is based upon success in gaming.

    Graphics is becoming more important, and this is the reason that AMD will be successful with X86 server chips that are optimized for Video/Graphics/GPGPU.
  • hetzbh - Wednesday, January 29, 2014 - link

    One thing that I don't see anywhere: what about virtualization? I see the graph includes hypervisors at the OS level, but no hardware virtualization anywhere.
  • tuxRoller - Wednesday, January 29, 2014 - link

    KVM has been ported since, IIRC, 3.11. Xen was ported a bit earlier.
  • MrSpadge - Wednesday, January 29, 2014 - link

    With a dual channel RAM interface and DDR4 these will be limited to 2 modules. I doubt there'll be 64 GB DDR4 modules anytime soon. But then 8 cores of moderate performance should be fine with DDR3 bandwidth.

    Anyway, it's a shame they didn't introduce this memory controller for Kabini (either 2 channels DDR3 or with DDR4 at least a single channel), as its GPU needs more bandwidth. And for Kaveri, as its GPU desperately needs bandwidth!
  • maxi2mc - Wednesday, January 29, 2014 - link

    Actually they have this controller for Kaveri, but for some reason it's not activated/functional, or not yet. See this article about using only channel 0 and 3 for DDR3 http://www.anandtech.com/show/7702/amd-kaveri-docs... .

    On Phenom II they had a memory controller that supported both DDR2 and DDR3.

    The real problem is having a different socket to support another kind of DDR, maybe DDR4, or DDR3 + GDDR5 soldered.
  • extide - Wednesday, January 29, 2014 - link

    It may be a similar memory controller, but not the exact same one. The Kaveri one has 128-bit DDR3 (2x64) and an optional 256-bit GDDR5 (4x64) controller mode, whereas the memory controller here is always 128-bit (2x64) but can do DDR3 or DDR4.
  • extide - Wednesday, January 29, 2014 - link

    With registered modules you can have more than one per channel with DDR4.
  • SpaceRanger - Wednesday, January 29, 2014 - link

    Sever Market?? Hope it's a clean cut at least! Reply
  • BSMonitor - Wednesday, January 29, 2014 - link

    In the server space, compatibility will always be a major factor. VLIW IA-64 was never able to penetrate the x86 market mostly because x86 emulation was never very good. Getting companies away from their legacy software is always a huge challenge to any new CPU architecture that might rival x86. Reply
  • factual - Wednesday, January 29, 2014 - link

    Given the existence of Intel Avoton, I really don't see the point of merchant chip designers competing in the server space using ARM-based CPU chips. Here's why:

    1. Almost all of the software in the server space was developed/validated for x86, so ARM ISA in server space is at a huge disadvantage.

    2. Even if all the software was ported to ARM, Intel's Silvermont core (used in Avoton) currently beats all ARM-designed CPU cores, as well as all ARM-compatible cores designed by merchant designers like Qualcomm, in terms of perf/watt.

    3. Intel's 14nm Atom core will further widen the perf/watt gap between Atom and ARM CPU cores. And given Intel's huge process technology advantage and its recent focus on low-power design, I see this trend continuing in the future.

    AMD would be much better off focusing on developing a low-power x86 core and trying to compete with Intel on price.
    Reply
  • Guspaz - Wednesday, January 29, 2014 - link

    Most Linux servers could migrate from x86 to ARM and never know the difference. More than that, if you were told to set up a LAMP stack on two servers, one being x86 and one being ARM, you wouldn't even realize which was which unless you specifically checked. Because distros have the relevant software for both ISAs, and your install process is going to be identical. Reply
  • factual - Wednesday, January 29, 2014 - link

    Many of the enterprise distros are not actively supporting ARM, or are supporting ARM with a lag. Also, I was primarily referring to apps rather than the OS itself. For example, more than 90% of the apps used by the company I work for (version control, engineering design CAD tools, etc.) are not available for the ARM ISA. Reply
  • tuxRoller - Wednesday, January 29, 2014 - link

    You wouldn't run CAD tools on these.
    There's no reason to target a specific ISA when writing a VCS.
    Java.
    Also, there's a reason why Fedora is the go-to distro on these boards: RH is going to get into this space pretty soon. In the meantime you simply don't use them for mission-critical activities.
    Reply
  • factual - Wednesday, January 29, 2014 - link

    CAD tools' performance is mission-critical, so they are not written in Java; they are written in C or even assembly for best performance. VCS, which is a Synopsys tool, does not currently have an ARM-compatible version available, and neither does any other Synopsys, Cadence, Ansys, Mentor, etc. tool. Reply
  • tuxRoller - Thursday, January 30, 2014 - link

    I've no idea what kind of crap codebase these CAD tools are built upon, but I doubt they would be in assembler. If you could provide an example of a CAD operation where assembly is necessary, I'd be grateful. My point was not that they are written in Java (I've no idea what they are written in, since I don't see that as something they would advertise) but that the company can perform a recompile if their toolchain isn't useless.
    My point, in general, is that as long as you aren't making use of assembler, which I can't see any of these programs doing, the companies can recompile for ARMv8.
    Reply
  • factual - Thursday, January 30, 2014 - link

    I remember reading a paper about critical parts of CAD programs being written in assembly for maximum performance (http://www.fourmilab.ch/autofile/www/subsection2_3...).

    That being said, they are mostly written in C/C++ (http://www.linkedin.com/jobs2/view/6473323), and most people who've tried porting complex C code to a different ISA know that it's not as easy as using a different compiler. There is a lot of x86-specific code and optimization that needs to be completely rewritten (things like endianness assumptions, atomic read/write/lock primitives, etc.).
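    A minimal C sketch of the kind of portability hazard being described here (hypothetical code, not from any real CAD tool): the first accessor bakes the host's byte order into the code, while the second does the same job with arithmetic and recompiles cleanly for any target, including ARMv8.

    ```c
    #include <stdint.h>
    #include <stdio.h>

    /* Endian-dependent: byte 0 of the in-memory representation is the
     * low byte only on little-endian hosts. */
    static uint8_t low_byte_unportable(uint32_t v) {
        return ((uint8_t *)&v)[0];
    }

    /* Portable: operates on the value itself, independent of memory layout. */
    static uint8_t low_byte_portable(uint32_t v) {
        return (uint8_t)(v & 0xFF);
    }

    int main(void) {
        uint32_t v = 0x11223344;
        /* 0x44 on little-endian hosts, 0x11 on big-endian ones */
        printf("unportable: 0x%02X\n", low_byte_unportable(v));
        /* always 0x44 */
        printf("portable:   0x%02X\n", low_byte_portable(v));
        return 0;
    }
    ```

    Worth noting that Linux on both x86 and ARMv8 typically runs little-endian, so in practice the nastier porting hazards tend to be memory-ordering assumptions around atomics rather than endianness itself.
    
    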
    Reply
  • tuxRoller - Thursday, January 30, 2014 - link

    Neither of those links worked.
    I'm also not talking about changing compilers, only changing the target arch.
    Reply
  • BMNify - Thursday, January 30, 2014 - link

    For the VERY limited assembly you talk about, it's only antiquated SSE2 for some small machine-interface routines in some implementations, so it's not even hard to beat that with the NEON SIMD that's in these 64-bit Cortex cores. Any competent ARM dev writes assembly in his sleep, as has been the custom on many ARM cores for years now.

    you can use https://play.google.com/store/apps/details?id=com.... on android :)
    Reply
  • nutjob2 - Friday, January 31, 2014 - link

    No one writes in assembler anymore because it's next to impossible to write better code than a decent compiler can generate, for instance GCC. The link you provide is very old.

    C (and C++) is a horrible language which does cause problems with porting, but those issues (platform dependencies) are generally handled in a library, and not really a huge problem.

    How difficult it is to port code depends very much on how it was written, just like almost any other thing you want to do with code.
    Reply
  • factual - Friday, January 31, 2014 - link

    Well, C is the most popular programming language in the world, and for good reason!
    Porting a complex C program with thousands of ISA-specific optimizations to a different ISA is not a trivial task. x86 is ubiquitous in both the PC and server space, so a lot of apps have tons of x86-specific code and optimization, especially in cases where performance is critical.

    AMD's decision to design an ARM-based server chip is a marketing move more than anything else. An ARM chip in the server space will be crippled software-wise.
    Reply
  • BMNify - Thursday, January 30, 2014 - link

    You are aware of the actual Linaro membership, right?
    http://www.linaro.org/members

    The enterprise hardware vendors have their top software teams working inside the Linaro collaborative initiative:

    http://www.linaro.org/engineering/engineering-grou...
    Linaro Networking Group

    http://www.linaro.org/engineering/engineering-grou...
    Linaro Enterprise Group etc...

    Not only that, Linaro has become one of the largest providers of upstreamed patches to the Linux core codebase since its initiation:
    http://www.linaro.org/engineering/status

    Zero-copy and many more of their patches are migrating to x64 and PPC etc.
    Reply
  • nutjob2 - Friday, January 31, 2014 - link

    Funny that you don't mention the price advantage of this part.

    1. You're mistaken, software is not written for a processor, it's written to be independent of the processor. It's not 1990 anymore. The obvious example is Linux. Most decent software has been ported to many platforms and ARM is just another one, a very good one.

    2. That's great but this is a different product to those competitors, and a better product. If it even gets near Intel in power efficiency (which it seems to have done) it will thrash it in cost.

    3. You're jumping the gun. Leakage is a big problem and new process nodes are not delivering huge power gains. It may be smaller but it won't be cheaper, which, again is the point. AMD is only one or two process nodes behind.

    You're missing the whole point of moving to ARM. The x86 arch is a dog with fleas. It's bloated and a dead end. Intel keeps it alive with its huge process advantage and massive R&D, both very costly. AMD, as smart as they are, can't make a silk purse from a sow's ear and rightly are moving to other archs. ARM is an open arch which will spur competition, which is sorely lacking in the x86 world due to Intel's monopolistic practices, and that competition will deliver lower cost, higher performance and more choice. That's what the computing industry has been built on.
    Reply
  • factual - Friday, January 31, 2014 - link

    There is nothing to discuss regarding the price since AMD has not released any official prices.

    1. Many mainstream apps (written as recently as 2014!) have ISA-specific code and optimizations; that's a well-known, undisputed fact! The failure called Windows RT is proof of this. The majority of x86 software (e.g. Adobe CS6) is not available for Windows RT. As a person who works in an enterprise Linux environment, I know firsthand that the majority of Linux pro apps are not available for ARM and will not be available anytime soon! Even the enterprise distro that we use (SLES) is not available for ARM!

    2. According to this article, AMD claims that their upcoming ARM-based Opteron A1100 scores 80 in the SPECint_rate benchmark at 25W TDP; now, this is just a claim for a product that hasn't been released yet. In the meantime, Intel's already-released Avoton scores 106 in SPECint_rate at 20W TDP! So basically, by the time this chip comes out, Intel's older-generation chip will have a roughly 60% perf/watt advantage over it!! By that time, Intel will have released a newer 14nm Atom chip, which will have an enormous perf/watt advantage over this AMD chip.

    3. No, I am not. Intel already demonstrated a 14nm Broadwell chip with over 30% power reduction compared to 22nm Haswell. According to Intel, 14nm will also bring huge density improvements, which translate into cheaper chips; and unlike TSMC and other foundries, Intel has a pretty good record of making accurate claims about its upcoming process technology.

    ARM is not an open ISA! It's 100% proprietary, same as x86. The only difference is that, unlike Intel, ARM is willing to license its ISA (an architectural license) as well as its soft and hard IPs to any company who pays the license fee. The claim that the ARM ISA is somehow inherently superior to x86 in terms of power consumption is patently false! These claims are marketing BS, often made by people who have little or no technical knowledge.
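    For what it's worth, the perf/watt gap in point 2 can be worked out directly from the quoted figures (a sketch that simply takes AMD's projection and Intel's published number at face value):

    ```c
    #include <stdio.h>

    int main(void) {
        /* Quoted figures: Avoton 106 SPECint_rate at 20W;
         * Opteron A1100 (AMD projection) 80 SPECint_rate at 25W. */
        double avoton_perf_per_watt = 106.0 / 20.0;  /* 5.30 */
        double a1100_perf_per_watt  =  80.0 / 25.0;  /* 3.20 */
        double advantage_pct =
            (avoton_perf_per_watt / a1100_perf_per_watt - 1.0) * 100.0;
        printf("Avoton: %.2f/W, A1100: %.2f/W, Avoton advantage: %.0f%%\n",
               avoton_perf_per_watt, a1100_perf_per_watt, advantage_pct);
        return 0;
    }
    ```

    On those numbers the gap comes out around 66%, so the ~60% figure above is, if anything, conservative.
    
    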
    Reply
  • factual - Friday, January 31, 2014 - link

    You keep repeating the word "arch"; I think you don't understand the difference between ISA and uarch (microarchitecture). What determines the performance and power efficiency of a CPU core is its microarchitecture and manufacturing technology; the ISA the CPU core was designed for has a negligible effect on its perf/watt. Reply
  • ScouserLes - Saturday, February 01, 2014 - link

    According to Intel, the SPECint_rate for Avoton is an estimate. They did not use the SPEC 2006 benchmark software. Instead, they used a range of different benchmarks, including Antutu, and calculated a geomean. Software and workloads may have been optimised.

    I wonder why Intel didn't just run the SPEC 2006 benchmark suites, and publish them. In view of the Antutu fiasco last year, I'm taking this estimate with a pinch of salt until a verified SPEC 2006 result is available.
    Reply
  • Guspaz - Wednesday, January 29, 2014 - link

    Isn't this basically an admission that AMD can't design low-power parts with competitive performance? Kaveri's terrible CPU performance is mitigated by good graphical performance, but graphical performance has no relevance for enterprise use, leaving you with simply a terrible CPU. Releasing low-power server CPUs with CPU cores designed by ARM instead of AMD seems to me to be admitting to this. Reply
  • KenLuskin - Wednesday, January 29, 2014 - link

    Maybe you should read about the Mullins chip that AMD will be delivering in Q2.

    It is better than Bay Trail as measured by PCMark 8, and is 250% better in graphics!
    Reply
  • Guspaz - Wednesday, January 29, 2014 - link

    OK, then... why are they using a Cortex A57 instead of an in-house CPU? Reply
  • darkich - Thursday, January 30, 2014 - link

    And they are both embarrassed by nvidia.
    The K1 reaches over 350 GFLOPS, compared to ~65 GFLOPS for Bay Trail!

    Other ARM GPU's are also superior to both AMD and Intel solutions. Read about Mali t760 and PowerVR series 6 XT
    Reply
  • tuxRoller - Thursday, January 30, 2014 - link

    What are you talking about?
    For one thing, that GFLOPS figure is almost certainly for the GPU. The CPU is apparently not even ARMv8 but a Cortex-A15 r3 (http://www.tomshardware.com/reviews/tegra-k1-keple... So they are going to be slower, clock for clock, than the Cortex-A57.
    Reply
  • darkich - Friday, January 31, 2014 - link

    I was talking about the GPU :p
    As is clearly stated in my post
    Reply
  • tuxRoller - Saturday, February 01, 2014 - link

    It was a bit ambiguous given the thread context (CPU), but I see you did say in the second paragraph "Other ARM GPUs...". I suppose I needed to analyze it a bit more.
    Still, I'd expect Nvidia to be trailing both IMG and Qualcomm as usual, especially in power consumption.
    The problem is that IMG has been doing this for so long and makes amazingly capable GPUs (along with owning tons of mobile-graphics-related patents), and Qualcomm has been hugely improving the Bitboys designs they bought from AMD with their tremendous engineering talent (and a SUBSTANTIALLY larger budget, given the company is more than an order of magnitude larger than Nvidia).
    It's going to be hard for Nvidia to compete, and thus far that's how it's been.
    Reply
  • BMNify - Thursday, January 30, 2014 - link

    What has AMD's lowest grade of x86-64 at a so-called 2W (if you believe their PR people) and lowest grade of GCN graphics got to do with servers? Beema is the next lowest, then of course there's their best effort, the quad-core Kaveri, which doesn't make the grade for AVX SIMD as it can't beat a dual-core i3/i5 in fully hand-optimised assembly code for that set of SoCs. Reply
  • nutjob2 - Friday, January 31, 2014 - link

    No, the "admission" is that you can't make a low power part with x86 without spending many billions on chip R&D and process, driving up costs. ARM is a much more efficient architecture and AMD is absolutely correct in moving it to the server sphere, where power usage is critical. The sooner we get rid of x86 the better. Reply
  • peterfares - Thursday, January 30, 2014 - link

    I want 10GbE in an x86 board at a reasonable price. Reply
  • honestann - Tuesday, February 04, 2014 - link

    Me too! This chip has a lot of great capabilities and features at the same time as great price/performance (assuming the estimated price is close to correct). So many of the comments in this forum ignore the stated target market, and I intend to adopt this chip for a couple entirely different embedded device purposes. Reply
  • ScouserLes - Friday, January 31, 2014 - link

    It's worth pointing out that the Opteron 1100 "benchmark" presented by AMD, and the Intel Silvermont "benchmark" it is being compared to, are estimates, not benchmarks. I have no idea how AMD arrived at their estimate, but a slideshow presented at IDF gives some details about how Intel does it. The SPEC 2006 benchmark suites play no part in the estimate released by Intel. The Antutu benchmark does, though. Reply
  • loony - Friday, January 31, 2014 - link

    I assume AMD is trying to compete with Intel as always... x86, integrated graphics, desktops, laptops, servers... They go toe to toe in all but one aspect: Intel has the Itanic. Guess AMD now finally matched Intel on that front too... Reply
  • fteoath64 - Wednesday, February 05, 2014 - link

    " I assume AMD is trying to compete with Intel as always... x86.." Yes definitely in this case targetting Avoton but with way improved I/O set in 10Ge and Sata support plus 128GB RAM space. So in terms of total system cost and effective total system power vs perf vs cost, this chip will win out. This design also gives many ways for system builders to target very different configurations for their target market servers they wanted. Some systems could be SSD tiered cache servers, some hybrid RAM cache/HDD array caches. Certainly down to the home NAS is a possibility. There will be variants of this chip with different I/O mixes in order to cater for a even more diverse set of products, priced very competitively. It is time the server space innovate way beyond large numbers of cores and huge cpu cores. Reply
  • hooflung - Friday, February 07, 2014 - link

    I would really like to have one of these things for Java 8 development. I wonder how much the initial motherboards are going to cost with that $100 SoC on there. Toss a GCN card on it with Aparapi and it's the perfect solution for some parallel workloads on a power budget. It would really be disruptive to the PaaS and SaaS compute markets for certain segments. Reply
  • ramuman - Monday, February 10, 2014 - link

    I get Apple isn't playing in servers right now, but they announced the first mass-production 64-bit ARM server processor. I wouldn't be surprised if they expect to transition the Mac Pro, or a next-generation device of that class, to ARM and RISC. Reply
  • -wooki- - Saturday, February 15, 2014 - link

    They get spanked in the x86 market and now they fall back to ARM? A market soaked with competition! It reeks of desperation. All for what, power efficiency? Reply
  • -wooki- - Saturday, February 15, 2014 - link

    I love their slides, "Megadata centers" "commands"...hahahaha "smaller more efficient x86 CPUs" thanks captain obvious.

    Stick with what you know, AMD: improve your x86 processors and take back market share. While my speculation is worthless here, ARM has little place in the market, and even less soon when it's being dominated by 32- and 64-core x86 processors. I just struggle to see the strategy here.
    Reply
  • texadactyl - Wednesday, February 19, 2014 - link

    The ARM strategy is not aimed at gaming desktops. They already have Android (Linux kernel) pretty much locked up and it is natural for them to evolve to other GNU/Linux system environments. For example, Redhat Enterprise Linux and other back-end GNU/Linux systems are seriously supporting ARM architectures because they see economic benefits in hardware competition. As a side-note, the open-hardware hobbyist community has moved completely to ARM because of support and cost reasons.

    In any case, time will tell what happens to Intel, AMD, and ARM. It should be interesting to watch the product evolutions as competition for our attention and $$ increases.
    Reply
  • editorsorgtfo - Thursday, March 06, 2014 - link

    > Anand said: "AMD assured me that the on-chip fabric is capable of ..."

    Apparently the first Chip has no clothes (much like http://en.wikipedia.org/wiki/The_Emperor's_New... ), the next Chip will.

    Source: http://www.enterprisetech.com/2014/01/30/amd-appli... .
    Reply
  • CNCJoe - Wednesday, April 30, 2014 - link

    This really can't come soon enough. I have been dying each day since I saw these specs. I really hope they skip DDR3 and just go straight to DDR4. The second these are released I am picking up three of them (router, NAS, desktop). Specs:
    -A PF box with one nic to the modem, the second nic as DHCP to a managed switch, that pcie will be married to an AC1900 card, perhaps some SSD caching if my wallet allows
    -A freeNAS box (hopefully) with teamed ports to the switch. 5-6 6TB drives (yeah right, 3 for now), again SSD caching as future option
    -The desktop. I might have to put down the old horse they call windows and hook up a mule *cough* BSD *cough*. Who knows I might even try a diskless desktop, but most likely a pair of cheap SSDs (if win: raid 1; if BSD: raid z;) Again both ports to the switch, 20Gbps of glory.
    It's been a while since I bought any new tech. I'm twitchy, restraining myself from buying new gigabit stuff. It's time to retire the noise-spewing space heaters.
    Hurry up! I'm get'n grey over here!
    Reply

Log in

Don't have an account? Sign up now