Agner Fog, a Danish expert in software optimization, is making a plea for an open and standardized procedure for x86 instruction set extensions. At first sight, this may seem like a discussion that does not concern most of us. After all, the poor souls who have to write the insanely complex x86 compilers will take care of the complete chaos called "the x86 ISA", right? Why should the average developer, system administrator or hardware enthusiast care?

Agner goes into great detail about why the incompatible SSE-x.x additions and other ISA extensions were and are a pretty bad idea, but let me summarize it in a few quotes:
  • "The total number of x86 instructions is well above one thousand" (!!)
  • "CPU dispatching ... makes the code bigger, and it is so costly in terms of development time and maintenance costs that it is almost never done in a way that adequately optimizes for all brands of CPUs."
  • "the decoding of instructions can be a serious bottleneck, and it becomes worse the more complicated the instruction codes are"
  • The cost of supporting obsolete instructions is not negligible. You need large execution units to support a large number of instructions. This means more silicon space, longer data paths, more power consumption, and slower execution.
Summarized: Intel and AMD's proprietary x86 additions cost us all money. How much is hard to calculate, but our CPUs consume extra energy and underperform because their decoders and execution units are unnecessarily complicated. The software industry is wasting quite a bit of time and effort supporting different extensions.
 
Not convinced, still thinking that this only concerns the HPC crowd? Virtualization platforms contain up to 8% more code just to support the incompatible virtualization instructions, which offer almost exactly the same features. Each VMM is 4% bigger because of this. So whether you are running Hyper-V, VMware ESX or Xen, you are wasting valuable RAM space. It is not dramatic of course, but it is unnecessary waste. Much worse is that this unstandardized x86 extension mess has made it a lot harder for datacenters to take the step towards a really dynamic environment where you can load balance VMs and thus move applications from one server to another on the fly. It is impossible to move (vMotion, live migrate) a VM from Intel to AMD servers, or from newer to (some) older ones, and in some situations you need to fiddle with CPU masks (and read complex tech documents) just to make it work. Should 99% of the market lose money and flexibility because 1% of the market might get a performance boost?

The reason why Intel and AMD still continue with this is that some people inside those companies feel they can create a "competitive edge". I believe this "competitive edge" is negligible: how many people have bought an Intel "Nehalem" CPU because it has the new SSE 4.2 instructions? How much software supports yet another x86 instruction addition?
 
So I fully support Agner Fog in his quest for a (slightly) less chaotic and more standardized x86 instruction set.

108 Comments


  • JohanAnandtech - Monday, December 7, 2009 - link

    I have to agree that open source has an advantage when adopting new ISAs, even within the x86 world. The speed at which Linux adopted and used x86 64-bit to its full potential was very impressive compared to the Windows world (where 64-bit is still causing trouble on the desktop).

    Then again, if you have invested years of your own paid workforce in a piece of software, I don't think it is viable to open-source it. So for some software, closed source might continue to be the most efficient strategy. And in that case we don't want x86 to go away, but to be more standardized so devs do not have to worry about extra code to debug.
  • azmodean - Monday, December 7, 2009 - link

    While I am an open-source developer, I have my user hat on right now. My point is that the user's ability to migrate to a new architecture is empowered by the use of open source technologies.

    I think the ability of the software to migrate in this way is going to be a telling advantage if ARM and specifically TI's OMAP platform continue to appear in more and more high-end ultraportable devices. Now to be fair, only 4 out of the 5 retail OMAP devices I can think of use Linux (N900, Droid, TouchBook and Pandora use Linux, but not the iPhone), but even the holdout iPhone heavily utilizes Open Source software.

    Back to the topic at hand though, if the Open Source ecosystem gains enough of a foothold, it's possible that it would allow new architectures to break into some areas of the CPU market. I'm not holding my breath for x86's stranglehold on the desktop/laptop market to go away any time soon, but perhaps we'll have a bit more competition between x86 and ARM at least on the extreme low-power end of the scale.
  • haplo602 - Tuesday, December 8, 2009 - link

    you are forgetting that the CPU is only one part of the system. peripheral device drivers are the major problem for wide OSS adoption.

    e.g. I can run Linux on my old PA-RISC workstation, but only in a text console, as the gfx card has no support in Linux (and never will have). Same for other devices.

    OSS can only go so far on its own.

    I admired the PPC ISA once. It was a nice piece of work. I work with pa-risc and itanium systems at work and I think they are quite good alternatives. but again the device driver support is an issue. you simply cannot put an nvidia card into an itanium workstation and expect to game on it :-)
  • AluminumStudios - Tuesday, December 8, 2009 - link

    If clean, simple, well maintained instruction sets were really necessary, x86 wouldn't have won and the various dead or near-dead RISC architectures would still be around.

    The world wanted backward compatibility as well as features (and prices) that the owners and makers of better architectures couldn't or wouldn't give. So we evolved to the current x86 state. Just like the cost and danger of cutting out every human's appendix is too high to make it practical as a matter of course, there's nothing that can be done about x86. Intel and AMD have gotten pretty good at engineering bigger and fatter chips. I'm happy enough without needing that extra 8% of power savings or performance.
  • Entz - Monday, December 7, 2009 - link

    Companies are not going to give up their source code. Too much time and money is spent developing it, only to give it to all your competitors for free. This is even more important to middleware vendors, such as game engine makers.

    The better approach would be to have all applications compiled to an intermediate language (i.e. Java / .NET). Then have optimized compilers and libraries built into the OS for specific processors -- provided by Intel/AMD. Then let them go nuts on x86 instructions. Those can be open source and maintained by a community.

  • SixOfSeven - Monday, December 7, 2009 - link

    If the instruction set is getting to be too much of a mess, it presents an opening for a processor which implements a subset of this hairball and leaves the rest of the work to the compiler, relying on faster execution, smaller chip area (these days, giving more cores in the same space), etc. to compensate.

    If we're at this point, we should see such a processor; if we don't see such a processor, things either aren't so bad or we're missing an opportunity to make a lot of money. Take your pick.

    Yes, I realize the article is mostly talking about different instruction sets across the two manufacturers. But the underlying problem, if it is a problem, is the idea that the instruction set is the place to locate new functionality.
  • Scali - Monday, December 7, 2009 - link

    I think we've been at this point for many years... Thing is, every time such a processor is launched onto the market, it is killed with brute force.
    One example is PowerPC... when it was first being used in the early 90s, PowerPC was a good alternative to x86, generally delivering better performance.
    However, since Apple/Motorola/IBM didn't have such a large market as Intel/AMD had, they didn't have the same amount of resources to keep improving the CPU at the same rate as x86 did.
    A few years ago, Motorola stopped development of the PowerPC altogether... Apple turned to IBM for PowerPCs for a while, but eventually moved to x86.

    I think that if PowerPC development had the same resources as x86, it would probably still be ahead of x86 today.
  • alxx - Wednesday, December 9, 2009 - link

    You mean all those millions of Power ISA chips in the PlayStation 3, Xbox 360 and Nintendo Wii?
    Plus they are used in embedded and communications devices, not to mention IBM's POWER series (the Power ISA includes IBM POWER, PowerPC and the Cell PPE)
  • zonan4 - Thursday, December 10, 2009 - link

    I just want to play games... as long as it makes my games faster I don't care about this... move along, people
  • Scali - Thursday, December 10, 2009 - link

    They may be PowerPC CPUs, but they aren't competitive with desktop x86 processors in terms of performance (well, Cell is a special case, but its performance comes from its special parallel design, not from the fact that it uses the PowerPC ISA).

    POWER is not PowerPC. I was specifically talking about PowerPC.
