Intel's Core 2 E4000 and Pentium E2000

Intel has two processor lines that compete in the mainstream space, both based on its famed Core 2 micro-architecture: the Core 2 Duo E4000 series and the Pentium Dual Core E2000 series.


The table below outlines the differences between these two processor lines and the Core 2 Duo E6000 chips we normally review:

Intel CPU Features Overview

                            Pentium E2000   Core 2 Duo E4000   Core 2 Duo E6000
Number of Cores             2               2                  2
L2 Cache Size               1MB             2MB                4MB
FSB                         800MHz          800MHz             1066MHz+
Intel VT                    No              No                 Yes
Intel TXT                   No              No                 Yes
Intel EIST                  Yes             Yes                Yes
Intel 64-bit                Yes             Yes                Yes
Intel Execute Disable Bit   Yes             Yes                Yes

As the table shows, the main differences between the chips are L2 cache size and FSB speed. The Pentium Dual Core and Core 2 Duo E4000 chips also give up Virtualization (VT) and Trusted Execution (TXT) support, two features not widely used in the mainstream desktop market yet.

In exchange for a smaller cache and a couple of missing features, you get some very affordable CPUs based on Intel's strongest architecture to date. Buyers beware, however: the Pentium Dual Core processors are not the same thing as Intel's Pentium D processors. The former carries model numbers like the rest of the Core 2 lineup (e.g. Pentium Dual Core E2160), while the Pentium D is NetBurst based and still uses the old Pentium 4 model numbers (e.g. Pentium D 945). We'd take the slowest Pentium Dual Core over the fastest available Pentium D, so stay away from the last remnants of the NetBurst architecture if you know what's best for you.
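
Whether a particular chip actually exposes these features is easy to verify in software. Below is a minimal sketch, assuming a Linux system, where the kernel publishes the CPUID feature flags in /proc/cpuinfo; the flag names vmx, smx, est, lm, and nx correspond to VT, TXT, EIST, 64-bit, and Execute Disable respectively:

    def cpu_flags():
        # Parse the feature flag list the Linux kernel reports for the CPU.
        with open("/proc/cpuinfo") as f:
            for line in f:
                if line.startswith("flags"):
                    return set(line.split(":", 1)[1].split())
        return set()

    flags = cpu_flags()
    for name, flag in [("Intel VT", "vmx"), ("Intel TXT", "smx"),
                       ("Intel EIST", "est"), ("Intel 64-bit", "lm"),
                       ("Execute Disable", "nx")]:
        print(name + ":", "yes" if flag in flags else "no")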

44 Comments

  • Justice4all - Tuesday, December 11, 2007 - link

    I have to strongly disagree with the notion that the Nvidia chipsets listed in this article are "finicky", especially the Quadro based boards.

    I have 15 machines with the Asus M2NBP-VM CSM board running in an electrical engineering environment without a single failure. I also have at least that many M2N based boards running the same engineering applications (Matlab, Cadence, Visual Studio, etc.) with zero failures, BSODs, or issues of any kind, across all of the current flavors of Windows and Linux. Multiple machines are also running VMware with various virtual Linux or Windows machines.

    Nearly every machine is running either the integrated graphics or an Nvidia 6-, 7-, or 8-series GPU card. While most of the machines are running Crucial memory (533, 667, 800), some are currently running no-name sticks from my local parts distributor (including the machine I'm typing this on). The only thing all these machines have in common is that they are built with Antec cases and power supplies, which may or may not be the key factor in my experience versus yours.

    All these variables, with ZERO failures to date. No issues with drivers, applications, or hardware failures, period. 30+ machines is a fairly decent sampling IMO, and I think it speaks more to the stability of these particular chipsets than what you've presented here - unless, of course, your sampling was more than just one or two boards.

    Honestly, it sounds as though you either had issues with ESD or happened to get a bad board or two. If that's the case, I don't think painting the whole family as something to stay away from is good advice.

    To put this into perspective, I maintain 200+ computers with all versions of the major operating systems on the market (Linux, Windows, Solaris). The computers are everything from tablets to servers/workstations and run a large spectrum of electrical engineering apps. They run the gamut from Pentium IIs to the latest quad cores. It has been my experience that the Nvidia based machines have been the most pleasurable to deal with to date.

    All that said, I still find your site to be very informative when I am trying to make purchasing decisions for our department. The only reason for this comment is that the article was so out of line with my experiences that I felt compelled to at least show some evidence to the contrary.
  • Zds - Sunday, November 11, 2007 - link

    Very nice and much-needed article indeed. Most of the systems I advise friends on fall into the "bang for the buck" category, so this kind of round-up is just what was needed.

    The only thing I think should have been made clearer is the significance of idle power consumption. Most systems built today spend most of their lifetime sitting idle: web browsing, document writing, and chatting all leave the processor with practically no load. So unless you power the machine up only to game, load power consumption matters far less than idle, and rankings based on load alone are a poor fit for systems like these.

    So what IMO should be included is a power usage comparison with something like a 90:10 weighting between idle and load power consumption. That would reflect the true impact of these systems on your electricity bill and on the environment. And while electricity is still relatively cheap, although rising in price, the environment is not. A low-power system also comes with an additional advantage for your ears: less noise is needed to keep it running.

    Another point is that a discrete graphics card is not always an option, even if some light and casual gaming is planned, as discrete cards tend to either cost money or be noisy. Naturally there are passive discrete cards, and ones with otherwise acceptable cooling solutions, but those take the price of the system to a new level.

    So, with a 90:10 idle:load weighting, the ranking list would look like this (a short sketch of the calculation follows at the end of this comment):
    AMD Athlon X2 BE-2350 (2.1GHz) - 52.2W
    AMD Athlon 64 X2 5000+ (690G) - 59.2W
    AMD Athlon 64 X2 5000+ (2.6GHz) - 61.9W
    Intel Pentium E2160 (1.8GHz) - 62.2W
    Intel Core 2 Duo E4500 (2.2GHz) - 64.3W
    Intel Core 2 Duo E4400 (2.0GHz) - 64.4W
    Intel Pentium E2140 (1.6GHz) - 64.8W
    Intel Core 2 Duo E6550 (2.33GHz) - 67.5W

    Too bad there was no load number given for the BE-2350 with the 690G - it looks like the most promising combination of these: enough graphics power for casual gaming and lowish power consumption.
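
    For clarity, here is how each number above was computed - a minimal Python sketch; the idle/load watt figures below are hypothetical placeholders, not the article's measurements:

        def weighted_power(idle_w, load_w, idle_share=0.90):
            # Time-weighted average draw in watts: 90% idle, 10% load by default.
            return idle_share * idle_w + (1.0 - idle_share) * load_w

        # Hypothetical figures for illustration only:
        print(weighted_power(50.0, 110.0))  # -> 56.0 W at a 90:10 split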
  • ShawnD1 - Friday, October 12, 2007 - link

    Thank god he put overclocking as part of the review. It makes the review that much better.
  • Spoelie - Tuesday, October 2, 2007 - link

    "Half Life 2 is finally actually playable on integrated graphics"

    640x480 at the lowest quality settings??
    It depends on your definition of playable.
    I wouldn't call anything below 800x600 to 1024x768 on low to medium settings playable.
  • zargblatt - Sunday, September 30, 2007 - link

    BOOO!

    This has the smell of Intel fanboyism. And it certainly doesn't help that AnandTech has been running Intel commercials for the last 3 months...

    Why are all the tests Intel strongholds such as DivX encoding and 3D rendering? Do midrange PC buyers really use their computers for this?

    And why is terrible hardware chosen for AMD?

    I love the concept of comparing processors based on price. But the tests have to be relevant to users in this group, and the charts shouldn't be misleading. By that I mean adding an Intel processor way out of the price range, which always tops the charts, will certainly fool the casual reader. Please remove the E6550 from the gaming tests.

    Also, instead of a 3D rendering test which supports SSE4, you should do office and web browser rendering tests. And adding a midrange graphics card is much more realistic than an 8800GTX.

  • Justin Case - Sunday, September 30, 2007 - link

    "unfortunately, Penryn will also improve DivX performance by around 10%"

    Unfortunately? Looks like a definite "fortunately" to me!

    A more relevant issue here is how DivX itself evolves. Changes to the software (e.g., using SSE4/SSE5) are likely to have a bigger impact on speed than changes to the CPU. In any case, x264 is at least as important as DivX these days.

    Also, this is not quite correct:

    "While we're hearing rumors that Phenom will clock higher than K8, Penryn will be on a cooler running 45nm process, which should allow for even higher clock speeds"

    With both manufacturers going for a "power-conscious" approach, heat is no longer the main determining factor for clock speeds. Phenom / Barcelona is having trouble scaling the clock speed but it is not running too hot. A smaller process by itself does not guarantee higher clock speeds. It does guarantee lower power consumption, though (unless they screw up royally), which can also be a relevant factor when picking a CPU. And, unlike the Xeons, Penryn isn't crippled by FB-DIMM power consumption, so direct CPU power comparisons are more relevant.
  • smokedturkey - Saturday, September 29, 2007 - link

    The ABIT NF-M2 nView is an awesome board, and it uses your "finicky" GeForce 6150 chipset. It took my Opteron 1210 all the way to 3.1GHz, and it runs Vista x64 and anything I could throw at it just fine. I haven't found memory it won't run, nor software or hardware it won't handle. +1 for Abit!
  • Schugy - Saturday, September 29, 2007 - link

    I don't overclock, and I have bought only a few boards so far. The worst was the Asus A7V133 1.05 (200€), which didn't support a 2000+ Palomino and doesn't run stably at all with an Athlon 900B. The total opposite is the cheap (55€) ASRock K7S8XE, which runs perfectly stable with athcool energy saving enabled.
    Quite good is the K7S5A, though it isn't compatible with athcool; that's why the Athlon 900B uses more power at idle than my Sempron 3000+ (Socket A).
    I'll wait until later revisions of the K10 are available before buying a new PC, but it might once again be a SiS chipset on an ASRock mainboard. If you only care about stability, you buy the cheapest board that gives you no problems.
    Of course I would reward modern voltage regulators and power saving integrated graphics with a few extra bucks if there were a board that let me switch between integrated and dedicated graphics on the fly. (I wonder if X.org will ever support switching between SiS Mirage III/ATI Mobility Radeon and e.g. an ATI 2600.)
  • MarkerBCH - Saturday, September 29, 2007 - link

    What would be really valuable is benchmarks of these processors with a midrange GPU like the 8600GT/GTS at default settings. These are the probable upgrades many of us with one of these processors are going to get, and it would be great to see which of these processors would be a bottleneck and which of them won't.

    Plugging in a state-of-the-art GPU and lowering the resolution is great for seeing the differences between the processors, but it doesn't help us decide whether a particular processor is underpowered for a given GPU or whether the bottleneck lies elsewhere.
  • sprockkets - Saturday, September 29, 2007 - link

    It did work fine for me, although there are plenty of comments about its pickiness in Newegg's reviews for the board. For me, the DVI output didn't work, and who knows why.
