Newcastle, the 512KB-cache version of the Athlon64, is on AMD's roadmap for the first half of 2004.


Imagine our surprise, then, when we stumbled across the 3000+ for sale at several sites this week. The specifications were wrong at most sites and were changed several times without ever being completely right, but there was no mistaking that the Athlon64 3000+ is for sale at just over $200 for the OEM (bare chip) version. That is about half the price of the 3200+, so we couldn't resist getting one in to see what was really being sold and how it performed.


The chip arrived a couple of days ago, and it certainly appears to be Newcastle. The clock speed is exactly the same as the 3200+ at 2.0GHz. The only difference that we can see is that the L2 cache is 512KB instead of the 1MB found on the 3200+. Regardless of the botched specs you may be seeing elsewhere, the Athlon64 3000+ being advertised at mainstream prices is a Socket 754 part running at 2.0GHz with 512KB of L2 cache.


AMD even added the new 3000+ to their 1,000-unit processor price list on December 15th. You can see the full 12/15/03 price list at AMD. Anand is preparing an in-depth look at Newcastle, but we knew that our readers would enjoy a preview of how the chip performs compared to other processors. With this early Christmas present from AMD, we couldn't help but rush it into an Athlon64 board that we were testing and put it through its paces. How much difference does that 512KB cache make in performance?


75 Comments


  • Phiro - Wednesday, December 24, 2003 - link

    Reflex: Don't worry, the silent majority is with you on this one.

    Go Reflex, go Reflex, go Reflex!
  • Wesley Fink - Wednesday, December 24, 2003 - link

    #53 - We did not mean to imply that the Athlon64 was not selling; it is just not selling at the rate AMD would like right now. The article you refer to covers SYSTEM sales, and the A64 is stated to be top 10 in Canada. In my opinion, the 3000+ will definitely kick that up a huge amount.

    In checking every local white box dealer, not one had an Athlon64 actually in stock for sale. Their bread-and-butter is mainstream PCs, and the $450 A64 was a "Special Order" item. The Athlon XP, on the other hand, was featured in most ads from the same dealers. Now that the 3000+ is out, I see the A64 featured at these same dealers.

    Intel/Celeron/P4 has been the domain of the big manufacturers, like HP and Dell, which sell in the chain stores. Whether AMD wants to hear it or not, AMD has been a much larger part of the "white-box" market. If the "white-box" dealers weren't using the A64, then AMD was losing many sales. The 3000+ moves into a new price niche and will, in my opinion, sell VERY well.
  • dvinnen - Wednesday, December 24, 2003 - link

    I figure this is worth a post for Pumpkinierre:

    http://www.theinquirer.net/?article=13332

    Seems to be selling fine to me. It's one of the best selling PCs at TigerDirect, and the 3000+ will no doubt help it sell even better. Even Anandtech can be wrong once in a while. As for your cache argument, the reason people bought the cacheless Celerons was because they were great overclockers and cheap, not because they were low latency. The rest of the argument has already been torn to shreds, so I won't bother.
  • Reflex - Wednesday, December 24, 2003 - link

    I'm sorry guys, I don't see a point in this debate any longer. It's fairly obvious that Pumpkin simply does not know what he is talking about, and certainly not what cache does for a CPU. Its main purpose is to hide the latencies inherent in the asynchronous design of modern CPUs and memory, and the more of it there is, the better it does that job. Furthermore, most of what is contained in cache is not the instructions themselves, but rather pointers to the exact locations in memory where specific data/instructions are located, enabling much faster retrieval of that information. The more cache you have, the more of this type of information it can store, more than making up for any latencies caused by the extra step of searching the cache...
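    As a rough illustration of the trade-off being described here (a hedged sketch only; the hit rates and latencies below are invented round numbers, not measurements of any particular chip), the standard average-memory-access-time arithmetic shows how a larger cache's higher hit rate can outweigh a slightly slower lookup:

    ```python
    # Average memory access time (AMAT) = hit_time + miss_rate * miss_penalty.
    # Illustrative, invented numbers: the bigger cache takes a little longer to
    # search, but its higher hit rate saves far more than the extra lookup costs.

    def amat(hit_time_ns, miss_rate, memory_latency_ns):
        """Expected time per memory access, in nanoseconds."""
        return hit_time_ns + miss_rate * memory_latency_ns

    small_cache = amat(hit_time_ns=3.0, miss_rate=0.10, memory_latency_ns=100.0)  # 13.0 ns
    large_cache = amat(hit_time_ns=4.0, miss_rate=0.04, memory_latency_ns=100.0)  #  8.0 ns

    print(f"smaller, faster cache:         {small_cache:.1f} ns per access")
    print(f"larger, slightly slower cache: {large_cache:.1f} ns per access")
    ```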

    I have played with all the CPUs mentioned in these articles. I had my hands on the Athlon64 over two years ago. It is leaps and bounds beyond other architectures in many respects, and one of those is its combination of large cache size and integrated memory controller. It will never be outperformed by a Duron, nor by a lower-cache version of the same chip. Feel free to use whatever you think is best for your own rig, but advocating this to others is doing them a disservice. And while the Celeron 300 was certainly valued for its overclockability, it was *never* considered better in overall performance than the Pentium II 450 in *any* respect. The lack of cache crippled the chip, even in gaming, although it had less of an impact in that arena than in some other tasks. Your history is more than a bit revisionist.

    Anyways guys, I'm through arguing with someone who obviously knows nothing about what he is speaking of. I will continue to respond to those of you who have the ability to actually go read the reviews and whose arguments do not consist of simply deciding that, since AT's reviews don't match up with your personal opinion, AT and the rest of the world must be wrong and you must be right. ;> I require a bit more scientific proof than your opinion, especially seeing as I do know how this stuff works, having worked on it myself.
  • AnonymouseUser - Wednesday, December 24, 2003 - link

    TrogdorJW said: "For the educated market (most of the people reading Anandtech, ArsTechnica, Tom's Hardware Guide, HardOCP, etc.), the PR ratings are pretty much meaningless."

    Judging the "educated market" by the comments some members have made to this article and many others preceding it, they aren't as "educated" as one would expect. Take, for example, the following statement concerning cache: "128K is probably optimum for gaming." (for proof of this ignorance, see the following comparison: http://anandtech.com/cpu/showdoc.html?i=1927) -_^

    Pumpkinierre barfed: "Stick with celerons and durons, you'll have fun and money to boot."

    Exactly why are you arguing over the top end processors while still advocating the low end?
  • PrinceGaz - Wednesday, December 24, 2003 - link

    @Pumpkinierre: What exactly is wrong with game benchmarks? It doesn't make any significant difference to how a game runs whether someone is sitting there pressing keys and moving the mouse, or the game itself is playing back a recorded demo of the same. The actual game code executed is the same in both cases; it just takes its input from a different source. As far as the CPU is concerned, the recording is just as unpredictable as someone playing it there and then.

    Less cache is never going to improve the performance of games, especially not the 128K of cache you seem to be promoting. Every single gaming benchmark gave higher performance with the A64 3200+ than the A64 3000+ (except those that were gfx-card bound, where they were roughly identical), and the only difference between the two processors is that the 3200+ has twice as much cache. More cache clearly resulted in more speed.

    If the cache were halved again to 256K, the loss in performance would be even greater, and halving it once more to 128K would have a serious impact. Just compare the performance of the 1.6GHz Duron (128K+64K) to the 1.47GHz Athlon XP (128K+256K) in the budget CPU article and you'll see the Duron lost every game test (sometimes by over 20%) to the 8% lower-clocked Athlon XP, because 192K of total cache isn't enough for it to run well. The smaller the cache, the bigger the impact.

    You keep mentioning how less cache improves the minimum frame rate, or the "smoothness", or that chips with less cache have lower inertia than processors with more cache. What a load of garbage! Minimum frame rates caused by the CPU will be hit that much harder if the processor has to keep going to system memory because the data it needs isn't cached. The last thing you want is a system with very little cache like you're advocating.

    I like your strange suggestion that a system with less cache has less inertia, as if you can actually notice the delay caused by larger-cache CPUs when playing. Actually, the memory controller makes more difference to the latency or "inertia": the P4 3.2 in the test had a considerably greater memory latency of over 200 nanoseconds, compared to under 100ns for both Athlon 64 chips. Personally, I've never been bothered by delays of a few hundred nanoseconds while playing even the most intensive games; in fact, there's no way *anyone* will actually notice a delay caused by whether or not the processor decides to access main memory or cache. But it'll be faster in the long run if it usually finds what it needs in a larger cache.
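    To put rough numbers on the "no single access is noticeable, but it adds up" point (a back-of-the-envelope sketch; the ~100ns latency is the figure quoted above, while the access and miss counts are invented purely for illustration):

    ```python
    # Per-frame cost of cache misses. No single ~100 ns stall is perceptible,
    # but millions of them per frame add up to milliseconds, which is exactly
    # where frame rates are won or lost. All counts below are illustrative.

    accesses_per_frame = 5_000_000   # hypothetical memory accesses per rendered frame
    memory_latency_ns  = 100         # approximate Athlon 64 main-memory latency quoted above

    def stall_ms(miss_rate):
        """Total time per frame spent waiting on main memory, in milliseconds."""
        return accesses_per_frame * miss_rate * memory_latency_ns / 1e6

    print(f"hypothetical smaller cache (5% miss rate): {stall_ms(0.05):.1f} ms of stalls per frame")
    print(f"hypothetical larger cache  (3% miss rate): {stall_ms(0.03):.1f} ms of stalls per frame")
    ```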

    A 512K L2 cache seems adequate to give good performance in games, as there isn't a major improvement when it is increased to 1024K (but that does add considerably to the size and cost of the chip). On the other hand, 256K does reduce performance noticeably (compare a Barton to a similarly clocked T'Bred), and cutting 256K of cache doesn't make that much difference to the die size. Therefore 512K seems a good balance and an ideal cache size for gamers. Certainly far better than 128K :p
  • Pumpkinierre - Wednesday, December 24, 2003 - link

    Forgot to add my opinion of the P4EE, as requested by #47. Basically a rebadged Xeon to compete with a rebadged Opteron (FX51). Both at absurd prices. At least the P4EE doesn't need registered memory, and ABIT overclocked it to 4GHz at COMDEX on an IC7-G (also my mobo) with standard cooling (4.5GHz with specialist cooling - fastest CPU in the universe!). Given that it is a new core (Gallatin) and stepping, there might be a bit of poke in it, but others (AT?) didn't find much headroom.
    Yes, the benchmarks show it 8-15% better than the 3.2GHz P4, but for gaming you'd be better off with the latter, for the same cache reasons I've stated in the previous posts. With the exception of a fast 128-256K L1 cache, the P4 cache arrangement is the next best thing: a very small 8K L1 cache (notice it is smaller than the 16K L1 on the original Pentium, and done for a reason - lower latency) working inclusively (L1 content always present in L2) with a 512K L2 cache. For gaming this is superior to the exclusive arrangement on AMD chips, which gives a larger combined cache at the expense of latency. This goes a long way to explaining the smoothness of P4s over the A-XP experienced by gamers who have tried both. K8 smoothness (due to its low-latency memory controller, etc.) is also already legendary. The P4EE probably has the L2 inclusive in the L3 cache, but you're getting into serious server territory with a 2MB L3 cache.
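    For what it's worth, the inclusive-versus-exclusive difference described here mostly comes down to effective capacity versus an extra eviction step. A minimal sketch using the cache sizes mentioned in this thread (the helper function and the simplification are mine, not from any review):

    ```python
    # Inclusive: every L1 line is duplicated in L2, so L2 bounds the useful total.
    # Exclusive: a line lives in exactly one level, so the sizes add, at the cost
    # of an extra L1->L2 write-back (victim) step on eviction.

    def effective_capacity_kb(l1_kb, l2_kb, inclusive):
        return l2_kb if inclusive else l1_kb + l2_kb

    print("P4 (8K L1 data, 512K L2, inclusive):          ",
          effective_capacity_kb(8, 512, inclusive=True), "KB")
    print("Athlon 64 3000+ (128K L1, 512K L2, exclusive):",
          effective_capacity_kb(128, 512, inclusive=False), "KB")
    ```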
    Intel understands latency (witness PAT technology), and they are also aware of the internet benchmarking community. So to combat the FX51, which was a stupid kneejerk release from AMD (registered memory, Socket 940, etc.), they cynically released the heavy-cache P4EE, which would show up well in the predictable gaming benchmarks but in truth be worse than a standard P4 3.2 in actual play. The high price is meant to catch the well-heeled fools who think they are getting the best Intel gaming CPU. Stick with Celerons and Durons; you'll have fun and money to boot.
  • Pumpkinierre - Wednesday, December 24, 2003 - link

    Yes, #47 (KF), in most cases overwriting is all that is required, with the possible exception of the exclusive L1 that AMD uses in both the K7 and K8. If the prediction algorithm deems that the information in the L1 cache is more important than some other data in the L2 cache, then it is more efficient to write the information back to L2 (i.e. purge) than to have to recall the same information from slower DRAM, should the CPU require different data/commands to be available momentarily in the L1 cache.

    With regard to memory latency etc., you have to remember I don't even agree with any of the benchmarks or testing methods in reference to gaming. The only valid test is a repetitive, user-controlled execution of a particular gaming program sequence. The power of a system is defined by the highest average minimum frame rate, and latency/smoothness by the smallest average difference between maximum and minimum frame rates. It is essential that latency be defined for a particular computer system/game combination, as a system may work well on one game but not another. As far as all the other benchmarks go, you may as well run Business Winstone for all that they will tell you.
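    Taking the definitions in that paragraph at face value, here is a minimal sketch of how the two proposed metrics would be computed from repeated runs of the same sequence (the frame-rate samples are invented; no benchmark tool is implied):

    ```python
    # "Power"      = average of each run's minimum frame rate (higher is better).
    # "Smoothness" = average of each run's max-min frame-rate spread (lower is better).
    # Each inner list is one user-controlled run of the same game sequence (fps samples).

    runs = [
        [55, 72, 88, 61, 47, 90],   # run 1 (invented numbers)
        [58, 70, 85, 64, 50, 92],   # run 2
        [52, 75, 91, 60, 45, 89],   # run 3
    ]

    power      = sum(min(run) for run in runs) / len(runs)
    smoothness = sum(max(run) - min(run) for run in runs) / len(runs)

    print(f"power (average minimum fps): {power:.1f}")
    print(f"smoothness (average max-min spread): {smoothness:.1f}")
    ```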

    The memory controller works on a 16- or 32-bit data path from the DRAM, the same as the CPU data bus, so the time to fully load a 256K cache will be half that of a 512K cache and a quarter that of a 1MB cache. So cache size will have an effect on system latency, especially if the cache has to be refreshed frequently.
    In the case of data, the CPU only requires 4 bytes, but it must be the right 4 bytes of data. If, each time the data addresses change, there is no predictive link between the new ones and the previous ones, then the cache is refreshed at a latency cost. The hit rate for 2D/text applications is high; for fast interactive games it is very low. So a compromise between cache size and response must be struck. Early Celerons (no cache) were loved by gamers for their low inertia but panned by the benchmarkers. Even the 128K Celerons flew, and were regarded by many as better for games than P2 450s with half-speed 256K or 512K cache. 128K is probably optimum for gaming. Larger caches have been put on desktops to accommodate other apps - Office, graphics, CAD, internet - as desktops are jacks of all trades. More importantly, 128K of cache would get K8 prices down, as die size would be halved and capacity doubled.
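    The proportionality claimed at the start of this post is easy to put into numbers. This is a back-of-the-envelope sketch that takes the post's own premise of a narrow 32-bit path and a wholesale cache refill at face value; the transfer rate is an invented illustrative figure, and in practice caches refill on demand one line at a time rather than all at once:

    ```python
    # Time to stream an entire cache's worth of data over a 32-bit path,
    # under the post's wholesale-refill premise. 200 MT/s is an assumed,
    # illustrative transfer rate, giving 4 bytes * 200e6 = 800 MB/s.

    path_bytes_per_second = 4 * 200_000_000

    for cache_kb in (128, 256, 512, 1024):
        fill_us = cache_kb * 1024 / path_bytes_per_second * 1e6
        print(f"{cache_kb:>4} KB cache: {fill_us:.0f} us to refill completely")
    ```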
  • Reflex - Wednesday, December 24, 2003 - link

    I believe if you read the line that was quoted from me, you will observe that I said a 'noticeable' decrease in performance. On a purely theoretical level, yes, you can lose a clock cycle or two to a larger cache. However, this is *never* going to be noticeable in a real-world application, only in an app that does nothing but measure clock cycles for information in cache. In the real world this performance 'hit' is completely meaningless, and as time goes on and programs get larger, CPUs will begin to show more and more of a difference due to cache sizes. Compare a Duron with 128/64KB L1/L2 to an equivalent-speed Athlon with 128/256KB (unlocking the multiplier and setting the bus speed and clock speed identical). There is a definite performance improvement going to the Athlon, specifically due to the larger cache size. Theoretically it has slightly higher latency, but realistically it means nothing in the end result.

    I stand by my statement, a larger cache size is *never* a bad thing, as long as it is running at the same clock rate and bus width as the smaller cache. Any theoretical losses in performance are more than made up by future gains as apps start to utilize more cache.

    And if you're going to tell me that a gamer *only* uses their computer for a single-threaded game, then don't quote multimedia encoding benchmarks to me next time you want to talk about where the P4 shines. Last I checked, most people use their PCs for a *lot* of tasks, gaming being one of them. And in games, the Athlon 64/FX is pretty much the cat's meow, at any cache size, regardless of your personal opinion of how it should be rated. Makes me wonder what your opinion of the P4EE is, considering all the cache on that thing. ;)

    BTW, to the people talking about 64-bit code being twice as large: well, it's not quite like that, however I do believe you are correct that larger caches will play a bigger role with 64-bit code than with 32-bit code. Time will tell what is optimal...

    #45: You hit the nail on the head. In an ideal world, consumers would do their research and learn for themselves what is best; however, I can personally admit that in a lot of cases I simply do not have the time. Rating systems are currently a necessary evil in the PC market, and until Intel is no longer the market leader that it currently is, they will continue to be necessary. I'd love to see AMD's True Performance Initiative take flight, but unfortunately that is highly unlikely, as Intel has no motivation to go along with it as it stands now...
  • arejerjejjerjre - Wednesday, December 24, 2003 - link

    In other words Intel doesn't cheat!
