Barcelona Architecture: AMD on the Counterattack
by Anand Lal Shimpi on March 1, 2007 12:05 AM EST
Posted in: CPUs
Sideband Stack Optimizer
Intel's very first Pentium M introduced a feature Intel called its dedicated stack manager. As its name implies, the dedicated stack manager was used to handle all x86 stack operations (i.e. push, pop, call, return). The purpose of the stack manager was to keep those stack operations, which are frequently used with function calls in code, separate from the rest of the x86 instruction stream sent to the CPU. The dedicated stack manager would handle decode and "execution" of these operations so that they wouldn't clog up the processor's decoders and execution units later in the pipeline. Intel essentially "widened" the core by offloading some operations to separate hardware.
With Barcelona, AMD is introducing a similar technology it is calling a Sideband Stack Optimizer. Stack instructions no longer go through the 3-way decoder and stack operations no longer go through the integer execution units, effectively widening Barcelona at minimal cost. The Sideband Stack Optimizer, like Intel's dedicated stack manager, features its own adder that handles all stack operations. It's a small tweak that can help overall performance, and it's simply one that made sense for AMD to implement.
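As a rough illustration (the code and names here are my own, not AMD's or Intel's), consider a short helper function called inside a tight loop. A real compiler may well inline something this small, but for non-inlined calls the compiled code is dominated by push/call/ret/pop operations and stack-pointer updates, exactly the traffic the dedicated stack manager and the Sideband Stack Optimizer keep away from the main decoders and integer units.

    #include <stdio.h>

    /* Hypothetical example (not from the article): a short helper called in a
     * tight loop.  Every non-inlined call generates push/call/ret/pop-style
     * stack operations and stack-pointer adjustments, which is the class of
     * work the dedicated stack manager and the Sideband Stack Optimizer handle
     * with their own adder so the regular decoders and integer ALUs stay free. */
    static int add_tax(int price, int rate_percent)
    {
        return price + (price * rate_percent) / 100;
    }

    int main(void)
    {
        int total = 0;
        for (int i = 0; i < 1000; i++)
            total += add_tax(i, 8);   /* call-heavy code is stack-op-heavy code */
        printf("total = %d\n", total);
        return 0;
    }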
Faster Loads
When looking at the performance of the Athlon 64 and Intel's Core 2 processors, it's easy to understand why Intel has a strong performance advantage in applications that make heavy use of SSE. But what about applications like gaming and business apps that should greatly benefit from AMD's on-die memory controller? Are the Core 2's larger L2 cache and aggressive prefetchers all it needs to overcome AMD's on-die memory controller?
One major aspect of Intel's Core micro-architecture advantage is its ability to allow load instructions to bypass previous load and store instructions. On average, about 1/3 of all instructions in a program end up being loads, thus if you can improve load performance you can generally impact overall application performance pretty significantly. With Intel's Core micro-architecture, it's possible for loads to be re-ordered to ensure that instructions dependent on those loads get the data they need without waiting for costly memory accesses.
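As a minimal sketch of the idea (mine, not the article's), consider two independent loads where the first is likely to miss in the cache:

    /* Sketch: two independent loads.  If the first one misses in the cache, a
     * core that can re-order loads can issue the second load right away instead
     * of leaving it waiting in program order behind the long-latency miss; only
     * the final add truly needs both values. */
    int sum_two(const int *cold, const int *hot)
    {
        int a = *cold;   /* assume this load misses in the cache                */
        int b = *hot;    /* independent load: can be started before 'a' returns */
        return a + b;
    }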
Core also allowed loads to be moved ahead of stores, which previously wasn't done due to the possibility that an earlier store could invalidate the data that was just loaded. Intel found that the probability of a store writing to the same address as a nearby load ends up being very small, on the order of 1 - 2%, so with a reasonably accurate predictor you could correctly guess when re-ordering a load ahead of a store was safe. Intel's Core 2 based processors feature prediction logic to guess whether a store and a load share the same memory address; if the predictor determines that they won't, it allows the load to be re-ordered ahead of the store. On the small chance that the predictor is incorrect, the load has to be redone at the cost of a pipeline flush (similar to what happens when the processor mispredicts a branch).
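A hedged example of why this gamble pays off (the loop below is hypothetical, not taken from either design): the store in one iteration and the load in the next only conflict if the two pointers alias, which in typical code they rarely do.

    /* Hypothetical loop: the store to dst[i] and the next iteration's load of
     * src[i + 1] only touch the same memory if dst and src overlap.  A core
     * that predicts "no overlap" can start that load before the older store's
     * address and data are resolved; if the prediction is wrong, the
     * speculatively loaded value is stale and the work is discarded at the
     * cost of a pipeline flush, much like recovering from a mispredicted
     * branch. */
    void scale(int *dst, const int *src, int n)
    {
        for (int i = 0; i < n; i++)
            dst[i] = 2 * src[i];
    }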
AMD's K8 architecture had no equivalent scheme for allowing the out of order execution of loads ahead of other loads and stores, so even without an on-die memory controller Intel was able to execute some memory operations faster than AMD. Barcelona fixes this problem through an almost identical scheme to what Intel implemented in its Core 2 processors.
Barcelona can now re-order loads ahead of other loads, just like Core 2 can. It can also execute loads ahead of stores, assuming that the processor knows that the two don't share the same memory address. While Intel uses a predictor to determine whether or not the store aliases with the load, AMD takes a more conservative approach. Barcelona waits until the store address is calculated before determining whether or not the load can be processed ahead of it. By doing it this way, Barcelona is never wrong and there's no chance of a mispredict penalty. AMD's designers looked at using a predictor like Intel did but found that it offered no performance improvement on its architecture. AMD can generate up to three store addresses per clock as it has three AGUs (Address Generation Units) compared to Intel's one for stores, so it would make sense that AMD has a bit more execution power to calculate a store address before moving a load ahead of it.
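The trade-off between the two policies can be captured in a toy model like the one below. This is purely an illustrative sketch, not AMD's or Intel's actual logic: the speculative check answers before the store address is known and can be wrong, while the conservative check waits for the address and is never wrong.

    #include <stdbool.h>
    #include <stdint.h>

    /* Toy model of an in-flight older store, as seen by a younger load. */
    typedef struct {
        bool     addr_known;   /* has the store's address been calculated yet? */
        uint64_t addr;         /* valid only when addr_known is true           */
    } older_store_t;

    /* Core 2-style policy: a predictor may let the load go early even while
     * the store address is still unknown; a wrong guess later forces the load
     * to be redone and the pipeline to be flushed. */
    bool hoist_speculative(const older_store_t *st, uint64_t load_addr,
                           bool predict_no_alias)
    {
        if (st->addr_known)
            return st->addr != load_addr;   /* exact answer is available     */
        return predict_no_alias;            /* speculate; occasionally wrong */
    }

    /* Barcelona-style policy: only hoist once the store address is known and
     * is provably different from the load's, so there is never a mispredict
     * penalty. */
    bool hoist_conservative(const older_store_t *st, uint64_t load_addr)
    {
        return st->addr_known && st->addr != load_addr;
    }

The cost of the conservative check is that the load may have to wait for the store's address to be calculated, which is where Barcelona's three AGUs come in as described above.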
The out-of-order load execution improvements should prove even more effective in Barcelona than they were in Core 2, given that AMD previously couldn't do any reordering of loads before the Int/FP schedulers, whereas Core Duo could already do a limited amount of re-ordering.
83 Comments
JustKidding - Friday, March 2, 2007
So what you are saying is that it's not the size of your cache that matters as much as how well you use it.
VooDooAddict - Thursday, March 1, 2007
With cache size differences usually having a small impact on performance for Athlon 64s, the slight trade-off for better yields and margins seems the better choice for AMD here.
Regs - Thursday, March 1, 2007
Where was this article 8 months ago? ;)
I agree with Anand's closing article that AMD now needs its own "snowball effect" for the next couple of years. 4-5 years with a sitting target against a giant like Intel proved to be costly in terms of competitiveness.
We all saw it coming when Intel developed the first Pentium M. It looks like AMD got the message as well and started the Barcelona project. Maybe AMD learned their lesson.
iwodo - Thursday, March 1, 2007
So basically all of Intel's C2D improvements are made into Barcelona. And apart from the virtualization improvement there is nothing new from AMD that Intel doesn't have?
On the performance note, Barcelona doesn't seem to offer better clock scaling. I.e. even if it is 30% faster than its current K8 it will only have a slight advantage against C2D clock for clock. Not to mention it is up against Penryn. Although Penryn is nothing more than a few minor tweaks and more cache, it does allow Intel to scale higher in clock speed.
And given AMD's slow roll-out rate and limited production capacity, Barcelona never seems like much of a threat.
The article does not mention anything about FP improvement. Are AMD keeping them secret for now or is that all we are going to see?
Spoelie - Thursday, March 1, 2007
The FP improvement is the SSE improvement, and according to the theory it's more powerful than what Core 2 Duo is offering.
There are improvements mentioned that are not in Core 2 (and the other way around, like instruction fusing), and improvements that are inspired by the same principle but implemented differently. The architectures themselves differ widely (see the earlier article that compares K8 with Core 2 - reservation stations, etc.), so different implementations of principally the same optimizations on a different architecture will have vastly different effects. Even after these improvements, the capabilities (how much can you decode, etc.) of each are nothing alike. And if it were all the same, AMD has the platform advantage, so it would still end up faster by virtue of nothing else but that. Some guesstimates made by various sites would put Barcelona ahead in FP code and at the same level or slightly behind in INT code. But those are just guesstimates.
What I'm trying to say here is that Barcelona is still very different from Core 2, and that we just don't know yet in which direction the pendulum will swing ;)
Shintai - Thursday, March 1, 2007
No... precisely in theory is where Barcelona lacks. Core 2 Duo could in theory do 6 64-bit or 3 128-bit SSE instructions per cycle. Barcelona can do 4 64-bit or 2 128-bit. AMD provided this information as well.
Griswold - Thursday, March 1, 2007
Wishful thinking.
Spoelie - Thursday, March 1, 2007
Hmmm, in the earlier article, there was explicit emphasis on the fact that 2 of the 3 units are symmetric in Core 2, but I'm not too sure what it means. It does imply, however, that those 3 units of Core 2 can only be used fully in certain combinations, and are not 3 independent units. On 128-bit performance, what was said is this: "so the Core architecture has essentially at least 2 times the processing power here [compared to K8]". Not 3 times, but "at least" 2 times, so again the 3 times will probably only apply in certain situations.
The next paragraph said this:
"With 64-bit FP, Core can do 4 Double Precision FP calculations per cycle, while the *Athlon64* can do 3."
So K8 was not at such a big disadvantage when it came to 64-bit SSE; if Barcelona doubles everything SSE, it should come out ahead in this area.
So to me it looks like for 128-bit, Core 2 will be faster in some situations and on par in others, and for 64-bit, Barcelona would be ahead.
If this is wrong, I do not know where some of the articles I read over time came from, implying Barcelona would be better overall in SSE.
Shintai - Friday, March 2, 2007
Core 2 has 3 individual SSE ports: http://www.realworldtech.com/page.cfm?ArticleID=RW...
AMD says 4 double:
http://www.anandtech.com/showdoc.aspx?i=2768&p...
And 64-bit or 128-bit doesn't matter. I don't know how you think that way.
Barcelona has 2 SSE ports. They are able to do 2 128-bit or 4 64-bit. Most 128-bit operations actually contain 2 64-bit or 4 32-bit operations.
Core 2 has 3 SSE ports. They are able to do 3 128-bit or 6 64-bit.
flyck - Friday, March 2, 2007
Core 2 Duo has 3 SSE units, but they are not symmetric, meaning that not every unit can execute all commands. Core 2 Duo can do at best 4 DP FLOPs/cycle, the same as Barcelona.