Understanding the Cell Microprocessor
by Anand Lal Shimpi on March 17, 2005 12:05 AM EST
Cell’s Dynamic Logic
Although it’s beyond the scope of this article, one of the major problems with static CMOS circuits is the p-type transistors, and the fact that for every n-type transistor, you must also use a p-type transistor. There is an alternative known as dynamic or pseudo-NMOS logic, which gets around the problems of static CMOS while achieving the same functionality. Let’s take a look at that static CMOS NOR gate again:
The two transistors at the top of the diagram are p-type transistors. When either A or B is high (i.e. has a logical 1 value), the corresponding p-type transistor switches off and no current flows through the pull-up path. In that case, the output of the circuit is pulled to ground, or 0, since the complementary n-type transistors at the bottom function oppositely to their p-type counterparts (i.e. they conduct when their input is high).
Thus, the NOR gate outputs a 1 only if all inputs are 0, which is exactly how a NOR gate should function.
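As a quick sanity check of the behavior described above, here is a toy software model of the static CMOS NOR gate. This is purely illustrative (real gates are analog circuits, and the function names are ours): the series p-type pull-up conducts only when every input is 0, and the parallel n-type pull-down conducts when any input is 1.

```python
def static_cmos_nor(*inputs):
    """Toy model of a static CMOS NOR gate with one p-type per n-type."""
    pull_up_conducts = all(v == 0 for v in inputs)    # p-types in series
    pull_down_conducts = any(v == 1 for v in inputs)  # n-types in parallel
    # Exactly one network conducts at a time, so there is no short circuit.
    assert pull_up_conducts != pull_down_conducts
    return 1 if pull_up_conducts else 0

# Truth table for the 2-input case: output is 1 only when A = B = 0.
for a in (0, 1):
    for b in (0, 1):
        print(a, b, static_cmos_nor(a, b))
```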
Now, let’s take a look at a pseudo-NMOS implementation of the same NOR gate:
There are a few things to notice here. First and foremost, the clock signal is tied to two transistors (a p-type at the top, and an n-type at the bottom), whereas no clock signal was fed directly to the NOR gate in our static CMOS example.
Cell’s implementation goes one step further. The p-type transistor at the top of the circuit and the n-type transistor at the bottom are clocked on non-overlapping phases, meaning that the two clocks aren’t high/low at the same time.
The gate works as follows: inputs are applied to the logic transistors sitting between the two clocked transistors. During the first (precharge) phase, the top p-type transistor conducts and charges the output node high. During the second (evaluate) phase, the top transistor switches off and the bottom n-type transistor conducts; if any of the logic transistors are switched on by their inputs, the output node is drained to ground. The charge that remains on the output node is the output of the circuit.
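The two-phase sequence just described can be sketched as a toy simulation. The phase structure follows the description above; the function and its names are illustrative, not Cell's actual circuit:

```python
def dynamic_nor(inputs):
    """Toy two-phase dynamic (pseudo-NMOS style) NOR gate."""
    # Phase 1 - precharge: the top clocked p-type conducts and
    # charges the output node high (logical 1).
    output = 1
    # Phase 2 - evaluate: the top transistor is off, the bottom
    # clocked n-type conducts; any high input switches on a parallel
    # logic transistor and drains the output node to ground.
    if any(v == 1 for v in inputs):
        output = 0
    return output
```

Note that because the two clock phases never overlap, the charge and drain paths are never active at the same time, which is where the power saving over static CMOS comes from.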
What’s important about this is that since power is only consumed during two non-overlapping phases, overall power consumption is lower than static CMOS. The downside is that clock signal routing becomes much more difficult.
The other benefit is a lower transistor count. For the 2-input NOR gate, our static CMOS design used 4 transistors, and the pseudo-NMOS implementation also uses 4. But for a 3-input NOR gate, the static CMOS implementation requires 6 transistors, while the pseudo-NMOS implementation requires only 5. The reason is that a static CMOS circuit needs one p-type transistor for every n-type, while a pseudo-NMOS circuit only needs two clocked transistors beyond the bare minimum required to implement the logic function. For a 100-input NOR gate (unrealistic, but a good example), a static CMOS implementation would require 200 transistors, while a pseudo-NMOS implementation would require only 102.
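The transistor-count arithmetic above generalizes to a simple formula - 2n transistors for a static CMOS n-input NOR versus n + 2 for the pseudo-NMOS version - which a few lines of code can verify:

```python
def static_cmos_count(n_inputs):
    # One p-type pull-up transistor for every n-type pull-down transistor.
    return 2 * n_inputs

def pseudo_nmos_count(n_inputs):
    # One logic transistor per input, plus the two clocked transistors.
    return n_inputs + 2

# Reproduces the article's figures: 4 vs 4, 6 vs 5, 200 vs 102.
for n in (2, 3, 100):
    print(n, static_cmos_count(n), pseudo_nmos_count(n))
```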
By making more efficient use of transistors and lowering power consumption, Cell’s pseudo-NMOS logic design enables higher clock frequencies. The added cost is in the manufacturing and design stages:
- As we mentioned before, clock routing becomes increasingly difficult with pseudo-NMOS designs similar to that used in Cell. The clock trees required for Cell are probably fairly complex, but given IBM’s expertise in the field, it’s not an insurmountable problem.
- Designing pseudo-NMOS logic isn’t easy, and there are no widely available libraries from which to pull circuit designs. Once again, given IBM’s size and expertise, this isn’t much of an issue, but it does act as a barrier for entry of smaller chip manufacturers.
- Manufacturing such high speed dynamic logic circuits often requires techniques like SOI, but once again, this isn’t a problem for IBM given that they have been working on SOI for quite some time now. It’s no surprise that Cell is manufactured on a 90nm SOI process.
70 Comments
scrotemaninov - Thursday, March 17, 2005 - link
#23: True, but I believe that when the SPEs access the outside memory they go through the cache. Sure, it's a lower coherency than we're used to, but it's not much worse.
Houdani - Thursday, March 17, 2005 - link
#18: Top Drawer Post.
#20: Thanks for the links!
fitten - Thursday, March 17, 2005 - link
"Given the speed of the interconnect and the fact that it is cache-coherent," Only the PPC core has cache. The individual SPEs don't have cache - they have scratchpad RAM.
#22: I believe the PPC core is a dual issue core that just happens to be 2xSMT.
AndyKH - Thursday, March 17, 2005 - link
Great article. Anand, could you please clarify something:
I had the impression that the PPE was a SMT processor in the sense that it had to be executing 2 threads in order to issue 2 instructions per clock. In other words: I didn't think the PPE control logic could decide to issue 2 instructions from the same thread at any given clock tick, but rather that it absolutely needed an instruction from each thread to issue two instructions.
After reading the article, I assume my impression isn't right, but a comment from you would be nice.
As I come to think about it, my impression is rather identical to 2 separate single-threaded in-order cores. :-)
Koing - Thursday, March 17, 2005 - link
Cell looks VERY interesting. Any of you guys seen Devil May Cry 3 on the PS2? Looks great imo, same with T5 and GT4.
Cell at first will be tough like most consoles. BUT eventually THE developers will get around it and make some very solidly good looking games.
Lets hope they are innovative and not just rehashed graphics and nothing else.
Thanks for the great article.
Koing
scrotemaninov - Thursday, March 17, 2005 - link
I really hate just dumping loads of links, but this basically is the available content on the CELL.
http://arstechnica.com/articles/paedia/cpu/cell-1....
http://arstechnica.com/articles/paedia/cpu/cell-2....
http://realworldtech.com/page.cfm?ArticleID=RWT021...
http://www.blachford.info/computer/Cells/Cell0.htm...
http://www.realworldtech.com/page.cfm?ArticleID=RW...
http://www.hpcaconf.org/hpca11/papers/25_hofstee-c...
http://www.hpcaconf.org/hpca11/slides/Cell_Public_... (slides)
mrmorris - Thursday, March 17, 2005 - link
Brilliant article, there are few places for in-depth hardcore technology presentations but Anandtech never fails.
scrotemaninov - Thursday, March 17, 2005 - link
Real concurrency is hard to do for the programmers. It's a real pain to get it right and it's hard to debug. Systematic analysis just gets too complex as there are just too many states; you end up with a huge graph/Markov-model and it's just impossible to solve it tractably.
Superscalar and SMT just try to increase ILP at the CPU level without burdening the programmer or compiler-writer. However, we've pretty much come to the end of getting a CPU to go faster - at 5GHz, LIGHT travels 6cm between clocks, and an electric PD will travel slower. As it is, in the P4 pipeline, there are at least 2 stages which are simply there to allow signals to propagate across the chip. Clearly, going faster in Hz isn't going to make the pipeline go faster.
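[Ed: the commenter's 6cm figure checks out as a back-of-the-envelope calculation - the distance light covers in one clock period at 5GHz:]

```python
c = 3.0e8  # speed of light in m/s (approximate)
f = 5.0e9  # clock frequency in Hz (5 GHz)

# Distance traveled in one clock cycle, converted to centimeters.
distance_cm = c / f * 100
print(distance_cm)  # 6.0 cm per cycle; on-chip signals travel slower still
```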
So the ONLY thing that they can do now is to put lots of cores on the same chip and then we're going to have to deal with real concurrency. IBM/Sony are doing it now with CELL and Intel will do it in a few years. It's going to happen regardless. What we need is languages which can support real concurrency. The Java Memory Model is an almost ideal fit for the CELL, but other aspects don't work out so well, maybe. We need Pi-calculus/Join-calculus constructs in languages to be able to really deal with these cpus efficiently.
Your comments about CELL not being general purpose enough are a little wrong. IBM /already/ has the CELL in workstations and is evaluating applications that will work well. Given the speed of the interconnect and the fact that it is cache-coherent, I think we'll be seeing super-computers based on many CELLs; it's an almost ideal fit (as it is, you've almost got ccNUMA on a single chip). Also, bear in mind that this is IBM's 5th (or 6th?) generation of SMT in the PPE - they've been at it MUCH longer than Intel - IBM started it in the mid-90s around the same time that the Alpha crew were working on the EV8, which was going to have 8-way thread-level parallelism (got canned sadly).
Also, if you look at IBMs heavy CPUs - the POWER5, that has SMT and dispatches in groups of 8 instructions, not the 3/4 that AMD/Intel manage.
What I'm saying here, is that sure, the SPEs don't have BPTs of BTBs, they're all 2-way dispatch and not greater, but, they all run REALLY fast, they have short pipelines (so the pain of the branch misprediction won't be so bad), and, IBM have had software branch prediction available since the POWER4, so they've been at it a few years and must have decided that compilers really can successfully predict branch directions.
Backwards compatibility doesn't matter. Sure, Microsoft took several years to support AMD64 but that didn't stop take up of the platform - everyone just ran Linux on it (well, everyone who wanted to use the 64bit CPU they'd bought). It'll only be a few months after the CELL is out that we'll have to wait until Linux can be built on it. 100quid says Microsoft will never support it.
Frankly, considering that it's far more likely to go into super-computer or workstation environments, no one there gives a damn about backwards compatibility or Windows support. No one in those environments /wants/ a damn paper clip.
Reflex - Thursday, March 17, 2005 - link
#14: Replace 'lazy developers' with 'developers on a budget' and you will have a true statement. It's not an issue of laziness, it's an issue of having the budget to optimize fully for a platform.
GhandiInstinct - Thursday, March 17, 2005 - link
Wow, Super CPU and SUPER RAMBUS? AHHHH! This will replace my computer. PS3, that is.