49 Comments
Badelhas - Monday, November 10, 2014 - link
Second!
Nenad - Monday, November 10, 2014 - link
Seeing the "these should be taken with a grain of salt" comment, I wonder if anyone has ever done a comparison of past announcements against how accurate they turned out to be. Was Imagination's announcement for the 6XT close to reality? How about similar pre-release figures given by NVidia, Intel, etc.?
It would make an interesting article on its own ;p
chizow - Monday, November 10, 2014 - link
I would say, given the recent benchmark performance of both the Nexus 9 and the iPad Air 2, that both Nvidia and Imagination/Apple delivered on the performance increases they claimed this time last year and at CES, in the form of the A8X and the Tegra K1 (Denver). There are really only two players in the tablet ARM-based SoC market right now if you are looking at pure performance, Apple with PowerVR and Nvidia; everyone else is ~1.5 generations behind.
kron123456789 - Monday, November 10, 2014 - link
That's why I wanna see what beast Nvidia will announce at CES 2015.
LostAlone - Tuesday, November 11, 2014 - link
It's kinda silly to say only Tegra and Apple exist in the SoC market at the moment just because those two happen to have had their latest designs reach market in high-profile tablet devices. There hasn't been a high-end ARM tablet put on the market in recent months that wasn't Tegra or an iDevice. So sure, they are technically winning that extremely specific race, but in the wider mobile sector (which uses the exact same SoCs most of the time) no one has a clear lead.
The most recent Snapdragon flagship won't be around for a while yet, and the Exynos 5433 was only in some Note 4s and has yet to appear in a mass-market device. There's no suggestion that either of those two won't be competitive with Apple and Nvidia going forward.
Obviously more recent chips tend to be better, so the ones we've seen most recently are very likely the most powerful available at this exact second. But they certainly don't blow anything out of the water. Apple and Nvidia are doing well at this exact moment; their chips compare well in benchmarks to what else is on the market. But those benchmarks are certainly not much ahead of what was already out when they were released, there's nothing to suggest that they are now somehow ahead of Qualcomm's next generation, and the Exynos 5 series is certainly on par with them.
When the Tegra 4 came to market it was impressively ahead of the curve, in graphics especially. But sadly no one bought in all that much back then, and everyone else has continued pushing on with their own designs and closed that gap to the point where it's an even race. There is no generation gap whatsoever.
And the thing is that none of this is even relevant anymore anyway. Mobile devices don't need better graphics; we don't make use of what we have already. Even if you do happen to be 15 and (somehow) have a flagship phone, I severely doubt that even someone with your buckets of free time and long stretches waiting for buses and/or being driven around actually appreciates the (slight) difference between this year's and last year's flagships in terms of GPU power alone.
Beyond that, the amount of CPU horsepower in modern devices is laughably overblown, to the point where, outside benchmarking, you genuinely have to try very, very hard to tap even a fraction of what the processor can do. At this point in the evolution of the mobile space, the specs just don't make enough of a difference to wow people anymore. Last year's flagships are still fine devices. The year before's are too. And the year before that's. They might not all have the latest software, but they still do what they originally did just as well as they ever did it.
Just stop buying into the hype.
jospoortvliet - Tuesday, November 11, 2014 - link
You're pretty much right, but you neglect to mention two things: improvements in features (like always-on voice detection, higher resolutions, better cameras, etc.), and then there are improvements like battery life, which, as an owner of the One (M7), I can appreciate: I don't care for the features or performance of the One (M8), but I would gladly spend a few hundred bucks for its battery life. And the move from a Snapdragon 600 to an 80x is big in that regard...
kron123456789 - Tuesday, November 11, 2014 - link
When the first Tegra 4 device came out, there was already the Snapdragon 800 (devices with both SoCs arrived in summer 2013). When the first device with the Tegra K1 came out (the Xiaomi MiPad), there wasn't anything that could compete with it. When the second device came out (the Shield Tablet), there still wasn't anything. And when the third K1 device came out (the Nexus 9, with the dual-core Denver CPU), there was ONLY the iPad Air 2.
The K1 and A8X are the most powerful ARM SoCs right now, and that will change only in 2015. The Exynos 7 Octa and Snapdragon 805 can't compete with the K1 or the A8X. But there is one major problem with the K1 and A8X - these are SoCs for tablets, not for smartphones.
So, yeah, on the performance side, Nvidia and Apple are 1.5 generations ahead.
chizow - Wednesday, November 12, 2014 - link
Exactly, and Nvidia still has 2 major trump cards to play with Erista, which will have both 20nm and the Maxwell GPU arch, so they'll extend their lead again as early as Q2.
The problem for Nvidia (and Apple) is that their performance advantages right now are going into a black hole; there's nothing that really takes advantage of this lead, and therefore they can't use it to establish clear market dominance (at least for Nvidia on the Android side). Their challenge will be to grow the gaming ecosystem to take advantage of this benefit, but in the meantime they will continue to iterate and dominate the competition.
chizow - Wednesday, November 12, 2014 - link
Who's buying into the hype when there's no need to? The latest two products on the market show Nvidia and Apple are head and shoulders above the competition, and there is nothing on Qualcomm's roadmap that will challenge this until maybe next year, by which time Nvidia will surely have punted the ball out of their range again with Erista (Maxwell and 20nm). Apple won't have as many levers to pull to improve the A9, unless they also go with Maxwell GPU IP.
http://anandtech.com/show/8670/google-nexus-9-prel...
lilmoe - Monday, November 10, 2014 - link
In response to the last paragraph, I thought the only competitor for Imagination's GPU was Mali (since both are the only GPUs that can be licensed to integrators)... Kepler/Maxwell and Adreno aren't in the "business space".
chizow - Monday, November 10, 2014 - link
Nvidia GPU SoC tech can be licensed, it just hasn't happened (yet).
takeship - Tuesday, November 11, 2014 - link
Nvidia SoC tech can't be licensed in practice. The terms they've put out are intentionally prohibitive. What they can do, though, is legally claim to offer "licenses", price themselves out of the market, and then sue all the market players when no one opts to spring for one. Which is also what they've done.
chizow - Wednesday, November 12, 2014 - link
And you know this how? I think the various ongoing lawsuits will be the impetus to establish the first licensees of Nvidia IP.
hahmed330 - Monday, November 10, 2014 - link
In such a high-end configuration, why bother with FP16 FLOPS? It's a waste of die space. How relevant is FP16 in mobile configurations nowadays, though? How big is the quality degradation on a 7-inch screen?
Ryan Smith - Monday, November 10, 2014 - link
"How relevant is FP16 in mobile configurations though nowadays?"
Actually, very. FP16 is still used heavily because it can deliver the necessary quality at lower power. Games will tend towards FP32, but for desktop composition and the like, FP16 is more than plenty, and that is a task the GPU is doing frequently.
lefty2 - Tuesday, November 11, 2014 - link
When you use "lowp" in your shader it is using FP16. Shaders written for mobile games typically use lowp as much as possible, as it greatly increases performance, but desktop GPUs do not use it at all.
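To make the lowp point concrete, here is a minimal sketch of a GLSL ES fragment shader, embedded as a C string the way it would be handed to glShaderSource. The shader body and names are purely illustrative, but the precision qualifiers are the real mechanism: on most mobile GPUs mediump (and often lowp) maps to FP16-or-narrower ALU paths, which is where the performance and power win comes from.

```c
/* Illustrative GLSL ES 1.00 fragment shader leaning on low precision.
 * lowp is plenty for 8-bit color; mediump typically maps to FP16. */
static const char *frag_src =
    "precision mediump float;                      \n" /* default float precision      */
    "uniform sampler2D u_tex;                      \n"
    "varying mediump vec2 v_uv;                    \n" /* FP16 is enough for small UVs */
    "varying lowp vec4 v_color;                    \n" /* 8-bit color fits in lowp     */
    "void main() {                                 \n"
    "    lowp vec4 texel = texture2D(u_tex, v_uv); \n"
    "    gl_FragColor = texel * v_color;           \n" /* the whole chain stays narrow */
    "}                                             \n";
```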
coder111 - Monday, November 10, 2014 - link
Ok, so are they going to provide proper Linux drivers?
http://en.wikipedia.org/wiki/Free_and_open-source_...
If not, they SUCK!
On the other hand, seeing mobile devices/tablets get a similar level of GPU as dedicated consoles like the PS3 and Xbox 360 is nice. Well, OK, the PS3 and Xbox 360 are now a generation old, but still, these are MOBILE GPUs.
GC2:CS - Monday, November 10, 2014 - link
"Though the first PowerVR Series 6XT-equipped products have only recently launched – including the unexpectedly powerful iPad Air 2..."
You stole that right out of my mouth... The iPad Air 2 has an insanely fast GPU; they just kind of crushed it in that area.
The A8X matches the best Nvidia offering at significantly lower power, with insane sustained performance.
And now you've got the Series 7 GPUs... With rumors of Apple pushing heavily for TSMC's 16nm FinFET Plus process, which brings some big power savings, coupled with these GPUs... I think we will see that yearly doubling of GPU performance once more (and possibly for the last time, at least while we are stuck with silicon).
The question is how big those gains are. With Series 6XT we saw a 50% efficiency gain, but a far bigger GPU as well. As TSMC 16FF+ is not that much of a shrink at all, it could be a problem going with an even bigger architecture... Even then, an A9 and A9X built on a mature 20nm process with a GT7400/7600 and somewhat higher clock speeds could bring some pretty big improvements as well.
Then there is Nvidia. Can Apple defeat them like they did this year?
Or will Nvidia finally show what they can do for mobile?
pSupaNova - Monday, November 10, 2014 - link
I think you have it the wrong way round. Nvidia has the more powerful CPU core, uses fewer transistors, and was running on a worse process.
And, if you had bothered to read this article, Nvidia has implemented more functions in its GPU: tessellation, DirectX 11, OpenGL 4.4 & the Android Extension Pack.
AnandTech has the K1 tabs beating the iPad Air in the GPU tests.
http://www.anandtech.com/show/8670/google-nexus-9-...
lucam - Monday, November 10, 2014 - link
Really? It seems (from preliminary results) the Nexus 9 is behind, although the difference is negligible.
We must have different interpretations of numbers and words...
darkich - Monday, November 10, 2014 - link
I wonder why Samsung has dropped PowerVR for their Exynos in favor of Mali.
From what we've seen in the last couple of years, Imagination's GPUs seem superior.
Laxaa - Monday, November 10, 2014 - link
I would love to see this architecture used in a PS Vita successor. Too bad it will never happen.
darkich - Monday, November 10, 2014 - link
600 GFLOPS on, say, a 720p 5.5" screen should easily allow close to PS4-level graphics.
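As a rough sanity check of that claim, one can compare theoretical FLOPS per pixel. This is a back-of-envelope sketch, not a real performance comparison: the 600 GFLOPS figure is the one assumed in the comment above, and ~1843 GFLOPS is the commonly cited theoretical peak for the PS4.

```c
#include <stdio.h>

int main(void) {
    double ps4_gflops = 1843.0, ps4_pixels = 1920.0 * 1080.0; /* 1080p */
    double mob_gflops =  600.0, mob_pixels = 1280.0 *  720.0; /* 720p  */

    /* theoretical kFLOPS available per pixel */
    printf("PS4 @ 1080p:   %.0f kFLOPS/pixel\n", ps4_gflops * 1e6 / ps4_pixels);
    printf("Mobile @ 720p: %.0f kFLOPS/pixel\n", mob_gflops * 1e6 / mob_pixels);
    return 0;
}
```

That works out to roughly 889 vs. 651 kFLOPS per pixel - the same ballpark, which is the commenter's point, though memory bandwidth and sustained TDP still differ enormously.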
przemo_li - Tuesday, November 11, 2014 - link
And sub-hour time on battery => useless device. Can't do it!
The PS4/Xbox One consume more than 200W.
Smartphones are limited to 5W.
So to make them equally strong, smartphone GPUs must be some 40-50x more energy efficient!
darkich - Tuesday, November 11, 2014 - link
Newsflash... the technology is ADVANCING.
And mobile (ultra-low-power) GPU technology is actually the fastest-advancing technology so far.
The GX6650 reaches well over 300 GFLOPS in the same TDP in which the SGX543MP4 used to reach around 50 GFLOPS in the Vita.
The Adreno 430 will reach over 250 GFLOPS in a 3W TDP, and will be out in smartphones in a few months.
And this PowerVR Series 7 will probably reach 500 GFLOPS in under a 5W TDP.
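Taking the figures quoted in this sub-thread at face value, the efficiency trend is easy to tabulate. Note the TDPs here are the rough numbers claimed above (the Vita's ~5W in particular is an assumption), not measured data.

```c
#include <stdio.h>

int main(void) {
    /* GFLOPS and TDP figures as claimed in the comments above */
    struct { const char *name; double gflops, watts; } gpus[] = {
        { "SGX543MP4 (Vita)",         50.0,   5.0 },
        { "PowerVR GX6650",          300.0,   5.0 },
        { "Adreno 430",              250.0,   3.0 },
        { "Console-class (PS4/XO)", 1843.0, 200.0 },
    };
    for (int i = 0; i < 4; i++)
        printf("%-26s %6.1f GFLOPS/W\n",
               gpus[i].name, gpus[i].gflops / gpus[i].watts);
    return 0;
}
```

By that (crude) measure the current mobile parts land at 60-80+ GFLOPS/W, the Vita-era part at ~10, and a console-class GPU at ~9 - the "technology is advancing" argument in one table.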
przemo_li - Wednesday, November 12, 2014 - link
Can you defend that claim? What if we normalize it against power used?
Because if we look at FPS/W, it's not so rosy for mobile.
The reason is simple: mobile GPUs have also had the biggest power-budget increases! Bigger and bigger batteries meant that performance didn't need to be limited.
(Compared to dGPUs, which have a hard maximum wattage that cannot be exceeded.)
So it's:
watts + node process + architecture vs. node process + architecture.
(And mobile gets better node processes quicker, too!)
darkich - Tuesday, November 18, 2014 - link
Oh, I wouldn't lay the claims out if I couldn't defend them. It is super easy, and should really be obvious to anyone with some interest in mobile chips.
Here, behold the technological advancement.
http://www.gsmarena.com/samsung_galaxy_alpha_vs_on...
The Alpha has a smaller battery and the same sized screen at the same resolution, yet its GPU is on average 8 TIMES faster.
And the cherry on top - go compare battery endurance between the two.
And check this one
http://www.gsmarena.com/compare.php3?idPhone1=4620...
The iPad Air 2 has only ~50% of the battery, with the same sized screen and resolution, and its GPU is approximately 5 TIMES more powerful.
Battery endurance?
Basically the same.
How about this one?
http://www.gsmarena.com/compare.php3?idPhone1=5665...
The Note 3, with a bigger and far denser screen and almost equal battery capacity, actually improves on the endurance.
But the real kicker - its Adreno 330 can do 158 GFLOPS, compared to the Note 2's 20 GFLOPS!
I could go on and on.
Of course, the process difference plays its part, but the architectural efficiency improvements are much greater (for example, the Mali T8xx and PowerVR Series 7 designs this year are bringing yet another 40% and 60% efficiency improvement, respectively, on the SAME process and at the same clock speed compared to the previous generation).
And the overall advancement in mobile GPU FPS per watt is nothing short of unprecedented.
PS: Sorry about the late reply.
darkich - Tuesday, November 18, 2014 - link
Correction... the iPad Air 2 has a more than 10 times (probably around 15 times) faster GPU than the iPad 3, and one about 5 times faster than the iPad 4's.
A ~50% improvement, combined with a node shrink, is nothing to sneeze at. I really thought we would have stopped seeing these enormous yearly improvements by now. Glad I was wrong!
What beasts next year's iPads and iPhones will be.
asendra - Monday, November 10, 2014 - link
This thing should be very close to, if not better than, the Intel HD 5000, which is the current MacBook Air's GPU.
That should give Intel a lot to think about.
darkich - Monday, November 10, 2014 - link
Actually, there is no comparison, considering that the HD 5000 draws 3 times more power.
In an unrestricted environment, this should absolutely demolish Intel's GPU, even the new architecture brought with the Core M.
But the Maxwell Tegra GPU looks even worse for Intel.
darkich - Monday, November 10, 2014 - link
It really will be interesting to see what happens to the Core M if Intel decides to continue using PowerVR in the Atom lineup... Intel's own integrated graphics are no match at all in a given power envelope.
Alexvrb - Monday, November 10, 2014 - link
Intel decides which design to implement... if they want to intentionally use fewer clusters, they can. If they want to limit max GPU clocks and willfully deploy a gimped IMC to starve it of bandwidth, they can. They'll make sure Atom still slots in below the higher-end Core solutions.
What would be more interesting is if they were to implement a full-blown Series 7 setup in some of their higher-end chips. 16 clusters with aggressive clocks would definitely be interesting. But it'll never happen.
darkich - Tuesday, November 11, 2014 - link
But that's the thing: the sole purpose of the Atom chips is to compete in the most competitive chip environment there is - battling ARM in an effort to gain some ground in mobile.
If Intel intentionally cripples Atom just to stay below the Core M, then it can continue to watch ARM chips reign supreme.
ET - Tuesday, November 11, 2014 - link
I'd really love to see Intel using a high-performance core for tablet CPUs. I know Intel plans to improve on Bay Trail's GPU performance, but I think we need something really good. (Then again, using PVR might result in driver-related problems.)
przemo_li - Tuesday, November 11, 2014 - link
They aren't.
They have their own tech, and they are sticking to it.
(Hint: they use FLOSS drivers for Android; you can check the code repos to see what's coming.)
They still have some good arguments going for them:
1) They are still the kings of process nodes.
2) They have open-source GPU drivers. (Game devs do not need to sit in front of a black box, guessing why their code is so slow!)
3) They still have a huge cash stockpile.
I'm not saying that 1-3 mean they will win lots of designs, but they can wait for their turn.
djgandy - Monday, November 17, 2014 - link
2) Yeah, they know why it's slow: because it's running on an Intel GPU.
przemo_li - Tuesday, November 11, 2014 - link
Intel GPU have (in iris configs) 500 mb of on die cache.
Can't do on smarphone.
TheinsanegamerN - Tuesday, November 11, 2014 - link
First of all, Iris Pro has 128MB of cache, not 500MB. Second, you are comparing a chip that pulls ~55 watts to a GPU used in sub-5-watt chips; not even a close comparison. Third, if this PowerVR 7 can hit 300 GFLOPS, then it will already be a third of the way to matching the fastest version of Iris Pro (832 GFLOPS), and heck, the A8X is already a fourth of the way there with 231 GFLOPS. Fourth, it's called a smartphone, not a smarphone.
przemo_li - Wednesday, November 12, 2014 - link
Correct, 128MB.
But that still means unachievable performance.
(Not saying that Intel could actually squeeze that into smartphones on their 22nm process.)
vFunct - Monday, November 10, 2014 - link
Has the 6XT been confirmed to be in the A8X?
Wasn't it only announced earlier this year?
Ryan Smith - Monday, November 10, 2014 - link
Yes. The A8 and A8X use Series6XT (the presence of ASTC is a dead giveaway).
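For reference, the kind of check that makes ASTC a "dead giveaway" is a one-line query against the GLES extension string. A minimal sketch, assuming a current EGL/GLES context is already bound (the function name is ours, purely illustrative):

```c
#include <string.h>
#include <GLES2/gl2.h>

/* Returns nonzero if the driver exposes ASTC LDR texture compression -
 * the feature whose presence points to a Series6XT-class GPU. */
int has_astc(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    return ext && strstr(ext, "GL_KHR_texture_compression_astc_ldr") != NULL;
}
```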
michael2k - Monday, November 10, 2014 - link
Isn't that crazy? It was only announced in January, and the first shipping implementation arrived only 10 months later; that means the first working silicon must have existed before August!
milli - Tuesday, November 11, 2014 - link
Public announcement ≠ availability to licensees
lefty2 - Tuesday, November 11, 2014 - link
It's never been officially confirmed. They just assume it's 6XT. I wonder if Apple could get early access to features of the 7XT, seeing as they own 20% of the company.
michael2k - Tuesday, November 11, 2014 - link
The presence of texture compression HW implies that it's a 6XT.
lefty2 - Tuesday, November 11, 2014 - link
Yes, but Apple has a special relationship with Imagination Technologies. Who's to say that it's not a "6XT deluxe", with some features from the 7 series?
milli - Tuesday, November 11, 2014 - link
Ryan, is there no way to figure out the actual clock frequency of the GPU in the A8X?
LeptonX - Thursday, November 13, 2014 - link
People comparing GPU performance by GFLOPS numbers are being unwise. GFLOPS can be used only as an approximation of GPU performance, with a huge margin of error that can exceed 100%. The Radeon 4890 has more GFLOPS than the GTX 480, and which one is faster? The GTX 480, and by a factor of 2.
So don't be so quick to compare performance across architectures by one simple metric.
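To spell that example out: theoretical peak is just shader units × clock × 2 (an FMA counts as two FLOPs). By that metric the two cards are nearly identical despite the ~2x gap in real games; the unit counts and clocks below are the commonly published specs for these cards.

```c
#include <stdio.h>

int main(void) {
    /* peak GFLOPS = units * clock (GHz) * 2 FLOPs per FMA */
    double radeon_4890 = 800 * 0.850 * 2; /* 800 SPs   @  850 MHz */
    double gtx_480     = 480 * 1.401 * 2; /* 480 cores @ 1401 MHz */

    printf("Radeon HD 4890: %.0f GFLOPS\n", radeon_4890); /* ~1360 */
    printf("GTX 480:        %.0f GFLOPS\n", gtx_480);     /* ~1345 */
    return 0;
}
```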