B3an - Tuesday, March 18, 2014 - link
This is very interesting. I'd love to play a ray-tracing-based game, even if it's just for something different and to see how it handles lighting. It's always the UK companies that are most innovative.
nathanddrews - Tuesday, March 18, 2014 - link
Download Quake Wars: Ray Traced and see for yourself: http://youtu.be/mtHDSG2wNho
B3an - Tuesday, March 18, 2014 - link
Thanks, but I've seen that before and can never find anywhere to download it from...
SarahKerrigan - Tuesday, March 18, 2014 - link
Imagination acquired their raytracing tech from Caustic, which was, to the best of my knowledge, American. But I suppose "always the UK companies who blow the most money on M&A" doesn't sound as good.
B3an - Tuesday, March 18, 2014 - link
Yet it takes Imagination to actually do something new and innovative with it. And if you're going to be a dick about it: everyone knows US companies buy out everything; that's why you have so few options. Facebook, Google, MS, and Apple constantly acquire companies. And there's no competition for ISPs or phone networks... just big players and monopolies.
ddriver - Tuesday, March 18, 2014 - link
Hardly the best market segment in which to realize ray-traced graphics. Ray tracing is brute-force, power-hungry, and memory-hungry, not exactly the kind of task that scales well on mobile platforms.
BTW, you can make very ugly graphics with ray tracing, just as you can make amazing graphics without it.
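To put a number on "brute-force", here is a minimal C++ sketch of naive primary-ray casting (the Sphere type and the scene are invented for the example): every pixel tests every object, so one 1080p frame against just 1,000 objects is already roughly two billion intersection tests, before any bounces, shadows, or shading.

    #include <cmath>
    #include <cstdio>
    #include <vector>

    struct Vec3 {
        float x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
        Vec3 normalized() const {
            float n = std::sqrt(dot(*this));
            return {x / n, y / n, z / n};
        }
    };

    struct Sphere { Vec3 center; float radius; };

    // Ray-sphere test for a unit-length direction: does the discriminant of
    // |o + t*d - c|^2 = r^2 have a real solution? (Sign of t is ignored; we
    // are only counting work here, not shading.)
    bool hits(const Vec3& o, const Vec3& d, const Sphere& s) {
        Vec3 oc = o - s.center;
        float b = oc.dot(d);
        float c = oc.dot(oc) - s.radius * s.radius;
        return b * b - c >= 0.0f;
    }

    int main() {
        const int W = 1920, H = 1080;
        std::vector<Sphere> scene(1000, Sphere{{0.0f, 0.0f, -5.0f}, 1.0f});
        long long tests = 0, hitCount = 0;
        for (int y = 0; y < H; ++y) {            // every pixel...
            for (int x = 0; x < W; ++x) {
                Vec3 d = Vec3{(x - W / 2) / float(W),
                              (y - H / 2) / float(W), -1.0f}.normalized();
                for (const Sphere& s : scene) {  // ...tests every object
                    ++tests;
                    hitCount += hits({0.0f, 0.0f, 0.0f}, d, s);
                }
            }
        }
        std::printf("%lld tests, %lld hits, for ONE frame\n", tests, hitCount);
    }

Real ray tracers cut this down with acceleration structures such as BVHs, and traversing those structures efficiently is exactly the sort of work a fixed-function block can do at far lower power than shader code, which is presumably Imagination's bet here.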
errorr - Tuesday, March 18, 2014 - link
This is where people who are used to 3D on desktop architectures assume the same things hold for tile-based rendering. Tile-based architectures have a bunch of trade-offs, but one of the great things is that geometry is computed before any shading. Also, unlike on desktop architectures, the various parts of the GPU are ideally always doing some task, at the cost of latency. Different trade-offs than what you outline.
Mondozai - Tuesday, March 18, 2014 - link
Why do you insist on using logic in the internet? Don't you understand that ddriver prefers to rant in a deranged fashion undisturbed?
Mondozai - Tuesday, March 18, 2014 - link
*should say "on the internet".
ddriver - Tuesday, March 18, 2014 - link
I am commenting on the article, unlike you with your personal bashing. Get a life!
ddriver - Tuesday, March 18, 2014 - link
Correct me if I am wrong, but this is the case with desktop graphics as well. Shading is a dedicated stage that already works on rasterized geometry, and you cannot rasterize before you compute the geometry. So what are you actually saying?
errorr - Wednesday, March 19, 2014 - link
The geometry isn't necessarily rasterized. The geometry used is a basic representation that allows the tasks to be broken up. Every tile of a scene is in a different stage after the geometry is computed and the scene is sliced into tiles. Shading, coloring, early culling, and late culling are all asynchronous. Draw calls are processed as they come in, often before an entire frame is complete, which is why excessive draw calls can slow down a tile-based system.
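To illustrate the slicing described here, a simplified C++ sketch of the binning pass (tile size and data structures are invented for the example; this is not PowerVR's actual design): geometry is sorted into screen tiles first, and only then is each tile shaded, independently of the others.

    #include <algorithm>
    #include <vector>

    // Screen-space bounding box of a projected triangle; vertex data omitted.
    struct Triangle { float minX, minY, maxX, maxY; };

    constexpr int TILE = 32;  // 32x32-pixel tiles, a plausible size for a TBDR

    // Pass 1 (binning): record, for each tile, which triangles touch it.
    std::vector<std::vector<int>> binTriangles(const std::vector<Triangle>& tris,
                                               int width, int height) {
        const int tilesX = (width + TILE - 1) / TILE;
        const int tilesY = (height + TILE - 1) / TILE;
        std::vector<std::vector<int>> bins(tilesX * tilesY);
        for (int i = 0; i < (int)tris.size(); ++i) {
            const Triangle& t = tris[i];
            int x0 = std::max(0, (int)t.minX / TILE);
            int x1 = std::min(tilesX - 1, (int)t.maxX / TILE);
            int y0 = std::max(0, (int)t.minY / TILE);
            int y1 = std::min(tilesY - 1, (int)t.maxY / TILE);
            for (int ty = y0; ty <= y1; ++ty)
                for (int tx = x0; tx <= x1; ++tx)
                    bins[ty * tilesX + tx].push_back(i);
        }
        return bins;
    }

    // Pass 2 (per tile): each bin is rasterized and shaded on its own, so at
    // any moment different tiles sit in different stages of the pipeline.

Because every draw call feeds the binner again, a flood of small draw calls keeps tiles from ever being considered complete, which matches the slowdown described above.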
DanNeely - Tuesday, March 18, 2014 - link
A big question is how much die size this is going to add. If it's a major hit in SoC area and cost, it might never get beyond the initial chicken-and-egg problem.
kpb321 - Tuesday, March 18, 2014 - link
This seems like something that would very likely be implemented by Apple first. They build their own SoCs and have typically been willing to spend more die space on graphics performance than Android phones. They could introduce it alongside a new phone with the new SoC and a new version of iOS containing new APIs to make this easier for developers. It's also eye candy that is pretty easy to show off as the headline feature of a new phone.
Kevin G - Tuesday, March 18, 2014 - link
I'll second this notion, as Apple was one of the first to really push OpenCL. Adding hardware-accelerated ray tracing would be another instance of Apple moving more to dedicated hardware and off of CPUs.
vFunct - Wednesday, March 19, 2014 - link
I think Apple is moving to their own GPU designs, given their hiring spree over the last few years.
Alexvrb - Wednesday, March 19, 2014 - link
That would be a huge mistake on their part, at least in the near future. They've been using aggressive, high-performing PowerVR designs from the start. I don't think anything they cook up on their own will touch Wizard any time soon, so I really doubt they're going to abandon PowerVR designs for the foreseeable future. With that being said, even if it does use a little more power and take up more space, that doesn't mean it won't at least end up in an iPad initially, and in the iPhone later (as a smaller variant of one of the smaller Rogue designs, perhaps).
Anyway, a popular iOS device or two deploying this technology would help drive adoption and push OpenRL onto other platforms.
name99 - Wednesday, March 19, 2014 - link
There are many ways for Apple to achieve their goals. For example, there is no reason they can't license PowerVR designs, but with permission to add whatever they want to them. That would allow them to:
(a) integrate the design more closely into future SoCs (e.g. a more efficient L3 shared between CPU and GPU, likewise more efficient coherency and MMU interaction)
(b) make the design more aggressive in directions they want, even if PowerVR has concluded these changes are too specialized for their mass audience (basically the same thing as Apple pushing 64-bit faster than ARM itself pushed it)
And I do agree with kpb321. This is likely to be "usefully real" with Apple first. Regardless of whether or not they're the first to ship this HW, they'll probably be the first with APIs that integrate this functionality into the rest of the graphics stack, along with cool [if gratuitous] UI usages just for fun.
Alexvrb - Wednesday, March 19, 2014 - link
I was replying to a post that posited that Apple might be going to their own GPU design, i.e. not licensing a PowerVR design. As far as aggressive goes, PowerVR has been quite aggressive... case in point: Wizard, a hybrid RT/raster setup. As far as better integration goes, Apple is already responsible for integrating the PowerVR designs into its SoC. If it wants to make changes, and it has the resources and time, I don't think PowerVR is going to stand in the way: it's Apple's SoC and they licensed the design.
Also, if Apple has any particular wants or needs above and beyond that, they could work directly with PVR (I'd bet they already do) instead of waiting until a design is finished and then trying to modify it to better suit their needs. Eventually they could even make something in-house, but I don't see it happening any time soon if they don't want to fall behind.
ShattaAD - Friday, March 21, 2014 - link
I could so see Apple using this in their UI and in the Maps app's Flyover implementation, to render lighting and transparency on buildings' glass panels for example; the possibilities are limitless. Hardware ray tracing acceleration could greatly speed up Flyover's rendering when zooming in to high-detail areas, in addition to creating realistic shadows and AO from accurate global illumination to simulate the real time of day. If, say, you're looking at Flyover images in the evening, the cameras and ambient light sensor on the phone could detect the current lighting situation in real time and replicate it in the Flyover imagery using ray tracing techniques!
TETRONG - Tuesday, March 18, 2014 - link
I hinted at this about a month ago on VR-Zone:
"Apple is invested in Imagination because of graphics..not CPU. Ray-tracing is set to become important in the near future and Imagination has something of a lock on this."
http://vr-zone.com/articles/arms-ian-ferguson-inte...
twtech - Tuesday, March 18, 2014 - link
The biggest boost for a technology like this would be to get something like a console design win.
If, using their hardware, you can make a game with ray-tracing effects that renders at 30+ FPS at normal game resolutions, then I certainly think there are developers who would be interested in what they could do with that.
The problem is that very few developers want to write a bunch of custom rendering code and build special art for the sake of a handful of players who have some custom hardware, unless maybe you're paying them to do it.
DanNeely - Tuesday, March 18, 2014 - link
Unless flagging sales force Nintendo to refresh the Wii U, there aren't likely to be any major consoles for Imagination to win anytime soon. Low-end Android stick platforms don't count at present, because none of them operate in sufficient volume to drive technology rather than just being a target for ports of existing titles and low-budget indie games.
Kevin G - Tuesday, March 18, 2014 - link
Well, PowerVR did make it into the Dreamcast back in the day....
vFunct - Wednesday, March 19, 2014 - link
Maybe a new console from Samsung/Google/Apple/Amazon?
BMNify - Tuesday, March 18, 2014 - link
Without a real-time fractal ray tracing block, and code to use it for things like fractal image sharpening (perhaps while turning that bitmap into a fractal 3D map), this seems a little underwhelming in 2014. Are they even going to include a dedicated, fast, low-power WideIO FIFO memory block in there any time soon?
ryszu - Tuesday, March 18, 2014 - link
WideIO and other high-bandwidth memory technologies aren't something we implement. That's really something for our customers to take care of (with our help, of course).
BMNify - Wednesday, March 19, 2014 - link
ryszu, it may be a false assumption on my part, but I would have thought you'd at least done some initial trials with your test chips, with both WideIO and 2/4-stack HMC, to see where you can optimize the designs for the coming UHD-1 and especially the later UHD-2 7680x4320 10-bit/12-bit screens.
Surely someone needs to finally take that step and actually use WideIO on their IP SoC before Samsung twist it and make "widcon" the official bad name ;) It was supposed to be out in retail last year, and we all know it would make your kit fly a lot faster longer-term at lower power usage. So can't Imagination use some, and STRONGLY advise the next customer to use it as a showpiece? Please ask the board to authorise that ASAP, AS PEOPLE WILL BUY IT.
Regarding the fractal block and generic fractal image sharpening code: you would be wise to just download a current FFMPEG and add any such fractal ray tracing code as a plugin filter, to get maximum coverage. Just ask them on IRC to review any such code you might release, to get it included easily, of course....
ryszu - Wednesday, March 19, 2014 - link
ryszu - Wednesday, March 19, 2014 - link
Our emulation platform is fully flexible in that respect. We can connect the GPU to something that represents a next-gen interconnect for testing, to make sure it works well if the customer decides to implement it.
I'll take a look at the fractal image sharpening, thanks for the pointer.
Kevin G - Tuesday, March 18, 2014 - link
Kevin G - Tuesday, March 18, 2014 - link
As neat as this announcement is in the mobile sector, I'm eager to see what this could do on the desktop. There's far more power available there to scale this architecture and genuinely reach movie-level quality in real time at 1080p.
errorr - Tuesday, March 18, 2014 - link
Imag. Tech. submitted the linked patent last week outlining their ray tracing. There's much more in their charts and block diagrams: https://docs.google.com/viewer?url=patentimages.st...
errorr - Tuesday, March 18, 2014 - link
Also see https://docs.google.com/viewer?url=patentimages.st... This is a higher-level view of the changes to their architecture/pipeline caused by the above.
tipoo - Tuesday, March 18, 2014 - link
Kind of makes me wonder what PowerVR could do if they scaled up to discrete desktop GPU power-draw levels. I know they exited that market before, but it would be interesting given their current lead in performance per watt at the low end.
MrSpadge - Tuesday, March 18, 2014 - link
Modern rasterizer engines do a good job of coping with the complexities and problems of advanced lighting. But if ray tracing with dedicated hardware could reduce the cost of this (power, die size, development), then there's a clear market for such hybrid engines: use either technique where it makes the most sense.
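A rough sketch of what such a hybrid frame could look like, in illustrative C++ (all types, pass names, and ray queries here are invented stand-ins, not any vendor's real API): rasterize visibility as usual, then spend rays only on the effects rasterization handles poorly, such as mirror reflections and hard shadows.

    #include <cstdint>
    #include <vector>

    // Toy stand-ins for a real engine's resources (illustrative only).
    enum MaterialFlag : std::uint8_t { DIFFUSE, MIRROR };
    struct PixelData { float depth; MaterialFlag flag; float color[3]; };
    using GBuffer = std::vector<PixelData>;

    // Stand-ins for the raster pipeline and a fixed-function ray unit;
    // declarations only, since the real calls depend on the platform.
    GBuffer rasterizeGBuffer();                  // pass 1: raster visibility
    void shadeDeferred(GBuffer& gb);             // pass 2: conventional lights
    bool shadowRayOccluded(const PixelData& p);  // any-hit ray query
    void traceMirrorBounce(PixelData& p);        // one-bounce reflection query

    // One hybrid frame: rasterize everywhere, spend rays only where they pay.
    void renderFrame() {
        GBuffer gb = rasterizeGBuffer();
        shadeDeferred(gb);
        for (PixelData& p : gb) {
            if (p.flag == MIRROR)
                traceMirrorBounce(p);            // rays for mirror surfaces
            if (shadowRayOccluded(p))
                for (float& c : p.color)
                    c *= 0.3f;                   // in shadow: darken
        }
    }

The attraction is that ray count scales with the pixels that actually need the effect rather than with scene complexity, which is where shadow-map and multi-pass costs tend to grow.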
Frenetic Pony - Tuesday, March 18, 2014 - link
I am a game developer, and implementing a hybrid raytracing engine just for one GPU vendor isn't going to happen, not in this market anyway. Maybe if everyone else got on board with a very similar set of fixed-function hardware, so porting would be easy, or if Imagination were to suddenly dominate the market, it would happen. But it isn't so, and neither is widespread adoption of this.
There could be some tech-demo-like little indie game you can play because someone is interested. Maybe if Imagination builds pro-level cards with this built in, specialized raytracing renderers will use it to accelerate stuff. But in games? No way.
Frenetic Pony - Tuesday, March 18, 2014 - link
To be clear, I'm not saying raytracing won't happen in real time at some point. This could be good research for the future. But being the only vendor with it today? That's not the route to quick adoption.
ryszu - Tuesday, March 18, 2014 - link
We're working with middleware vendors to integrate the technology, and it seems like that's a good first step for us to take. If we can convince those guys, and also talk about the details behind the integration and what it took for them to be interested, it might spur on people like yourself to experiment and see what's possible for your own technology.
We have to start somewhere, otherwise it'll never get off the ground, but nor do we expect every developer, no matter their size or resources, to all jump in straight away together. We'll take it step by step, and hopefully you'll come along for the ride when you're ready and your market is.
Frenetic Pony - Tuesday, March 18, 2014 - link
Cool! Is there some resource about what it can do? I'd love to try throwing a handful of render targets at it to see if it can do something like screen-space raytracing. Hope you guys get it off the ground; it might not be the immediate future, but being able to do something like GI by just shooting rays is going to be a lot easier.
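For reference, "GI by just shooting rays" really is about this simple once a ray query exists. A minimal one-bounce diffuse gather in C++, using textbook cosine-weighted hemisphere sampling (the traceRay stub stands in for whatever the hardware or driver would actually expose):

    #include <cmath>
    #include <random>

    struct Vec3 {
        float x, y, z;
        Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
    };

    Vec3 cross(const Vec3& a, const Vec3& b) {
        return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
    }
    Vec3 normalize(const Vec3& v) {
        float n = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
        return {v.x / n, v.y / n, v.z / n};
    }

    // Stub for the hardware/driver ray query; returns a constant "sky" so the
    // sketch is self-contained. A real version returns radiance from the scene.
    Vec3 traceRay(const Vec3& origin, const Vec3& dir) {
        (void)origin; (void)dir;
        return {0.6f, 0.7f, 0.9f};
    }

    // Map a tangent-space hemisphere sample (z = up) onto the surface normal.
    Vec3 toWorld(const Vec3& local, const Vec3& n) {
        Vec3 up = std::fabs(n.z) < 0.9f ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
        Vec3 t = normalize(cross(up, n));
        Vec3 b = cross(n, t);
        return t * local.x + b * local.y + n * local.z;
    }

    // One-bounce diffuse GI: average the light arriving over the hemisphere.
    // With cosine-weighted samples, the cosine term and the 1/pi of the
    // diffuse BRDF cancel, so the estimate is just the mean, times albedo.
    Vec3 diffuseGI(const Vec3& point, const Vec3& normal, const Vec3& albedo,
                   int numSamples) {
        static std::mt19937 rng{42};
        std::uniform_real_distribution<float> uni(0.0f, 1.0f);
        Vec3 sum{0, 0, 0};
        for (int i = 0; i < numSamples; ++i) {
            float u1 = uni(rng), phi = 6.2831853f * uni(rng);
            float r = std::sqrt(u1);
            Vec3 local{r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1)};
            sum = sum + traceRay(point, toWorld(local, normal));
        }
        Vec3 mean = sum * (1.0f / numSamples);
        return {mean.x * albedo.x, mean.y * albedo.y, mean.z * albedo.z};
    }

Compare that with the probe grids and screen-space approximations needed to fake the same effect on a pure rasterizer, and the appeal of a cheap hardware ray query is clear, even at a handful of samples per pixel.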
zodiacfml - Wednesday, March 19, 2014 - link
I agree. One has to try again. I believe in ray tracing; the image quality is just undeniable.
Slin - Tuesday, March 18, 2014 - link
With all iOS devices using PVR GPUs, I do kind of expect the 2015 generation to ship with this hardware, and then about two years later nearly every iOS device would have it. Looking at how quickly everyone adapted to the Retina displays and now the 64-bit CPUs, I really do expect and hope to see this work out very well.
I wonder how one would actually develop for it, though. I am really looking forward to experimenting with such hardware in some maybe not THAT distant future :).
beginner99 - Wednesday, March 19, 2014 - link
Sounds great, and if it works as advertised I wish them success. But realistically I don't see how this can take off, especially if game developers must make a special effort to implement it. It should be in the driver, meaning that when a specific OpenGL (or DX) command/effect is called, the driver tells the GPU to use raytracing. All in all, it sounds like something that should be in desktop GPUs.
antef - Wednesday, March 19, 2014 - link
Are you telling me this is the first time this is being included in ANY GPU, including desktop ones? Why would Imagination add something like this and push for it in development if AMD and NVIDIA haven't already done so for the desktop? You'd think that if even limited real-time ray tracing were possible in combination with rasterization, it would already be happening on the desktop, where loads more power and memory are available.
asgallant - Thursday, March 20, 2014 - link
Given that Apple tends to license the highest-end GPU IP from Imagination for their SoCs, I would be quite surprised if this doesn't make it into a future iPhone/iPad. That should give the technology a sufficiently large user base to make it worthwhile for developers to implement hybrid rasterizer/ray tracer engines (assuming the resulting image-quality improvements are worth the performance tradeoffs). The onus will then be on the other SoC vendors to implement fixed-function ray tracers in the Android space (or figure out how to handle ray tracing less inefficiently in shaders). Given how long it takes the SoC vendors to converge on any common feature set, I'd say we have a good long wait ahead of us before this comes to Android in any appreciable market share.
Dave Jones - Thursday, March 20, 2014 - link
The comparison example above is more than contrived. It's ridiculous to even suggest that rasterisation alone cannot provide shadows and reflections. The reflections provided by environment maps can give an almost identical result to the RT version above, and probably cost significantly less to compute. Likewise, the shadows and transparency can be approximated to be almost identical to the RT example. Yes, they may not be mathematically correct, but who really cares in the context of a game? In fact, if you just add an environment map to the car bonnet above, you would be hard-pressed to distinguish which is which. I'm all for innovation in this space; I have been developing high-end games for years. But when you use a crappy example that is not comparing like for like, it's frustrating.
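For context, the environment-map technique being defended here is only a few lines of shader math; a minimal C++ rendition (the cubemap sampler is a stub invented for the example):

    #include <cmath>

    struct Vec3 {
        float x, y, z;
        Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
        Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
        float dot(const Vec3& o) const { return x * o.x + y * o.y + z * o.z; }
    };

    // Stub: a real implementation fetches from a pre-rendered cubemap texture.
    Vec3 sampleCubemap(const Vec3& d) {
        return {0.5f + 0.5f * d.x, 0.5f + 0.5f * d.y, 0.5f + 0.5f * d.z};
    }

    // Environment-mapped reflection: one reflect, one texture fetch, constant
    // cost per pixel. viewDir points from the eye toward the surface.
    Vec3 envReflection(const Vec3& viewDir, const Vec3& normal) {
        Vec3 r = viewDir - normal * (2.0f * normal.dot(viewDir));  // reflect()
        return sampleCubemap(r);
    }

The usual caveat, consistent with the "not mathematically correct" concession above, is that a cubemap is captured from a single point, so nearby geometry, self-reflections, and dynamic objects are exactly where it diverges from a traced result.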
darkfoon - Sunday, March 23, 2014 - link
Seems like this technology could be used for building (eventually) very power-efficient render farms. You could cram a bunch of ARM processors together in a tight space and use the custom ray-tracing hardware to do ray tracing faster than the software-based methods used now.