A year and a half ago, Lucidlogix came to us with a rather amazing claim: we can do multi-GPU better than the guys who make the video cards in the first place. Through their Hydra technology, Lucid could intercept OpenGL and DirectX API calls, redistribute objects to multiple video cards, and then combine the results into a single video frame. This could be done with dissimilar cards from the same company, or even with cards from different companies altogether. It would be multi-GPU rendering, but not as you currently know it.
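Lucid has never published the internals of that interception layer, but the division of labor it describes (catch the API calls, hand whole objects to whichever GPU can finish them soonest, then composite the partial results into one frame) can be illustrated with a toy scheduler. The sketch below is purely conceptual: the class names, the cost model, and the two-card setup are our own assumptions for illustration, not Lucid’s code.

```python
# Conceptual sketch of object-level multi-GPU load balancing, loosely modeled
# on Lucid's public description of Hydra. Everything here (names, cost model,
# relative speeds) is hypothetical; the real driver works on DirectX/OpenGL
# calls in flight, not Python objects.
from dataclasses import dataclass, field

@dataclass
class DrawBatch:
    name: str
    cost: float  # estimated rendering cost for this object/batch

@dataclass
class Gpu:
    name: str
    relative_speed: float                  # 1.0 = baseline card
    queued: list = field(default_factory=list)

    def queued_time(self) -> float:
        """Time this GPU needs to clear everything assigned to it so far."""
        return sum(b.cost for b in self.queued) / self.relative_speed

def distribute(frame, gpus):
    """Assign each intercepted batch to the GPU that would finish it first."""
    for batch in sorted(frame, key=lambda b: b.cost, reverse=True):
        target = min(gpus, key=lambda g: g.queued_time() + batch.cost / g.relative_speed)
        target.queued.append(batch)

if __name__ == "__main__":
    # A dissimilar pair: one fast card, one slower older card (speeds made up).
    gpus = [Gpu("fast_card", 1.0), Gpu("older_card", 0.6)]
    frame = [DrawBatch("terrain", 8.0), DrawBatch("characters", 5.0),
             DrawBatch("particles", 3.0), DrawBatch("ui", 1.0)]
    distribute(frame, gpus)
    for g in gpus:
        print(g.name, [b.name for b in g.queued], f"busy {g.queued_time():.1f} units")
    # A final compositing pass would then merge the two partial frames.
```

If balancing were the whole story this would be easy; as the rest of this preview suggests, the hard part is recombining those partial frames correctly, and quickly, for every game out there.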
That was in August of 2008, when the company was first showcasing its technology in hopes of finding a suitor. In 2009 it found that suitor in MSI, which is anchoring its new high-end Big Bang line of motherboards with the Hydra. After some bumps along the way, Lucid and MSI are finally ready to launch the first Hydra-equipped board: the Big Bang Fuzion.
We’ve had the Fuzion in our hands for over a month now, as the hardware has been ready well ahead of the software. Lucid has been continuing to develop the software side, and the two parties are finally ready to sign off on the finished product, although Hydra is still very much a work in progress.
The Big Bang Trinergy, the Fuzion's identical twin
As we’re currently in Las Vegas for CES (where MSI is launching the Fuzion), today we’ll be taking a quick look at the performance and compatibility of the Hydra to answer the most burning questions about the technology. Once we’re back from CES, we’ll follow that up with an in-depth look at image quality, edge cases, and other deeper issues. We’ve only had the newest drivers for a few days now, so we haven’t had a chance to give the technology a complete workout.
Finally, this is just a look at the Hydra technology itself; we’ll have a separate review of the Fuzion as a motherboard at a later time. However, the Fuzion is virtually identical to MSI’s other Big Bang board, the NVIDIA NF200-equipped Trinergy. The only significant difference between the boards is that the Fuzion has the Hydra chip, while the Trinergy has the NF200.
With that out of the way, let’s get started.
47 Comments
shin0bi272 - Thursday, January 7, 2010 - link
I know, that's what I was saying. The technology was supposed to be more worthwhile than this. Plus you can't mix GPUs with a regular motherboard, so I'd have to get another 8800 GTX to match mine on my SLI-supported motherboard. Or if I wanted to go with ATI's new card, I'd have to get two 5870s ($400 each) and a new CrossFire mobo ($150) to go CrossFire instead. That's more expensive than getting this $350 mobo and adding a 5870 to my 8800 GTX. Even if I went with two 5850s at $300 each, it's still more expensive than buying this $350 mobo and one 5850. So you see why I really was hoping this tech would work better than it does. This would really do well in the budget mobo market IMO, so that people who didn't want to pay $300+ for a motherboard and then buy two video cards could use an old card and get better performance than they would have by just getting the low-end mobo and using their old GPU.
If they can get it to scale near-linearly (or close to it) like they originally claimed it would, then maybe some other motherboard makers will pick it up, but if they don't get it fixed it will end up falling by the wayside as a tech that missed its time (sort of like the hardware PhysX cards).
Then again, the CrossFire 5850s in the Call of Juarez test got nearly linear performance increases themselves, which is sort of new, isn't it? Isn't it the norm for CrossFire and SLI setups to do 40-50% better than single cards, not 94%? Could just be my erroneous recollection, but I don't recall near-perfect doubling of FPS with SLI or CrossFire before.
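For reference, the scaling figures being thrown around here are just the percentage gain of the multi-GPU result over a single card; a quick helper makes the arithmetic explicit. The FPS numbers below are placeholders chosen to illustrate the two cases, not results from this or any other review.

```python
def multi_gpu_scaling(fps_single: float, fps_multi: float) -> float:
    """Percentage gain of a multi-GPU setup over a single card."""
    return (fps_multi / fps_single - 1.0) * 100.0

# Hypothetical numbers, purely for illustration:
print(multi_gpu_scaling(50.0, 97.0))   # ~94%: near-perfect doubling
print(multi_gpu_scaling(50.0, 72.5))   # ~45%: the "40-50%" scaling mentioned above
```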
GourdFreeMan - Thursday, January 7, 2010 - link
It is an amazing technological feat that they got this working at all, but in the few games in which it does work properly, the performance is frankly terrible. Look at what happens when you pair a 5850 and a GTX 280 -- equal or worse performance compared to a 5850 by itself, when theoretically you should get a ~75% gain over a single card.
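The ~75% ceiling presumably comes from treating a GTX 280 as roughly three-quarters the speed of a 5850 and assuming the second card can contribute at most its full relative throughput. The snippet below just spells that estimate out; the 0.75 performance ratio is the comment's implied assumption, not a measured figure.

```python
# Rough ceiling for mixed-GPU scaling: the slower card can add at most its
# throughput relative to the faster one. The 0.75 ratio (GTX 280 vs. 5850)
# is an assumption for illustration, not a benchmark result.
relative_perf_gtx280 = 0.75
best_case_gain = relative_perf_gtx280 * 100   # percent gain over a single 5850
print(f"Best case over a single 5850: ~{best_case_gain:.0f}% faster")
```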
FATCamaro - Thursday, January 7, 2010 - link
This technology had fail written all over it. They unleashed a big sack of fail...
danger22 - Thursday, January 7, 2010 - link
Maybe the AMD 5000 cards are too new to have support for Hydra? What about trying some older, lower-end cards? Just for interest... I know you wouldn't put them in a $350 mobo.
chizow - Thursday, January 7, 2010 - link
Sadly, I think Lucid missed their window of opportunity as the need for Hydra largely evaporated with X58 and certainly P55's mainstream launch, offering support for both CF and SLI on the same platform. The only real hope for Hydra was the prospect of vendor-agnostic multi-GPU with better-than-AFR scaling.

Those lofty goals seem to be unrealistic now that we've seen the tech, and with its current slew of problems and its incredibly high price tag, I just don't see the technology gaining any significant traction over the established, supported multi-GPU AFR methods.
The article touched on many of the key problems early on but never really drilled down on them; hopefully we'll see more in the next installment, especially on IQ and latency.
Guru3D did an extensive review as well and found that CF scaled significantly better than Hydra without fail. Add in the various vendor-specific feature compatibility questions and an additional layer of driver profiles that needs to be supported and synchronized between potentially three parties (NVIDIA and ATI drivers sync'd to each Lucid release), and you've got yourself a real nightmare from an end-user perspective.
I'm impressed they got their core technology to work at all; I was highly skeptical in that regard. But I don't think we'll be hearing too much about this technology going forward. It's too expensive, overly complicated, and suffers from poor performance and compatibility. I don't see the situation improving any time soon, and it's clearly going to be an uphill struggle to get their drivers/profiles in order with both older titles and new releases.
sheh - Thursday, January 7, 2010 - link
I agree. It's interesting from a technical standpoint, but not many would want to go through all the fuss of SLI/CF (noise, heat, power) plus having to worry about the compatibility of two or three sets of drivers at the same time. And that's assuming costs weren't high and performance was better.

I suspect in 1-2 years NV or ATI will be buying this company.
(I'm somewhat surprised even SLI/CF exist, but maybe the development costs aren't too high, or it's the bragging rights. :))
chizow - Thursday, January 7, 2010 - link
Not so sure about that; Intel has actually been pumping venture capital into Lucid for years, so I'm sure they're significantly invested in its future at this point. I actually felt Lucid's Hydra was going to serve as Intel's CF/SLI answer, not so much as a straight performance alternative but rather as a vessel to make Larrabee look not so... underwhelming.
Think about it: sell a Larrabee for $200-$300 that, on its own, is a terrible 3D rasterizer, pair it up with an established, competent GPU from Nvidia or ATI, and you'd actually get respectable gaming results. Now that Larrabee has been scrapped for the foreseeable future, I'd say Intel's financial backing and plans for Hydra are also in flux. As it is now, Hydra is in direct competition with the PCIe controllers Intel already provides for little added cost, which support both SLI and CF natively (with a licensing fee needed for SLI). In comparison, the Hydra 200 chip reportedly costs an additional $80!
TemjinGold - Thursday, January 7, 2010 - link
I think the issue I see is that X-mode will most commonly be used by people looking to save a few bucks when upgrading by combining the card they already have with a new one they buy. Unfortunately, I seriously doubt that this is the same crowd that would shell out $350 for a mobo. That just leaves A and N modes, where the Hydra currently loses horribly to CF/SLI.

If the Hydra were put on a cheap mobo, I might see where it could be appealing. But someone who spends $350 on a mobo will most likely just shell out for 2-3 new gfx cards at the same time rather than going "gee, I could put this $50 to use if I reuse my old video card."
AznBoi36 - Thursday, January 7, 2010 - link
Right on. If I were to spend that much on a mobo, I wouldn't be thinking about saving some money by using an old video card, and in no way would I be mismatching cards anyway. Seeing all these performance issues, I wonder what 3-way would be like...
ExarKun333 - Thursday, January 7, 2010 - link
3-way would be ideal. ;)