21 Comments
Anton Kolomyeytsev - Friday, November 16, 2007 - link
Guys, I really appreciate you throwing away StarWind! Without even letting people know what configuration you used: did you enable caching, did you use flat image files, did you map a whole disk rather than a partition, which initiator did you use (StarPort or MS iSCSI), did you apply the recommended TCP stack settings, etc.? Perhaps it's our problem, as we've managed to release something people cannot properly configure, but why didn't you contact us to tell us you were having issues so we could help you sort them out?

With WinTarget R.I.P. (and MS selling its successor through OEMs only), StarWind thrown away, and SANmelody and IPStor not even mentioned (and they are key players!), I think your review is pretty useless... Most people are looking for software solutions when you're talking about an "affordable SAN". Do you plan to have a second round?
Thanks once again, and keep up the great work! :)
Anton Kolomyeytsev
CEO, Rocket Division Software
Johnniewalker - Sunday, November 11, 2007 - link
If you get a chance, it would be great to see what kind of performance you get out of an iSCSI HBA, like the one from QLogic.

When it comes down to it, the DAS numbers are great for a baseline, but what if you have 4+ servers running those I/O tests? That's what shared storage is for anyhow. Then compare the aggregate I/O vs. the DAS numbers.
For example, can 4 servers hit 25 MB/s each in the SQLIO random-read 8KB test, for a total of 100 MB/s? How much is CPU utilization reduced with one or more iSCSI HBAs in each server vs. the software drivers? Where/how does the number of spindles move these numbers? At what point does the number of disks overwhelm one iSCSI HBA, two iSCSI HBAs, one FC HBA, two FC HBAs, or one or two SCSI controllers?
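Back-of-the-envelope, the aggregate target is simple multiplication. Here is a sketch of the per-server run and the math; the SQLIO flags in the comments are from memory and should be checked against the SQLIO documentation, and the test file name is invented:

```shell
# Hypothetical per-server SQLIO run (verify flags locally before relying on them):
#   sqlio -kR -frandom -b8 -o8 -s120 testfile.dat
# -kR = reads, -frandom = random access, -b8 = 8KB blocks,
# -o8 = 8 outstanding I/Os, -s120 = run for 120 seconds.
# If each of 4 servers sustains 25 MB/s, the shared storage must deliver:
per_server_mbps=25
servers=4
aggregate=$((per_server_mbps * servers))
echo "aggregate: ${aggregate} MB/s"
```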
IMHO iSCSI is the future. Most switches are cheap enough that you can easily build a separate dedicated iSCSI network. You'd be doing that if you went with Fibre Channel anyhow, but at a much higher expense (and an additional learning curve) if you don't already have it, right?
Then all we need is someone with a really nice GUI to manage the system - a nice purdy web interface that runs on a virtual machine somewhere and shows at a glance the health, performance, and utilization of your system(s).
System(s) have Zero faults.
Volume(s) are at 30.0 Terabytes out of 40.00 (75%)
CPU utilization is averaging 32% over the last 15 minutes.
Memory utilization is averaging 85% over the last 15 minutes.
IOs peaked at 10,000 (50%) and averaged 5,000 (25%) over the last 15 minutes.
Pinch me!
-johhniewalker
afan - Friday, November 9, 2007 - link
You can get one of the recently released 10Gbps PCI-E TCP/IP cards for <$800, and they support iSCSI. Here's one example:
http://www.intel.com/network/connectivity/products...
The chip might be used by Myricom and others (I'm not sure), and there are Linux and BSD drivers - a nice selling point.
10Gb Ethernet is what should really change things.
They look amazing on paper -- I'd love to see them tested:
http://www.intel.com/network/connectivity/products...
JohanAnandtech - Saturday, November 10, 2007 - link
The problem is that currently you have only two choices: expensive CX4 copper, which is short range (<15 m) and not very flexible (the cables are like InfiniBand cables), or optical fiber cabling. Both the HBAs and the cables are rather expensive and require rather expensive switches (less than FC, but still). So the price gap with FC is a lot smaller. Of course you get a bit more bandwidth (though I fear you won't get much more than 5 Gbit; that has to be tested, of course), and you do not need to learn FC.

Personally, I would like to wait for 10 Gbit over UTP Cat 6... But I am open to suggestions as to why the current 10 Gbit would be very interesting too.
afan - Saturday, November 10, 2007 - link
Thanks for your answer, J.

First, as far as I know, CX4 cables aren't as cheap as Cat-x, but they aren't all _that_ expensive, so not a showstopper. If you need more length, you can go for fibre cables, which go _really_ far:
http://www.google.com/products?q=cx4+cable&btn...
I think the CX4 card (~$800) is pretty damn cheap for what you get (and remember it doesn't have PCI-X limitations).
Check out the Intel marketing buzz on iSCSI and the work they're doing to speed up TCP/IP, too. It's good reading, and I'd love to see the hype tested in the real world.
I agree with you that UTP Cat 6 would be much better: more standardized, much cheaper, better range, etc. I know that, but if this is what we've got now, so be it. I think it's pretty killer, but I haven't tested it : ).
Dell, Cisco, HP, and others have CX4 adapters for their managed switches - they aren't very expensive and go right to the backplane of the switch.
Here are some Dell switches that support CX4, at least:
http://www.dell.com/content/products/compare.aspx/...
These are the current 10GbE Intel flavors:
copper: Intel® PRO/10GbE CX4 Server Adapter
fibre:
Intel® PRO/10GbE SR Server Adapter
Intel® PRO/10GbE LR Server Adapter
Intel® 10 Gigabit XF SR Server Adapters
A PITA is the limited number of x8 PCI-E slots on most server mobos.
keep up your great reporting.
best, nw
somedude1234 - Wednesday, November 7, 2007 - link
First off, great article. I'm looking forward to the rest of this series.

From everything I've read coming out of MS, the StorPort driver should provide better performance. Any reason why you chose to go with SCSIPort? Emulex offers drivers for both on their website.
JohanAnandtech - Thursday, November 8, 2007 - link
Thanks. It is something that Tijl and I will look into and report back on in the next article.

Czar - Wednesday, November 7, 2007 - link
Love that AnandTech is going in this direction :D

Really looking forward to your iSCSI article. I've only used fiber-connected SANs; we have an IBM DS6800 at work :) Never used iSCSI, but I'm very interested in it. What I have heard so far is that it's mostly just very good for development purposes, not for production environments. And that you should turn off (I think) CHAP or whatever it is called on the switches, so the iSCSI SAN doesn't flood the network with "are you there" messages when it transfers to the iSCSI target.
JohanAnandtech - Thursday, November 8, 2007 - link
Just wait a few weeks :-). Anandtech IT will become much more than just one of the many tabs :-)
We will look into it, but I think it should be enough to place your iSCSI storage on a nonblocking switch on a separate VLAN. Or am I missing something?
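For what it's worth, a minimal sketch of that kind of isolation on a Cisco-style switch; the VLAN number and port name here are invented for illustration, so check your switch's documentation:

```
vlan 200
 name iSCSI-SAN
!
interface GigabitEthernet0/1
 description iSCSI initiator port
 switchport mode access
 switchport access vlan 200
 flowcontrol receive on
```

Dedicated ports on their own VLAN keep iSCSI traffic off the general-purpose LAN without needing a second physical switch.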
Czar - Monday, November 12, 2007 - link
Think I found it: http://searchstorage.techtarget.com/generic/0,2955...
"Common Ethernet switch ports tend to introduce latency into iSCSI traffic, and this reduces performance. Experts suggest deploying high-performance Ethernet switches that sport fast, low-latency ports. In addition, you may choose to tweak iSCSI performance further by overriding "auto-negotiation" and manually adjusting speed settings on the NIC and switch. This lets you enable traffic flow control on the NIC and switch, setting Ethernet jumbo frames on the NIC and switch to 9000 bytes or higher -- transferring far more data in each packet while requiring less overhead. Jumbo frames are reported to improve throughput as much as 50%. "
This is what I was talking about.
Really looking forward to the next article :)
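The jumbo-frame tweak in the quoted passage is easy to sanity-check with some quick frame-count arithmetic (the math ignores protocol headers, and the interface name in the comments is hypothetical):

```shell
# Ethernet frames needed to move 1 MiB of iSCSI payload at standard vs. jumbo MTU
# (rounding up; headers and iSCSI PDU framing ignored for simplicity).
payload=$((1024 * 1024))
std_frames=$(( (payload + 1499) / 1500 ))
jumbo_frames=$(( (payload + 8999) / 9000 ))
echo "1500-byte MTU: ${std_frames} frames"
echo "9000-byte MTU: ${jumbo_frames} frames"
# To actually enable jumbo frames on a Linux initiator (hypothetical interface eth0):
#   ip link set dev eth0 mtu 9000
# And flow control, if the NIC supports it:
#   ethtool -A eth0 rx on tx on
```

Roughly 6x fewer frames means 6x fewer per-packet interrupts and header bytes, which is where the reported throughput gains come from.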
Lifted - Wednesday, November 7, 2007 - link
quote: We have been working with quite a few SMEs the past several years, and making storage more scalable is a bonus for those companies.

I'm just wondering: this sentence was linked to an article about a Supermicro dual-node server. So do you consider Supermicro an SME, or are you saying their servers are sold to SMEs? I just skimmed the Supermicro article, so perhaps you were working with an SME while testing it? I got the feeling from the sentence that you meant to link to an article where you had worked with SMEs in some respect.
JohanAnandtech - Wednesday, November 7, 2007 - link
No, Supermicro is not an SME in our viewpoint :-). Sorry, I should have been clearer, but I was trying to keep the article from losing its focus.

I am head of a server lab at the local university, and our goal is applied research in the fields of virtualisation, HA, and server sizing. One of the things we do is develop software that helps SMEs (with some special niche application) size their servers. That is what the link points to: a short explanation of the stress-testing client APUS, which has been used to help quite a few SMEs. One of those SMEs is MCS, a software company that develops facility management software. Basically, the logs of their software were analyzed and converted by our stress-testing client into a benchmark. Sounds a lot easier than it is.
Because these applications are used in the real world, and are not industry-standard benchmarks that manufacturers can tune to the extreme, we feel that this kind of benchmarking is a welcome addition to the normal benchmarks.
hirschma - Wednesday, November 7, 2007 - link
Is the Promise gear compatible with cluster file systems like PolyServe or GFS? Perhaps the author could get some commentary from Promise.

JohanAnandtech - Wednesday, November 7, 2007 - link
We will. What kind of incompatibility do you expect? It seems to me that the filesystem is rather independent of the storage rack.

hirschma - Thursday, November 8, 2007 - link
I only ask because every cluster file system vendor suggests that not all SAN systems are capable of handling multiple requests to the same LUN simultaneously.
I can't imagine that they couldn't, since I think that cluster file systems are the "killer app" of SANs in general.
FreshPrince - Wednesday, November 7, 2007 - link
I think I would like to try the Intel solution and compare it to my CX3...

Gholam - Wednesday, November 7, 2007 - link
Any chance of seeing benchmarks for LSI Engenio 1333/IBM DS3000/Dell MD3000 series?JohanAnandtech - Wednesday, November 7, 2007 - link
I am curious why, exactly?

And yes, we'll do our best to get some of the typical storage devices into the labs. Any reason why you mention these ones in particular (besides being the lower end of the SAN market)?
Gholam - Thursday, November 8, 2007 - link
Both Dell and IBM are aggressively pushing these in the SMB sector around here (Israel). Their main competition is the NetApp FAS270 line, which is considerably more expensive.

ninjit - Wednesday, November 7, 2007 - link
It's a good idea to define all your acronyms the first time you use them in an article.

Sure, a quick Google search told me what an SME was, but defining it would help the casual reader, who would otherwise be directed away from your page.
What's funny is that you were careful to define FC, SAN, and HA on the first page - just not the title term of your article.
microAmp - Wednesday, November 7, 2007 - link
I was just about to post something similar. <thumbsup>