Performance Data

Because we were only able to get our hands on a release candidate version of WHS for the performance testing, all the results here need to be taken with a grain of salt. The WHS RC is quite good, especially in comparison to rockier launches like Vista, but we expect the performance numbers in particular to have changed slightly between the RC and the final version.

It's worth noting that the network packet throttling problem with Vista is still in play as of this writing. As a result, all of our tests are run under Windows XP SP2 unless otherwise noted, and when they are run on Vista it is with the Multimedia Class Scheduler Service disabled to prevent throttling. Although this problem has existed in Vista since it shipped, this is about the worst time it could come to light for Microsoft. Until it's fixed, Vista users wanting to move their media off of a personal computer and onto a WHS server will want to hold off on doing so. Even though the throttling problem isn't a WHS issue, its occurrence in Vista still drags down WHS.

Client Test Bed
Processor: Intel Core 2 Quad QX6850 (3.00GHz/1333MHz)
RAM: G.Skill DDR2-800 (2x2GB)
Motherboard: Gigabyte GA-P35-DR3R (Intel P35)
System Platform Drivers: Intel 8.1.1.1012
Hard Drive: Maxtor MaXLine Pro 500GB SATA
Video Cards: 1 x GeForce 8800GTX
Video Drivers: NV ForceWare 163.44
Power Supply: OCZ GameXStream 700W
Desktop Resolution: 1600x1200
Operating Systems: Windows Vista Ultimate 32-Bit, Windows XP SP2

Server Test Bed
Processor: AMD Athlon X2 4600+ (2.40GHz/400MHz)
RAM: OCZ DDR-400 (4x512MB)
Motherboard: ASUS A8N-SLI Premium (nForce 4 SLI)
System Platform Drivers: NV 6.69
Hard Drives: 2x Western Digital Caviar RAID Edition 2 (400GB)
Power Supply: OCZ GameXStream 700W
Operating System: Windows Home Server RC

We'll start by testing WHS's file serving abilities, transferring files back and forth. With a gigabit network, the bottleneck will be the transfer rates of our hard drives, so if WHS is achieving maximum performance it should be able to move data at speeds approaching the maximum our hard drives can sustain. We'll be using a RAM disk on the client side to isolate the performance of WHS.

Also on this graph is the performance of WHS while attempting to do file transfers in the middle of a balancing operation. Because of the importance of balancing data for data retention and performance reasons, WHS will sometimes need to balance folders even during backups and file transfers. This doesn't seem very common in our use, since it's related to the total utilization of the WHS server, but it needs to be noted all the same. WHS does seem to take steps to avoid balancing during heavy use when possible.

At 53MB/sec up and 67MB/sec down, the results are very close to those we've seen WD RAID Edition hard drives achieve previously. For users with gigabit networks, it looks like it's very possible for WHS to offer performance virtually equal to having the drives installed locally. Speeds while balancing aren't very impressive, though, not that we expected them to be.

The other metric of WHS's performance is how it handles backups. Unlike pure file transfers, backups aren't "brain-dead" operations; they require work on the part of both the server and the client. The client needs to figure out what data is to be sent to the server, and the server is responsible for keeping all of that data organized and compressed. WHS backup performance is also heavily dependent on what is already in the backup cache, because WHS avoids backing up anything redundant, down to the cluster level.
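WHS's backup database format isn't public, so purely as an illustration of the cluster-level deduplication idea, here is a toy content-addressed store in Python. The 4KB cluster size, the SHA-256 hashing, and the `BackupCache` class are our own assumptions for the sketch, not WHS internals:

```python
import hashlib

CLUSTER_SIZE = 4096  # assumed fixed cluster size for the sketch (NTFS default)

def split_clusters(data: bytes):
    """Split a byte stream into fixed-size clusters (the last may be short)."""
    return [data[i:i + CLUSTER_SIZE] for i in range(0, len(data), CLUSTER_SIZE)]

class BackupCache:
    """Toy content-addressed store: identical clusters are stored only once."""
    def __init__(self):
        self.clusters = {}  # cluster hash -> cluster bytes

    def backup(self, data: bytes):
        """Store one file, returning (clusters_stored, clusters_skipped)."""
        stored = skipped = 0
        for cluster in split_clusters(data):
            key = hashlib.sha256(cluster).hexdigest()
            if key in self.clusters:
                skipped += 1  # a byte-identical cluster is already cached
            else:
                self.clusters[key] = cluster
                stored += 1
        return stored, skipped

cache = BackupCache()
print(cache.backup(b"A" * 8192 + b"B" * 4096))  # (2, 1): three clusters, one a duplicate
print(cache.backup(b"A" * 4096 + b"C" * 4096))  # (1, 1): the "A" cluster is already cached
```

With a store like this, a second machine running the same OS would send almost nothing for its system files, which is why initial backups of additional machines should be faster than our worst-case, empty-cache numbers.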

These specific tests were run with empty caches as a worst-case scenario; actual performance of the initial backup on a new machine (as long as it's not the first machine) should be faster. These tests are being done on clean Windows installations, with the second "incremental" backup being done immediately after the first backup completes. This is more optimistic than a real incremental backup since virtually no data changes, but in doing it this way we can establish a floor for approximately how long the scan process takes. The reference sizes for these installations are 2.3GB for XP and 5.4GB for Vista, after factoring out the system page file and other files that WHS backup filters out.

Both Vista and XP turn in respectable, although not amazing, backup times. Using the incremental backup as the baseline, we achieved an average backup speed of about 20MB/sec. This is well below what we've seen in our file transfer tests, but still fast enough to complete these backups in a short amount of time; since WHS doesn't have any true peers, we don't have anything else to properly compare it to. In an actual deployment with real incremental backups and common data, we expect the results to be a lot closer to the incremental times.
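For a back-of-envelope sense of what that rate means in wall-clock terms, we can combine the ~20MB/sec figure with the reference install sizes quoted below (our own derivation; the measured times also include scan overhead):

```python
# Back-of-envelope estimate from the figures in the text:
# reference install sizes and the ~20MB/sec effective backup rate.
RATE_MB_S = 20
for name, size_gb in (("XP", 2.3), ("Vista", 5.4)):
    seconds = size_gb * 1024 / RATE_MB_S  # GB -> MB at 1024MB/GB
    print(f"{name}: ~{seconds / 60:.1f} minutes")
# XP: ~2.0 minutes, Vista: ~4.6 minutes
```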

We also took the liberty of backing up the XP machine again once the Vista machine was backed up in order to measure the size of the backup cache on the WHS server. Even with these clean installs, there's about 2GB of savings on the WHS server; 7.7GB of data is only taking up 5.7GB of space. Like Previous Versions on Vista, these savings should grow as more data is added to the backup cache.
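As a quick arithmetic check on those numbers (the percentage is our own derivation from the figures above):

```python
# Figures from the text: 2.3GB (XP) + 5.4GB (Vista) of client data
# occupy only 5.7GB on the WHS server after deduplication.
logical_gb = 2.3 + 5.4
stored_gb = 5.7
savings_gb = logical_gb - stored_gb
savings_pct = 100 * (1 - stored_gb / logical_gb)
print(f"~{savings_gb:.1f}GB saved ({savings_pct:.0f}%)")  # ~2.0GB saved (26%)
```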

Comments

  • ATWindsor - Sunday, September 9, 2007 - link

    All NAS-boxes have horrible performance. (at least all I have seen). It hardly seems fair to use benchmarks from them, when this is a "Proper" computer, there are plenty of benchmarks from software raid 5 run on "real" computers to find, see this for instance:

    http://www.tomshardware.com/2004/11/19/using_windo...

    MDADM is, as far as I know, even faster; however, for WHS it would likely be built on the software RAID of Windows 2003.
  • Gholam - Sunday, September 9, 2007 - link

    All NAS-boxes have horrible performance.

    Wrong. Proper NAS boxes have superb performance. Look at NetApp FAS270 for example. Of course a FAS270 in a typical configuration will run you in the $20,000-30,000 range.

    That Tom's Hardware test is running a 2.8GHz CPU. This WHS box (http://www.pcpro.co.uk/reviews/121499/tranquil-t7h...), for example, is running a 1.3GHz VIA C7.

    Also, WHS is designed to be easily and transparently expandable by end-user using external drives. Please show me a RAID setup of any kind that will work in a mixed ATA/SATA/USB/FireWire configuration with drives of varying sizes.
  • ATWindsor - Sunday, September 9, 2007 - link

    Ok, all consumer NAS-boxes then, I thought that much was implicit. It doesn't matter anyway, the point is that your comparison to a box like that isn't very good when it comes to "proving" that software-raid automatically has bad performance.

    A lot of boxes with WHS will be using a CPU that is better than a 1.3GHz VIA; if the hardware isn't suited for the job, then you just don't run software RAID5. It's that easy.

    I don't see how the WHS storage pool is incompatible with RAID as a concept; a RAID array presents itself as a single drive, more or less, which can be merged into the storage pool if one feels like it.
  • Gholam - Monday, September 10, 2007 - link

    Infrant ReadyNAS NV+ is a consumer level NAS. However, it's built on an SBC running a 1.4GHz Celeron M ULV, and in actual testing outperforms many self-built systems. On the other hand, it also costs over $1000 without drives.
  • ATWindsor - Monday, September 10, 2007 - link

    The benches I have seen point to a read performance of 30MB/s, give or take let's say 10MB/s; that's hardly good performance, as it doesn't even outperform a single drive. One can easily build a software RAID with several times better speed.
  • Gholam - Sunday, September 9, 2007 - link

    WHS is made to run on low-power, low-end and old hardware; calculating parity blocks in software is bad enough on a modern desktop CPU, an old PIII/800 or a VIA C3/C7 (present in some OEM WHS box implementations) will get murdered.

    In addition, recovering data from a failed RAID5 array is quite difficult, requiring specialized (and expensive) software as well as user expertise. Recovering data from a failed WHS box with duplication is as simple as mounting the drives separately.
  • ATWindsor - Sunday, September 9, 2007 - link

    The RAID will not fail before two drives go down; if that happens in WHS, you still need to run recovery software and hope to get your data out. WHS will be run on different kinds of systems, and even the cheapest of CPUs today are pretty powerful. More than powerful enough to get reasonable speed on RAID5. Why limit WHS in this way? That is exactly the problem I'm addressing: the lack of flexibility, the reasoning that all WHS users have the same needs. I think a pretty large number of WHS machines which people build themselves will have performance several times higher than a P3@800, if not most.
  • Gholam - Sunday, September 9, 2007 - link

    The RAID will not fail before two drives go down

    Oh how I WISH that was true. Let me give you a recent example I've dealt with. HP/Compaq ProLiant ML370G2 with SmartArray something (641? don't remember) running a 4x36 RAID5 array, Novell Netware 5.0. DLT VS80 tape backup drive. Worked for 4 years or so, then the tape died. Took the organization in question 4 months to buy a new one, LTO-2 - which means they've had 4 months without backups. Downed the server, connected the new tape, booted - oops, doesn't boot. Their "IT guy", in his infinite wisdom, connected the tape to the RAID controller, instead of onboard SCSI - which nuked the array. It didn't go anywhere, the controller didn't even report any errors, but NWFS crashed hard. They ended up rolling back to 4 months old backups because pulling data out of a corrupt RAID5 array would've cost several thousands.

    I work for a small company that specializes in IT outsourcing for small and medium businesses - basically shops that are too small to afford a dedicated IT department, and we give them the entire solution: hardware, software, installation, integration, advisory, support, etc - and I've got many stories such as this one. We also deal with home users, but not as much.

    This said, I don't consider RAID5 suitable for home use, at least not yet. It's too expensive and dangerous - mirroring files across a bunch of drives is cheaper and easier. Also, as far as I understand, when a drive in the WHS drive pool fails, it automatically syncs protected folders into free space on the remaining drives, so the window where your data is vulnerable is quite small. RAID5, on the other hand, will be vulnerable until you replace the drive (which can take days or even weeks) and then until it finishes rebuilding (which can also take a very long time on a large array). You can keep a hotspare, but then you'll be eating up another drive - in the case of 4 drives, RAID5+hotspare costs you the same 50% as RAID1/RAID10 - while WHS mirroring makes your entire free space function as a hot spare.
  • ATWindsor - Sunday, September 9, 2007 - link

    Hardly a very plausible scenario for a home user. Of course a RAID can go down if you mess it up, but you can just as easily mess up non-RAIDed drives to the point that running recovery software is needed; when it comes to normal drive failures, two of them have to die.

    If you only need two drives' worth of storage, you might as well mirror, but when you need, for instance, 10, it adds up: drive cost, electricity, PSU size, and physical size (especially if you want a backup machine in addition; I would never keep my data on only one computer like that). If the syncing is going to work, you also need to have at least a disk's worth of usable free space, so you basically need to "waste" a whole disk on that too if you want to get hot-spare functionality.



  • Gholam - Monday, September 10, 2007 - link

    Hardly a very plausible scenario for a home user. Of course a RAID can go down if you mess it up, but you can just as easily mess up non-RAIDed drives to the point that running recovery software is needed; when it comes to normal drive failures, two of them have to die.

    Not quite. WHS balances data between drives, so if one of them becomes corrupt and one of the copies of your protected data is gone, you can still access it on the other - no extra tools required, just mount the drive in a Windows system. You will only lose it if both drives become corrupt simultaneously.

    If the syncing is going to work, you also need to have at least a disk's worth of usable free space, so you basically need to "waste" a whole disk on that too if you want to get hot-spare functionality.

    Again, not quite. Since you protect the data on a per-folder basis, your free space requirement depends on the actual amount of data you're keeping redundant, not the total, and there's little point in wasting redundant storage on backups - they're redundancy in and of themselves.
