
  • kensiko - Friday, January 6, 2012 - link

    This option has been available in Windows for, I would say, 10 years. It's just done in the Disk Manager; that's all that's different.
  • davepermen - Friday, January 6, 2012 - link

    No, it's actually very different (and works in parallel with what you can do in Disk Manager).

    Thin provisioning being one key point of difference.
  • B3an - Friday, January 6, 2012 - link

    Exactly. It's massively different and one of the best things about Win 8.

    From what I can see you get RAID 0-like performance, but you can also mirror your data two or three ways. Plus if one disk in the Storage Space fails, it's no problem: you can easily add another disk, and in the meantime the Storage Space continues to work as normal.

    It also automatically copies any mirrored data that was lost on the failed drive over to the newly added drive, without you having to do anything. You'll probably never have to worry about losing data ever again.

    There's also no limit to the number of drives you can add.

    Fragmentation isn't an issue either; no need to ever run chkdsk or defrag.

    It works with SSDs and HDDs, over USB, SATA, or SAS - any of this works in ANY combination. Disks can be different sizes too.

    You can set an SSD in a pool as a backing device to improve performance (if you're also using it with mechanical HDDs).

    If a drive fails you get a notification, but you can also set it up to send you an automatic email (great for server admins).

    It's heavily integrated with NTFS. More so than Drive Extender ever was.

    It makes Windows Home Server's 'Drive Extender' feature obsolete.

    + Many other things I can't be bothered to list.

    Everyone should use this feature, and I hope OEMs ship at least some PCs with it enabled by default (I'm sure they will). The only downside I can think of is that if you mirror your data then obviously it's going to use more drive space. But apart from that there are only advantages.
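
    If you want to see what's underneath the GUI, setup boils down to a few PowerShell cmdlets. A rough sketch (the pool name, space name, and sizes here are made up for illustration):

        # List the disks that are eligible for pooling
        $disks = Get-PhysicalDisk -CanPool $true

        # Create a pool from all poolable disks
        New-StoragePool -FriendlyName "HomePool" `
            -StorageSubSystemFriendlyName "Storage Spaces*" `
            -PhysicalDisks $disks

        # Carve out a thin-provisioned, two-way mirrored space
        New-VirtualDisk -StoragePoolFriendlyName "HomePool" `
            -FriendlyName "Documents" `
            -ResiliencySettingName Mirror `
            -ProvisioningType Thin -Size 2TB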
  • dustofnations - Saturday, January 7, 2012 - link

    Indeed, this seems to be a parallel to the long-standing LVM (Logical Volume Manager) feature, present in mainline Linux since the 2.4 kernel (with patches going back to 2.2).
  • Braumin - Friday, January 6, 2012 - link

    I think you'd better read a bit more about this. This is way more than what is available now in Windows. Sure, you could span disks before, or do software RAID, but this is way more than that.

    Thin provisioning, RAID-like features but with different-size and different-speed disks, simple pool additions.

    Basically this is Windows Home Server Drive Extender done right. This is what they wanted to add to WHS 2011, but ran out of time.

    Even more exciting is that it will be on both the desktop and server versions of Windows 8.
  • kmmatney - Friday, January 6, 2012 - link

    'Basically this is Windows Home Server Drive Extender done right.'

    I haven't found much to complain about with Drive Extender in WHS 1.0. These new features would almost make WHS obsolete, except that WHS has automated network backups. Hopefully this will be in the next release of WHS (if they still decide to sell it).
  • Braumin - Saturday, January 7, 2012 - link

    I had some major performance issues with Drive Extender when it would rebalance the drives. I am actually happier with WHS 2011 by a long way.

    I don't think it would kill WHS, because WHS does so much more: media streaming, centralized PC backups, network file shares, etc. This technology would be an obvious fit for WHS vNext.
  • msnight04 - Friday, January 6, 2012 - link

    Actually, we can't. Windows does not currently allow multiple physical storage devices to be used as one logical storage drive. Here's an example of what this technology would allow you to do. Let's say your computer uses one partition - we'll say the C:\ drive. If you run out of disk space, you would typically need to remove files or buy a different hard drive. If you chose the latter option, you would need to either image the old drive onto the new one or reinstall Windows. With this new technology, you could purchase another hard drive and simply plug it into your computer. All of the physical storage devices could then be used as one logical C:\ drive.

    This was a feature built into Windows Home Server, and now Microsoft is bringing it to general consumers in Windows 8.
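
    And growing the pool later appears to be a one-liner in PowerShell. A hedged sketch, reusing the hypothetical "HomePool" name from the sketch above:

        # Plug in the new drive, then hand it to the existing pool
        Add-PhysicalDisk -StoragePoolFriendlyName "HomePool" `
            -PhysicalDisks (Get-PhysicalDisk -CanPool $true)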
  • Araemo - Friday, January 6, 2012 - link

    "Windows does not currently allow multiple, physical storage devices to be used as 1 logical storage drive."
    Actually, it does... it just has a list of painful limitations and weaknesses.

    Windows Server allows you to do semi-proper software RAID (using multiple physical disks as one logical volume), and the workstation version allows, I believe, RAID 0 or JBOD (i.e., you run out of space on your C: drive, add another drive, and 'extend' C: onto it). The downside being that if either drive dies, Windows can't access 'any' of the files on either (you can recover the ones on the non-dead drive with some easy tools, but Windows will claim the volume is inaccessible).

    This is much closer to what ZFS offers, though where ZFS wants you to plan from the physical layer up, this seems like they want you to plan from the logical layer down. The real features will probably end up being very similar. (Though I haven't heard anything about block-level checksums, the killer feature that would drag my file server to ZFS.)
  • jaydee - Friday, January 6, 2012 - link

    But how would this work if you had a smaller SSD and a larger mechanical drive? Obviously you don't want the SSD filling up with media files, and you don't want software installations being placed on a mechanical drive. How does the "logical drive" know which physical drive to put data on?
  • hechacker1 - Friday, January 6, 2012 - link

    Looking at the comments, it seems that you are able to specify the SSD in a pool as a backing device for the journal (specifically to speed up parity writes). All of this is done through PowerShell if you have a specific configuration in mind.

    But otherwise, the storage pool will treat all disks equally and just spread 256MB chunks over them.

    It doesn't look useful for a system drive, though, since you can't boot from a pool (at least not yet?).

    Still, using an SSD as a backing store might just give us built-in SSD write caching for RAID 5 schemes, which is pretty much RAID 5's major weakness. I'd hope they also do read caching, though (I doubt it).
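
    If the journal really is just a per-disk usage setting, the configuration would presumably look something like this - "Journal" is a documented usage value for pool disks, though the disk name here is made up:

        # Dedicate the pool's SSD to journaling, to absorb parity writes
        Set-PhysicalDisk -FriendlyName "SSD-120GB" -Usage Journal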
  • damianrobertjones - Friday, January 6, 2012 - link

    OK... say you have the following:

    80GB SSD: C: drive, system files
    750GB HDD: redirected movie/picture/document folders

    You then ADD the following:
    1TB HDD: clean drive

    You then add this drive to the pool and Windows copies/images your files to this drive. Your files are ALWAYS in two places at the same time.
  • Musafir_86 - Friday, January 6, 2012 - link

    -For those who want a bit more technical detail on this, you may refer to these TechNet articles/blog posts (authored by the very same Mr. Rajeev Nagar here):

    http://blogs.technet.com/b/server-cloud/archive/20...

    http://blogs.technet.com/b/server-cloud/archive/20...

    Regards,
    -Musafir_86.
  • marc1000 - Friday, January 6, 2012 - link

    At home, I want my files on the disk I choose. Very few people need to expand a single drive letter over more than one disk. But in the enterprise, or in some extreme cases, this could be useful.
  • B3an - Friday, January 6, 2012 - link

    What's the point of having loads of individual disks? Do you like losing data and having slow speeds?

    With Storage Spaces you get improved speed, as it's a little like RAID 0, but you can also mirror all data. Even if one disk fails you don't lose any data and you can still access the drive as usual. You can even continue to add as many drives as you like. This is a killer feature for many people, including home users. It also renders loads of software and hardware RAID solutions and similar setups obsolete.
  • marc1000 - Saturday, January 7, 2012 - link

    But how many people actually NEED over 2TB of data all the time, with fast speeds, at home? I've had a simple setup for over 3 years now (100GB + 1TB), with games, DVD/BR ripping, and a few VMs for study, and I've never used more than 800GB at any single moment. Whoever wants to store tons of movies can use big slow disks or external storage. That's my point: very few people will really NEED this feature.
  • B3an - Sunday, January 8, 2012 - link

    Why do you mention 2TB?? Because it's in the pics here? A storage pool can be of ANY size.

    Many people have important data on their PCs, even if it's just family photos or something. With this feature you could have, for example, a 200GB mirrored pool. Even if the speed isn't needed, for some the data backup will be a big bonus, and there's no bloatware or any other requirement to use it.
  • marc1000 - Sunday, January 8, 2012 - link

    Yes, the mirroring part is good - easy and simple backup. I mentioned 2TB because it's now a common size for HDDs, so anyone could have one. The only people who will need more than this are the extreme users.
  • stevetb - Thursday, March 22, 2012 - link

    Marc1000,

    Wasn't it funny when Bill Gates said back in the 1970s that no one would ever need more than 2MB of storage space? Yeah, it cracked me up too!

    Oh wait..... you just pretty much said the same thing......

    2TB is NOTHING for HD video, even for a moderate user. If you are working with only 1TB then you are living in the 2000s. Time to evolve, bud!
  • Ryan Smith - Friday, January 6, 2012 - link

    Well, I'm certainly excited about this. Based on the technical description it's almost certainly a further iteration on the work MS had already done on Drive Extender v2. The chunk/slab description and how data is organized are practically identical, so clearly they didn't kill DEv2 so much as they held it back.

    Anyhow, hopefully this means we'll see a WHS v3. 2011 has largely been a dud; the OS is fine and the backup client is really swell, but the lack of drive pooling seems to have killed a lot of interest in it compared to WHS v1.

    It's interesting to note, though, that MS has done away with checksumming. DEv2 was to feature ZFS-like checksumming for each sector, but Storage Spaces leaves checksumming up to individual programs. So techies looking for ZFS on Windows may come away disappointed, though I'm not sure filesystem checksumming is strictly necessary.

    The slab issue will also need to be better addressed, as it looks like we're going down the same path as DEv2 and chunks. If data is effectively being striped out, then in "non-redundant" modes you'd lose most-to-all of your data in a failure, since non-redundant modes are effectively RAID 0. Parity mode (RAID 4/5) will keep your data intact through a single failure, and of course mirroring (RAID 1) will go beyond that. In fact I'm not sure why mirror/parity mode isn't forced on pools using multiple disks, as it seems like the use of slabs will make problems more likely.
  • hechacker1 - Friday, January 6, 2012 - link

    Considering that checksumming is left to applications to perform on the files, and that an API exists to call up different copies of the same file in the storage pool, I'd bet that Microsoft has another application layer on top of it that actually fetches the correct copy.

    If they don't provide something, it doesn't sound too hard to use the API to checksum the entire pool and then periodically rescan for any changes. I'm guessing third-party disk backup software could easily hook into this. I'm waiting to see what MS has planned, because that sounds like an easy program to write, assuming their API is useful.
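
    As a rough illustration of how little code that would take, here's a file-level PowerShell sketch - no Storage Spaces API involved, just a hash baseline you could rescan against later (the paths are hypothetical):

        # Build a SHA-256 baseline for every file on the pool volume
        $sha = [System.Security.Cryptography.SHA256]::Create()
        $baseline = @{}
        Get-ChildItem D:\ -Recurse -File | ForEach-Object {
            $stream = $_.OpenRead()
            $baseline[$_.FullName] = [BitConverter]::ToString($sha.ComputeHash($stream))
            $stream.Close()
        }
        $baseline | Export-Clixml C:\pool-checksums.xml

        # Later: re-run the scan, compare against the saved baseline, and
        # restore any mismatched file from one of its mirrored copies.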
  • Wardrop - Friday, January 6, 2012 - link

    This is one welcome step closer to what I see as the inevitable future, which is data redundancy built into the OS and enabled by default. When you think about it, it's a little crazy that in 2012, backup is still something that the user must explicitly set up. Having data redundancy should be the default. On a new computer or fresh installation of Windows, backup schedules and procedures should already be in place. It should be a required screen in the Windows installation wizard, and all new computers should come with data redundancy already enabled and active. Anyone who's not a guru probably does not have an adequate backup scheme, and anyone less than a power user probably doesn't have a backup scheme at all. The least we could do is get the data of those non-power-users backed up, and provide better guidance from within the OS to help those power users who may not have the best backup scheme (e.g. backing up to another internal hard drive, where a power supply failure or a malicious software attack could easily destroy everything).

    Given the cost of storage relative to how much data the average user actually has, it's crazy that we're not already making better use of the abundance of storage available. Windows Volume Shadow Copy (a.k.a. Previous Versions) was thankfully brought to the home editions of Windows 7 (Windows Vista only had it in Business or Ultimate), but it still needs some improvements. Previous Versions doesn't easily allow you to increase the frequency of snapshots, and adjusting the schedule in the Windows 7 Task Scheduler can often break it. I've personally had to resort to a VBScript that forces a snapshot, but doing so causes the machine to hiccup for about 5 seconds at the start of the process - not very nice when you're playing loud music. If I've got a 3TB drive and I'm only using 1TB, I want the remaining 2TB of free space to be previous versions, which are deleted only to make room for more data or newer snapshots - like how Windows caches regularly used resources in memory, because, like RAM, free hard-drive space is useless if it's not being used.
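
    For what it's worth, the forced-snapshot trick doesn't need VBScript; the same WMI method can be called from PowerShell. A sketch (run elevated - and it stalls I/O for a few seconds, exactly as described above):

        # Force a Volume Shadow Copy snapshot of C: via WMI
        (Get-WmiObject -List Win32_ShadowCopy).Create("C:\", "ClientAccessible")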
  • hechacker1 - Friday, January 6, 2012 - link

    I agree. But I think MS needs to take a step forward and change NTFS to be able to specify redundant copies for all files (assuming free space). ZFS allows you to specify how many copies you want of each file (at the block level), and it can do that with a single drive.

    Even having a single backup copy around could save a lot of people's data when one part of the disk corrupts. Often it's just a critical boot file or registry hive that gets wiped, and suddenly the machine doesn't boot.

    MS could save everybody a lot of effort if they just implemented this already.
  • Wardrop - Friday, January 6, 2012 - link

    Yeah, the file system is one of those fundamental areas of an operating system. Even a small improvement to the underlying file system can bubble up and positively affect a user's entire workflow. I'm surprised it hasn't received even more attention from the Windows team.
  • Jeff7181 - Saturday, January 7, 2012 - link

    It's kinda cool, though, that it's sorta like software RAID that can be changed on the fly from striping, to mirroring, to striping with parity, to a mirrored stripe with parity. Though I'm not sure this is all that useful for 99% of people. I'd much rather see Windows support an SSD as a cache for a larger mechanical HDD.
  • B3an - Saturday, January 7, 2012 - link

    You have a small imagination.

    It's a killer feature for all servers, HTPCs, video editing, or ANYONE who actually does any work or has important data on their PC. Improved speed and complete data backup without having to do anything, not to mention it's very easy to set up! I'm sure OEMs will ship some PCs with this enabled by default.
  • Touche - Saturday, January 7, 2012 - link

    RAID and this are NOT backup!
  • landerf - Saturday, January 7, 2012 - link

    Can we finally drag and drop between internal drives without using special key combinations? Seems like a really obvious thing, but there's no way to set that behavior in Windows 7.
  • Wardrop - Saturday, January 7, 2012 - link

    You can drag and drop between drives. It copies the dragged files to the destination folder, which is a much more desirable default than moving the files (cut-paste). The reason is this: imagine a user dragged a file from an internal drive to a mate's external drive. If he assumed the drag copied the data, but it actually moved it (deleted it from his internal drive), then he's essentially unknowingly deleted his data. If, however, someone drags and drops to an external drive expecting the files to be moved, but they were copied, no data has been lost; it's merely left files somewhere.

    Now, I know you're talking about internal-to-internal, but I mention internal-to-external because I believe it would be a significant usability problem if the behaviour for drag and drop to an external drive differed from that of an internal drive. The line can also sometimes be blurred between what is an external or internal drive. A lot of people use external drives as if they're internal, and vice versa.

    You're essentially suggesting making the default drag-and-drop behaviour of Windows Explorer more dangerous, which obviously isn't going to happen. I'd even be hesitant about adding the option, as users may roam between differently configured workstations, where, once again, the inconsistency in drag-and-drop behaviour may cause accidental data loss.
  • SlyNine - Saturday, January 7, 2012 - link

    Anyone interested in this should look at FlexRAID. Snapshot RAID is the best thing to happen to my media collection, ever.
  • evilnewbie01 - Saturday, January 7, 2012 - link

    I don't understand this technology very well... If I had 5 x 3TB drives full of movies and I created a logical link and called the volume 20TB (even though I only have 15TB)... could I take out one hard drive that is failing and put in another blank hard drive without data loss? I don't get it... what happened to the data on the 3TB hard drive? It couldn't fit on the other hard drives, since they are all full... can someone explain it to me? Thanks...
  • Solandri - Sunday, January 8, 2012 - link

    From my brief read of it, it's just a SAN with redundancy. ZFS already does that, and can also handle logical volumes instead of just physical drives (e.g. you can allocate space on one drive to multiple redundant volumes).

    For your example, your 5x3TB drives hold 15 TB of data. You could create a logical volume of 20 TB and assign them to it. But if you tried to add more data to it (or tried to add redundancy), the OS would tell you that you need to add more physical storage. The point of being able to assign more space than you physically have is to (1) account for compressed filesystems, (2) share empty space, and (3) allow for future expansion.

    (1) You can't predict ahead of time how much space compression will save. If I have a 3 TB drive, I can't say ahead of time exactly how much data it'll hold. Maybe it'll only hold 3 TB of movies, or maybe I'll manage to squeeze in 6 TB of sparse data. Telling the OS that it's a 5 TB volume is really just a way to tell the OS to set it up to store a maximum of 5 TB of data.

    (2) When multiple volumes reside on the same physical drive, directly assigning empty space to a specific volume is wasteful. e.g. Take the example of a 500GB drive with an OS partition and a data partition. When you first bought the computer, you thought you'd never use more than 100 GB for the OS and programs, so you assigned 100 GB as an OS partition and 400 GB as a data partition.

    A year later, your OS partition only has 5 GB free, while the data partition still has 200 GB free. You guessed wrong and you're cramped for space on one volume, while you have lots of free space on the other volume, even though they're on the same physical disk. To fix it you have to dynamically resize the volumes, which takes a long time and carries with it the risk of data loss. And if after you repartition the situation should reverse, you have to repartition again.

    Instead, what you can do is divide the 500GB physical drive into two 500GB volumes. So the OS volume can take up to 500GB, and the data volume can take up to 500GB. The only constraint being that the OS + data cannot exceed 500GB. The two volumes are effectively sharing empty space, so you're free to let either of them grow or shrink to anything from 0GB to 500GB without having to repartition.

    The beautiful part comes with (3): what happens if your OS + data exceed 500GB? Under the traditional system, you buy a new hard drive, and now you're stuck with three volumes and have to divide your data between them to make it all fit. With a dynamic virtual volume system, you add the second 500GB hard drive, and that's it. The filesystem says, "oh, I see there's extra space" and expands the OS and data volumes to use the extra space. From the OS's point of view, nothing has changed. There are still two volumes, a 500GB OS volume and a 500GB data volume. Except now your maximum limit is 1TB.
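
    In Storage Spaces terms, that "dynamic virtual volume" is a thin-provisioned space. A sketch of the logical-vs-allocated split in PowerShell (the pool and space names are made up):

        # A 1TB logical volume on a pool with only 500GB of physical capacity
        New-VirtualDisk -StoragePoolFriendlyName "HomePool" -FriendlyName "Data" `
            -ResiliencySettingName Simple -ProvisioningType Thin -Size 1TB

        # The logical size vs. what has actually been allocated from the pool
        Get-VirtualDisk -FriendlyName "Data" | Select-Object Size, AllocatedSize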
  • evilnewbie01 - Sunday, January 8, 2012 - link

    Thanks, but I am still confused about how you can remove a hard drive without data loss... for example, in your example of the 500GB hard drive presented as a 1TB logical drive: if the 500GB drive fails and you stick in another 500GB drive to replace it, why is there no data loss?
  • marc1000 - Sunday, January 8, 2012 - link

    It sets aside part of each disk to keep copies. It doesn't work with 1 disk, only with 2 or more. Example: if you have 15TB total, you will be able to save 7.5TB. To save more, you will need to add disks. (Not real numbers.)
  • Solandri - Sunday, January 8, 2012 - link

    Redundancy with two drives requires mirroring - you have two copies of the data, so it takes 2x as much space. If one drive fails, you still have the other copy.

    Redundancy with more than two drives requires a progressively smaller amount of parity data. With 3 drives, the parity data is 50% of the data, so you need 1.5x as much drive space as you have data. Once you have parity data, any one of the three drives can fail and you can reconstruct its contents using the surviving data and parity information.

    With 4 drives, the parity data is 33%. With 5 drives, it's 25%, etc. In all cases you need more storage space than a single copy of the data would take. But with parity you don't have to predict which drive will fail; the parity information protects you from the failure of any single drive. (Different parity schemes can protect you from two drive failures, at a corresponding increase in the amount of parity data.)

    You can never have a situation where after a drive failure, the amount of remaining space is less than the amount of data.
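
    To put numbers on that, a quick worked sketch in PowerShell (single parity across N equal drives):

        foreach ($n in 3..8) {
            $usable   = 100 * ($n - 1) / $n    # percent of raw capacity holding data
            $overhead = 100 / ($n - 1)         # parity as a percent of the data itself
            "{0} drives: {1:N0}% of raw capacity usable, parity = {2:N0}% of the data" -f $n, $usable, $overhead
        }

        # 3 drives: 67% usable, parity = 50% of the data
        # 5 drives: 80% usable, parity = 25% of the data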
  • iwod - Monday, January 9, 2012 - link

    I wish someone would do a comparison of the differences between this and Drobo / ZFS.
  • eachus - Monday, January 9, 2012 - link

    You can't install the operating system on Storage Spaces, which basically makes it a nice-to-have, but not really useful. You still have to go through all the Windows hassle of creating a hardware RAID that you can install the OS to, and then you are limited by the physical size of the RAID disks with respect to your C: partition.

    I've been waiting for decades for Windows to support separate volumes for TEMP, Users, paging, and Program Files. The only thing that has happened is that we now have two Program Files directories that can't be moved off the boot volume. (Yes, you can move individual directories for large programs to other volumes, but you can't have a link that makes it not a bother.)

    In fact, since Windows will chomp 100 MB (which isn't that much) off your boot device if Windows is elsewhere, it would be really, really nice to assume four (or more) main directories and have redirects in that partition, or the main boot partition, if they are located elsewhere. How much work is that? The problem is that it has to be done early in the boot process, so there is no easy way to hack it.

    Well, actually there is. Install VMware or Linux as a host, and make Windows a guest. Now you can use ZFS or LVM for everything. ;-)
  • Visual - Monday, January 9, 2012 - link

    It is still a disk-based and not a file-based approach, so it is not quite like WHS Drive Extender. Seems more like just an "unRAID" implementation for Windows. Not bad.

    The drives are still usable individually as well, right? But to what extent, I wonder...
    How will it behave if you disconnect a drive and modify it on a different computer before reconnecting? Will you have the option to specify that you want to preserve the changes, or will it just force a "repair" and wipe them out?
