boozed - Friday, January 27, 2023 - link
Ahh the new Anandtech recommended consumer drives!
ads295 - Tuesday, January 31, 2023 - link
Good one. "Looking for a hard drive? Well of course you need 30TB of space."
FunBunny2 - Wednesday, February 1, 2023 - link
"Looking for a hard drive? Well of course you need 30TB of space."c'mon man!! nobody needs more than 640K.
HarryVoyager - Tuesday, February 14, 2023 - link
Don't worry. I'm sure it won't be more than a year or two before the latest AAA games are pushing 10TB before DLC.
If you build it, they will fill it!
PeachNCream - Saturday, February 4, 2023 - link
Don't forget, all recommendations will be based on viewing manufacturer specs, absent any actual review effort, but those links to Amazon product pages will still have sponsor kickbacks.
The Von Matrices - Friday, January 27, 2023 - link
The only advantage hard drives hold over solid state media is cost per capacity, so that is the only metric that matters. Adding so much new technology surely has to increase the manufacturing cost, so capacity has to increase significantly to justify the price increase. Unless you're in a situation where you absolutely cannot add a few additional disks, I don't see any reason to buy the first widely available production HAMR disks. It will take until the 40TB generation before HAMR beats CMR in cost/capacity and I'll actually consider buying one.
I'm also waiting to hear about the maximum number of program/erase cycles on these HAMR disks. That detail seems curiously absent from any press releases.
StevoLincolnite - Friday, January 27, 2023 - link
Pretty sure mechanical drives are better for archival purposes as well... whereas NAND will bit-flip and lose data over time.
The Von Matrices - Friday, January 27, 2023 - link
My experience with hard drives has been that the helium hard drives aren't perfect for long-term cold storage either - you still need to verify them every few months or so because I've had a few that slowly leak helium. Sure, since the data isn't affected when the helium leaks, you could send the drive to a data recovery service and they could get all the data back, but that's expensive. Tape backup seems to be the only safe option for long-term cold storage.
nandnandnand - Saturday, January 28, 2023 - link
Is it necessarily a failure if all helium leaks out of a drive, or could it run at a lower RPM (long enough for the whole drive to be copied)?
The Von Matrices - Tuesday, January 31, 2023 - link
I'm not familiar enough with fluid dynamics to give an informed answer. Helium has a thinner boundary layer than air, allowing the heads to be closer to the platters, so if air replaced the helium, the heads would be too far away from the platter to read anything, but I'm not sure how the rotation speed affects the thickness of the boundary layer.
hansmuff - Tuesday, January 31, 2023 - link
Aah, tape. When I was a teenager, Travan tapes and QIC-80 actually were being sold in high volume to end users, not just IT. I had a tape drive. Sure, slow as all shit, but the tapes were indestructible compared to a CD or another HDD.
I'd still have tape if I had more backup needs, but modern LTO drives are a little too costly for me.
Samus - Wednesday, February 1, 2023 - link
I recently recovered data from a set of DDS-2 tapes I found at my parents'. The data was archived using Win95 backup, so I had to build a friggin Win95 PC out of a P3 I bought and source a DAT drive (the first DDS-2 drive off eBay wouldn't read the tapes and turned out to be defective, back when I thought the tapes were the problem, but a second HP StorageWorks DDS-4 drive had DDS-2/3 read/write compatibility).
Anyway, all 4 tapes recovered perfectly with no errors. They were stored in a basement that wasn't humidity or temperature controlled here in Chicago for over 25 years. That says everything you need to know about why tape media, and magnetic storage in general, are the archive standard bearer.
thunderbird32 - Wednesday, February 1, 2023 - link
Depends on how "long term" you want it to be. Lots of QIC tapes from the 90's and back are no longer usable because the rubber bands have turned to goo. Hopefully more "modern" tape designs like LTO have better shelf life.
Samus - Thursday, February 2, 2023 - link
QIC (and Travan) were a race-to-the-bottom storage medium for cost-conscious consumers. Iomega and Seagate pushed them hard and, unfortunately, they saw adoption. They were garbage back then and they're even less than garbage now.
DAT and LTO have, by design, simple and robust storage vessels. The tape media itself is incredibly high tech, having gone through many material changes, all of which have guaranteed a minimum archival lifetime of many, many decades when stored in a cool, dry environment.
I have a soft spot for DAT because of its incredible flexibility and reasonable cost of entry - more than QIC but much less than LTO. Working in IT for decades, we used DDS through the 90's and 00's until it was abandoned. These days only one office I manage has tape backup, and it is unsurprisingly LTO. Reliable cloud backup options and inexpensive JBODs for on-site backup have mostly replaced the need for SMB tape backup, but you are still presented with the security risk of live backup storage. LTO autoloaders are truly the most secure on-site backup option if you can afford them.
UltraTech79 - Monday, January 30, 2023 - link
lol They aren't. Where are you getting your bs "pretty sure" from? Cherry picking?
TheinsanegamerN - Tuesday, January 31, 2023 - link
Cold storage rating. NAND memory is measured in weeks, magnetic platters in years.
Threska - Friday, January 27, 2023 - link
Gotta keep up as backups for other hard drives, and them for other drives.
Doug_S - Saturday, January 28, 2023 - link
Adding the HAMR component may increase cost, but the economics for hard drives are different today than they were a decade ago, when they still shipped with almost every laptop and PC. Back then they wanted single-platter drives that cost as little as possible, with $/GB a secondary concern at best, which limited the amount of technology that could be employed.
Now that hard drives are bulk storage only, they care about the $/GB figure, with total cost the secondary concern. So they can add more expensive technology to the functional part of the drive if it means enough additional bytes stored that $/GB is lower even when total cost is higher. Because, as you say, if SSDs can close the gap in cost per capacity, then it is game over for hard drives.
As to your second paragraph, I would assume they haven't mentioned a limit on "program/erase" cycles since, like other hard drives, they are unlimited.
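To make that $/GB argument concrete, here is a minimal back-of-the-envelope sketch in Python; the prices and capacities below are hypothetical placeholders, not actual CMR or HAMR pricing.

def dollars_per_tb(price_usd, capacity_tb):
    # Cost-efficiency metric that bulk-storage buyers optimize for.
    return price_usd / capacity_tb

# Hypothetical numbers for illustration only:
cmr_price, cmr_tb = 400.0, 24.0      # conventional drive
hamr_price, hamr_tb = 520.0, 32.0    # costs more to build, stores more

print(f"CMR : ${dollars_per_tb(cmr_price, cmr_tb):.2f}/TB")    # $16.67/TB
print(f"HAMR: ${dollars_per_tb(hamr_price, hamr_tb):.2f}/TB")  # $16.25/TB, despite the higher sticker price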
LiKenun - Sunday, January 29, 2023 - link
At some point, hard disks will *only* be good for cold storage.
A 3.5″ drive occupies a volume of 18.1 cu in. A 2.5″ drive occupies a volume of 6.5 cu in. The highest capacity hard disk drive will be 30 TB, giving a density of 1.7 TB per cu in. The highest capacity 15 mm, 2.5″ solid state drive will be 61 TB (announced by Solidigm in November 2022), giving a density of 9.6 TB per cu in. To break even with such a density, a 3.5″ hard disk drive would need to have a capacity of 174 TB. In other news, Nimbus Data announced back in April 2022 that they were working on a 200 TB 3.5″ solid state drive, bringing the density to 11.0 TB per cu in.
Hard disk drives already lost the capacity crown when the 8 TB 2.5″ solid state drives came out in 2016. Now we’re just waiting for the cost of NAND to encroach on one of hard disk storage’s last advantages over it.
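As a quick check, here is a small Python sketch that reproduces those volumetric-density figures from the volumes and capacities quoted in the comment (the 18.1 and 6.5 cu in numbers are taken as given, not re-derived from drive dimensions).

HDD_35_VOL = 18.1   # 3.5" drive volume in cu in, as quoted above
SSD_25_VOL = 6.5    # 15 mm 2.5" drive volume in cu in, as quoted above

hdd_density = 30 / HDD_35_VOL            # ~1.7 TB per cu in
ssd_density = 61 / SSD_25_VOL            # ~9.4 TB per cu in (9.6 with slightly different rounding)
break_even = ssd_density * HDD_35_VOL    # ~170-174 TB, depending on rounding

print(f"HDD: {hdd_density:.1f} TB/cu in, SSD: {ssd_density:.1f} TB/cu in")
print(f"A 3.5\" HDD needs roughly {break_even:.0f} TB to match the SSD's volumetric density")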
Doug_S - Monday, January 30, 2023 - link
TB/cu inch is not a metric anyone cares about.
DanNeely - Monday, January 30, 2023 - link
TB/rack is something big cloud/etc operators care about. The amount of data they need to handle is growing exponentially, and upgrading to new, denser HDDs is cheaper than building new data centers.
Doug_S - Monday, January 30, 2023 - link
They are limited by power long before they are limited by the difference between HDD and SSD.
Marko123 - Monday, January 30, 2023 - link
That's true but it's a distant third to $/GB and power consumption (i.e. TCO).
Reflex - Monday, January 30, 2023 - link
Right now there is no path for NAND to catch up to HDD's in the same physical space or price. That said, there are other competitors for solid state storage, as well as other takes on flash memory that could pose a challenge. I was hoping Optane would be it, but unfortunately it's just not able to be produced cheaply enough at high density yet.
Samus - Sunday, January 29, 2023 - link
Von,
Hard drives have numerous advantages over solid state drives. The storage industry isn't purely focused on 'speed' as a performance metric.
Hard drives have superior archival performance as noted by Stevo. NAND will fade long before hard disks will - I have magnetic disks that are decades old with no data retention problem, and many have no bad sectors.
Hard drives have superior reliability in heavy non-WORM applications. SSD's have a finite lifetime of writes - HDD's don't. This makes hard disks the continued standard for databases and backup\archive.
As you mentioned, hard disks are less expensive per TB, but keep in mind most non-consumer (think enterprise\data center) clients are less sensitive to cost and more sensitive to reliability and low maintenance.
Even if solid state cost per TB were identical to hard drives, you would still see the latter continue being used in a variety of large-scale applications. Magnetic storage isn't going anywhere until NAND has similar data retention and write amplification properties.
As far as helium drives, there are specific metrics for proper storage that, if not followed, will cause the seal to fail. It's important they be kept above 30C - even when cold. Temperature fluctuations are the primary cause of HelioSeal failures in WDC\Hitachi drives. Seagate has a different seal technology that seems to be more resilient during cold storage.
qap - Sunday, January 29, 2023 - link
You can't compare decades-old HDDs with current ones. The physical size of a bit has decreased significantly, and that leads to a shorter time it stays readable.
Hard drives have limited write endurance as well. It mostly relates to spin-up/spin-down cycles, but recently HDD manufacturers started listing a DWPD metric too, and it's not great.
As for large scale storage - it's not as easy as you think it is. We have large storage (in PBs) and it can actually be cheaper to store it using SSDs. We can use fewer of them because we can use erasure codes instead of mirroring in our object storage, thanks to much faster access times and higher IOPS.
And one thing about cold storage - our experience is that you really don't want to use HDDs for that. When you turn them off, there's a good chance that some of them will not spin up when you turn the server on in a month (or even in one day if you have to move servers elsewhere). SSDs don't have this problem.
What will eventually kill HDDs is their speed. Or, to be more precise, their speed-to-capacity ratio. Even an already-existing 20TB HDD takes several days to recover (failures are inevitable in large scale storage). That's a f***ing huge window for another drive, two or even more to die. And unfortunately it's intrinsic to HDDs that their speed grows with the square root of their capacity (it doesn't grow with the number of tracks, only with the density of data per track). Replacing a failed 30+TB HDD would really take a week. Even two mirrors are not enough in that scenario.
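A rough sketch of that rebuild-window math in Python. The throughput figures are assumed round numbers, not vendor specs; real arrays rebuild well below a drive's peak sequential rate because the disks are also serving production I/O.

def rebuild_days(capacity_tb, effective_mb_per_s):
    # Time to stream an entire drive's worth of data at a given rate.
    seconds = (capacity_tb * 1e12) / (effective_mb_per_s * 1e6)
    return seconds / 86400

for cap in (20, 30, 50):
    full = rebuild_days(cap, 250)   # assumed peak sequential rate
    real = rebuild_days(cap, 50)    # assumed effective rate under production load
    print(f"{cap} TB: {full:.1f} days at 250 MB/s, {real:.1f} days at 50 MB/s")
# 20 TB: ~0.9 vs ~4.6 days; 30 TB: ~1.4 vs ~6.9 days -- the week-long window described above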
Marko123 - Monday, January 30, 2023 - link
PBs is not really large storage, compared to CSP deployments where 100K+ drives will be aggregated in a large data center. And these cloud HDDs run flat out 24/7 for their life and never get rebuilt - large ECC codes are used. Rebuild doesn't make sense for the reasons you say - that is why storage arrays have mostly moved to SSDs. The increase in storage device cost/GB (~7x) isn't material relative to other costs and the benefits. HDDs will not die, there is too big a cost gap to solid state. In essence, a HDD uses a very expensive small solid state device (the head) and multiplies up the storage capacity by spinning cheap media under it. It's also not obvious to me that these HAMR drives will ever be sold to non-cloud customers.
UltraTech79 - Monday, January 30, 2023 - link
"Hard drives have superior archival performance"WRONG. People like you pulling data out of your ass need to stop posting.
https://www.backblaze.com/blog/are-ssds-really-mor...
Even in the worst case for SSDs, they are roughly equivalent.
Maltz - Monday, January 30, 2023 - link
That article is about failure rates of drives in active use. It says nothing at all about long-term data integrity in offline ("archival") storage.
I think it's a pretty commonly accepted thing that NAND voltage levels fade over time when the drive isn't powered and able to refresh them, especially in higher-density technologies like TLC, QLC and up. Samsung actually had an SSD firmware issue several years ago that illustrated this, causing data corruption in a matter of a year or so because it failed to refresh faded cells often enough. Magnetic HDD storage will also eventually fade, of course, but my understanding is that it takes a LOT longer. I suspect that's still true even in newer, denser HDD media, even if the newer drives might not be as robust as older drives in that respect.
Reflex - Monday, January 30, 2023 - link
The number of people who misconstrue Backblaze data to make a case for their favorite brand/medium/method never ceases to amaze me. Like you said, they were measuring something entirely different than what was being discussed.
Marko123 - Monday, January 30, 2023 - link
Completely irrelevant statistics at large scale. HDDs are being deployed now in large CSPs to replace tape storage. Archival is the cheapest of all media. SSDs make no sense there.
Samus - Monday, January 30, 2023 - link
I love when people link to Backblaze, who, for a solid decade, have continued releasing non-credible, extremely biased reports on their hard disk data using flawed testing methodology and irrelevant metrics. They are the backyard mechanic of data storage and anybody who uses them is seriously clueless. They literally shucked drives for their redneck drive pods. Real data centers don't use SSD's for storage or archive. They find their application in caching and OS duties.
Arnham - Friday, March 10, 2023 - link
"Real data centers don't use SSD's for storage or archive"guess that makes backblaze a real data center then, because their SSDs are only used for boot/OS drives...
Can you elaborate on your criticisms of backblaze, what exactly is your issue with their methodology and bias?
They shucked drives during the great hard drive shortage when all the HDD factories got flooded, way back in 2011. External hard drives were still widely available, but internal ones were not. I myself shucked a few around then due to the shortage.
TheinsanegamerN - Tuesday, January 31, 2023 - link
Maybe cool down your whataboutism until you learn what reading comprehension is?
Reflex - Monday, January 30, 2023 - link
There is no practical limit on 'program/erase' for magnetic disks; that is why it's not mentioned in the press releases or the technical literature. HDD's are measured by MTBF, a measure of estimated reliability; you can find more information here: https://en.wikipedia.org/wiki/Mean_time_between_fa...
On magnetic disks there is no write/erase cycle because old data is not 'erased', it is simply overwritten by new data, and neither process produces any wear on the platter since it's done magnetically. The failure points are typically in the motor, or arise when vibration dampening is not adequate. In a hypothetically perfect environment they'd last longer than a human lifetime.
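For anyone who wants to turn a datasheet MTBF into something more intuitive, here is a short sketch using the standard constant-failure-rate approximation; the 2.5-million-hour figure is just an example value, not a spec for these drives.

import math

def annualized_failure_rate(mtbf_hours, hours_per_year=8766):
    # Expected fraction of drives failing per year under an exponential failure model.
    return 1 - math.exp(-hours_per_year / mtbf_hours)

print(f"AFR at 2.5M hours MTBF: {annualized_failure_rate(2.5e6) * 100:.2f}%")  # ~0.35% per year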
Marko123 - Monday, January 30, 2023 - link
HAMR drives will likely have a write endurance, unlike PMR. It won't be an SSD-type endurance where the location matters (media wear-out), but rather the total write-head power-on hours.
Reflex - Tuesday, January 31, 2023 - link
I mean, sure, in the sense that mechanical parts wear down over time, but it's not program/erase cycles, and we already have something similar in MTBF for measuring the motors.
The Von Matrices - Tuesday, January 31, 2023 - link
You must not be familiar with HAMR. The heating/cooling cycling of HAMR causes wear on the media, unlike conventional hard drives.
Reflex - Wednesday, February 1, 2023 - link
I am familiar with HAMR. So far I have seen no information indicating that the heating/cooling cycle wears out faster than what MTBF allows, which makes such a metric useless, since the one that matters is whichever fails first.
Secondly, if you understood HAMR you wouldn't say "program/erase cycles", since that is literally NAND terminology that has nothing to do with how magnetic media works and as such will never be a metric used.
flyingpants265 - Monday, January 30, 2023 - link
Drives can hit 50TB right now without any extra technology. They'd just need to be bigger. Also, they can turn at 1000rpm for all I care. You could even engineer them to fit together like Lego blocks so there is no wasted space; this would work if they're off most of the time.
Ken_g6 - Friday, January 27, 2023 - link
Do SMR and HAMR go together for even larger storage drives?
nandnandnand - Friday, January 27, 2023 - link
https://ieeexplore.ieee.org/document/6947937
https://www.anandtech.com/show/14077/toshiba-hdd-r...
It's possible. I don't know if Seagate or WD plan to though.
name99 - Saturday, January 28, 2023 - link
Remember, from a 10,000 ft vantage point SMR is just a coding scheme, i.e. a way to write data given certain constraints. This is no different from cellphone coding (write data given the possibility of a bit error from noise) or flash coding (write data given the possibility of bad cells).
The particular constraint of hard drives is that the storage medium (magnetic grains) is distributed at random sizes, orientations, and locations, so you may have spots where these all come together such that the desired bit is recorded weakly or not at all. To deal with this, like flash, like cellular, we design a code that includes some redundant data, and whose precise design is based on the "noise structure" of the medium, i.e. the disk surface.
Now, what is interesting about disks compared to flash or cellular is that the disk surface is 2D. So while traditional coding is one-dimensional, you can gain some additional efficiency by making the code two-dimensional. SMR is a particular version of this with an additional constraint that the write head is wider than the read head. But even in the absence of that constraint, what I've said is generically true. The only way around it is to remove the noisiness, the randomness, in the storage medium. This is in principle possible with so-called patterned media, but it's unclear that the economics will ever go down that road.
So the way I see it, if the only real use case for hard drives is ever denser storage, then 2D coding has to be part of that future. And 2D coding comes with the constraints/side effects of SMR, even if it's not in the precise form of SMR. However, for most use cases the worst side effects can probably be hidden with a flash cache.
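To illustrate the "add redundant data" idea in the most stripped-down way possible, here is a toy single-parity example in Python. It shows only the principle; the codes actually used on disk surfaces (LDPC, track-level 2D codes) are far more sophisticated, and nothing here is specific to SMR.

def encode(block):
    # Append one XOR parity byte so any single lost byte can be recovered.
    parity = 0
    for b in block:
        parity ^= b
    return list(block) + [parity]

def recover(codeword, erased_index):
    # XOR of all surviving bytes reconstructs the missing one.
    value = 0
    for i, b in enumerate(codeword):
        if i != erased_index:
            value ^= b
    return value

codeword = encode(b"HAMR")
lost = codeword[2]
codeword[2] = None          # pretend this byte landed on a weak patch of grains
assert recover(codeword, 2) == lost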
ballsystemlord - Friday, January 27, 2023 - link
@Anton, I think you typed 22TB where you meant to type 24TB: "Furthermore, by using shingled recording, these drives can have their capacity stretched to 22 TB."
xane - Friday, January 27, 2023 - link
What about the elephant in the room? Namely, the speed. Are these SATA drives? It'll take a week to copy lol
nandnandnand - Saturday, January 28, 2023 - link
https://www.pcmag.com/news/seagate-creates-an-nvme...
Seagate did demo an NVMe HDD in 2021. Multi-actuator technology could push speeds past the SATA 3.0 limit. It will still be slow.
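A quick sanity check on the SATA ceiling; the per-actuator throughput below is an assumed ballpark for a modern 7,200 RPM drive, not a quoted spec.

SATA3_USABLE_MBPS = 6000 / 10        # 6 Gb/s line rate, 10 bits per byte on the wire -> ~600 MB/s
per_actuator_mbps = 270              # assumed sustained outer-track rate per actuator
dual_actuator_mbps = 2 * per_actuator_mbps

print(f"SATA 3.0 usable: ~{SATA3_USABLE_MBPS:.0f} MB/s")
print(f"Dual actuator:   ~{dual_actuator_mbps} MB/s")  # ~540 MB/s, already brushing the SATA limit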
abufrejoval - Sunday, January 29, 2023 - link
That's fine, HDDs are the new tape: You don't copy drives, you archive files you believe you'll need rarely, if ever.
But yes, I am quite glad NVMe is so affordable these days, because I do remember trying to build for bandwidth with wide RAIDs.
Silver5urfer - Tuesday, January 31, 2023 - link
Oh yea the MUH PCIe speeds, right. They max out at a garbage 4TB, and that is very expensive TLC with 5000TBW from the Seagate FireCuda; the rest are junk. Also, sequential on NVMe is so pathetic. The only solution was Optane, which is a real breakthrough in technology with sky-high endurance that not even SLC can dream of, and insanely consistent in sequential or any other type of workload. Too bad Intel got bloated with other junk and killed Optane.
Oh please, buy that PCIe 5.0 SSD with an active tiny fan on it for those muh 10gbps speeds. While I just stick with my reliable WD Gold with OptiNAND, which works without any issues and does not blow up in flames.
The alternative is SAS on consumer mobos; that would let SAS multi-actuator drives perform at high levels, plus no need for 6x ports or scarce 8x SATA - just buy a conversion adapter and done. Or even the U.3 standard, which is backwards compatible with SATA, NVMe, PCIe, U.2 - anything. Enterprise has solid tech which will never come to consumers because of greed and idiotic consumer sheep.
TheinsanegamerN - Tuesday, January 31, 2023 - link
TheinsanegamerN - Tuesday, January 31, 2023 - link
8TB NVMe drives exist. TLC even!
Silver5urfer - Wednesday, February 1, 2023 - link
Sabrent 8TB TLC trash with the same endurance as the FireCuda 4TB? Thanks but no thanks for that mess, which costs $1200.
The Von Matrices - Tuesday, January 31, 2023 - link
SAS for multi-actuator drives is really just a hack. SAS was originally designed to have two links per drive for redundancy, and then the dual-actuator drives repurposed that dual-link design to have a single link for each actuator with no redundancy. There's no technical reason why there can't be two SATA or PCIe links per drive other than the lack of a standard connector.
TheinsanegamerN - Tuesday, January 31, 2023 - link
These are for servers, so 99.9999999% of the time your LAN connection will be the limitation. Even with multiple 10Gbps in aggregate you'll saturate the network long before you saturate the hard drives.
meacupla - Saturday, January 28, 2023 - link
I am somewhat interested in seeing what HAMR can do for the 2.5in segment. The maximum for 2.5in HDDs is 5TB, and it's in that taller 15mm height configuration. An SSD offers 8TB in the normal 2.5in 7mm height drive, but the cost of that is astronomical in comparison.
nandnandnand - Saturday, January 28, 2023 - link
https://blocksandfiles.com/2023/01/10/fourth-2022-...
https://www.techspot.com/news/97294-shipments-hdds...
I'm surprised to see so many 2.5" HDDs being shipped. It should be some mix of cheap laptops and portable drives.
cbm80 - Saturday, January 28, 2023 - link
also game consoles and DVRs
cbm80 - Saturday, January 28, 2023 - link
Will probably never happen. The HD makers haven't done anything with 2.5" since they hit the 1.0TB/platter level in 2016.
abufrejoval - Sunday, January 29, 2023 - link
The last thing they did was the main killer in my mind: switching to shingled recording on nearly all 2.5" units.
Without that I might have kept them around a little longer, because I still run RAID6 and don't need that much capacity.
But at these sizes anything beyond a mirror set is just way too much capacity, even if I preferred the bandwidth of RAIDs for shorter backups on my 10Gbit network.
Railgun - Sunday, January 29, 2023 - link
"We are meeting or exceeding all product…reliability metrics” said no one outside of Seagate ever.UltraTech79 - Monday, January 30, 2023 - link
God these people are such LIARS.
We might see 30 TB in a few years. We will *NEVER SEE* 50TB.
50TB would be in roughly 6 years. You know what else will be going on in 6 years? 32 and 64 TB SSDs.
"Oh but those will be so costly!" I can hear you say. Yes. They will be the newest, highest capacity drives. Of course they will cost a lot. However, and this is important, they will not cost more than a 50TB HDD.
The industry KNOWS this is coming. Yet they keep trying to bullshit us with "50 TB is coming soon!" as if we are too stupid to see the writing on the wall; HDD tech has about 5 years left in it before no one outside of ridiculously good deals invests in it. So they continue to try and semi-con us with social engineering so people expect this to come. And in three years when it doesn't and they are still promising it at 32TB, people will still believe it, and SSDs will be at 16TB for the price of what a 4TB SSD is right now.
The only reason HDD is still around is upfront cost. Because it is worse at literally everything else, and that changes in less than 6 years.
Marko123 - Monday, January 30, 2023 - link
You are grossly overestimating the future cost scaling of NAND flash. If you look up some historical data (WD are unbiased so are a good source) you'll see the $/GB gap has not really closed. SSDs got a bump improvement moving to 3D. HDDs will get a bump improvement moving to HAMR.
Samus - Monday, January 30, 2023 - link
Ultratech, you need to school yourself on some tech. Or at least tour a data center. Ask Cogent or AWS what their storage deployment strategy is now and for the near term: hard disks reign supreme for a variety of reasons other than cost.
Silver5urfer - Tuesday, January 31, 2023 - link
They are coming. WD is also innovating with their plans, esp. their OptiNAND technology, which started with high-density CMR. They already debuted the Dual Actuator in the Ultrastar series - it's not your consumer class but pure Enterprise.
There's high demand for storage even if the HDD adoption rate is slowing and falling across all sectors. Archival is very important, and Optane is dead while Enterprise TLC is insanely low density vs HDD with garbage endurance.
And also, PCIe 4.0 has been here since 2020 in the consumer space, and the storage side is literally pathetic: 2TB max for the majority, 4TB only exists in a niche, and 8TB only in insanely expensive BS TLC tech with horrendous TBW endurance; only the 4TB FireCuda is worth purchasing. And NVMe SAS / U.2 Enterprise SSDs are not even high density - they have piss-poor capacity.
PCIe 5.0 NVMe SSDs are not even here, not on the Enterprise side, not on the Consumer side. Only select Enterprise may have it already, and on the Client / Consumer side 2TB is going to be a lot of cash for pathetic heat and low endurance. A WD Gold 20TB can be bought for $400.
In the SATA SSD space, a pathetic 4TB is the last option. Samsung will soon retire the 860 Evo and kill high-capacity NAND. Ever seen how it looks inside? 90% is free space, lol, a small chip. They could have made it 50TB easily but they did not, because of greed and stupid consumers.
HDDs are never going away - if that were not the case, WD would not have put a ton of money into EAMR technologies and their new OptiNAND + UltraSMR with extremely high densities, and the same goes for Toshiba with MAMR and Seagate with HAMR; all are targeting 50TB. It will happen. The major issue is that it should be available for Clients too. I hope it is. Tired of buying multiple drives for those 4K REMUXes and tons of old shows, Scene ISO copies, etc.
Squeaky'21 - Monday, March 6, 2023 - link
Good words Silver5urfer! That UltraStar of which you speak is not 100% only for enterprise, ya know! I have 3 of them (14TBs and an 18TB) in my home office PC and I can tell you they are fully compatible with a regular PC and work a treat! I do 4K vid editing so need all the space I can get. I now exclusively buy the GOLD versions which are the same as GOLD except they don't have a few of the 'settings' that will enable them to work with more compatibility on enterprise systems. UltraStar and GOLD are 99.9% identical drives so you can't go wrong with either, and they are the very best WD has to offer. I wouldn't buy any other drive, only WD, only GOLD.
Squeaky'21 - Monday, March 6, 2023 - link
Sorry, in my adjacent comment I meant to say UltraStar, not GOLD, here . . . "I now exclusively buy the GOLD versions which are the same as 'UltraStar' except they don't have a few of the 'settings' that will enable them to work with more compatibility on enterprise systems."
jerem43 - Sunday, February 5, 2023 - link
Somewhat relevant: I picked up two 1TB WD Raptor 10K HDDs 22 years ago and have been using them for long-term storage ever since. The PC they're in has been running 24/7 most of the time (not accounting for upgrades and such), and these beasties have yet to show a bad sector. The first (consumer) SSD I bought, a 128GB WD Blue, failed recently after only a few years of usage.
This SSD was the boot drive, and I had offloaded the swap and temp files to a 3TB HDD to extend its life. Even then, the SSD did fail.
Squeaky'21 - Saturday, March 4, 2023 - link
YEP! I concur Jerem43, those WD RAPTOR 10k rpm drives were the bomb back in the day!! I only buy WD and from here on they'll be WD GOLD and/or Ultrastar (same drive basically). Just wonderful and consummately reliable hard drives.
yeeeeman - Monday, February 6, 2023 - link
So they milked the heck out of SMR, giving users 2TB each year. And now they finally move to HAMR, which gives 10TB each year. Good.
albatrozz - Tuesday, February 7, 2023 - link
I'd love to own a 50TB drive, but unfortunately Seagate's reputation for reliability is... less than stellar.
s.yu - Wednesday, February 8, 2023 - link
I concur... I no longer consider anything Seagate after a drive exhibited some data corruption soon after data was copied over from an old drive.
Squeaky'21 - Saturday, March 4, 2023 - link
Like Jerem43 4 comments above, I have a 1TB WD Raptor 10,000rpm HDD and in two decades it has never missed a beat. I also have 2x 4TB WD regular drives from that same period and they've been 100% reliable too. In fact, I was so impressed with them that 3 years ago I invested in 2x 14TB Ultrastar drives, and in the last 2 years an 18TB GOLD and a 22TB Ultrastar. I do 4K vid editing so need the space. I have full confidence in WD drives, especially these new Ultrastar and GOLD (basically the same drive). In fact, the new ones at 7,200 rpm wipe the floor with the Raptor 10,000rpm, which was the superstar of HDDs back in the 2000s. I have never had a WD failure in my last 30 years but have had 4 Seagates die at the most inopportune times. Warranty was a hassle with 3 of them thanks to Seagate Australia, and the other died just out of warranty. I sold my soul to WD back then and have been handsomely rewarded for my faith. These GOLD and Ultrastar drives truly are the best HDDs money can buy currently. Yes, my C:\ is a 2TB Samsung 980 NVMe; the OS drive is the only exception I make in the HDD domain. Oh, and don't talk to me about SMR - I had a nasty laptop experience 4 years ago with Seagate I'd prefer not to discuss, as swearing isn't kosher on this site ;-) . . . CMR all the way with me now; as far as I'm concerned, CMR is the ONLY HDD tech that should be sold to consumers at retail level.