56 Comments
milli - Monday, September 29, 2014 - link
The MX100 already had terrible service times, and the M600 is even worse. I mean, if it's even worse than the showing the MX100 delivered here (http://techreport.com/r.x/adata-sp610/db2-100-writ... ), then forget about it.
milli - Monday, September 29, 2014 - link
http://techreport.com/r.x/adata-sp610/db2-100-writ...
Link got messed up.
BedfordTim - Monday, September 29, 2014 - link
If service times are such an issue, why did Tech Report give the MX100 an Editor's Choice award?
milli - Monday, September 29, 2014 - link
Because everybody is a sucker for low prices.
menting - Monday, September 29, 2014 - link
I guess you go out and buy the fastest, regardless of price, then?
milli - Monday, September 29, 2014 - link
Obviously not. I'm just giving one of the main reasons why the MX100 wins so many awards.
Samus - Monday, September 29, 2014 - link
It's still a better drive than competing products in its price segment. The only other drive that comes close is the 840 Evo (which apparently has some huge performance bugs on static data - and support is terrible... the bug has existed for over a year).

You could consider spending more money on an Intel drive or something from SanDisk, but most consumers need something "reliable enough", and price is always the driving factor in consumer purchases. If that weren't true, you wouldn't see so many Chevy Cobalts and Acer PCs.
The irony is, for price and reliability, the best route is a used Intel SSD 320 (or even an X25-M) off eBay for $60. They never fail and have a 15-year lifespan under typical consumer workloads. They're still SATA 3Gbps, but many people won't notice the difference if they're coming from a hard disk. Considering the write performance of many cheap SSDs anyway (such as the M500), the performance of a four-year-old Intel SSD might even still be superior.
Cellar Door - Monday, September 29, 2014 - link
My X25-M failed after 2 years of use, so please don't use the word 'never'. Intel sent me a 320 as a replacement under the 3-year warranty. Performance-wise it's ancient, but still an SSD.
Like many SSDs, they are prone to failure from overfapping.
Lerianis - Friday, October 3, 2014 - link
Eh? Overwriting, I think you mean. That said, all of these drives should be able to handle at least 20GB of writes per day for years without issues.
makerofthegames - Monday, September 29, 2014 - link
If the cost is low enough, they might be able to compete with hard drives. A two-disk RAID0 of these 1TB drives could replace my 2TB WD Black, which I store my game library on. And even a slow drive like this is a million times faster than any hard drive.

That said, it's still a $900 set of SSDs fighting with a $200 hard drive. What we really need is a $200 1TB SSD, even a horribly slow one (is it possible to pack four bits into one cell? Like a QLC or something? That might be the way to do it). That would be able to compete not just in the performance sector, but in the bulk storage arena.
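(As a quick aside on the four-bits-per-cell question: QLC is exactly that, and each extra bit per cell doubles the number of charge levels the flash must distinguish, which is the main reason endurance drops so sharply. A tiny illustrative calculation, with no endurance figures assumed since those vary by NAND process:)

for name, bits in [("SLC", 1), ("MLC", 2), ("TLC", 3), ("QLC", 4)]:
    # each cell must resolve 2**bits distinct charge levels
    print(f"{name}: {bits} bit(s)/cell -> {2 ** bits} charge levels")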
For people like me, capacity also affects performance, because it means I can install more apps/games to that drive instead of the slow spinning rust. I actually bought a very low-performing Mushkin 180GB SSD for my desktop, because it was the same price as the 120GB drives everyone else was slinging. That meant I could fit more games onto it, even the big ones like Skyrim.
sirius3100 - Monday, September 29, 2014 - link
AFAIK QLC has been used in some USB sticks in the past, but for SSDs the number of write cycles QLC NAND would be able to endure might be too low.
bernstein - Monday, September 29, 2014 - link
You are just wrong; it's an order of magnitude BETTER than an M500 and still 5x better than the MX100: http://techreport.com/r.x/micron-m600/db2-100-writ...
milli - Monday, September 29, 2014 - link
That review wasn't up yet when I posted my comment. But you can add that it's still 340x worse than the ARC 100 in that same test (which is also a budget drive). It's worse in the read test than the MX100 and 5x worse than the ARC.
So yeah, service times are just terrible on Crucial's 256GB drives (all models).
nirwander - Monday, September 29, 2014 - link
Obviously, Dynamic Write Acceleration is not meant to be benchmarked. And a "client workload" is not about constant high pressure on the SSD, so the drive is basically OK.
kmmatney - Monday, September 29, 2014 - link
Agreed. It seems like the whole premise of Dynamic Write Acceleration requires idle time to move data off the SLC NAND, but benchmarking doesn't allow that to happen (and isn't like real-life client usage). Also, if you just compare the MX100 256GB vs the M600 256GB, the newer SSD does have better write speeds and does better at everything except The Destroyer test.
hojnikb - Monday, September 29, 2014 - link
I wonder if Crucial is gonna bring DWA to their consumer line as well...
Samus - Monday, September 29, 2014 - link
The M500 sure could have used it back in the day. The 120GB model had appalling write performance.
PrivacyIsNotCriminal - Monday, September 29, 2014 - link
Appreciate the brief write-up on encryption, and I understand this may be a technically challenging area to detail. But in a post-Snowden world with increasingly complex malware and an emphasis on data mining, we should all be pressing for stronger protective technologies.

Additional article depth on encryption technologies, certification authorities and related technical metrics would be appreciated by many of us who are not IT professionals but are concerned about protecting our personal LANs and links to our wireless/cellular devices.
Contrary to the government's and the RIAA's most recent assertions, a desire for privacy and freedom from warrantless searches should be a fundamental American value.
Thanks for the in-depth technical reviews, and I hope Anand is doing well.
kaelynthedove78 - Monday, September 29, 2014 - link
This explains the data loss issues we've had with the MX100 series, both under Windows 7 and FreeNAS.

With all C-states enabled (the default and recommended configuration, which AnandTech doesn't use since some highly advertised drives are badly designed and suffer up to a 40% IOPS drop), the drives don't properly handle suspending and resuming the system.
Under FreeNAS, the zpool would slowly accumulate corruption, and during the next scrub the whole zpool would get trashed; the only option was to restore all data from backup.
Under Windows, strange errors, like being unable to properly recognise USB devices or install Windows updates, would appear little by little after every suspend/resume cycle until the machine would refuse to boot up at all.
A workaround is to either disable all power-saving C-states or disable HIPM and DIPM on *all* disk controllers, even those which don't have Micron drives connected. Or to never suspend/resume.
We decided to return all our Micron drives, about 350 total, and get Intel SSDs instead. They're not cheap and not the fastest, but at least I don't have to keep re-imaging systems every week.
For information on how to enable/disable HIPM and DIPM under Windows 7 please see:
www.sevenforums.com/tutorials/177819-ahci-link-power-management-enable-hipm-dipm.html
nirwander - Monday, September 29, 2014 - link
First the 840 EVO, then the MX100...
hbarnwheeler - Monday, September 29, 2014 - link
What explains the data loss? Are you suggesting that DRAM was not being flushed during system suspension?
milli - Monday, September 29, 2014 - link
Damn, I have four systems with suspend/resume issues, especially when the system goes into S5 suspend. All four have MX100 drives. Two identical systems with another brand's SSD have no such issues. I'm not seeing corruption, but sometimes when the computer is resumed from S5 it can't find the drive.
Bought those drives based on Anand's recommendation. Thx
Thank you for the link. It will be helpful. Can you confirm that modifying HIPM/DIPM helps?
metayoshi - Monday, September 29, 2014 - link
I wouldn't be so sure as to blame Micron for losing data or getting errors when it comes to Windows suspend and resume. I have had WD, Seagate, and the acquired Fujitsu, Hitachi, and Maxtor HDDs, and Intel, Crucial, and Corsair SSDs, and all of them have had problems when it came to Windows suspend and resume. I never ever use Windows' own Sleep and Hibernate states anymore because of this problem. These power states from Windows are not even supposed to be considered an unexpected power loss by most of these storage industry manufacturers because it is required by the specs for Windows to gracefully flush any cached data and power down the drive before yanking the power to it. As far as I know, all of these drives can handle a graceful power down just fine.

Unexpected power loss should happen if and only if the user either physically holds down the power of their PC to forcefully shut down the system, or the power cable is physically disconnected from the drive during operation. If an error is happening during suspend and resume, there's usually something wrong that the OS is doing, or something wrong that the OEM system is doing because if there is no graceful power cycle to the storage during suspend and resume, that's the OS's or the OEM's or the BIOS's fault.
BedfordTim - Monday, September 29, 2014 - link
I always disable sleep for that very reason. With my ThinkPad and Vista it never woke, and while things have gotten better, it still happened with Windows 7 on every machine I have tried it on.
Lerianis - Friday, October 3, 2014 - link
Oh really? On every single computer I have had, sleep worked without issues. I've had Gateways, HPs, Acers, Toshibas, etc. None of them had problems with sleeping properly in Windows Vista through 8.1. I'm thinking that you are exaggerating or just outright falsifying what actually was going on.
leexgx - Saturday, November 1, 2014 - link
I agree. I always sleep my computers and laptops; the only OS that had a problem with it was Vista with an Nvidia card plus a Creative card (which was not completely Vista's fault). On Windows 7 I never had an issue with sleep.

I'm guessing I haven't had the issue with the two 512GB MX100s I have (X58 i7-920 systems) because the motherboards don't support any of the advanced power management features. (Well, I did have to update the firmware in the other system as it was failing after 5 minutes, but that system has always been odd; the firmware update, or something else I did, did seem to resolve it.)
leexgx - Saturday, November 1, 2014 - link
Probably bad luck with drivers and sleep (drivers are what normally break sleep; the last time I had BSOD issues it was related to sleep and Creative sound drivers).

Not that I need to sleep this system, as it boots up in under 20 seconds (but Chrome is completely CPU-bound when it reopens 40 tabs).
Lerianis - Friday, October 3, 2014 - link
Is this issue present with Windows 8, or did they fix it?
shodanshok - Monday, September 29, 2014 - link
Hi Kristian, it seems that Dynamic Write Acceleration is disabled on the 512GB and 1TB disks. From Tech Report:
"Surprisingly, the 1TB and 512GB variants don't have Dynamic Write Acceleration. Those drives are already fast enough for the controller, according to Micron, and the math works out. The Marvell chip can address up to four chips on each of its eight memory channels, making 32-die configurations ideal for peak performance. At 16GB per die, the cut-off point is 512GB."
This is probably the reason the 512GB unit has only 50% more endurance than the 256GB one: DWA on the smaller drive can absorb many writes and flush them in sequential form to the MLC array, saving some flash wear (on average).
Regards.
Kristian Vättö - Monday, September 29, 2014 - link
I thought I had that there, but looks like I forgot to add it in a hurry. Anyway, I've added it now :)
MarcHFR - Tuesday, September 30, 2014 - link
Shodanshok,

In fact DWA is not better for endurance; it's worse.
- Writing random writes in sequential form is already done on all SSDs by write combining.
- DWA increases write amplification, since the data is first written in "SLC" mode and then rewritten in "MLC" mode.
For 2 bits of data:
- 2 cells are used in SLC mode
- then 1 cell is used in MLC mode
vs
- 1 cell is used in MLC mode w/o DWA
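A quick tally of that model (a sketch only: it assumes every host write is staged exactly once in SLC mode before being folded into MLC, which is the scenario described above, not measured controller behaviour):

# Cell program operations needed to store 2 bits of host data under the model above.
host_bits = 2
slc_bits_per_cell = 1
mlc_bits_per_cell = 2

programs_with_dwa = host_bits / slc_bits_per_cell + host_bits / mlc_bits_per_cell  # 2 SLC + 1 MLC
programs_without_dwa = host_bits / mlc_bits_per_cell                               # 1 MLC
print(programs_with_dwa, programs_without_dwa)  # 3.0 vs 1.0 cell programs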
Since write speed is rarely a problem in daily usage, and since there is a trade-off, I don't understand the positive reception for TurboWrite, nCache 2, Dynamic Write Acceleration, etc...
shodanshok - Tuesday, September 30, 2014 - link
Hi, it really depends on how the write acceleration is implemented. While it is true that badly designed WA caches can have a bad effect on flash endurance, a well-designed one (and under a favorable workload) can lessen the load on the flash as a whole.
Micron is not discussing their pSLC implementation in detail, so let's talk about SanDisk's nCache, which is better understood at the moment.
nCache works by reserving a fixed amount of NAND for pSLC. This pSLC slice, while built on top of MLC cells, is good for, say, 10x the cycles of standard MLC (so ~30,000 cycles). The reason is simple: by using them as SLC, you have a much higher margin for voltage drop.
Now, let's follow a write down to the flash. When a write arrives at the disk, the controller places the new data in the pSLC array. After that we have two possibilities:
1. No new write for the same LBA arrives in a short time, so the pSLC array is flushed to the main MLC portion. Total writes with WA: 2 (1 pSLC / 1 MLC); without WA: 1 (MLC).
2. A new write is recorded for the same LBA _before_ the pSLC array is flushed, so the new write overwrites the data stored in the pSLC portion. After some idle time, the pSLC array is flushed to the MLC one. Total writes with WA: 3 (2 pSLC / 1 MLC); without WA: 2 (MLC).
In the rewrite scenario (no. 2) the MLC portion sees only a single write vs the two MLC writes of the no-WA drive. While it is true that the pSLC portion sustains increased stress, its endurance is much higher than the main MLC array's, so it is not a problem if its cycles are "eaten" faster. On the other hand, the MLC array is much more prone to wear, so any decrease in writes is very welcome.
This rewrite behavior is the exact reason behind SanDisk's quoted write amplification number, which is only 0.8: without write acceleration, a write amplification of less than 1.0 can only be achieved using some compression/deduplication scheme.
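To make the two cases concrete, here is a toy per-LBA model of the same reasoning (a sketch only; real caches coalesce at page/block granularity and flush on their own schedule):

def nand_programs(writes_to_same_lba: int, pslc_cache: bool) -> dict:
    """Count pSLC and MLC program operations for repeated writes to one LBA."""
    if not pslc_cache:
        # every host write goes straight to MLC
        return {"pSLC": 0, "MLC": writes_to_same_lba}
    # every host write lands in pSLC; only the final copy is flushed to MLC
    return {"pSLC": writes_to_same_lba, "MLC": 1}

for n in (1, 2, 5):
    print(n, "host write(s):", nand_programs(n, True), "vs", nand_programs(n, False))

For a single write the cache only adds work (2 programs vs 1), but as soon as the same LBA is rewritten before the flush, the MLC array absorbs fewer programs, which is where the sub-1.0 write amplification claim comes from.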
Regards.
MarcHFR - Tuesday, September 30, 2014 - link
As you said, it really depends on the workload for nCache 2.0: write vs rewrite.

But another point of view is that, for example, a 120GB Ultra II with 5GB of nCache 2.0 could be a 135GB Ultra II without NAND reserved for nCache 2.0.
shodanshok - Tuesday, September 30, 2014 - link
True, but rewriting is quite pervasive.

For example, any modern journaled filesystem will constantly rewrite an on-disk circular buffer.
Databases use a similar concept (double-write) with another on-disk circular buffer.
The swapfile is constantly rewritten
...
Anyway, it surely remains a matter of favorable vs unfavorable workload.
Regards.
Cerb - Tuesday, September 30, 2014 - link
Only some FSes, usually with non-default settings, will double-write any file data, though. What most do is some form of metadata journaling, where new writes preferably go into free space (one more reason not to fill your drives all the way up!), and the journal logs the writing of the new state. But the data itself is not in the journal. EXT3/4 can be set to write data twice, but don't by default. NTFS, JFS, and XFS, among others, simply don't have such a feature at all. So the additional writing is pretty minimal, being just metadata. You're not likely to be writing GBs/day to the FS journal.

Databases generally should write everything twice, though, so that they are never in an unrecoverable state if the hardware is functioning correctly.
AnnonymousCoward - Monday, September 29, 2014 - link
I have yet to get an answer to this question: what's the point of doing purely synthetic and relative-performance tests, and how does that tell the reader the tangible difference between these drives?

You don't test video cards in terms of IOPS or how fast they pound through a made-up suite. You test what matters: fps.
You also test what matters for CPUs: encoding time, gaming fps, or CAD filter time.
With phones, you test actual battery time or actual page loading time.
With SSDs, why would you not test things like how fast Windows loads, program load time, and time to transfer files? That matters more than any of the current tests! Where am I going wrong, Kristian?
Kristian Vättö - Monday, September 29, 2014 - link
Proper real-world testing is subject to too many variables to be truly reproducible and accurate. Testing Windows boot time and app load time is something that can be done, but the fact is that in a real-world scenario you will have more than one app running at a time and a countless number of Windows background processes. Once more variables are introduced to the test, the results become less accurate unless all variables can be accurately measured, which cannot really be done (at least not without extensive knowledge of Windows' architecture).

The reasoning is the same as why we don't test real-time or multiplayer gaming performance. It's just that the test scenarios are not fully reproducible unless the test is scripted to run the exact same scenario over and over again (like the built-in game benchmarks and our Storage Benches).
That said, I've been working on making our Storage Bench data more relevant to real world usage and I already have a plan on how to do that. It won't change the limitations of the test (it's still trace-based with no TRIM available, unfortunately), but I hope to present the data in a way that is more relevant than just pure MB/s.
AnnonymousCoward - Tuesday, September 30, 2014 - link
Thanks for your reply.

You said it yourself: boot time and app load time can be done. These are 2 of the top 5 reasons people buy SSDs. To get around the "uncontrolled" nature, just do multiple trials and take the average.
Add a 3rd test: app load time while heavy background activity is going on, such as copying a 5GB file to an HDD.
4th test: IrfanView batch conversion; time to re-save 100 JPEG files.
All of those can be done on a fresh Windows install with minimal variables.
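A bare-bones sketch of what such a repeated-trial launch test could look like (the executable path and trial count are placeholders, and a real harness would stop timing when the window is ready rather than waiting for the process to exit):

import statistics
import subprocess
import time

APP = r"C:\Program Files\IrfanView\i_view64.exe"  # hypothetical path; point at the app under test
TRIALS = 5

def launch_seconds(path: str) -> float:
    start = time.perf_counter()
    proc = subprocess.Popen([path])
    proc.wait()  # crude: measures until exit; a real test would detect "UI responsive" instead
    return time.perf_counter() - start

samples = [launch_seconds(APP) for _ in range(TRIALS)]
print(f"mean {statistics.mean(samples):.2f}s, stdev {statistics.stdev(samples):.2f}s over {TRIALS} trials")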
AnnonymousCoward - Tuesday, September 30, 2014 - link
To expand on my 3rd test: kick off a program that scans your hard drive (like anti-spyware or anti-virus) and then test app load time.

You might be overestimating the amount of disk transfer that goes on during normal computer usage. Right now, for example, I've got 7 programs open, and Task Manager shows 0% CPU usage on all 4 cores. It takes the same time to launch any app now as when I have 0 other programs open. So I think the test set I described would be quite representative of real life, and a massive benefit over what you're currently testing.
Kristian Vättö - Tuesday, September 30, 2014 - link
We used to do that a couple of years ago, but then we reached a point where SSDs became practically indistinguishable. The truth is that for light workloads what matters is that you have an SSD, not what model the SSD actually is. That is why we are recommending the MX100 for the majority of users, as it provides the best value.

I think our Light suite already does a good job of characterizing performance under typical consumer workloads. The differences between drives are small, which reflects the minimal difference one would notice in the real world with light usage. It doesn't overly promote high-end drives like purely synthetic tests do.
Then again, that applies to all components. It's not like we test CPUs and GPUs under typical usage -- it's just the heavy use cases. I mean, we could test the application launch speed in our CPU reviews, but it's common knowledge that CPUs today are all so fast that the difference is negligible. Or we could test GPUs for how smoothly they can run Windows Aero, but again it's widely known that any modern GPU can handle that just fine.
The issue with testing heavy usage scenarios in the real world is the number of variables I mentioned earlier. There tends to be a lot of multitasking involved, so creating a reliable test is extremely hard. One huge problem is the variability of user input speed (i.e. how quickly you click things etc. -- this varies from round to round during testing). That can be fixed with excellent scripting skills, but unfortunately I have a total lack of those.
FYI, I spent a lot of time playing around with real world tests about a year ago, but I was never able to create something that met my criteria. Either the test was too basic (like installing an app) that showed no difference between drives, or the results wouldn't be consistent when adding more variables. I'm not trying to avoid real world tests, not at all, it's just that I haven't been able to create a suite that would be relevant and accurate at the same time.
Also, once we get some NVMe drives in for review, I plan to revisit my real world testing since that presents a chance for greater difference between drives. Right now AHCI and SATA 6Gbps limit the performance because they account for the largest share in latency, which is why you don't really see differences between drives under light workloads as the AHCI and SATA latency absorb any latency advantage that a particular drive provides.
AnnonymousCoward - Tuesday, September 30, 2014 - link
Thanks for explaining The State of SSDs.

I suspect a lot of people don't realize there's negligible performance difference across SSDs. And I think lots of people put SSDs in RAID0! Reviews I've seen show zero real-world benefit.
This isn't a criticism, but it's practically misleading for a review to only include graphs with a wide range of performance. What a real-world test does is get us back to reality. I think ideally a review should start with real-world, and all the other stuff almost belongs in an appendix.
Users should prioritize SSDs with:
1. Good enough (excellent) performance.
2. High reliability and data protection.
3. Low cost.
If #1 is too easy, then #2 and #3 should get more attention. I generally recommend Intel SSDs because I suspect they have the best reliability standards, but I really don't know, and most people probably also don't. OCZ wouldn't have shipped as many as they did if people were aware of their reliability.
leexgx - Saturday, November 1, 2014 - link
Nowadays you can't buy a bad SSD (unless it's Phison-based; they normally make cheap USB flash pen drives). Even JMicron-based SSDs are OK now.

It's only compatibility problems that make an SSD bad with some setups.
The JMicron JMF602 was a very, very, very bad SSD controller when they made their first two (did I say that too many times?): http://www.anandtech.com/show/2614/8 (1-second write delays)
Impulses - Monday, September 29, 2014 - link
Probably because top-tier SSDs reached a point a while ago where the differences in performing basic tasks like that are basically milliseconds, which would tell the reader even less.

For large transfers, the sequential tests are wholly representative of the task.
I think Anand used to have a test in the early days of SSD reviews where he'd time opening five apps right after boot, but it'd basically be a dead heat with any decent drive these days.
Gigaplex - Monday, September 29, 2014 - link
It would tell the reader that any of the drives being tested would fit the bill. Currently, readers might see that drive A is 20% faster than drive B and think that will give 20% better real-world performance.

Both types of tests are useful; doing strictly real-world tests would miss information too.
AnnonymousCoward - Tuesday, September 30, 2014 - link
> is basically milliseconds, which would tell the reader even less.

Wrong; that tells the reader MORE! If all modern video cards performed within 1fps of each other, would you rather see that, or only relative performance graphs that show an apparent difference?
Wolfpup - Monday, September 29, 2014 - link
Darn, that's a shame these don't have full data loss protection. I assumed they did too! Still, Micron/Crucial and Intel are my top choices for drives :)
Wormstyle - Tuesday, September 30, 2014 - link
Thanks for posting the information here. I think you are a bit soft on them with the power failure protection marketing, but you did a good job explaining what they were doing and hopefully they will now accurately reflect the capability of the product in their marketing collateral. A lot of people have bought these products with the wrong expectations on power failure, although for most applications they are still very good drives. What is the source for the market data you posted in the article?
Kristian Vättö - Tuesday, September 30, 2014 - link
It's straight from the M500's product page.
http://www.micron.com/products/solid-state-storage...
Wormstyle - Tuesday, September 30, 2014 - link
The size of the SSD market by OEM, channel, industrial and OEM breakdown of notebook, tablet, and desktop? I'm not seeing it at that link.
Kristian Vättö - Wednesday, October 1, 2014 - link
Oh, that one. It's from the M600's reviewer's guide and the numbers are based on Micron's own research.
maofthnun - Wednesday, October 1, 2014 - link
Thanks for the clarification on the power-loss protection feature. I am very disappointed by how it actually works, because that was a major deciding factor in my purchase of the MX100. At the time, the choice was between the MX100 and the Seagate 600 Pro, which was $30 more and which also offers power-loss protection. I would have gladly paid the extra $30 if I had known the actual workings of the MX100.

Since we're on the topic, I wonder if other relatively recent SSDs within the consumer budget that offer power-loss protection (e.g. Intel 730, Seagate 600 Pro) work the way everyone assumes (flushing volatile data)? Would love to hear your comment on this.
Kristian Vättö - Wednesday, October 1, 2014 - link
Seagate 600 Pro is basically an enterprise drive (28% over-provisioning etc.), so it does have full power-loss protection. It uses tantalum capacitors like other enterprise SSDs.
http://www.anandtech.com/show/6935/seagate-600-ssd...
As for the SSD 730, it too has full power-loss protection, which is because of its enterprise background (it's essentially an S3500 with an overclocked controller/NAND and a more client-optimized firmware). The power-loss protection implementation is the same as in the S3500 and S3700.
maofthnun - Wednesday, October 1, 2014 - link
Thank you. I'll be targeting those two as my future purchase.
RAMdiskSeeker - Wednesday, October 1, 2014 - link
If the 256GB drive were formatted with a 110GB partition, would it operate in Dynamic Write Acceleration 100% of the time? If so, this would be an interesting way to get an SLC drive.
Romberry - Friday, January 9, 2015 - link
I'm really not sure that the AnandTech Storage Bench 2013 does an adequate job of characterizing the performance of this drive, or really any drive, in a consumer-class environment. And I'm not sure that filling all the LBAs and looking at the pseudo-SLC step-down as the drive is filled really tells us anything useful either (other than where the break points are... and how much use is that?).

Performance consistency? Same deal. Almost no one uses consumer-class drives this way (large, steady, long-term massive writes), and those who do use drives this way likely aren't using consumer-class drives.
I can really take nothing useful away from this review. And BTW, this whole "Crucial doesn't really have power protection, we didn't actually bother checking but just assumed and repeated the marketing speak before" stuff is not the kind of thing I expect from AnandTech. With that kind of care being taken in these articles, I'll be careful to read things here with the same sort of skepticism I had previously reserved for other sites. I'd sort of suspended that skepticism with AnandTech over the years. My mistake.