Hey Kristian, I read that the 1.2 TB model uses 84 dies. But that's not a multiple of 18. So what gives? Is it running in 14 channel mode or something?
Controllers don't have to operate on a specific multiple of the number of dies. That's just a coincidence in how we've seen them so far on most SSDs. They can operate with varying priorities and asymmetrically. Further, more than one channel can address the same die in different intervals/priorities. As controllers become more and more complex, this kind of asymmetrical operation will become more common; unfortunately it is correlated with an increasing number of total dies and lower reliability.
I suspect this drive is not for the current z97 chip set, but will realize its potential with the Z170 chipset (Sunrise Point) due for release in the second half of this year with Skylake. The Z170 chipset has 20 PCIe 3.0 lanes and DMI 3.0 (8 GB/s) bus interface.
It should be a very interesting second half of the year - Skylake CPU, Sunrise Point chipsets, and Windows 10.
Who needs the chipset for PCIe if you have 40 lanes directly from the CPU? Routing it through the chipset is a step back in the configuration. It was a big step forward to put the memory controller and PCIe on the CPU. The chipset is useless.
>It's again a bit disappointing that the SSD 750 isn't that well optimized for sequential IO because there's practically no scaling at all
That's a weird conclusion. I'd say it's quite impressive that the drive almost reaches peak throughput at QD1 already. Requiring a higher QD to achieve more throughput is not a positive characteristic. But whether that matters depends on the usage scenario, ofc.
It's impressive that the performance is almost the same regardless of queue depth, but I don't find 1.2GB/s to be very impressive for a 1.2TB PCIe drive.
Unfortunately your use of un-normalised standard deviation for the performance consistency graphs makes them a barrier to understanding. A 1000 IOPS drive with 5% variance is going to have a lower standard deviation, and by the way you have presented it, "better consistency" than a 10000 IOPS drive with 1% variance.
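A quick Python sketch of the point (the IOPS traces here are made-up illustrative numbers, not measurements from the review):

```python
# Raw standard deviation favours slow drives; normalising by the mean
# (coefficient of variation) removes that bias.
import statistics

slow = [1000 * (1 + v) for v in (-0.05, 0.05, -0.05, 0.05)]   # ~5% swing around 1000 IOPS
fast = [10000 * (1 + v) for v in (-0.01, 0.01, -0.01, 0.01)]  # ~1% swing around 10000 IOPS

for name, run in (("slow", slow), ("fast", fast)):
    sd = statistics.pstdev(run)
    cv = sd / statistics.mean(run)
    print(name, round(sd), round(cv * 100, 1))
# slow 50 5.0   <- lower raw stddev, so it "wins" the consistency graph
# fast 100 1.0  <- but its relative variation is 5x smaller
```

By the raw number the 10000 IOPS drive looks twice as inconsistent (100 vs 50), while the coefficient of variation correctly shows it is five times *more* consistent.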
Here is a test and review of the new 750. What is up with the boot time? It's the SLOWEST of 14 drives. Everything else is great, but the boot time: the Plextor M6 is 15 seconds, the 750 is 34 seconds... ideas?
The funny thing is that the X25-M is STILL a great product. You can buy one on ebay and place it into a new build and it works just fine. And will continue to work just fine for many more years.
I have 4 X25-M 80 GB drives in RAID 0. The 750 is cheaper and faster than my setup. Price is based on what I paid several years ago for them.
I would need a new motherboard and CPU to make this drive bootable. I do want.
Intel's PCIe lane bottleneck is pathetic. It seems to be a constant concern. X99 and Haswell-E is not the best answer to the problem. I am really skeptical about waiting for Skylake and the associated chipset. Broadwell for desktop hasn't even been released yet. Skylake for desktop will likely be next year at this rate.
Intel's never wavered from stating that Skylake will launch on time and that all of the 14nm ramping delays will be absorbed by shortening Broadwell's life. At this point I am wondering if desktop Broadwell might end up being cut entirely in the mainstream market segment, with only the LGA2011 variant and possibly the LGA1150 Celeron/Pentium class chips that normally launch about a year after the rest of the product line on the desktop.
Skylake will bring 20 PCIe 3.0 lanes on the PCH, in addition to the PCIe 3.0 lanes coming off the CPU (Skylake-E CPUs will introduce PCIe 4.0) as well as support for up to three SATA Express/M.2 devices. Don't worry, Intel is well aware of the bandwidth bottleneck and they're addressing it.
I'm not satisfied with the explanations of why this product is slower than the SM951. By all rights it should be faster. Why would it still get a recommendation by anandtech?
It's only slower in the Heavy and Light traces, which focus more on peak performance rather than consistency. In The Destroyer trace the SSD 750 has significantly lower IO latency, and that's what's critical for power users and professionals since it translates to a more responsive system. The Heavy and Light traces don't really illustrate the workloads that the SSD 750 is aimed at, hence the SM951 is faster in those.
Is it really measurably more responsive though? I guess I have a hard time believing that latencies measured in microseconds are going to bear out into any real-world difference. Maybe it makes a difference on the single-digit millisecond scale, but I'm talking real world here. Like, is there any scenario where you'd be able to measure the *actual responsiveness*, meaning the time between clicking something and it actually responding to your command is measurably better? Even if it's just something minor, like Notepad opens in 50ms vs 100ms while you're compiling and backing up at the same time?
Their target market is consumers so I feel like they've got to justify it on the basis of real world usage, not theory or benchmarks. From what I'm seeing here the SM951 looks like a better buy in every single way that matters.
It's not about "clicking and responding". It's about different servers/databases handling hundreds of requests per second in a heavily multithreaded scenario.
For UI interaction you probably cannot make the difference between this and the cheapest SSD on the market unless compared side by side.
As the review explains, this is targeted to a very specific niche. Whether people understand the scope of that niche or not is a different thing.
It's too bad that Anandtech didn't benchmark the 400 GB model, since that's the one most people are going to be most interested in buying. I assume that it's a case of Intel not making the 400 GB model available for review, rather than Anandtech deciding not to review it.
Agreed, the 400 GB model is more interesting to consumers.
Also, I hope that if Anandtech does test the 400GB model, that they re-run the tests of the comparison SSDs so that the competitors are overprovisioned to 400GB usable capacity (from 512GB or whatever nominal capacity). That is the only reasonable way to compare, since anyone who wants high sustained performance and is willing to try a drive with only 400GB to achieve it would obviously be willing to overprovision, for example, a 512GB Samsung 850 Pro to only 400GB usable to achieve higher sustained performance.
That is something that I've had on my mind for a while now, and I even have a way to do it (the Storage Bench traces are a bit tricky since they are run on a raw drive, but thankfully I found an hdparm command for limiting the max LBA count). The only issue is time, because it takes roughly two days to run one drive through the 2015 suite, so I may include a drive or two as comparison points, but I definitely can't test all drives with added OP.
Honestly, I'd rather have AnandTech test drives and components as-is ("stock" from the manufacturer) and publish those results rather than spend time doing tests on non-standard, customized configurations. Let the customers do that if they truly need that type of set-up or leave it to integrators/specialists.
As far as I know, most customers of a product just want to use it immediately, right out of the box, with no mucking about with special settings. Most products are advertised that way as well.
Really, just test the product(s) as advertised/intended by the manufacturer first and foremost to see if it matches their claims and properly serves the target userbase. Specialty cases should only be done if that is actively advertised as a feature, there is truly high interest, something makes you curious, and/or you have the time.
If this were a review site for the totally clueless, then you might have a point. But anandtech has always catered to enthusiasts and those who either already know a lot about how computer equipment works, or who want to learn.
The target audience for this site would certainly consider something as simple as overprovisioning an SSD if it could significantly increase performance and/or achieve similar performance at lower cost relative to another product. So it makes sense to test SSDs configured for similar capacity or performance rather than just "stock" configuration. Anyone can take an SSD and run a few benchmarks. It takes a site as good as anandtech to go more in-depth and consider how SSDs are actually likely to be used and then present useful tests to its readers.
That is correct. I always ask for all capacities, but in this case Intel decided to sample all media with only 1.2TB samples. I've asked for a 400GB, though, and will review it as soon as I get it.
That's up to the motherboard manufacturers. If they provide BIOS with NVMe support then yes, but I wouldn't get my hopes up as the motherboard OEMs don't usually do updates for old boards.
If Z97 board bioses from Asus, Gigabyte, etc. are going to be upgradeable to support Broadwell for all desktop (socket 1150) motherboards, wouldn't they also want to include NVMe support? I'm assuming such support is at least within the realm of possibility, for both Z87 and Z97 boards.
Has anyone worked out exactly what the limitation is/why the bios needs upgrading yet?
Simply that I had the idea that the P3700 had its own nvme orom, nominally akin to a raid card... ...& that people have had issues with the updated mobo bioses replacing intel's one with a generic one...
...which kind of suggests that the bios update could conceivably not be a requirement for some nvme drives.
A motherboard bios update would be required to provide bootability. Without that update, an NVMe drive could only function as a secondary storage drive. As stated elsewhere, each device model needs specific support added to the motherboard bios. Samsung's SM941 (an M.2 SSD form factor device) is a prime example of this conundrum, and why it's not generally available as a retail device. Although it can be found for sale at Newegg or on eBay.
Ummmm... Well, for example, looking at http://www.thessdreview.com/Forums/ssd-discussion/... then the P3700 could be used as a boot drive on a Z87 board in July 2014 - so clearly that wasn't using a mobo bios with an added nvme orom as ami hadn't even released their generic nvme orom that's being added to the Z97 boards.
(& from recollection, on Z97 boards, in Windows the P3700 is detected as an intel nvme device without the bios update... ...& an ami nvme one with the update)
This appears to be effectively the same as, say, an lsi sas raid card loading its own orom during the boot process & the drives on it becoming bootable - as obviously, as new raid cards with new feature sets are introduced, you don't have to have updates for every mobo bios.
Now, whilst I can clearly appreciate that *if* an nvme drive didn't have its own orom then there would be issues, it really doesn't seem to be the case with drives that do... ...so is there some other issue with the nvme feature set or...?
Now, obviously this review is about another intel nvme pcie ssd - so it might be reasonable to imagine that it could well also have its own orom - but, more generally, I'm questioning the assumption that just because it's an nvme drive you can *only* fully utilise it with a board with an updated bios...
...& that if it's the case that some nvme ssds will & some won't have their own orom (& it doesn't affect the feature set), it would be a handy thing to see talked about in the reviews as it means that people with older machines are neither put off buying nor buy an inappropriate ssd when more consumer orientated ones are released.
I think I've kind of found the answer via a few different sources - it's not that nvme drives necessarily won't work properly with booting & whatnot on older boards... it's that there's no stated consistency as to what will & won't work...
So apparently they can simply not work on some boards due to a bios conflict & there can separately be address space issues... So the ami nvme orom & uefi bios updates are about compatibility - *not* that an nvme ssd with its own orom will or won't necessarily work without them on any particular setup.
it would be very useful if there was some extra info about this though...
- well, it's conceivable that at least part of the problem is akin to the issues on much older boards with the free bios capacity for oroms & multiple raid configurations... ...where if you attempted to both enable all of the onboard controllers for raid (as this alters the bios behaviour to load them) &/or had too many additional controllers, then one or more of them simply wouldn't operate due to the bios limitation; whereas they'd all work both individually & with smaller numbers enabled/installed... ...so people with older machines who haven't seen this issue previously, simply because they've never used cards with their own oroms or the ssd is the extra thing where they're hitting the limit, are now seeing what some of us experienced years ago.
- or, similarly, that there's a min uefi version that's needed - I know that intel's recommending 2.3.1 or later for compatibility but clearly they were working on some boards prior to that...
The Idle power spec of this drive is 4W, while the SM951 is at 50 mW with an L1.2 power consumption at 2mW. Your notebook's battery life will suffer greatly with a drive this power hungry.
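To put rough numbers on that (the 50 Wh battery and the 5 W platform idle draw are assumed figures for illustration, not from any spec; the drive numbers are the ones quoted above):

```python
# Back-of-envelope idle battery life with each drive installed.
def idle_hours(battery_wh, platform_idle_w, drive_idle_w):
    """Hours of idle runtime for a given battery and total idle draw."""
    return battery_wh / (platform_idle_w + drive_idle_w)

BATTERY_WH = 50.0     # assumed notebook battery capacity
PLATFORM_W = 5.0      # assumed platform idle draw excluding the SSD

print(round(idle_hours(BATTERY_WH, PLATFORM_W, 4.0), 1))   # SSD 750 at 4 W  -> 5.6 h
print(round(idle_hours(BATTERY_WH, PLATFORM_W, 0.05), 1))  # SM951 at 50 mW -> 9.9 h
```

Under those assumptions the 4 W idle figure nearly halves idle battery life, which is why this drive only really makes sense in desktops and workstations.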
Even though you could not run the performance tests with additional overprovisioning on the 750, you should still show the comparison SSDs with additional overprovisioning.
The fair comparison is NOT the Intel 750 with no OP versus other SSDs with no OP. The comparison you should be showing is similar capacity vs. similar capacity. So, for example, a 512GB Samsung 850 Pro overprovisioned to leave it with 400GB usable, versus an Intel 750 with 400GB usable.
I also think it would be good testing policy to test ALL SSDs twice, once with no OP, and once with 50% overprovisioning, running them through all the tests with 0% and 50% OP. The point is not that 50% OP is typical, but rather that it will reveal the best and worst case performance that the SSD is capable of. The reason I say 50% rather than 20% or 25% is that the optimal OP varies from SSD to SSD, especially among models that already come with significant OP. So, to be sure that you OP enough that you reach optimal performance, and to provide historical comparison tests, it is best just to arbitrarily choose 50% OP since that should be more than enough to achieve optimal sustained performance on any SSD.
Kristian, you wrote "for up to 4GB/s of bandwidth with PCIe 3.0 (although in real world the maximum bandwidth is about 3.2GB/s due to PCIe inefficiency)". Is this really true? PCIe 2.0 uses 8b/10b encoding with 20% bandwidth overhead which would match your numbers. However, PCIe 3.0 uses 128b/130b encoding with only 1.54% bandwidth overhead. Could you please explain the inefficiency you mentioned? Thanks in advance!
The real world number includes the bandwidth consumed by PCIe packet headers, NVME packet headers, NVME command messages, etc. Those are over and above the penalty from the encoding scheme on the bus itself.
The 4GB bandwidth takes into account the encoding scheme.
Each lane of v1 PCI-Express had 2.5GT/s so with 8b/10b encoding you end up with 2.5G/10 = 250MB/s. Quadruple that for four lanes and you end up with 1GB/s.
v2 of PCI-Express is double that and v3 of PCI-Express is further double that and there is the 4GB number.
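The same per-lane arithmetic in a few lines of Python (decimal GB/s; transfer rates and encodings per the PCIe generations discussed above):

```python
# Per-lane bandwidth: GT/s times encoding efficiency, divided by 8 bits
# per byte, times the number of lanes.
def pcie_gbps(gen, lanes):
    rate, enc = {1: (2.5, 8 / 10),      # Gen1: 2.5 GT/s, 8b/10b
                 2: (5.0, 8 / 10),      # Gen2: 5.0 GT/s, 8b/10b
                 3: (8.0, 128 / 130)}[gen]  # Gen3: 8.0 GT/s, 128b/130b
    return rate * enc / 8 * lanes

print(pcie_gbps(1, 4))             # -> 1.0 GB/s, as the comment says
print(pcie_gbps(2, 4))             # -> 2.0 GB/s
print(round(pcie_gbps(3, 4), 2))   # -> 3.94 GB/s, i.e. roughly "the 4GB number"
```

Note that Gen3 x4 is 3.94 GB/s rather than a flat 4 GB/s, because 128b/130b still costs about 1.5%; packet headers then take a further bite on top of that.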
When will these be available for purchase? Also, I have an M.2 slot on my motherboard (Z10PE-D8 WS), but I'd rather utilize the 2.5" 15mm form factor. I am a bit confused: I don't think that board has SFF-8639. Is there an adapter? Will that affect performance? I assume so, and by how much?
The motherboard (host) end of the cable has a square-shaped SFF-8643(!) connector. E.g. ASUS ships an M.2 adapter card for the X99 Sabertooth that offers a suitable port. SFF-8639 is on the drive's end.
That endurance number is scarily low for a 1.2TB drive. 70GB a day for 5 years - that's about 128TB of writes total, and that's just over 100 drive writes! Put another way, at around 1GB/s (which this drive can easily do), you'd reach those 100 drive writes in just 36 hours.
Of course, that's an extremely intensive workload, but I sure hope this is just Intel trying to limit its warranty exposure, rather than any remotely realistic assessment of the drive's capabilities.
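For what it's worth, the arithmetic in that comment checks out (a quick sketch using its own figures):

```python
# Checking the endurance numbers: 70 GB/day rating over a 5-year warranty.
rated_gb_per_day = 70
years = 5
total_tb = rated_gb_per_day * 365 * years / 1000
print(round(total_tb))              # -> 128 TB of total rated writes

drive_tb = 1.2
print(round(total_tb / drive_tb))   # -> 106, i.e. only ~100 full drive writes

write_gbps = 1.0                    # GB/s sustained, per the comment
hours = total_tb * 1000 / write_gbps / 3600
print(round(hours))                 # -> 35, roughly the "36 hours" above
```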
Raw 4K video, and it's not even close to being enough.
At 4K (4096 x 2160) it registers 1697 Mbps which equals 764 GB/hour of 4K video footage. A single camera large Hollywood production can often shoot 100 hours of footage. That’s 76 TB of 4K ProRes 4444 XQ footage.
The upcoming David Fincher film GONE GIRL crept up on 500 hours of raw footage during its multi camera 6K RED Dragon production. That equates to roughly 315 TB of RED 6K (4:1) footage. Shit just got real for data management and post production workflows.
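The footage math behind those figures (the bitrate is the quoted ProRes number from the comment above, not an official codec spec):

```python
# Convert a camera bitrate and shoot length into storage volume.
def hours_to_tb(bitrate_mbps, hours):
    """Megabits/s and hours of footage -> terabytes (decimal units)."""
    gb_per_hour = bitrate_mbps / 8 / 1000 * 3600  # Mb/s -> GB/hour
    return gb_per_hour * hours / 1000             # GB -> TB

# ProRes 4444 XQ at 4K: 1697 Mbps ~= 764 GB/hour
print(round(1697 / 8 / 1000 * 3600))   # -> 764 GB per hour of footage
print(round(hours_to_tb(1697, 100)))   # -> 76 TB for a 100-hour shoot
```

So a single write of one production's footage is already over half this drive's 128TB rated endurance, which is the commenter's point.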
Let me say that again: this is a consumer drive. That's why it is so cheap compared to 3700. A large Hollywood production company will surely be able to afford enough of these drives not to worry about exceeding 128TB write limits.
I'm sure they can afford it - but why pay more than necessary? Compared to the competition, this is an unusually low write endurance for such a high-end drive. Take a peek at, say, the 1TB 850 Pro; that's likely to be considerably cheaper (and perhaps more deserving of the "consumer" moniker), and its NAND is rated for a little more than 6000TB of (raw) writes.
128TB? That's really, really unusual for a drive like this.
Because that is how you run your company out of business: by being cheap on key hardware.
If you are producing enough 4K video to stress this drive, you are producing enough video that the cost of production is far greater than the cost of drives for which you don't have to worry about this type of failure.
I have seen tons of companies go out of business or lose out on thousands of dollars in sales because they tried to save a few hundred dollars up-front against my advice.
Stop looking for cheap solutions if the storage is critical to the running of your business.
I do a lot of large-file snapshot/restore stuff, and I definitely write a lot more than 70GB a day. Intel's own consumer-level 335 was rated for 700TB, and that was a much smaller drive. More to the point, this hasn't been a problem on other drives - neither on SSDs, nor on HDDs. While it's conceivable there are more efficient ways of working from the perspective of the drive, that's a hassle to arrange.
In endurance testing, all of those approximately 240GB drives survived at least 700TB, and it was specifically the Intel that seemingly intentionally bricked itself then.
This drive is five times larger, and is rated for a fraction of that. That's pretty unreasonable to my mind.
This part of the review made me curse: "...with the X25-M. It wasn't the first SSD on the market, but it was the first drive that delivered the aspects we now take for granted: high, consistent and reliable performance."
Arrrgh...
I was one of the early adopters who paid a ton for the X25-M. If you go back through the archives, though, you'll see Intel's X25-M had a fragmentation bug that made it slower than a spinning-platter hard drive after a bit of use. I was in that situation. Intel released a "fix" based on a script run on some old freeware, but they didn't support the fix *at all* and for many people (including me) it would never work.
So INTEL's "high, consistent and reliable performance" turned out to be total crap. I paid over $400 for what turned out to be a doorstop and had to replace it with a Corsair SSD a short time later. INTEL never offered a refund, support, or even an apology to all the people they had sold a totally nonfunctional product to. I still have that drive in my electronic junk pile and I curse INTEL every time it catches my eye.
I'm waiting for good PCIe SSD before my next PC build, unfortunately I would say INTEL products don't count because in the past we've seen (inarguably, and documented on this very site!) that they mass release buggy products and if you happen to have bought one you're just hung out to dry when they turn out to have had major design errors.
Ugh. at least mention the history here and caution people instead of suggesting Intel is reliable.
I wasn't expecting that kind of reply. Google "intel replicated SSD firmware problem" (without the quotes) and you can read about the various things that happened, many of which were first reported at this very site, but I guess it WAS 6 years ago so I shouldn't expect everyone to know about it.
I was running Win 7 64-bit and had a G1. You'll see reports that ALL G1s had a fragmentation issue that made them slower than spinning platters after a bit of use, and you'll see mainstream media reports about how the "fix" instead bricked drives for many users on Win 7 64-bit.
Not anecdotes, mainstream reporting and I was one of the thousands affected and can confirm that even after those reports Intel did nothing for non-enterprise users but delete the 50-page thread on their support site.
To put it frankly, there's no SSD (or HDD) manufacturer that hasn't had any issues, so you might as well go back to the good ol' pen&paper if you want something truly reliable ;-)
Agreed. In coming up with a good google search for the guy above who apparently hadn't heard about this I encountered a lot of articles about necessary firmware updates for other vendors as well. All I know is that Intel left consumers without options or replacements, I don't know what happened in all those other cases. I suppose it's a good reason to think about how important the storage division is to any company you buy from, though. Intel might, conceptually, want to support SSDs but I'd imagine all the management focus is on enterprise and processors. So who do you go with? OCZ (yikes! but maybe okay after the buyout?) Any thoughts on which companies actually value consumer purchases of their SSDs as "mission-critical" ?
I had a G1 without TRIM. The Intel fix was based on some ancient shareware (FreeDOS!) that wouldn't work with many modern motherboards and in some cases left drives bricked. It was well reported at the time (see my comment above for a google search that returns articles), but lots of people wound up with X25-Ms that were useless. If you weren't an enterprise customer the Intel response was "tough luck." No refunds, no replacements, nothing. In all fairness I'm sure Intel would love to be able to support consumers, but they probably aren't set up for it in their storage area because it's just not a big area of their business bottom line.
Yeah, it seems like the G1 owners got screwed. (I have a G2 and G3 and they've both been great. Sorry they screwed the early adopters.)
In Anand's words from 2009 when the G2 was released: "TRIM isn’t yet supported, but the 34nm drives will get a firmware update when Windows 7 launches enabling TRIM. XP and Vista users will get a performance enhancing utility (read: manual TRIM utility). It seems that 50nm users are SOL with regards to TRIM support. Bad form Intel, very bad form." http://anandtech.com/show/2806
"Overall the G2 is the better drive but it's support for TRIM that will ultimately ensure that. The G1 will degrade in performance over time, the G2 will only lose performance as you fill it with real data. I wonder what else Intel has decided to add to the new firmware...
I hate to say it but this is another example of Intel only delivering what it needs to in order to succeed. There's nothing that keeps the G1 from also having TRIM other than Intel being unwilling to invest the development time to make it happen. I'd be willing to assume that Intel already has TRIM working on the G1 internally and it simply chose not to validate the firmware for public release (an admittedly long process). But from Intel's perspective, why bother?
Even the G1, in its used state, is faster than the fastest Indilinx drive. In 4KB random writes the G1 is even faster than an SLC Indilinx drive. Intel doesn't need to touch the G1, the only thing faster than it is the G2. Still, I do wish that Intel would be generous to its loyal customers that shelled out $600 for the first X25-M. It just seems like the right thing to do. Sigh." http://www.anandtech.com/show/2829/11
Could you elaborate on the "(although there appears to be an NVMe version too after all)" remark about the SM951? Looking at the numbers, if NVMe even slightly improves the SM951 it would make it a better choice, and the M.2 form factor makes it much more attractive.
Ganesh received an NVMe version of the SM951 inside a NUC and I've also heard from other sources that it exists. No idea of its retail availability, though, as RamCity hadn't heard about it until I told them.
Kristian, there is a DRAM difference between the two models: the 400GB has 1GB of DRAM while the 1.2TB model has 2GB. Do you think it plays a big role in the performance difference between the two models?
Also, is there a way to reduce the overprovisioning on these drives? I would prefer 80GB more capacity on the 400GB model over the extra consistency.
When will you review the Kingston HyperX Predator, and when will Samsung release the SM951 NVMe? Q3 or sooner?
The 400gb model shouldn't need as much DRAM because it has fewer pages to keep track of. But there's no way to know how the 400gb model will perform until Intel sends out samples for review.
Seems like some benchmarks like Iometer cannot actually feed the drive, due to being programmed with a single thread. Have you had similar experiences during benchmarking, or is their logic faulty?
I didn't notice anything that would suggest a problem with Iometer's capability of saturating the drive. In fact, Intel provided us Iometer benchmarking guidelines for the review, although they didn't really differ from what I've been doing for a while now.
Reread their article, and it seems like the only problem is Iometer's Fileserver IOPS test, which peaks at around 200,000 IOPS. Since you don't use that one, that's probably why you saw no problem.
"For better readability, I now provide bar graphs with the first one being an average IOPS of the last 400 seconds and the second graph displaying the standard deviation during the same period"
lol why not just portray standard deviation as error bars like they are supposed to be shown. Kudos for being one of the few sites to recognize this but what a convoluted senseless way of showing them.
I think the practical tests of many other reviews show that the normal consumer has absolutely no benefit (except being able to copy files faster) from such an SSD. We have reached the peak a long time ago. SSDs are not the limiting factor anymore.
Still, it's great to see that we're finally seeing major improvements again. It was always sad that all SSDs were limited by the interface. This was the case with SATA 2, and it's the case with SATA 3.
Thanks for sharing, Kristian. A query about the throughput when using these in external Thunderbolt docks and PCIe enclosures (several new third-party drive and GPU enclosures are experimenting with the latter, adding powerful desktop GPUs etc.): would there still be a bottleneck if you were to utilize the Thunderbolt headers to the PCIe lanes to the CPU? (Not that SLI or Crossfire would be a concern in OS X - with the exception of the Mac Pro and how its two AMD GPUs work together - but on Windows motherboards it might be.) These seem like a better idea than external video cards for what I'm doing on the rMBPs. The GPUs are quick enough, especially in tandem with the Iris Pro and its ability to 'calculate' ;) -- but a 2.4GB/s twin-card RAID external box with a one-cord plug, hot or cold, would be SWEEET.
As I explained in the article, I see no point in testing such high queue depths in a client-oriented review because the portion of such IOs is marginal. We are talking about a fraction of a percent, so while it would show big numbers it has no relevance to the end-user.
Since you feel strongly enough to levy a personal attack, could you also explain why you think QD128 is important? Anandtech's storage benchmarks are likely a much better indication of user experience unless you have a very specific workload in mind.
Guys, why are you cut-pasting the same old specs table and formulaic article? For a review of the first consumer NVMe drive, I'm sorely disappointed you didn't touch on latency metrics: one of the most important improvements with the NVMe interface.
There are several latency graphs in the article and I also suggest that you read the following article to better understand what latency and other storage metrics actually mean (hint: latency isn't really different from IOPS and throughput).
Hi Kristian, what evidence do you have that the firmware in the SSD 750 is any different from that found in the DC P3600 / P3700? According to leaked reports released before they have the same firmware: http://www.tweaktown.com/news/43331/new-consumer-i...
I should be more clear: I mean that you retest the P3700. And obviously the performance of the 750 won't match that, as it is based on the P3500. But I think you get what I mean anyway ;)
If you use the 2.0 x4 slot, your maximum throughput will top out at 2GB/s. For client workloads this probably won't matter much, since only some server workloads can hit situations where the drive can exceed that rate.
> although in real world the maximum bandwidth is about 3.2GB/s due to PCIe inefficiency
What does this phrase mean? If you're referring to 8b/10b encoding, this is plainly false, since PCIe gen 3 uses 128b/130b encoding. If you're referring to the overheads related to TLP and DLLP headers, this depends on the device's and the PCIe root complex's maximum transaction size. But even with the (minimal) 128-byte limit it would be 3.36 GB/s. In fact, modern PCIe root complexes support much larger TLPs, thus eliminating header-related overheads.
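A rough sketch of that estimate (the 24 bytes of per-TLP overhead is an assumption; the real figure varies with header format, prefixes, and framing, which is why the result only lands near, not exactly on, the 3.36 GB/s figure):

```python
# Effective PCIe 3.0 x4 throughput after encoding and TLP header overhead.
raw = 8.0 * 128 / 130 / 8 * 4   # GB/s after 128b/130b for four Gen3 lanes
payload = 128                    # bytes, a minimal max-payload-size setting
overhead = 24                    # bytes per TLP (header + framing), assumed

effective = raw * payload / (payload + overhead)
print(round(raw, 2))        # -> 3.94 GB/s before packet overhead
print(round(effective, 2))  # -> 3.32 GB/s with these assumptions
```

With a larger negotiated max payload size (256 or 512 bytes) the header fraction shrinks and the effective number climbs back toward the raw 3.94 GB/s, which is the commenter's point about modern root complexes.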
Sounds like there is a big form factor change coming to desktop computers in the next few years: complete removal of 5.25" and 3.5" drives, M.2 and 2.5" drives taking over, CPUs limited to <77W and video cards to <250W.
I should hold off replacing my still very good case until I am building a new computer in 3-4 years.
How would this drive compare with a 4-drive (Samsung 850 Pro 512GB), two-card Sonnet Tempo SSD Pro Plus arrangement? That setup is about $600 more, but 800GB larger, and overall about the same $/GB at $0.82.
Those 10TB and 32TB SSDs can't come soon enough. I just hope they come down to an affordable price very soon, as standard SSDs are still way too expensive per TB for any real storage needs.
Can I ask why the boot time is so slow? For a drive this expensive, that is not something that is tolerable.
Is it possible to do a boot-up timing with the fast boot function enabled? I want to see how fast it will be compared with other SATA drives using the same fast boot function.
The boot-up time will be the last factor in deciding whether I want to pull the trigger on this one.
This drive is a beast and just raised the cost of my skylake-e build another 1000 dollars. Maybe an even better 2nd generation version will be out by then. Upgrading my gulftown to 8 core skylake-e flagship. 4.3ghz i7-980x will have lasted me 7 years by the time skylake-e comes out which is a pretty darn good service life. Convert the ole gulftown into a seedbox/personal cloud nas/htpc/living room gaming console. Kill all the oc's and undervolt cpu for the lowest voltage stable at stock and turn all the noctua fans down with ULN adapters into silence mode. It will be rough re buying a buncha parts I wouldn't of had to if I didn't keep the PC together but it's too good of a PC still to dismantle for parts. Will be nice having a beastly backup pc.
My skylake-e build has really ballooned in price but this next upgrade should last a full decade with a couple gpu upgrades using the flagship skylake-e 8 core i7 + 1.2TB intel 750 boot drive + nvidia/amd flagship 16nm FF+ GPU. Basically like 3000 dollars just in 3 parts :(. Thats ok tho it brings too many features to the table pci-e 4.0 DMI 3.0 USB 3.1 built into chipset natively 10gbit ethernet natively up to 3x ultra m2 slots and the SFF connector used in this drive possibly thunderbolt 2 built in natively of course quad channel ddr4. Hopefully better overclocking with the heat producing FIVR removed guessing 4.7-5ghz will be possible on good water cooling to the 8 core.
Sorry, got on a tangent. I'm just excited there are finally enough upgrades to make a new PC worth it. No applause for Intel though; it took them 7 years to make a Gulftown PC worth upgrading. I should see a nice IPC gain from the i7-980X Gulftown to Skylake-E. I'll be happy with a 50-60% IPC gain and 500 extra MHz over my 980X, so 4.8GHz. I think 6x 140mm high-static-pressure Noctuas in push/pull and a 420mm rad should provide enough cooling for 4.8GHz on an 8-core Skylake-E if the chip is capable. The goal is to push it to 5.0GHz though and get a 700MHz speed increase + an additional 55% IPC gain.
I mentioned this on Twitter with you already, but a Dead Rising 3 on HDD versus NVMe SSD comparison would be nice :) It would save me the work of doing the testing on my own :p
The endurance figure is also *really* low compared to other drives - it works out to around 128TB of total writes - that's on the order of 50 times less than an 850Pro (which is slightly smaller).
I'm hoping this is just a really stingy guarantee, and not representative of the actual drive - otherwise I'd really recommend against using it.
I mean, running the AnandTech Destroyer benchmark, with its close to 1TB of writes, would use up your write allowance for the next two weeks (put another way, that's around $10 in relation to the ~$1k drive cost).
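To make the arithmetic in this comment concrete, here is a quick sketch. The 70GB/day allowance and ~128TB total are the figures discussed in the thread, and the $1,000 price is a nominal round number from the comment, not an official list price:

```python
# Rough endurance math for the comment above (figures from the thread,
# not official Intel specs): 70 GB/day rated writes, ~128 TB total,
# and a nominal $1,000 drive price.
DAILY_ALLOWANCE_GB = 70
RATED_TBW = 128            # total terabytes written over the warranty
DRIVE_PRICE_USD = 1000     # nominal price used in the comment

destroyer_writes_gb = 1000  # "close to 1TB of writes"

# Days of write allowance one Destroyer run consumes
days_consumed = destroyer_writes_gb / DAILY_ALLOWANCE_GB

# Share of the drive's rated endurance, expressed as a dollar figure
cost_share_usd = DRIVE_PRICE_USD * (destroyer_writes_gb / 1000) / RATED_TBW

print(f"{days_consumed:.1f} days of allowance, ~${cost_share_usd:.2f} of the drive")
```

This lands at roughly two weeks of allowance and about $8 of "endurance cost" per terabyte written, in line with the comment's ballpark.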
Ditch all this flash trash and start making fully power-loss-protected RAM drives with flash/hard-drive backup. They would be cheap by now if slow, self-destroying flash garbage weren't in the way.
1) Why would you post a review of a Intel SSD 750 PCIe SSD solution without benchmarking it against the other state of the art Intel PCIe SSD Intel DC P3700 solution?
2) Why would you put up sequential/random read/write graphs with pull-downs to display the different hardware results instead of efficiently putting all of the hardware results on ONE graph?
I just received my 750 yesterday and soon found myself slightly bummed out by the lacking NVMe BIOS support in my ASUS P8Z77-V motherboard. I managed to get the drive working (albeit non-bootable) by placing it in the black PCIe 2.0 slot of the mainboard, but this is hardly a long-term solution. I posted a question to the https://pcdiy.asus.com/ website regarding possible future support for these motherboards, and this morning they had published a poll to check the interest in BIOS/UEFI support for NVMe. Please vote here if you (like me) would like to see this implemented! https://pcdiy.asus.com/2015/04/asus-nvme-support-p...
Let's say I had an Intel 5520 chipset-based computer that has multiple PCIe 2.0 slots. I would be able to get almost the maximum read performance (since PCIe 2.0 is 500MB/s per lane, x4 = 2000MB/s), which is exciting on an older computer. I am curious whether this would be a bootable solution on my desktop, though. With 12 cores and 24 threads, this computer is far from under-powered, and it would be nice to breathe life into this machine, but the BIOS would have no NVMe support that I can think of. I know it has Intel SSD support, but this is from a different era. I wish someone could confirm that this either will, or will not, be bootable on non-NVMe mobos. I am getting conflicting answers.
Nevermind, I finally found the requirements: this drive will not be bootable on non-NVMe machines. What's more, even using it as a 'secondary' drive apparently requires UEFI. My computer wouldn't be able to use this card at all? That would suck.
Kristian, any chance you have two of these drives in the same machine and could test RAID0 performance? I'm running into some slow read performance when using two Samsung PCIe drives in a Dell server w/ a RAID1 or RAID0 config. It's not like regular bottlenecking where you hit a performance cap; instead the transfer rate drops down to ~1/5th the speed.
I thought this was just a Storage Spaces problem, but the same holds true w/ regular windows software raid. I got up to about 4,200 MB/sec, then it tanked. I then ran two simultaneous ATTO tests on two of the drives and they both behaved normally & peaked at 2,700 MB/sec... so I don't think I'm hitting a PCIe bus limitation... I think it's all software.
mmrezaie - Thursday, April 2, 2015 - link
Finally it has started, although I won't budge now. Maybe next generation.
blanarahul - Thursday, April 2, 2015 - link
Hey Kristian, I read that the 1.2 TB model uses 84 dies. But that's not a multiple of 18. So what gives? Is it running in 14 channel mode or something?
blanarahul - Thursday, April 2, 2015 - link
Okay so it has 86 dies. But now it's even more confusing. Aren't they supposed to be multiples of the number of channels the controller is using?
SunLord - Thursday, April 2, 2015 - link
It's likely 18 channels, so 4 of them probably only address 4 dies each while the other 14 channels handle 5.
woggs - Thursday, April 2, 2015 - link
Yep.
TyrDonar - Friday, April 10, 2015 - link
Controllers don't have to operate on a specific multiple of the number of dies. That's just a coincidence in how we've seen them so far on most SSDs. They can operate with varying priorities and asymmetrically. Further, more than one channel can address the same die at different intervals/priorities. As controllers become more and more complex, this kind of asymmetrical operation will become more common; unfortunately, this is correlated with an increasing number of total dies and lower reliability.
huaxshin - Thursday, April 2, 2015 - link
Will there be any M.2 SSDs from Intel with NVMe? Some notebooks, and desktops, have routed PCIe to M.2 slots, where it's the only place it's available.
blanarahul - Thursday, April 2, 2015 - link
No.
DigitalFreak - Thursday, April 2, 2015 - link
Not with this controller. Maybe down the road.
bgelfand - Thursday, April 2, 2015 - link
I suspect this drive is not for the current Z97 chipset, but will realize its potential with the Z170 chipset (Sunrise Point), due for release in the second half of this year with Skylake. The Z170 chipset has 20 PCIe 3.0 lanes and a DMI 3.0 (8 GB/s) bus interface. It should be a very interesting second half of the year - Skylake CPU, Sunrise Point chipsets, and Windows 10.
kaisellgren - Friday, May 1, 2015 - link
Do not forget the Fiji 390X!
dzezik - Saturday, May 7, 2016 - link
Who needs the chipset for PCIe if you have 40 lanes directly from the CPU? It is a step back in the configuration. It was a big step forward to move memory and PCIe onto the CPU. The chipset is useless.
zrav - Thursday, April 2, 2015 - link
>It's again a bit disappointing that the SSD 750 isn't that well optimized for sequential IO because there's practically no scaling at all
That's a weird conclusion. I'd say it is quite impressive that the drive almost reaches peak throughput at QD1 already. Requiring a higher QD to achieve more throughput is not a positive characteristic. But whether that matters depends on the usage scenario, of course.
Kristian Vättö - Thursday, April 2, 2015 - link
It's impressive that the performance is almost the same regardless of queue depth, but I don't find 1.2GB/s to be very impressive for a 1.2TB PCIe drive.
futrtrubl - Thursday, April 2, 2015 - link
Unfortunately your use of un-normalised standard deviation for performance consistency makes it a barrier to understanding. A 1000 IOPS drive with 5% variance is going to have a lower standard deviation - and, by the way you have presented it, "better consistency" - than a 10000 IOPS drive with 1% variance.
Kristian Vättö - Thursday, April 2, 2015 - link
Any suggestions for improving the metric? Perhaps divide by the average IOPS or its square root to take that into account as well?
futrtrubl - Thursday, April 2, 2015 - link
Yes, I think dividing by the average IOPS would be perfect. You could even multiply by 100 to get a sort of percentage deviation.
bricko - Saturday, April 4, 2015 - link
Here is a test and review of the new 750 - what is up with the boot time? It's the SLOWEST of 14 drives. Everything else is great, but boot time: the Plextor M6 is 15 seconds, the 750 is 34 sec... ideas?
http://techreport.com/review/28050/intel-750-serie...
Ethos Evoss - Saturday, April 4, 2015 - link
Plextor SSDs - BEST
bricko - Saturday, April 4, 2015 - link
It's only slow on boot time; otherwise it beats ALL other SSDs on different loads and tests, by 2-3 times... odd, it seems.
Per Hansson - Saturday, April 4, 2015 - link
It's most likely due to the poor performance of file transfers below 4KB with this drive.
Shadowmaster625 - Thursday, April 2, 2015 - link
The funny thing is that the X25-M is STILL a great product. You can buy one on ebay and place it into a new build and it works just fine. And will continue to work just fine for many more years.
eanazag - Thursday, April 2, 2015 - link
I have 4 X25-M 80 GB drives in RAID 0. The 750 is cheaper and faster than my setup. Price is based on what I paid several years ago for them.
I would need a new motherboard and CPU to make this drive bootable. I do want.
Intel's PCIe lane bottleneck is pathetic. It seems to be a constant concern. X99 and Haswell-E are not the best answer to the problem. I am really skeptical about waiting for Skylake and the associated chipset. Broadwell for desktop hasn't even been released yet. Skylake for desktop will likely be next year at this rate.
DanNeely - Thursday, April 2, 2015 - link
Intel's never wavered from stating that Skylake will launch on time and that all of the 14nm ramping delays will be absorbed by shortening Broadwell's life. At this point I am wondering if desktop Broadwell might end up being cut entirely in the mainstream market segment, with only the LGA2011 variant and possibly the LGA1150 Celeron/Pentium-class chips that normally launch about a year after the rest of the product line on the desktop.
r3loaded - Friday, April 3, 2015 - link
Skylake will bring 20 PCIe 3.0 lanes on the PCH, in addition to the PCIe 3.0 lanes coming off the CPU (Skylake-E CPUs will introduce PCIe 4.0), as well as support for up to three SATA Express/M.2 devices. Don't worry, Intel is well aware of the bandwidth bottleneck and they're addressing it.
Hung_Low - Thursday, April 2, 2015 - link
So is this 750 the long-rumoured P3500?
Shadowmaster625 - Thursday, April 2, 2015 - link
I'm not satisfied with the explanations of why this product is slower than the SM951. By all rights it should be faster. Why would it still get a recommendation from AnandTech?
Kristian Vättö - Thursday, April 2, 2015 - link
It's only slower in the Heavy and Light traces, which focus more on peak performance rather than consistency. In The Destroyer trace the SSD 750 has significantly lower IO latency, and that's what's critical for power users and professionals since it translates to a more responsive system. The Heavy and Light traces don't really illustrate the workloads the SSD 750 is aimed at, hence the SM951 is faster in those.
BD2003 - Thursday, April 2, 2015 - link
Is it really measurably more responsive though? I guess I have a hard time believing that latencies measured in microseconds are going to bear out into any real-world difference. Maybe it makes a difference on the single-digit millisecond scale, but I'm talking real world here. Like, is there any scenario where you'd be able to measure the *actual responsiveness*, meaning the time between clicking something and it actually responding to your command is measurably better? Even if it's just something minor, like Notepad opening in 50ms vs 100ms while you're compiling and backing up at the same time?
Their target market is consumers, so I feel like they've got to justify it on the basis of real-world usage, not theory or benchmarks. From what I'm seeing here, the SM951 looks like a better buy in every single way that matters.
SirPerro - Monday, April 6, 2015 - link
It's not about "clicking and responding". It's about different servers/databases handling hundreds of requests per second in a heavily multithreaded scenario.
For UI interaction you probably could not tell the difference between this and the cheapest SSD on the market unless compared side by side.
As the review explains, this is targeted to a very specific niche. Whether people understand the scope of that niche or not is a different thing.
KAlmquist - Thursday, April 2, 2015 - link
It's too bad that AnandTech didn't benchmark the 400 GB model, since that's the one most people are going to be most interested in buying. I assume that it's a case of Intel not making the 400 GB model available for review, rather than AnandTech deciding not to review it.
jwilliams4200 - Thursday, April 2, 2015 - link
Agreed, the 400 GB model is more interesting to consumers.
Also, I hope that if AnandTech does test the 400GB model, they re-run the tests of the comparison SSDs so that the competitors are overprovisioned to 400GB usable capacity (from 512GB or whatever nominal capacity). That is the only reasonable way to compare, since anyone who wants high sustained performance and is willing to try a drive with only 400GB to achieve it would obviously be willing to overprovision, for example, a 512GB Samsung 850 Pro to only 400GB usable to achieve higher sustained performance.
Kristian Vättö - Thursday, April 2, 2015 - link
That is something that I've had on my mind for a while now, and I even have a way to do it now (the Storage Bench traces are a bit tricky since they are run on a raw drive, but thankfully I found an hdparm command for limiting the far LBA count). The only issue is time, because it takes roughly two days to test one drive through the 2015 suite, so I may include a drive or two as comparison points, but I definitely can't test all drives with added OP.
Kristian Vättö - Thursday, April 2, 2015 - link
Not far LBA count, but raw LBA count, obviously :)
Stahn Aileron - Friday, April 3, 2015 - link
Honestly, I'd rather have AnandTech test drives and components as-is ("stock" from the manufacturer) and publish those results, rather than spend time doing tests on non-standard, customized configurations. Let the customers do that if they truly need that type of set-up, or leave it to integrators/specialists.
As far as I know, most customers of a product just want to use it immediately, right out of the box, no mucking with special settings. Most products are advertised that way as well.
Really, just test the product(s) as advertised/intended by the manufacturer first and foremost to see if it matches their claims and properly serves the target userbase. Specialty cases should only be done if that is actively advertised as a feature, there is truly high interest, something makes you curious, and/or you have the time.
jwilliams4200 - Friday, April 3, 2015 - link
If this were a review site for the totally clueless, then you might have a point. But AnandTech has always catered to enthusiasts and those who either already know a lot about how computer equipment works, or who want to learn.
The target audience for this site would certainly consider something as simple as overprovisioning an SSD if it could significantly increase performance and/or achieve similar performance at lower cost relative to another product. So it makes sense to test SSDs configured for similar capacity or performance rather than just the "stock" configuration. Anyone can take an SSD and run a few benchmarks. It takes a site as good as AnandTech to go more in-depth, consider how SSDs are actually likely to be used, and then present useful tests to its readers.
Kristian Vättö - Thursday, April 2, 2015 - link
That is correct. I always ask for all capacities, but in this case Intel decided to sample all media with only 1.2TB samples. I've asked for a 400GB, though, and will review it as soon as I get it.
Mr Alpha - Thursday, April 2, 2015 - link
Has anyone managed to find this mythological list of compatible motherboards?
Kristian Vättö - Thursday, April 2, 2015 - link
I just asked Intel and will provide a link as soon as I get one. Looks like it's not up yet as they didn't have an answer right away.
tstones - Thursday, April 2, 2015 - link
Will older chipsets like Z77 and Z87 support NVMe?
Kristian Vättö - Thursday, April 2, 2015 - link
That's up to the motherboard manufacturers. If they provide a BIOS with NVMe support then yes, but I wouldn't get my hopes up, as motherboard OEMs don't usually do updates for old boards.
vailr - Thursday, April 2, 2015 - link
If Z97 board BIOSes from Asus, Gigabyte, etc. are going to be upgradeable to support Broadwell for all desktop (socket 1150) motherboards, wouldn't they also want to include NVMe support? I'm assuming such support is at least within the realm of possibility, for both Z87 and Z97 boards.
TheRealPD - Thursday, April 2, 2015 - link
Has anyone worked out exactly what the limitation is / why the bios needs upgrading yet?
Simply that I had the idea that the P3700 had its own nvme orom, nominally akin to a raid card... ...& that people have had issues with the updated mobo bioses replacing intel's one with a generic one...
...which kind of suggests that the bios update could conceivably not be a requirement for some nvme drives.
vailr - Friday, April 3, 2015 - link
A motherboard BIOS update would be required to provide bootability. Without that update, an NVMe drive could only function as a secondary storage drive. As stated elsewhere, each device model needs specific support added to the motherboard BIOS. Samsung's SM941 (an M.2 SSD form-factor device) is a prime example of this conundrum, and why it's not generally available as a retail device - although it can be found for sale at Newegg or on eBay.
TheRealPD - Friday, April 3, 2015 - link
Ummmm... Well, for example, looking at http://www.thessdreview.com/Forums/ssd-discussion/... the P3700 could be used as a boot drive on a Z87 board in July 2014 - so clearly that wasn't using a mobo bios with an added nvme orom, as ami hadn't even released their generic nvme orom that's being added to the Z97 boards.
(& from recollection, on Z97 boards, in Windows the P3700 is detected as an intel nvme device without the bios update... ...& an ami nvme one with the update)
This appears to be effectively the same as, say, an lsi sas raid card loading its own orom during the boot process & the drives on it becoming bootable - as obviously, as new raid cards with new feature sets are introduced, you don't have to have updates for every mobo bios.
Now, whilst I can clearly appreciate that *if* an nvme drive didn't have its own orom then there would be issues, it really doesn't seem to be the case with drives that do... ...so is there some other issue with the nvme feature set, or...?
Now, obviously this review is about another intel nvme pcie ssd - so it might be reasonable to imagine that it could well also have its own orom - but, more generally, I'm questioning the assumption that just because it's an nvme drive you can *only* fully utilise it with a board with an updated bios...
...& that if it's the case that some nvme ssds will & some won't have their own orom (& it doesn't affect the feature set), it would be a handy thing to see talked about in the reviews as it means that people with older machines are neither put off buying nor buy an inappropriate ssd when more consumer orientated ones are released.
TheRealPD - Saturday, April 4, 2015 - link
I think I've kind of found the answer via a few different sources - it's not that nvme drives necessarily won't work properly with booting & whatnot on older boards... it's that there's no stated consistency as to what will & won't work...
So apparently they can simply not work on some boards due to a bios conflict, & there can separately be address space issues... So the ami nvme orom & uefi bios updates are about compatibility - *not* that an nvme ssd with its own orom will or won't necessarily work without them on any particular setup.
it would be very useful if there was some extra info about this though...
- well, it's conceivable that at least part of the problem is akin to the issues on much older boards with the free bios capacity for oroms & multiple raid configurations... ...where if you attempted to both enable all of the onboard controllers for raid (as this alters the bios behaviour to load them) &/or had too many additional controllers, then one or more of them simply wouldn't operate due to the bios limitation; whereas they'd all work both individually & with smaller numbers enabled/installed... ...so people with older machines who haven't seen this issue previously - simply because they've never used cards with their own oroms, or the ssd is the extra thing where they're hitting the limit - are now seeing what some of us experienced years ago.
- or, similarly, that there's a min uefi version that's needed - I know that intel's recommending 2.3.1 or later for compatibility but clearly they were working on some boards prior to that...
pesho00 - Thursday, April 2, 2015 - link
Why did they omit M.2? I really think this is a mistake, missing the whole mobile market while the SM951 will penetrate both!
Kristian Vättö - Thursday, April 2, 2015 - link
Because M.2 would melt with that beast of a controller.
metayoshi - Thursday, April 2, 2015 - link
The idle power spec of this drive is 4W, while the SM951 is at 50 mW, with L1.2 power consumption at 2 mW. Your notebook's battery life will suffer greatly with a drive this power-hungry.
jwilliams4200 - Thursday, April 2, 2015 - link
Even though you could not run the performance tests with additional overprovisioning on the 750, you should still show the comparison SSDs with additional overprovisioning.
The fair comparison is NOT the Intel 750 with no OP versus other SSDs with no OP. The comparison you should be showing is similar capacity vs. similar capacity. So, for example, a 512GB Samsung 850 Pro with OP to leave it with 400GB usable, versus an Intel 750 with 400GB usable.
I also think it would be good testing policy to test ALL SSDs twice: once with no OP and once with 50% overprovisioning, running them through all the tests in both configurations. The point is not that 50% OP is typical, but rather that it will reveal the best- and worst-case performance the SSD is capable of. The reason I say 50% rather than 20% or 25% is that the optimal OP varies from SSD to SSD, especially among models that already come with significant OP. So, to be sure that you OP enough to reach optimal performance, and to provide historical comparison tests, it is best just to arbitrarily choose 50% OP, since that should be more than enough to achieve optimal sustained performance on any SSD.
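For reference, the extra overprovisioning discussed here is typically done by capping the drive's reported capacity, e.g. with hdparm's `-N` option on Linux (Kristian mentions an hdparm-based approach above). A minimal sketch of the arithmetic, assuming 512-byte LBAs and decimal gigabytes; the device path is a placeholder:

```python
# Compute the max-sector argument for hdparm -N, which caps the number
# of visible LBAs to leave an SSD with a given usable capacity.
# Assumes 512-byte sectors and decimal (10^9-byte) GB; /dev/sdX is a
# placeholder, not a real device.
SECTOR_SIZE = 512

def max_sectors_for(usable_gb: int) -> int:
    return usable_gb * 10**9 // SECTOR_SIZE

sectors = max_sectors_for(400)   # e.g. cap a 512GB drive at 400GB usable
print(f"hdparm -N p{sectors} /dev/sdX   # 'p' prefix makes it persistent")
```

The remaining raw NAND then acts as extra spare area, which is what improves sustained write consistency.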
knweiss - Thursday, April 2, 2015 - link
Kristian, you wrote "for up to 4GB/s of bandwidth with PCIe 3.0 (although in real world the maximum bandwidth is about 3.2GB/s due to PCIe inefficiency)". Is this really true? PCIe 2.0 uses 8b/10b encoding with 20% bandwidth overhead, which would match your numbers. However, PCIe 3.0 uses 128b/130b encoding with only 1.54% bandwidth overhead. Could you please explain the inefficiency you mentioned? Thanks in advance!
DanNeely - Thursday, April 2, 2015 - link
The real-world number includes the bandwidth consumed by PCIe packet headers, NVMe packet headers, NVMe command messages, etc. Those are over and above the penalty from the encoding scheme on the bus itself.
IntelUser2000 - Thursday, April 2, 2015 - link
The 4GB/s figure takes into account the encoding scheme.
Each lane of v1 PCI-Express is 2.5GT/s, so with 8b/10b encoding you end up with 2.5G/10 = 250MB/s. Quadruple that for four lanes and you end up with 1GB/s.
v2 of PCI-Express is double that, and v3 of PCI-Express is double that again, which gives the 4GB/s number.
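The encoding arithmetic in this sub-thread can be checked directly. A quick sketch of the per-lane figures (line-encoding only; as noted above, packet headers and command traffic reduce the usable throughput further):

```python
# Per-lane PCIe bandwidth from transfer rate and line encoding.
# Gen1/Gen2 use 8b/10b (20% overhead); Gen3 uses 128b/130b (~1.5%).
def lane_mb_s(gt_per_s: float, payload_bits: int, total_bits: int) -> float:
    # GT/s -> bits/s, scale by encoding efficiency, divide by 8 for bytes
    return gt_per_s * 1e9 * payload_bits / total_bits / 8 / 1e6

gen1 = lane_mb_s(2.5, 8, 10)     # 250 MB/s per lane
gen2 = lane_mb_s(5.0, 8, 10)     # 500 MB/s per lane
gen3 = lane_mb_s(8.0, 128, 130)  # ~985 MB/s per lane

print(f"x4 Gen3 raw: {4 * gen3 / 1000:.2f} GB/s")  # ~3.94 GB/s before headers
```

So the ~4GB/s headline figure for a Gen3 x4 link is the post-encoding number, and the ~3.2GB/s real-world figure reflects the protocol overhead on top of it.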
aggrokalle - Thursday, April 2, 2015 - link
I'm interested in this as well... so how many NAND channels do the 1.2TB and 400GB versions have, Kristian?
tspacie - Thursday, April 2, 2015 - link
Was there an approximate release date?
gforce007 - Thursday, April 2, 2015 - link
When will these be available for purchase? Also, I have an M.2 slot on my motherboard (Z10PE-D8 WS), but I'd rather utilize the 2.5" 15mm form factor. I am a bit confused. I don't think that board has SFF-8639. Is there an adapter? Will that affect performance? I assume so - and by how much?
knweiss - Thursday, April 2, 2015 - link
The motherboard (host) end of the cable has a square-shaped SFF-8643(!) connector. E.g. ASUS ships an M.2 adapter card for the X99 Sabertooth that offers a suitable port. SFF-8639 is on the drive's end.
emn13 - Thursday, April 2, 2015 - link
That endurance number is scarily low for a 1.2TB drive. 70GB a day for 5 years - that's about 128 TB of writes total, and that's just ~100 drive writes! Put another way, at around 1GB/sec (which this drive can easily do), you'd reach those 100 drive writes in just 36 hours.
p1esk - Thursday, April 2, 2015 - link
This is a consumer drive. What's your use case where you write more than 70GB a day?
juhatus - Friday, April 3, 2015 - link
Raw 4K video, and it's not even close to being enough.
At 4K (4096 x 2160) it registers 1697 Mbps, which equals 764 GB/hour of 4K video footage. A single-camera large Hollywood production can often shoot 100 hours of footage. That's 76 TB of 4K ProRes 4444 XQ footage.
The upcoming David Fincher film GONE GIRL crept up on 500 hours of raw footage during its multi-camera 6K RED Dragon production. That equates to roughly 315 TB of RED 6K (4:1) footage. Shit just got real for data management and post-production workflows.
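The data-rate conversion here is easy to verify. A quick sketch, taking the commenter's 1697 Mbps ProRes 4444 XQ figure as given and treating Mb and GB as decimal units:

```python
# Convert a video bitrate into storage consumed per hour of footage.
def gb_per_hour(mbit_per_s: float) -> float:
    return mbit_per_s / 8 * 3600 / 1000   # Mbit/s -> MB/s -> MB/h -> GB/h

rate = gb_per_hour(1697)          # ProRes 4444 XQ at 4K, per the comment
shoot_tb = rate * 100 / 1000      # a 100-hour single-camera shoot

print(f"{rate:.0f} GB/hour, ~{shoot_tb:.1f} TB for 100 hours")
```

That comes out to roughly 764 GB/hour and ~76 TB for 100 hours, matching the comment's numbers.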
p1esk - Friday, April 3, 2015 - link
Let me say that again: this is a consumer drive. That's why it is so cheap compared to the P3700. A large Hollywood production company will surely be able to afford enough of these drives not to worry about exceeding 128TB write limits.
emn13 - Saturday, April 4, 2015 - link
I'm sure they can afford it - but why pay more than necessary? Compared to the competition, this is an unusually low write endurance for such a high-end drive. Take a peek at, say, the 1TB 850 Pro; that's likely to be considerably cheaper (and perhaps more deserving of the "consumer" moniker), and its NAND is rated for a little more than 6000TB of (raw) writes.
128TB? That's really, really unusual for a drive like this.
earl colby pottinger - Tuesday, April 7, 2015 - link
Because that is how you run your company out of business: by being cheap on key hardware.
If you are producing enough 4K video to stress this drive, you are producing enough video that the cost of production is way greater than the cost of drives you won't have to worry about failing this way.
I have seen tons of companies go out of business or lose out on thousands of dollars in sales because they tried to save a few hundred dollars up-front against my advice.
Stop looking for cheap solutions if the storage is critical to the running of your business.
emn13 - Saturday, April 4, 2015 - link
I do a lot of large-file snapshot/restore stuff, and I definitely write a lot more than 70GB a day. Intel's own consumer-level 335 was rated for 700TB, and that was a much smaller drive. More specifically, this hasn't been a problem on other drives - neither on SSDs, nor on HDDs. While it's conceivable there are more efficient ways of working from the perspective of the drive, that's a hassle to arrange.
Perhaps it's worth pointing you to the SSD endurance experiment: http://techreport.com/review/26523/the-ssd-enduran...
All of those approximately 240GB drives survived at least 700TB of writes, and it was specifically the Intel that seemingly intentionally bricked itself at that point.
This drive is 5 times larger, and is rated for a fraction of that. This is pretty unreasonable to my mind.
darkgreen - Thursday, April 2, 2015 - link
This part of the review made me curse:
"...with the X25-M. It wasn't the first SSD on the market, but it was the first drive that delivered the aspects we now take for granted: high, consistent and reliable performance."
Arrrgh...
I was one of the early adopters who paid a ton for the X25-M. If you go back through the archives, though, you'll see Intel's X25-M had a fragmentation bug that made it slower than a spinning-platter hard drive after a bit of use. I was in that situation. Intel released a "fix" based on a script run on some old freeware, but they didn't support the fix *at all* and for many people (including me) it would never work.
So INTEL's "high, consistent and reliable performance" turned out to be total crap. I paid over $400 for what turned out to be a doorstop and had to replace it with a Corsair SSD a short time later. INTEL never offered a refund, support, or even an apology to all the people they had sold a totally nonfunctional product to. I still have that drive in my electronic junk pile and I curse INTEL every time it catches my eye.
I'm waiting for a good PCIe SSD before my next PC build; unfortunately, I would say INTEL products don't count, because in the past we've seen (inarguably, and documented on this very site!) that they mass-release buggy products, and if you happen to have bought one you're just hung out to dry when they turn out to have had major design errors.
Ugh. At least mention the history here and caution people, instead of suggesting Intel is reliable.
Makaveli - Thursday, April 2, 2015 - link
Anecdotal evidence!!
I've had two G2 160GB Intel drives in RAID 0 for a couple of years now and they've been solid, no issues.
So I disagree with your post. Do I win?
darkgreen - Friday, April 3, 2015 - link
I wasn't expecting that kind of reply. Google "intel replicated SSD firmware problem" (without the quotes) and you can read about the various things that happened, many of which were first reported at this very site - but I guess it WAS 6 years ago, so I shouldn't expect everyone to know about it.
I was running Win 7 64-bit and had a G1. You'll see reports that ALL G1s had a fragmentation issue that made them slower than spinning platters after a bit of use, and you'll see mainstream media reports about how the "fix" instead bricked drives for many users on Win 7 64-bit.
Not anecdotes - mainstream reporting - and I was one of the thousands affected, and I can confirm that even after those reports Intel did nothing for non-enterprise users but delete the 50-page thread on their support site.
Kristian Vättö - Thursday, April 2, 2015 - link
To put it frankly, there's no SSD (or HDD) manufacturer that hasn't had any issues, so you might as well go back to the good ol' pen&paper if you want something truly reliable ;-)
Raniz - Thursday, April 2, 2015 - link
Until the pen explodes and you have to buy a new shirt.
darkgreen - Friday, April 3, 2015 - link
Agreed. In coming up with a good Google search for the guy above who apparently hadn't heard about this, I encountered a lot of articles about necessary firmware updates for other vendors as well. All I know is that Intel left consumers without options or replacements; I don't know what happened in all those other cases. I suppose it's a good reason to think about how important the storage division is to any company you buy from, though. Intel might, conceptually, want to support SSDs, but I'd imagine all the management focus is on enterprise and processors. So who do you go with? OCZ (yikes! but maybe okay after the buyout?) Any thoughts on which companies actually value consumer purchases of their SSDs as "mission-critical"?
magreen - Thursday, April 2, 2015 - link
darkgreen, are you talking about a G1 without TRIM or a G2 with TRIM support?
darkgreen - Friday, April 3, 2015 - link
I had a G1 without TRIM. The Intel fix was based on some ancient shareware (FreeDOS!) that wouldn't work with many modern motherboards and in some cases left drives bricked. It was well reported at the time (see my comment above for a google search that returns articles), but lots of people wound up with X25-Ms that were useless. If you weren't an enterprise customer the Intel response was "tough luck." No refunds, no replacements, nothing. In all fairness I'm sure Intel would love to be able to support consumers, but they probably aren't set up for it in their storage area because it's just not a big area of their business bottom line.
magreen - Sunday, April 5, 2015 - link
Yeah, it seems like the G1 owners got screwed. (I have a G2 and G3 and they've both been great. Sorry they screwed the early adopters.) In Anand's words from 2009 when the G2 was released:
"TRIM isn’t yet supported, but the 34nm drives will get a firmware update when Windows 7 launches enabling TRIM. XP and Vista users will get a performance enhancing utility (read: manual TRIM utility). It seems that 50nm users are SOL with regards to TRIM support. Bad form Intel, very bad form."
http://anandtech.com/show/2806
"Overall the G2 is the better drive but it's support for TRIM that will ultimately ensure that. The G1 will degrade in performance over time, the G2 will only lose performance as you fill it with real data. I wonder what else Intel has decided to add to the new firmware...
I hate to say it but this is another example of Intel only delivering what it needs to in order to succeed. There's nothing that keeps the G1 from also having TRIM other than Intel being unwilling to invest the development time to make it happen. I'd be willing to assume that Intel already has TRIM working on the G1 internally and it simply chose not to validate the firmware for public release (an admittedly long process). But from Intel's perspective, why bother?
Even the G1, in its used state, is faster than the fastest Indilinx drive. In 4KB random writes the G1 is even faster than an SLC Indilinx drive. Intel doesn't need to touch the G1, the only thing faster than it is the G2. Still, I do wish that Intel would be generous to its loyal customers that shelled out $600 for the first X25-M. It just seems like the right thing to do. Sigh."
http://www.anandtech.com/show/2829/11
Redstorm - Thursday, April 2, 2015 - link
Could you elaborate on this statement about the SM951: "although there appears to be an NVMe version too after all"? Looking at the numbers, if NVMe even slightly improves the SM951 it would make it a better choice, and the M.2 form factor makes it much more attractive.
Kristian Vättö - Thursday, April 2, 2015 - link
Ganesh received an NVMe version of the SM951 inside a NUC and I've also heard from other sources that it exists. No idea of its retail availability, though, as RamCity hadn't heard about it until I told them.
eddieobscurant - Thursday, April 2, 2015 - link
If I'm not wrong, the NVMe version has P/N MZVPV256HDGL-00000 for the 256GB model while the AHCI version has P/N MZHPV256HDGL-00000.
Redstorm - Friday, April 3, 2015 - link
Thanks, looks promising. Found this with verbiage supposedly from RamCity saying they will ship in May:
http://translate.google.co.nz/translate?hl=en&...
Redstorm - Friday, April 3, 2015 - link
So no real proof that they exist then.
eddieobscurant - Thursday, April 2, 2015 - link
Kristian, there is a DRAM difference between the two models. The 400GB has 1GB DRAM while the 1.2TB model has 2GB. Do you think it plays a big role in terms of performance between the two models? Also, is there a way to reduce the overprovisioning in these drives? I would prefer 80GB more on the 400GB model over less consistency.
When will you review the Kingston HyperX Predator, and when will Samsung release the SM951 NVMe? Q3 or sooner?
KAlmquist - Thursday, April 2, 2015 - link
The 400GB model shouldn't need as much DRAM because it has fewer pages to keep track of. But there's no way to know how the 400GB model will perform until Intel sends out samples for review.
knweiss - Thursday, April 2, 2015 - link
According to Semiaccurate the 400 GB drive has "only" 512 MB DRAM. (Unfortunately, ARK hasn't been updated yet so I can't verify.)
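KAlmquist's point about fewer pages to track can be sanity-checked with a back-of-the-envelope sketch. This assumes a flat page-level mapping table with 4 KB pages and 4-byte entries, which Intel has not confirmed; it is only a plausibility check for the 512 MB / 2 GB DRAM figures, not the controller's actual layout.

```python
# Rough FTL mapping-table sizing. Assumes (not confirmed by Intel) a
# flat logical-to-physical map with 4 KB pages and 4-byte entries;
# ignores caches, parity and other firmware metadata held in DRAM.
def mapping_table_bytes(capacity_bytes, page_size=4096, entry_size=4):
    return capacity_bytes // page_size * entry_size

for capacity_gb in (400, 1200):
    mib = mapping_table_bytes(capacity_gb * 10**9) / 2**20
    print(f"{capacity_gb} GB drive -> ~{mib:.0f} MiB for the page map")
```

That works out to roughly 373 MiB for the 400 GB model and about 1.1 GiB for the 1.2 TB model, which lines up with the reported 512 MB and 1-2 GB DRAM configurations.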
eddieobscurant - Thursday, April 2, 2015 - link
You're right, it's probably 512MB for the 400GB model and 1GB for the 1.2TB model.
Azunia - Thursday, April 2, 2015 - link
In PCPer's review of this drive, they actually talk about the problems of benchmarking this drive. (https://www.youtube.com/watch?v=ubxgTBqgXV8) Seems like some benchmarks like Iometer cannot actually feed the drive, due to being programmed with a single thread. Have you had similar experiences during benchmarking, or is their logic faulty?
Kristian Vättö - Friday, April 3, 2015 - link
I didn't notice anything that would suggest a problem with Iometer's capability of saturating the drive. In fact, Intel provided us Iometer benchmarking guidelines for the review, although they didn't really differ from what I've been doing for a while now.
Azunia - Friday, April 3, 2015 - link
Reread their article, and it seems like the only problem is Iometer's fileserver IOPS test, which peaks at around 200,000 IOPS. Since you don't use that one, that's probably why you saw no problem.
Gigaplex - Thursday, April 2, 2015 - link
"so if you were to put two SSD 750s in RAID 0 the only option would be to use software RAID. That in turn will render the volume unbootable"
It's incredibly easy to use software RAID in Linux on the boot drives. Not all software RAID implementations are as limiting as Windows.
PubFiction - Friday, April 3, 2015 - link
"For better readability, I now provide bar graphs with the first one being an average IOPS of the last 400 seconds and the second graph displaying the standard deviation during the same period"
Lol, why not just portray standard deviation as error bars, like they are supposed to be shown? Kudos for being one of the few sites to recognize this, but what a convoluted, senseless way of showing it.
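The complaint elsewhere in this thread about un-normalised standard deviation (a 1,000 IOPS drive with 5% variance scoring "better consistency" than a 10,000 IOPS drive with 1% variance) is easy to demonstrate. The traces below are made-up illustrative numbers, not measured data:

```python
import statistics

# Toy IOPS traces (illustrative, not measured): a fast drive varying
# by 1% and a slow drive varying by 5% around their respective means.
fast = [9_900, 10_100, 9_900, 10_100]   # mean 10,000 IOPS, +/-1%
slow = [950, 1_050, 950, 1_050]         # mean  1,000 IOPS, +/-5%

for name, trace in (("fast", fast), ("slow", slow)):
    mean = statistics.mean(trace)
    sd = statistics.pstdev(trace)
    cv = sd / mean  # coefficient of variation = normalised std dev
    print(f"{name}: std dev = {sd:.0f} IOPS, CV = {cv:.1%}")
```

Raw standard deviation scores the slow drive as "more consistent" (50 vs 100 IOPS), while the coefficient of variation correctly ranks the fast drive (1% vs 5%).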
Chloiber - Friday, April 3, 2015 - link
I think the practical tests in many other reviews show that the normal consumer gets absolutely no benefit (except being able to copy files faster) from such an SSD. We reached the peak a long time ago; SSDs are not the limiting factor anymore. Still, it's great to finally see major improvements again. It was always sad that all SSDs got limited by the interface. This was the case with SATA 2, and it's the case with SATA 3.
akdj - Friday, April 3, 2015 - link
Thanks for sharing, Kristian. Query about the throughput using these on external Thunderbolt docks and PCIe 'decks' (several new third-party drive and GPU enclosures are experimenting with the latter, adding powerful desktop cards {GPU} etc.)... Would there still be the 'bottleneck' (not that SLI or Crossfire, with the exception of the Mac Pro and how the two AMDs work together, would be a concern in OS X, but Windows motherboards...) if you were to utilize the TBolt headers to the PCIe lane -> CPU? These seem like a better idea than external video cards for what I'm doing on the rMBPs. The GPUs are quick enough, especially in tandem with the Iris Pro and its ability to 'calculate' ;) but a 2.4GB/s twin-card RAID external box with a 'one cord' plug, hot or cold, would be SWEEET.
wyewye - Friday, April 3, 2015 - link
Kristian: test with QD128, moron, it's NVMe. Anandtech becomes more and more idiotic: poor articles and crappy hosting, you have to reload pages multiple times to access anything.
Go look at TSSDreview for a competent review.
Kristian Vättö - Friday, April 3, 2015 - link
As I explained in the article, I see no point in testing such high queue depths in a client-oriented review because the portion of such IOs is marginal. We are talking about a fraction of a percent, so while it would show big numbers it has no relevance to the end-user.
voicequal - Saturday, April 4, 2015 - link
Since you feel strongly enough to levy a personal attack, could you also explain why you think QD128 is important? Anandtech's storage benchmarks are likely a much better indication of user experience unless you have a very specific workload in mind.
d2mw - Friday, April 3, 2015 - link
Guys, why are you copy-pasting the same old specs table and formulaic article? For a review of the first consumer NVMe drive, I'm sorely disappointed you didn't touch on latency metrics: one of the most important improvements with NVMe.
Kristian Vättö - Friday, April 3, 2015 - link
There are several latency graphs in the article and I also suggest that you read the following article to better understand what latency and other storage metrics actually mean (hint: latency isn't really different from IOPS and throughput).
http://www.anandtech.com/show/8319/samsung-ssd-845...
Per Hansson - Friday, April 3, 2015 - link
Hi Kristian, what evidence do you have that the firmware in the SSD 750 is any different from that found in the DC P3600 / P3700? According to leaked reports released before, they have the same firmware: http://www.tweaktown.com/news/43331/new-consumer-i...
And if you read the Intel changelog you see in firmware 8DV10130: "Drive sub-4KB sequential write performance may be below 1MB/sec"
http://downloadmirror.intel.com/23931/eng/Intel_SS...
Which was exactly what you found in the original review of the P3700:
http://www.anandtech.com/show/8147/the-intel-ssd-d...
http://www.anandtech.com/bench/product/1239
Care to retest with the new firmware?
I suspect you will get identical performance.
Per Hansson - Saturday, April 4, 2015 - link
I should be more clear: I mean that you retest the P3700. And obviously the performance of the 750 won't match that, as it is based off the P3500.
But I think you get what I mean anyway ;)
djsvetljo - Friday, April 3, 2015 - link
I am unclear on which connector this will use. Does it use the video card PCI-E port? I have an MSI Z97 MATE board that has one PCI-E Gen3 x16 and one PCI-E Gen2 x4. Will I be able to use it, and will I be limited somehow?
DanNeely - Friday, April 3, 2015 - link
If you use the 2.0 x4 slot your maximum throughput will top out at 2 GB/s. For client workloads this probably won't matter much, since only some server workloads can hit situations where the drive can exceed that rate.
djsvetljo - Friday, April 3, 2015 - link
So it uses the GPU express port although the card pins are visually shorter?
eSyr - Friday, April 3, 2015 - link
> although in real world the maximum bandwidth is about 3.2GB/s due to PCIe inefficiency
What does this phrase mean? If you're referring to 8b/10b encoding, this is plainly false, since PCIe Gen3 utilizes 128b/130b coding. If you're referring to the overheads related to TLP and DLLP headers, that depends on the device's and PCIe RC's maximum payload size. But even with the (minimal) 128-byte limit it would be 3.36 GB/s. In fact, modern PCIe RCs support much larger TLPs, thus reducing header-related overheads.
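eSyr's arithmetic can be sketched out. The ~22 bytes of per-TLP overhead below (framing, sequence number, header, LCRC) is an assumption chosen to reproduce the 3.36 GB/s figure; the exact overhead depends on the link and header format:

```python
# PCIe 3.0 x4 raw link rate: 8 GT/s per lane, 128b/130b line coding.
line_rate = 8.0 * 4 * 128 / 130 / 8        # ~3.94 GB/s after coding

# Scale by payload efficiency for a given max payload size, assuming
# ~22 bytes of framing/sequence/header/LCRC overhead per TLP.
def effective_gbps(max_payload, overhead=22):
    return line_rate * max_payload / (max_payload + overhead)

print(f"128 B TLPs: {effective_gbps(128):.2f} GB/s")   # ~3.36, as above
print(f"256 B TLPs: {effective_gbps(256):.2f} GB/s")   # ~3.63

# For comparison, PCIe 2.0 x4 with 8b/10b coding tops out at
# 5.0 * 4 * 8 / 10 / 8 = 2.0 GB/s even before TLP overheads.
```

Larger maximum payload sizes push the effective rate back toward the ~3.94 GB/s line rate, which is why the flat "3.2 GB/s due to PCIe inefficiency" claim in the article is too pessimistic.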
oranos - Friday, April 3, 2015 - link
Insane performance, insane value. What else to say? Intel never loses a step and surprises at every turn.
Peichen - Friday, April 3, 2015 - link
Sounds like there is a big form factor change coming to desktop computers in the next few years: complete removal of 5.25" and 3.5" drives, M.2 and 2.5" drives taking over, CPUs limited to <77W and video cards to <250W. I should hold off replacing my still very good case until I am building a new computer in 3-4 years.
cjones13 - Friday, April 3, 2015 - link
How would this drive compare with a 4-drive (Samsung 850 Pro 512GB), two-card Sonnet Tempo SSD Pro Plus arrangement? This setup is about $600 more, but 800GB larger and overall ~same $/GB @ $0.82.
Freakie - Friday, April 3, 2015 - link
Maybe I'm just blind, but I don't see this 750 in Bench? Did someone forget to add it to Bench or is there a reason why it's not in there?
boe - Saturday, April 4, 2015 - link
Those 10TB and 32TB SSDs can't come soon enough. I just hope they come down to an affordable price very soon, as standard SSDs are still way too expensive per TB for any real storage needs.
gattberserk - Saturday, April 4, 2015 - link
Can I ask why the boot time is so slow? For a drive this expensive, that is not something that is tolerable. Is it possible to do a boot-up timing with the fast boot function enabled? I wanna see how fast it will be compared with other SATA drives using the same fast boot function.
The boot-up time will be the last factor in deciding if I wanna pull the trigger on this one.
Laststop311 - Saturday, April 4, 2015 - link
This drive is a beast and just raised the cost of my Skylake-E build another 1000 dollars. Maybe an even better 2nd-generation version will be out by then. Upgrading my Gulftown to the 8-core Skylake-E flagship. My 4.3GHz i7-980X will have lasted me 7 years by the time Skylake-E comes out, which is a pretty darn good service life. I'll convert the ole Gulftown into a seedbox/personal cloud NAS/HTPC/living room gaming console: kill all the OCs, undervolt the CPU to the lowest voltage stable at stock, and turn all the Noctua fans down with ULN adapters into silence mode. It will be rough re-buying a bunch of parts I wouldn't have had to if I didn't keep the PC together, but it's too good of a PC still to dismantle for parts. Will be nice having a beastly backup PC.
My Skylake-E build has really ballooned in price, but this next upgrade should last a full decade with a couple GPU upgrades, using the flagship Skylake-E 8-core i7 + 1.2TB Intel 750 boot drive + NVIDIA/AMD flagship 16nm FF+ GPU. Basically like 3000 dollars just in 3 parts :(. That's OK tho, it brings too many features to the table: PCIe 4.0, DMI 3.0, USB 3.1 built into the chipset natively, 10Gbit Ethernet natively, up to 3x Ultra M.2 slots and the SFF connector used in this drive, possibly Thunderbolt 2 built in natively, and of course quad-channel DDR4. Hopefully better overclocking with the heat-producing FIVR removed; guessing 4.7-5GHz will be possible on good water cooling with the 8-core.
Sorry, got on a tangent. I'm just excited there are finally enough upgrades to make a new PC worth it. No applause for Intel tho, it took them 7 years to make a Gulftown PC worth upgrading. I should see a nice IPC gain from the i7-980X Gulftown to Skylake-E. I'll be happy with a 50-60% IPC gain and 500 extra MHz over my 980X, so 4.8GHz. I think 6x 140mm high static pressure Noctuas in push/pull and a 420mm rad should provide enough cooling for 4.8GHz on an 8-core Skylake-E if the chip is capable. The goal is to push it to 5.0GHz tho and get a 700MHz speed increase + an additional 55% IPC gain.
gattberserk - Sunday, April 5, 2015 - link
Unfortunately, Skylake-E is not coming for another 2 years. There is no news of even BW-E, and that will be another year after Skylake comes in. By then, the 750 would be obsolete, esp. with Samsung 3D NAND in NVMe PCIe SSDs.
JatkarP - Saturday, April 4, 2015 - link
<$1/GB at 2400/1200 MB/s R/W performance. What else do you need!!
Ethos Evoss - Saturday, April 4, 2015 - link
I would rather go for the new Plextor, which is 5 times cheaper and with the same specs.
Ethos Evoss - Saturday, April 4, 2015 - link
http://www.thessdreview.com/our-reviews/plextor-m6...
Brazos - Monday, April 6, 2015 - link
Does the Plextor use NVMe?Sushisamurai - Saturday, April 4, 2015 - link
I mentioned this on twitter with you already, but a Dead Rising 3 on a HDD versus a NVMe SSD comparison would be nice :) would save me the work of doing it and testing it on my own :p
AntonAM - Monday, April 6, 2015 - link
I don't understand why both drives have the same endurance if one of them has 3 times more flash. Is it the endurance of something else?
emn13 - Monday, April 6, 2015 - link
The endurance figure is also *really* low compared to other drives. It works out to around 128TB of total writes; that's on the order of 50 times less than an 850 Pro (which is slightly smaller). I'm hoping this is just a really stingy guarantee and not representative of the actual drive; otherwise I'd really recommend against using it.
I mean, running the AnandTech Destroyer benchmark with its close to 1TB of writes would use up your write allowance for the next two weeks (put another way, it'd cost around $10 in relation to the $1k drive cost).
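For reference, emn13's ~128 TB figure follows from the rated write endurance. The 70 GB/day and 5-year numbers below are Intel's published rating for the SSD 750 as I understand it, assumed here rather than quoted from the review:

```python
# Intel rates the SSD 750 at 70 GB of host writes per day over the
# 5-year warranty (assumed figures, not taken from the review above).
gb_per_day = 70
total_tb = gb_per_day * 365 * 5 / 1000
print(f"rated endurance: ~{total_tb:.0f} TB")        # ~128 TB

# A benchmark run writing ~1 TB consumes about two weeks' allowance:
days = 1000 / gb_per_day
print(f"1 TB of writes = ~{days:.1f} days of the rated allowance")
```

Note this is a warranty rating, not a physical wear limit; the flash itself will typically absorb far more than the rated writes.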
edved - Tuesday, April 7, 2015 - link
So how does this compare to the Kingston HyperX Predator that was recently reviewed and I recently purchased?!
eliz82 - Tuesday, April 7, 2015 - link
Any chance of testing the Kingston HyperX Predator PCIe SSD?
SanX - Tuesday, April 7, 2015 - link
Pour all this flash trash down the drain and start making fully power-loss-protected RAM drives with flash/hard drive backup. They would be cheap by now if not for the slow, self-destroying flash garbage lying in the way.
gospadin - Tuesday, April 7, 2015 - link
In other words, 100x the cost for a marginal improvement in performance?Rustang - Wednesday, April 8, 2015 - link
1) Why would you post a review of an Intel SSD 750 PCIe SSD solution without benchmarking it against the other state-of-the-art Intel PCIe SSD, the Intel DC P3700?
2) Why would you put up sequential/random read/write graphs with pull-downs to display the different hardware results instead of efficiently putting all of the hardware results on ONE graph?
perula - Thursday, April 9, 2015 - link
[For Sell] Counterfeit Dollar(perula0@gmail.com)Euro,POUNDS,PASSPORTS,ID,Visa Stamp.Email/ perula0@gmail.com/
Text;+1(201) 588-4406
Greetings to everyone on the forum,
we supply perfectly reproduced fake money with holograms and all security features available.
Indistinguishable to the eye and to touch.
also provide real valid and fake passports for any country
delivery is discreet
We offer free shipping for samples which is 1000 worth fake as MOQ
oddbjorn - Tuesday, April 14, 2015 - link
I just received my 750 yesterday and soon found myself slightly bummed out by the lacking NVMe BIOS support in my ASUS P8Z77-V motherboard. I managed to get the drive working (albeit non-bootable) by placing it in the black PCIe 2.0 slot of the mainboard, but this is hardly a long-term solution. I posted a question to the https://pcdiy.asus.com/ website regarding possible future support for these motherboards, and this morning they had published a poll to check the interest in BIOS/UEFI support for NVMe. Please vote here if you (like me) would like to see this implemented! https://pcdiy.asus.com/2015/04/asus-nvme-support-p...
Elchi - Wednesday, April 15, 2015 - link
If you are a happy owner of an older ASUS MB (Z77, X79, Z87), please vote for NVMe support!
http://pcdiy.asus.com/2015/04/asus-nvme-support-po...
iliketoprogrammeoo99 - Monday, April 20, 2015 - link
Hey, this drive is now on preorder at amazon!
http://amzn.to/1DDKwoI
only $449 on amazon.
vventurelli74 - Monday, May 4, 2015 - link
Let's say I had an Intel 5520 chipset based computer that has multiple PCIe 2.0 slots. I would be able to get almost the maximum read performance (since PCIe 2.0 is 500MB/s per x1, so x4 = 2000MB/s), which is exciting on an older computer. I am curious as to whether this would be a bootable solution on my desktop, though. With 12 cores and 24 threads, this computer is far from under-powered, and it would be nice to breathe life into this machine, but the BIOS would have no NVMe support that I can think of. I know it has Intel SSD support, but this is from a different era. I wish someone could confirm that this either will or will not be bootable on non-NVMe mobos. I am getting conflicting answers.
vventurelli74 - Monday, May 4, 2015 - link
Never mind, I finally found the requirements: this drive will not be bootable on non-NVMe machines. What's more, even using it as a 'secondary' drive requires UEFI, apparently. My computer wouldn't be able to use this card at all? That would suck.
Great review!
Kristian, any chance you have two of these drives in the same machine and could test RAID0 performance? I'm running into some slow read performance when using two Samsung PCIe drives in a Dell server w/ a RAID1 or RAID0 config. It's not like regular bottlenecking where you hit a performance cap, but where the transfer rate drops down to ~1/5th the speed at a lower xfer rate.
I thought this was just a Storage Spaces problem, but the same holds true w/ regular windows software raid. I got up to about 4,200 MB/sec, then it tanked. I then ran two simultaneous ATTO tests on two of the drives and they both behaved normally & peaked at 2,700 MB/sec... so I don't think I'm hitting a PCIe bus limitation... I think it's all software.
I posted more detail on Technet here:
https://social.technet.microsoft.com/Forums/en-US/...
shadowfang - Saturday, September 26, 2015 - link
How does the PCIe card perform on a system without NVMe?