

Experiment: 8Gb USB Flash Drive Endurance Test


Posted on May 30, 2017 by lui_gough

Flash memory is, and has been, a commodity item for a while now. Almost everyone has at least a few
USB flash drives (sticks) and maybe even a few memory cards. When it comes to rapidly transferring
large files between devices, or even storing working documents, USB flash drives are the universal
choice. In recent years, we've also seen the popularization of solid-state drives (SSDs), which have put high
demands on flash manufacturers in terms of volume, as they rapidly displace moderately high capacity
hard drive storage from mainstream computers.

However, while flash continues to grow in popularity, it is not without its downsides. Flash memory operates
on the principle of charge trapping and tunneling, which is not a perfect process. The charges that represent
the data can be lost over time due to the insulator being leaky or damaged – the whole process of writing and
erasing data relies on tunneling that induces damage in the insulator which is not readily repairable in-place.
Worse still, owing to price pressure, the inclination to get more data onto less silicon has resulted in smaller
feature sizes (e.g. 16nm), producing smaller cells which hold less charge and have a shorter lifetime, presumably
due to the smaller insulator area as well. This is compounded by the move to triple-level cell (TLC) storage,
which requires accurately distinguishing between eight levels of cell voltage rather than the four of MLC or
the two of SLC, thus reducing the margin for error. As a result, at least for planar NAND, the move to smaller
lithography and TLC has resulted in reduced endurance.

For the most part, opinions today do not focus on the issue of endurance as much as in the early days of
SSDs. Larger drives, forced overprovisioning and better wear levelling technologies have largely resulted
in consumer drives seeing only a limited number of write-erase cycles before being retired because
they're too slow or too small. A simplistic calculation of, say, 10GB written a day to a 1TB SSD results in only
18.25 cycles (assuming no write amplification and perfect wear levelling) within a lifetime of five years.
As a result, manufacturers seem to warrant SSDs based on terabytes written, which can correspond to
anywhere between 150 and 500 cycles for TLC drives in general, even though most of their drives will go
further. However, the situation for inexpensive USB flash drives and memory cards is not so clear.
Part of the issue is a lack of diagnostic data (e.g. SMART as on SSDs) which would allow users to
understand the condition of their drives. Another is manufacturers' unwillingness to state any information
as to the endurance of their products.
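
As a quick check of that figure – a sketch only, under the same assumptions of no write amplification and
perfect wear levelling:

# 10GB/day for five years, expressed as full-drive write cycles of a 1TB SSD
echo "scale=2; 10 * 365 * 5 / 1000" | bc
# prints 18.25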

As a result, I decided to run a little experiment to try and find out just how robust inexpensive USB flash
drives of today are.
The Contenders and Methodology
Three contenders were chosen for this experiment – namely, three different drives I have reviewed on this
site in the past, of which I still had brand-new, never-used samples. The drives are:

Comsol 8Gb UF4-8000 USB Flash Drive
Sandisk Cruzer Facet 8Gb
Verbatim Store'n'Go Pinstripe 8Gb

They were attached to a computer with a USB 2.0 port and formatted to exFAT (to allow for large files
>4GB to be stored). Cygwin was used to continually write to the drive and log the progress using the
following command:

while :; do dd if=/dev/random of=/cygdrive/X/rawout.raw bs=8M &>> stress.log; sleep 5; done

Note that the command continually recreates a file filled with pseudo-random contents, so the drive
cannot "cheat". A large block size was chosen to avoid write amplification due to small-block accesses
being immediately purged to the drive. However, no effort was made to verify the validity of the written
data, or to verify that stored data would be retained over a period of cold storage. The experiment would
be terminated, and the log file examined to tally the number of write cycles endured, as soon as the drive
failed to continue to receive data. A sleep time was set for each loop to avoid excessive CPU utilization on
failure (although I should probably have checked the return value of dd to terminate the loop instead).
Post-failure examination of the drive status was undertaken.
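
As an aside, automating that termination is slightly awkward: because dd writes until the drive is full, it
exits non-zero on every pass, so a naive check of its return value would stop the loop immediately. A
workable refinement is sketched below – the drive letter, size threshold, sleep interval and log name are
all placeholders:

while :; do
  dd if=/dev/random of=/cygdrive/X/rawout.raw bs=8M &>> stress.log
  # dd ends each pass with "No space left on device", so instead check that a
  # reasonably-sized file was actually produced before looping again
  SIZE=$(stat -c %s /cygdrive/X/rawout.raw 2>/dev/null || echo 0)
  [ "$SIZE" -gt 1000000000 ] || break
  sleep 5
done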

This experiment began in November, prior to leaving for my holiday, and for a variety of reasons
(including a power failure and a loss of system control due to a loss of internet connectivity while I was
overseas) was not completed until recently.

Results
Write Endurance

Under the same conditions, the first drive to fail was the Sandisk Cruzer Facet, at 632 cycles written.
This aligns with the rough expectation that planar TLC NAND may only achieve anywhere from 300-1000
cycles.
The more devastating result was its failure mode. The drive exhibited a reluctance to accept writes, which
resulted in the filesystem becoming corrupted. On removal and reinsertion, the drive needed to be
formatted as no valid filesystem was recognized.

The damage was, however, more serious than that, as the internal drive geometry appears to have been
lost. The drive was unable to report its original size, and thus could be neither formatted nor read.
Recovery of data is very unlikely even with some expertise, as the whole drive is a single system-on-
package, with the NAND and controller encapsulated together.
In one instance, I was able to get the geometry to be recognized; however, the drive failed to return any data.

The next drive to fail was the Comsol UF4-8000, which first hiccuped at 743 cycles by dropping off the
USB bus with partial corruption. A reformat was successful, and another 216 write cycles were completed
before the drive again dropped out, for a total of 959 cycles. Partial corruption, especially of the
filesystem, was experienced; however, the drive remained readable.
Under this condition, it seems likely that data would be recoverable when the drive first falters.
However, wishing to explore the failure mode further, I reformatted the unit and attempted to run
H2testW on it, only for it to fail in less than one full cycle of writes.

Unfortunately, as it turns out, the act of trying to further write to the drive seems to have destroyed it –
it now wants to be formatted, but fails to be formatted. It's likely that the drive has now failed into a read-
only, but corrupted, state. This probably indicates that when a drive is exhibiting drop-out symptoms from
high cycle use, one should avoid any repairs to the filesystem and just image the drive and recover from a
copy of the data, as any writes may well exhaust the few spare cells that remain.
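
For what it's worth, taking such an image under Linux can be as simple as the sketch below, assuming the
stick still enumerates as a block device (shown here as the placeholder /dev/sdX) and there is enough space
for the image; a dedicated tool such as ddrescue is an even better option where available:

# take a read-only image of the faltering drive, then point recovery tools at the
# image (or a loop-mounted copy) rather than at the drive itself
sudo dd if=/dev/sdX of=~/stick-image.img bs=1M conv=noerror,sync status=progress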
It's interesting to see that, while both were based around Sandisk NAND, the Comsol lasted a little longer
(but not by much) and failed differently, as it used a third-party controller. Another factor is likely the
greater overprovisioning – 7.26GiB user-accessible for the Comsol versus 7.44GiB for the Sandisk Cruzer
Facet.

The last to fail was the Verbatim Store'n'Go, which achieved an impressive 9751 cycles. This drive was
built from Toshiba eMMC memory of a grade expected to be embedded in tablet devices, and thus its
endurance was likely to be greater purely due to this fact. Another benefit may have been the greater
overprovisioning on the Verbatim, which exposed only 7.21GiB of user-accessible storage.

At failure, the drive was still recognized for size and format, with data still partially readable. Attempting
to format it and run H2testW resulted in all writes appearing to succeed, albeit very slowly, but the written
data failed to verify.
This may be a preferable end-of-life behaviour, as the drive appears to "accept" any filesystem repair
writes despite not being able to commit them to memory, while allowing whatever is still salvageable to be
read. The cycle-life result approaches the 10,000 cycles claimed for MLC memory, and thus is quite
respectable.

Discussion and Other Key Points


Most users of cheap commodity USB memory sticks will probably be wondering why any of this is
important – after all, many of them may not even use 100 cycles before the stick is lost or damaged, so
even 600 cycles is plenty. However, there are a few things to consider.

For one, this cycle-life test did not evaluate data accuracy after writing – only when the drive failed to
write, or the filesystem got mangled, did we conclude the test and examine the drive. It's fairly probable
that some data corruption or access failures would have been seen earlier had verification been
undertaken. As a result, the numbers we are getting are an upper-bound result.

Secondly, the cycle-life test did not evaluate the persistence of the data stored on the drive. Over
continued cycling, the damage to the insulator is expected to increase the charge leakage rate, and thus it
is quite probable that data stored on drives which have undergone cyclic writes will not be retained for
as long, especially given the more stringent voltage margins of TLC storage and the lack of sophistication
in low-cost USB memory controllers. It seems quite probable that after even less than a year of storage,
data could be permanently lost, to the point of the drive failing due to a loss of firmware or geometry
metadata (note my experiences with microSD cards and even a Samsung 850 EVO SSD).

Due to the cost pressures on this segment of the market, the majority of USB memory sticks on the
market are likely to be planar TLC NAND and suffer such issues. The cheaper, slower, all-integrated
miniature USB devices appear to be particularly vulnerable and problematic, as any recovery from them
will be complicated by the fact that the NAND cannot be directly accessed. Regardless, the failure modes
also vary, and in some cases result in sudden and complete loss of access to data, so "self rescue" by
software-based recovery tools is not a possibility.

With this in mind, these conclusions are likely to extend to many memory cards, particularly microSD
cards where high densities and low costs prevail. In this application, it seems quite likely that cycle-life
exhaustion could occur, especially when used in embedded computing with data logging, or in dashcams
and security cameras which record in a loop. Such use voids the warranty of many microSD cards, but it's
important to keep this in mind, as the solution would be rendered completely ineffective if the card were
to fail or become unreadable when extracted from the camera. As for long-term storage, loss of data is
indeed a possibility, as such flash is not well suited to archival use.

Usage patterns will also make a difference, as will the sophistication of the wear levelling algorithm on
the controller. Regardless, if a USB key is acting "iffy", it could well be a sign that it is running out of spare
cells to reallocate, and could soon lock up into a read-only state or become unreadable. With that in mind,
it's unwise to rely on cheap USB sticks as your sole storage of working documents, despite the popularity
of doing so. However, there's no guarantee that even more expensive units are better.

Conclusion
The endurance of cheap USB flash drives was examined in a write-only scenario. The Sandisk Cruzer
Facet failed at 632 cycles with a complete failure to read any data; the Comsol UF4-8000 failed at 959
cycles, after an earlier hiccup and partial corruption, in a readable state that rejected writes; and the
Verbatim Store'n'Go failed at 9751 cycles with partial corruption, in a partially readable state that
accepted writes but could not commit them to memory. The failure modes were varied and have an
impact on the "recoverability" of the drive once end of life is reached.

This test produces an upper-bound figure for the endurance of cheap drives – it is likely that, had
verification been performed, the drives would have failed earlier, and that impacts on data retention over
time would also have been seen. Even though the cycle figures may seem ample for most consumers, there
are a number of applications (e.g. embedded data-logging, dash cams/surveillance) where they can be
exhausted. Use of flash drives as primary storage for working documents, or for long-term storage, is
probably unwise. Drives with drop-out symptoms or random write/verification failures are likely
exhibiting pre-failure symptoms and should be imaged and recovered without further writes to avoid
complete drive failure.

In light of this, the drives are still very suitable for occasional use and data interchange.

Technologies other than flash – for example, crosspoint memory as demonstrated in Intel Optane
modules – may well overcome some of these problems. Improvements in NAND geometry, such as 3D
V-NAND by Samsung and BiCS by Toshiba, may also offset some of the loss in cycle-life endurance;
however, it is unlikely that you will find such improved technologies when shopping at the price-sensitive
bottom end of the market, as many people do. Even paying more is no guarantee of quality, although it
seems quite likely that the NAND and controllers in SSDs are made to be significantly more reliable than
those in such cheap drives.




16 Responses to Experiment: 8Gb USB Flash Drive Endurance Test

Tesla1856 says:
May 31, 2017 at 3:34 am

Good post, thanks.


Back in 2011 … one of the first flash drives I ever bought was a SanDisk Cruzer 2GB from Staples. It was plain,
without any U3 or other security/encryption features. It always "long" formatted fine (as FAT32). I could even
write small files to it OK. It worked fine at first, but then started to fail. When I wrote files to it, they would get
silently corrupted. Windows was never able to see that the file was getting corrupted during copy. I had to use a
file-copy utility that verified every file as it was written to find the problem.
Needless to say, I've never bought another SanDisk USB flash drive again (or ever tried their SSDs). I now stick to
Verbatim and Lexar (both have lifetime warranties here in the USA). I have never had one of those fail.
Reply

Michael says:
July 16, 2017 at 4:21 pm

Hey Gough,

Awesome post! I was always wondering regarding the endurance of contemporary flash drives, especially since all
of the blog posts and S. Boboila’s paper are from more than a couple years ago.

There was just one thing I was curious about – it's well known that SSDs can benefit from overpartitioning.
However, is this also true for cheap flash drives? I identified the controller in a cheap Emtec flash drive a couple
of years ago, and its technical advertising stated it could do static wear levelling. Some time ago, dynamic wear
levelling was the way USB flash usually went, though, so I wonder if that's still the case now.

For example, would a cheap 8GB with a 6GB partition exhibit more endurance than the same model 8GB with a 7GB
partition? Is that unpartitioned space going to be used for wear leveling? What do you think? Is this a question
you’d be interested in investigating and writing about?

I’d be thrilled if so!

– Michael
Reply

lui_gough says:
July 17, 2017 at 9:27 am

Dear Michael,

I had a think about this and short of actually finding another fresh sample and destroying it, the answer is a
qualified “it depends”.

The first thing is how sophisticated the flash controller is. As external drives *mostly* do not support TRIM
over USB, the operating system has no way of signalling to the flash drive as to which blocks are no longer
needed and can be discarded (e.g. a deleted file). Without this mechanism, the pool of “spare” blocks will be
the overprovisioning, minus any bad blocks, assuming the drive had been filled completely at least once.
The only way around this is if the controller is very intelligent (e.g. Sandforce SF-2281 controllers) and can
utilize compression (where possible) to reduce block usage and give some extra room to move blocks
around. The other way is if the controller has an understanding of the underlying filesystem and does things
“on its own”, although this can result in very unexpected clashes with OSes depending on how they manage
their writes (e.g. write caches often mean data is written first, and metadata updated later).

Of course, there *is* a way that it can work without any of this, and that is if the flash mapping table is
completely blank after manufacturing and only populated as the sectors become used. I’ve seen some
evidence – some units have very high read speeds when blank and slow down once completely written to
the specification speed. In these cases, ensuring that the “extra” space is *never* used should leave it free for
the controller to use at its own will. However, whether a particular controller does this is not entirely
obvious, and if the spare area ever gets written, the space is ultimately allocated and taken away from the
controller.
The best way, although not without risk and not always possible, is to use the manufacturing process tools
for your given controller to “re-certify” or “re-manufacture” the drive with a different capacity altogether.
The downside is that doing this may not work in some cases where the compatible tool for a given controller
cannot be found, sometimes it involves a firmware downgrade, it takes time, the verification process for the
flash may be different since the “factory marked” bad block table may be destroyed, and the software could
be laden with viruses/problematic filter drivers that cause USB port issues down the track.

Hence the conclusion – a qualified “maybe”.

Sincerely,
Gough
Reply
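
(For anyone wanting to try the smaller-partition idea Michael describes, a minimal sketch under Linux is
below. The device name /dev/sdX and the 6GiB figure are placeholders, the commands destroy everything on
the stick, and as the reply above notes there is no guarantee a given controller will actually exploit the
never-written space.)

# create a single 6GiB FAT32 partition and leave the remainder of the stick unpartitioned
sudo parted /dev/sdX --script mklabel msdos mkpart primary fat32 1MiB 6GiB
sudo mkfs.vfat -n SPARE /dev/sdX1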

dr0 says:
July 14, 2019 at 7:51 am

Brilliant article. I was curious about this subject for a while but have never come across anything like this
experiment of yours. One question: how did you count the number of write cycles? Is there a diagnostic program
for pen drives that can display this number or you just took this information from a log file created by Cygwin?
Reply

lui_gough says:
July 14, 2019 at 10:18 am

As I wrote a simple infinite-loop script, the results of each dd command are appended to the log. By
examining the logfile, you can easily count the instances where the dd process completed: first back-
track from the end of the log and snip away all the lines where the drive had an error and the write
failed, then count the lines showing an average write speed to determine the number of write cycles
completed.

– Gough
Reply
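
(A quick way to do that tally, assuming GNU dd's usual summary format and the log name used in the article,
might be something along these lines – treat it as a sketch and sanity-check it against the raw log, remembering
that the final, failed pass also prints a summary line.)

# each completed pass leaves a "... bytes ... copied, ... s, ... MB/s" summary line
grep -c 'bytes.*copied' stress.log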

dr0 says:
July 15, 2019 at 8:35 am

Thanks for the reply. One more question: can you help me convert your command to run on Linux? I tried to run it
on one of my machines that runs Linux Mint 18.3, but it didn't work. I'm pretty bad with the software side of
things, especially terminal commands, so maybe I'm just doing something wrong.
Reply

lui_gough says:
July 15, 2019 at 9:29 am

Please please please be extra careful. Don’t go blindly running commands without understanding them, as
dd can and will overwrite things including the drive you are booting from if you are unfortunate enough to
make a critical typo and run it as the root user.

Basically, the reason it won’t work is because you need to change the paths. On Linux, your USB (I presume
it’s already mounted – if not, just open it up in the file browser) would normally be at
/media/username/serialnumber or something similar. You want your test file created inside that – so where
the dd command has of (short for output file), change that to reflect the drive you’re using (e.g.
/media/linuxuser/1234-5678). For drives larger than 4GiB, they MUST be formatted to a filesystem that can
accept a file of the size of the drive (i.e. not FAT32). If you do not, then dd will only be able to overwrite the
first 4GiB repeatedly and your cycle counts will not be accurate! On Linux, you could use ext2/3/4 and likely
even NTFS. Or better yet, you can gain root privileges and instead write to the block device directly (the best
case as this avoids a filesystem altogether allowing for more accuracy in testing the full capacity) but this is
where you need root privileges and a typo in the device name will overwrite and destroy data that you want
(and I can’t do this on Windows on the box I was running unattended while on holiday). You should also
change the logfile path/name to something sane for you or invoke it inside a directory where you can store
the logfile (e.g. ~/logfile.txt). Needless to say, trying to store the logfile on the USB stick itself would not end
well.

– Gough
Reply

dr0 says:
October 15, 2019 at 2:29 am

Hello again. So, I changed the command to: while :; do dd if=/dev/random
of=/media/compaq/5a151eb8-c245-4896-8e91-bb713698c21d/rawout.raw bs=8M &>> ~/stress.log; sleep
5; done … But it writes extremely slowly, literally a few bytes per second. The command has been
executing for almost an hour, but the rawout.raw file is only 8.7KB in size and the log file is still
empty. When I write to this very drive via regular file copy, the speed is absolutely normal and hovers
around 20MB/s. I have tried only two filesystem types for executing this command so far though
(FAT32 and EXT4). Do you know what could be the cause of such slow write speeds while executing
your script?
Reply

lui_gough says:
October 15, 2019 at 5:09 am

This is likely due to entropy exhaustion and will depend on the sources of randomness
available in the system.

To overcome this, you can use /dev/urandom instead, which provides a faster stream of
random data but perhaps is not quite as random as /dev/random is. This shouldn’t really
impact on the quality of the test and should improve speeds.

– Gough
Reply

dr0 says:
October 30, 2019 at 9:12 am

Thanks for the tip. Using “urandom” instead of “random” did help to increase the
write speed to its maximum level that my drive is capable of. But the resulting file gets
overwritten every 4GB regardless of target filesystem. So I decided to fill my drive
with random data and leave only 4GB of free space on it and constantly overwrite that
free area. It’s been running this way for a few days now. At first the write speed was
around 18-21 MB/s, as was expected from this drive during normal operation. But
after 1413 cycles of the 4GB of free space being overwritten, the write speed was cut in half and
is now hovering around 8-10 MB/s. At this point the drive has already reached 3220 instances of
its free area being overwritten. Frankly, I didn't expect this TLC-based drive to last this long.
I don't really care whether P/E cycles are accurately counted or not, though – I just want to wear
out the NAND cells on this drive as soon as I can. Any advice?

lui_gough says:
October 30, 2019 at 11:22 am

The file should not be overwritten every 4GB – something must be wrong. This is most
common on FAT32-formatted devices. I had no issues with NTFS on Windows via
Cygwin. Your other alternative is to make a file of fixed size – e.g. bs=8M count=1000
would produce a file of 8000MB – and check that it works properly. Perhaps you're
running a 32-bit OS and there's something odd with how /dev/urandom is working for
you.

Perhaps you might need to try making a random file of the right size, store it with a
hash on a drive, copy it to the device, hash it from the device and repeat. This would
also check data integrity, as sometimes drives may silently corrupt data or pretend to
write even though it has failed. It depends on the controller.

Also remember, if half the drive has static data, assuming there is static wear levelling,
writing half the drive will (on average) consume half a cycle from the standpoint of
the whole device.

Note that the drive may outlive your expectations – this is not surprising. Some
devices have long outlived their specifications. Look at the Samsung SSD for example –
https://techreport.com/review/27909/the-ssd-endurance-experiment-theyre-all-dead/

– Gough

dr0 says:
November 2, 2019 at 12:04 am

>>> “Perhaps you might need to try making a random file of the right size, store it with
a hash on a drive, copy it to the device, hash it from the device and repeat.” – How do I
do that? I figured out how to write a full drive capacity in one cycle but I have no clue
how to automate the hash checking process…

lui_gough says:
November 2, 2019 at 3:20 pm

Unfortunately, I've got too many other things to do at this point in time to solve this for
you – but if you're using Linux, you could go filesystem-less by using dd with
the block device itself (i.e. /dev/sdX). Likewise, in that case, as long as you know the
size of the device (e.g. from fdisk -l), then you can produce a few sets of random data
of that exact size that you store on a hard drive with their hashes (e.g. out of md5sum).
Write each set in sequence using dd from the hard drive to the USB drive, then hash
the drive itself and log results (e.g. md5sum /dev/sdX), comparing to the pre-recorded
hashes. You might need to use sed/awk to extract just the hash portion from the
returned text.

Otherwise, maybe you’d prefer testing with patterns (e.g. as h2testw does) which
makes it easier to verify the data, but then the issue is that static patterns may be
liable to compression by the drive and may not detect all sorts of corruption. You
might need to write your own program to do this most conveniently.
– Gough
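
(To make that concrete, a minimal sketch of such a hash-verify loop is below. It assumes the stick appears
as /dev/sdX, uses a single pre-generated data set rather than several, and needs root – a typo in the device
name will destroy data, so double-check it before running anything.)

DEV=/dev/sdX
SIZE=$(sudo blockdev --getsize64 "$DEV")
# one set of random data, exactly the size of the device, plus its reference hash
head -c "$SIZE" /dev/urandom > pattern.bin
REF=$(md5sum pattern.bin | awk '{print $1}')
while :; do
  sudo dd if=pattern.bin of="$DEV" bs=8M conv=fsync status=none
  GOT=$(sudo md5sum "$DEV" | awk '{print $1}')
  [ "$GOT" = "$REF" ] || { echo "verification failed $(date)" >> ~/verify.log; break; }
  echo "cycle OK $(date)" >> ~/verify.log
done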

Zbig says:
January 20, 2021 at 11:46 pm

After dealing with a failure of a SanDisk Cruzer storing an ESXi image, I found your blog and wondered whether
Verbatim still exists in 2021 and, if so, whether it still uses eMMC memory in the Store'n'Go model you tested, if
that model is still available. I managed to find 8GB sticks that look the same as yours, however the memory chip
looks different (an SMD package soldered on its two shorter sides). Anyway, I bought 20 of them and am in the
process of sacrificing one. It's been holding up pretty well for over a month – 1257 write cycles with data
validation so far, without an error.
Reply

Zbig says:
April 28, 2021 at 1:05 am

The stick failed on its 4436th full write-and-read cycle in Check Flash 1.17.0 on April 23rd, so it took a
serious beating quite well – about 6 months of non-stop writing at 8.48 MB/s followed by reading at 24.59 MB/s.
Reply

lui_gough says:
April 28, 2021 at 9:51 am

That’s not a bad result for a modern product! Thanks for sharing.

– Gough
Reply
