The most reliable way to test disks is down-and-dirty, on the command line.
Jim Salter – Feb 6
We are not going to be quite that specific here, but we will use fio to model and report on some key usage patterns common to desktop and server storage. The most important of these is 4K random I/O, which we discussed at length above. 4K random is where the pain lives; it's the reason your nice fast computer with a conventional hard drive suddenly sounds like it's grinding coffee and makes you want to defenestrate it in frustration.

Next, we look at 64K random I/O, in sixteen parallel processes. This is sort of a middle-of-the-road workload for a busy computer: there are a lot of requests for relatively small amounts of data, but there are also lots of parallel processes. On a modern system, that high number of parallel processes is good, because it potentially allows the OS to aggregate many small requests into a few larger requests. Although nowhere near as punishing as 4K random I/O, 64K random I/O is sufficient to significantly slow most storage systems down.

Finally, we look at high-end throughput, some of the biggest numbers you can expect to see out of the system, by way of 1MB random I/O. Technically, you could still get a (slightly) bigger number by asking fio to generate truly sequential requests, but in the real world those are vanishingly rare. If your OS needs to write a couple of lines to a system log, or read a few KB of data from a system library, your "sequential" read or write immediately becomes, effectively, 1MB random I/O as it shares time with the other process.
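To make that concrete, here is a sketch of how each of these three patterns might be modeled as a one-line fio invocation. The job names, file sizes, and the posixaio I/O engine are illustrative assumptions, not the only reasonable choices; adjust them for your own hardware and OS, and note that these are write tests, which create scratch files in the current directory:

    # 4K random write, single process, queue depth 1: the painful workload
    fio --name=4k-rand-write --ioengine=posixaio --rw=randwrite --bs=4k \
        --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

    # 64K random write across sixteen parallel jobs: the middle-of-the-road workload
    fio --name=64k-rand-write --ioengine=posixaio --rw=randwrite --bs=64k \
        --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1

    # 1MB random write: approaches the biggest throughput numbers the system can produce
    fio --name=1m-rand-write --ioengine=posixaio --rw=randwrite --bs=1m \
        --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1

Swapping --rw=randwrite for --rw=randread exercises the read side of the same patterns, and --end_fsync=1 makes fio flush dirty data to disk before it finishes reporting, so the write numbers are not inflated by the OS cache.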