
How fast are your disks? Find out the open source way, with fio

Throughput and latency and IOPS, oh my

The most reliable way to test disks is down-and-dirty, on the command line.

Latency is the flip side of the same performance coin. Where throughput refers to how many bytes of data per second you can move on or off the disk, latency — most commonly measured in milliseconds — refers to the amount of time it takes to read or write a single block. Most of the worst storage bottlenecks are latency issues that affect throughput, not the other way around.
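To put rough numbers on that relationship (illustrative, not measured): a drive that takes 10ms to service each 4K request, one request at a time, tops out at 100 operations per second, which is a mere 400KiB/sec of throughput no matter how fast its interface is. Cut the latency, and the throughput ceiling rises with it.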

We are not going to be quite that specific here, but we will use fio to model and report on some key usage patterns common to desktop and server storage. The most important of these is 4K random I/O, which we discussed at length above. 4K random is where the pain lives — it’s the reason your nice fast computer with a conventional hard drive suddenly sounds like it’s grinding coffee and makes you want to defenestrate it in frustration. Next, we look at 64K random I/O, in sixteen parallel processes. This is sort of a middle-of-the-road workload for a busy computer — there are a lot of requests for relatively small amounts of data, but there are also lots of parallel processes; on a modern system, that high number of parallel processes is good, because it potentially allows the OS to aggregate lots of small requests into a few larger requests. Although nowhere near as punishing as 4K random I/O, 64K random I/O is sufficient to slow most storage systems down significantly. Finally, we look at high-end throughput — some of the biggest numbers you can expect to see out of the system — by way of 1MB random I/O. Technically, you could still get a (slightly) bigger number by asking fio to generate truly sequential requests — but in the real world, those are vanishingly rare. If your OS needs to write a couple of lines to a system log, or read a few KB of data from a system library, your “sequential” read or write immediately becomes, effectively, 1MB random I/O as it shares time with the other process.
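We’ll build the 4K random-write command piece by piece in a moment. For reference, the other two workloads look something like this as fio one-liners; the block sizes and job counts come straight from the descriptions above, while the file sizes, queue depths, and runtimes are illustrative choices you can tune to your own system. (Windows users: swap in --ioengine=windowsaio, as discussed below.)

    # 64K random writes, sixteen parallel processes
    fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=64k --size=256m --numjobs=16 --iodepth=16 --runtime=60 --time_based --end_fsync=1

    # 1MB random writes, single process
    fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=1m --size=16g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1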

Windows

You can find Windows installers for fio at https://bsdio.com/fio/. Note that you may get SmartScreen warnings when running one of these installers, since they are not digitally signed. These packages are provided by Rebecca Cran and are available without warranty.

macOS

On a Mac, you’ll want to install fio via brew. If you don’t already have brew installed, issue the following command at the Terminal:
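    /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

(That is the current installer one-liner published at brew.sh; check there if it has changed.) Once brew is in place, installing fio itself is one more command:

    brew install fio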

First, we’ll examine the syntax needed for a simple 4K random write test. (Windows users: substitute --ioengine=windowsaio for --ioengine=posixaio in both this and future commands.)

--name= is a required argument, but it’s basically human-friendly fluff — fio will create files based on that name to test with, inside the working directory you’re currently in.

--ioengine=posixaio sets the mode in which fio interacts with the filesystem. POSIX is a standard that Windows, Macs, Linux, and BSD all understand, so it’s great for portability — although inside fio itself, Windows users need to invoke --ioengine=windowsaio, not --ioengine=posixaio, unfortunately. AIO stands for Asynchronous Input Output and means that we can queue up multiple operations to be completed in whatever order the OS decides to complete them. (In this particular example, later arguments effectively nullify this.)

--rw=randwrite means exactly what it looks like it means: we’re going to do random write operations to our test files in the current working directory. Other options include read (sequential reads), write (sequential writes), randread, and randrw, all of which should hopefully be fairly self-explanatory.

--bs=4k sets the block size to 4K. These are very small individual operations. This is where the pain lives; it’s hard on the disk, and it also means a ton of extra overhead in the SATA, USB, SAS, SMB, or whatever other command channel lies between us and the disks, since a separate operation has to be commanded for each 4K of data.

--size=4g means our test file(s) will be 4GB in size apiece. (We’re only creating one; see the next argument.)

--numjobs=1 means we’re only creating a single file, and running a single process commanding operations within that file. If we wanted to simulate multiple parallel processes, we’d do, e.g., --numjobs=16, which would create 16 separate test files of --size size, and 16 separate processes operating on them at the same time.

--iodepth=1 sets how deep we’re willing to try to stack commands in the OS’s queue. Since we set this to 1, this is effectively pretty much the same thing as the sync IO engine — we’re only asking for a single operation at a time, and the OS has to acknowledge receipt of every operation we ask for before we can ask for another. (It does not have to satisfy the request itself before we ask it to do more operations; it just has to acknowledge that we actually asked for it.)

--runtime=60 --time_based means run for sixty seconds, and even if we complete sooner, just start over again and keep going until 60 seconds is up.

--end_fsync=1 means that after all operations have been queued, keep the timer going until the OS reports that the very last one of them has been successfully completed — i.e., actually written to disk.
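Assembled into a single command line, the whole test looks like this (run it from the directory you want the test files created in):

    fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1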

This is the entire output from the 4K random I/O run on my Ubuntu workstation:

    root@banshee:/tmp# fio --name=random-write --ioengine=posixaio --rw=randwrite --bs=4k --size=4g --numjobs=1 --iodepth=1 --runtime=60 --time_based --end_fsync=1
    random-write: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=posixaio, iodepth=1
    fio-3.17
    Starting 1 process
    Jobs: 1 (f=1): [w(1)][100.0%][eta 00m:00s]
    random-write: (groupid=0, jobs=1): err= 0: pid=28672: Wed Feb  5 ...
      write: IOPS=...k, BW=...MiB/s (...MB/s)(...MiB/...msec); 0 zone resets
        slat (nsec): min=..., max=738430, avg=..., stdev=...
        clat (nsec): min=100, max=...k, avg=11628.50, stdev=280768
         lat (usec): min=3, max=28672, avg=..., stdev=...
        clat percentiles (usec): ...
       bw (KiB/s): min=30910, max=..., per=...%, avg=555439..., stdev=..., samples=...
       iops: min=7200, max=..., avg=..., stdev=..., samples=...
      lat (nsec): ...
      lat (usec): ...
      lat (msec): ...
      cpu: usr=6.50%, sys=20...%, ctx=..., majf=0, minf=...
      IO depths: 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
         submit: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         complete: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
         issued rwts: total=0,...,0,1 short=0,0,0,0 dropped=0,0,0,0
         latency: target=0, window=0, percentile=100.00%, depth=1

    Run status group 0 (all jobs):
      WRITE: bw=...MiB/s (...MB/s), ...MiB/s-...MiB/s (...MB/s-...MB/s), io=...MiB (...MB), run=...-...msec

    Disk stats (read/write):
        md0: ios=85/1575879, merge=0/0, ticks=0/0, in_queue=0, util=0...%, aggrios=.../2097153, aggrmerge=0/12663, aggrticks=.../613312, aggrin_queue=40694, aggrutil=96.88%
      sdb: ios=.../738430, merge=0/..., ticks=.../..., in_queue=..., util=...50%
      sda: ios=.../1575879, merge=0/..., ticks=5564/..., in_queue=153328, util=95.90%

This may seem like a lot. It is a lot! But there's only one piece you'll likely care about, in most cases — the line directly under "Run status group 0 (all jobs):" is the one with the aggregate throughput. Fio is capable of running as many wildly different jobs in parallel as you'd like, to execute complex workload models. But since we're only running one job group, we've only got one line of aggregates to look through:

    Run status group 0 (all jobs):
      WRITE: bw=...MiB/s (...MB/s), ...MiB/s-...MiB/s (...MB/s-...MB/s), io=...MiB (...MB), run=...-...msec

First, we're seeing output in both MiB/sec and MB/sec. MiB means "mebibytes" — measured in powers of two — where MB means "megabytes," measured in powers of ten. Mebibytes — 1024x1024 bytes — are what operating systems and filesystems actually measure data in, so that's the reading you care about.
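As a quick illustration of the difference, using a made-up number: a reading of 500MiB/sec works out to 500 x 1,048,576 = 524,288,000 bytes per second, or about 524MB/sec. The MB/sec figure will always look about five percent bigger than the MiB/sec figure for the same run.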
