rageek

A place for Unix Thoughts and Ideas

Seeing a summary of I/O Throughput on a Solaris server

I recently migrated 16 TB of storage between two systems and arrays using parallel copies with star.

As part of this, I wanted to know my total I/O bandwidth so I could tell when I had reached the optimal number of parallel copy jobs and estimate a completion time (for the record, the optimal number was 10 jobs).
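For the completion-time estimate, the arithmetic is just total size divided by aggregate throughput. A back-of-the-envelope sketch (the 350 MB/s figure below is an assumed example, not a measurement from this migration; substitute your own observed rate):

```shell
# Hypothetical estimate: hours to copy 16 TB at an assumed aggregate rate
TB=16
MBPS=350   # assumed throughput in MB/s; plug in your measured value
awk -v tb=$TB -v mbps=$MBPS 'BEGIN {
    mb = tb * 1024 * 1024                  # TB -> MB
    printf "%.1f hours\n", mb / mbps / 3600
}'
# with these values prints "13.3 hours"
```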

Here is a simple way of seeing the I/O throughput of the Fibre Channel/SAS/SCSI controllers on Solaris.

iostat -xCnM 5 | egrep '%|c.?$'
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    1.0    1.0    0.0    0.0  0.0  0.0    0.1    7.1   0   1 c0
  410.8  178.3   62.5    8.8 13.6  4.5   23.2    7.7   0 125 c1
  410.8  178.5   62.5    8.8 13.6  4.5   23.2    7.7   0 125 c3
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.4    0.0    0.0  0.0  0.0    0.0    5.7   0   0 c0
  692.9  227.1  151.0   20.4  0.0  8.1    0.0    8.8   0 395 c1
  678.1  253.9  151.9   21.3  0.0  8.1    0.0    8.7   0 377 c3
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    0.0    0.0    0.0    0.0  0.0  0.0    0.0    0.0   0   0 c0
  816.3  274.4  164.3   24.0  0.0  7.9    0.0    7.2   0 364 c1
  830.7  280.0  169.9   23.8  0.0  8.5    0.0    7.6   0 378 c3
    r/s    w/s   Mr/s   Mw/s wait actv wsvc_t asvc_t  %w  %b device
    6.6    0.0    0.1    0.0  0.0  0.0    0.0    4.2   0   2 c0
  785.8  273.8  153.1   24.4  0.2  7.6    0.2    7.2   0 355 c1
  832.0  260.8  168.4   21.8  0.1  8.1    0.1    7.5   0 377 c3
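Rather than eyeballing the per-controller rows, you can sum the Mr/s and Mw/s columns (fields 3 and 4) into a single aggregate figure per sample. A sketch, assuming the same `iostat -xCnM` column layout as above (banner lines can vary between Solaris releases, so the patterns may need adjusting):

```shell
# Print one aggregate MB/s total per 5-second iostat sample
iostat -xCnM 5 | nawk '
    /device/   { if (NR > 1) printf "total: %.1f MB/s\n", total
                 total = 0; next }            # header marks a new sample
    /c[0-9]+$/ { total += $3 + $4 }           # Mr/s ($3) + Mw/s ($4) per controller
'
```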

If all your storage is configured in Veritas, you can use vxstat to get a complete summary of reads and writes.

You can modify the vxstat command so it only shows throughput for a specific disk group.

INT=5; while /bin/true; do CNT=$((`vxstat -o alldgs | wc -l` + 2)); vxstat -o alldgs -i$INT -c2 | tail +$CNT | nawk -v secs=$INT 'BEGIN{ writes=0; reads=0 }{ writes+=$6; reads+=$5 } END { printf ("%.2f MB/s Reads %.2f MB/s Writes\n", reads/2/1024/secs, writes/2/1024/secs) }'; done
46.58 MB/s Reads 334.42 MB/s Writes
47.25 MB/s Reads 320.77 MB/s Writes
47.55 MB/s Reads 340.67 MB/s Writes
45.85 MB/s Reads 498.19 MB/s Writes
52.51 MB/s Reads 478.23 MB/s Writes
42.32 MB/s Reads 465.49 MB/s Writes
31.30 MB/s Reads 439.65 MB/s Writes
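Unpacking that one-liner: vxstat reports I/O in 512-byte blocks, so dividing blocks by 2 gives KB, a further /1024 gives MB, and dividing by the interval gives MB/s; the `tail +$CNT` skips the first sample, which is cumulative since boot. The same loop, expanded and commented for readability (assumes the standard `vxstat -o alldgs` column order, with blocks read and written in fields 5 and 6):

```shell
#!/bin/sh
# Expanded version of the vxstat throughput loop
INT=5                                        # sampling interval in seconds
while /bin/true; do
    # lines to skip: header plus the cumulative first sample (one row per object)
    CNT=$((`vxstat -o alldgs | wc -l` + 2))
    vxstat -o alldgs -i$INT -c2 | tail +$CNT | nawk -v secs=$INT '
        { reads += $5; writes += $6 }        # $5/$6: blocks read/written
        END {
            # 512-byte blocks: /2 -> KB, /1024 -> MB, /secs -> MB/s
            printf("%.2f MB/s Reads %.2f MB/s Writes\n",
                   reads/2/1024/secs, writes/2/1024/secs)
        }'
done
```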

Every once in a while it will produce a blip like:
15937437.54 MB/s Reads 9646673.26 MB/s Writes

That is obviously wrong. I'm not sure why it happens, but it can be ignored or filtered out by piping the output through:

perl -ne 'split; print if (!(@_[0] > 100000) || !(@_[3] > 100000))'
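If perl isn't handy, the same filter can be written in nawk; by De Morgan's law, "print unless both fields exceed 100000" is equivalent to the perl condition (fields 1 and 4 are the reads and writes figures):

```shell
# Suppress lines where both the reads and writes figures are absurdly large
nawk '!($1 > 100000 && $4 > 100000)'
```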

Putting it together:

INT=5; while /bin/true; do CNT=$((`vxstat -o alldgs | wc -l` + 2)); vxstat -o alldgs -i$INT -c2 | tail +$CNT | nawk -v secs=$INT 'BEGIN{ writes=0; reads=0 }{ writes+=$6; reads+=$5 } END { printf ("%.2f MB/s Reads %.2f MB/s Writes\n", reads/2/1024/secs, writes/2/1024/secs) }'; done | perl -ne 'split; print if (!(@_[0] > 100000) || !(@_[3] > 100000))'

You could also tweak this a little for use with snmpd for graphing.
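One way to wire that up is Net-SNMP's `extend` mechanism, which exposes a script's output as an OID that a poller can graph. A sketch only; the script paths and names below are hypothetical, and each script would print a single MB/s figure from one vxstat sample:

```shell
# snmpd.conf fragment (hypothetical) -- expose the throughput figures via SNMP
# Each script takes one short vxstat sample and prints a single number
extend vxReadMBs  /usr/local/bin/vx_read_mbps.sh
extend vxWriteMBs /usr/local/bin/vx_write_mbps.sh
# The values appear under NET-SNMP-EXTEND-MIB::nsExtendOutput1Line
# and can be polled by Cacti, MRTG, or similar for graphing
```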

Updated 2/9/11: Reads and Writes were swapped in text output.
Updated 2/6/12: Didn’t realize the iostat one-liner pasted was flat-out wrong and didn’t work.
