Performance testing with vidio and frametest

Topic

Once you have your StorNext file system up and running, you may want to know what the performance is.

File system performance measurement, plus some guidance on troubleshooting to hunt down the root cause of slow transfers.

Situation

When you set up your file system, you may or may not have performed a speed test. Especially in a shared environment, you most likely want to know how fast you can push data to and pull data from the file system. You have probably also searched the internet for good ideas, tools, and the figures you should expect from your file system. Let's assume you stumbled upon dd, frametest, AJA System Test, and BMD Disk Speed Test. What you may not have found is the small command line tool within StorNext called vidio.

Abstract

While it’s certainly more comfortable (and better looking) to have graphical applications such as AJA and BMD, this article will focus on the command line tools and the power they offer. The reason for leaving the UI versions out is that you can’t push a connected client to its maximum or run multiple sessions with them.

That leaves dd, frametest, and vidio.

  • The open source tool dd (Data Definition)
    It can measure raw devices such as disks or LUNs and can even create and read files at the file system level. The trick is that with dd, it’s all about the block size you use for your tests. If you read or write files with a block size of 64MB, for example, it will push a lot of data, but that’s not how an application on a client behaves.
  • The open source tool frametest
    There are a few sites providing resources on how to measure performance. While this tool is pretty good, it lacks support for frames larger than 4K and an easier way to simulate a stream.
    I might write a full guide for frametest sometime…
UPDATE: After receiving a few requests to show how to test storage with frametest, follow this link to the frametest guide.

  • StorNext vidio
    A video frame producer-consumer performance test. A simple, yet powerful tool to simulate single and multiple reads and writes via the command line.
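
To illustrate the block-size point made above for dd, here is a minimal sketch. The scratch path /tmp/dd_blocktest is a placeholder of my choosing; for a real measurement you would point of= at a file on your SNFS mount instead:

```shell
# Write the same 256 MB with two different block sizes and compare the
# throughput dd reports at the end of each run. Large blocks usually
# look much faster than the smaller, application-like I/O sizes.
dd if=/dev/zero of=/tmp/dd_blocktest bs=64M count=4 conv=fsync
dd if=/dev/zero of=/tmp/dd_blocktest bs=512k count=512 conv=fsync
rm -f /tmp/dd_blocktest
```

The conv=fsync flag makes dd flush to disk before reporting, so the page cache does not inflate the numbers.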

Having said this, I’ll focus on the StorNext tool vidio, as there is currently no helpful guide for it on the internet.

Details

The vidio tool is part of the snfs-client package since version 5 of the StorNext file system (SNFS).

  • # vidio
usage: vidio [options] dir_path [dir_path1 dir_path2 . . .]
 options:
 -B use system buffered I/O
 -c continuously update display
 -d[dd] debug
 -f frame_size framesize in bytes or frame type:
 sdtv standard definition video
 hdtv high definition video
 fa2k full aperture 2K frames
 fa4k full aperture 4K frames
 -F frame_rate frame_rate in frames/second
 -l frame_list read frame file names from list file
 -n nframes number of frames
 -N nframes number of frames per file - default = 1
 -p name prefix of file names - default 'vidio'
 -q queue_depth async i/o queued queue_depth deep
 -r read: consumer mode
 -T <millisecs> stop test if IO time is > millisecs
 -v[vv] verbose
 -V print version and exit
 -w write: producer mode - default
  • vidio needs an existing folder to be able to write files onto the file system:
    # cd /mnt/snfs1
    # mkdir ft1
  •  Create a sequence of 1000 frames in the directory ft1:
    # vidio -w -c -f fa4k -n 1000 ft1

    Let’s take the line above apart:
    -w: write: producer mode – the default, so you can skip this if you want
    -c: continuously update display – without this, you would only get the final result
    -f fa4k: frame size in bytes or frame type – defines the size to be written. Options are sdtv, hdtv, fa2k & fa4k
    -n: the number of frames you want to create – the default of 60 frames is quite low
    ft1: the target folder where we want our frames written.

  • Read the output:
    vidio: Timing 1 stream of 1000 frames of 50987008 byte direct writes queued 1 deep
    
    stream[0] {
    Seconds 10.16
     Number of frames 1000
     Frame time (ms) 19.41
     Frames/sec 51.53
     MBytes 50,987
     MBytes/sec 1700.88
    } ft1
    vidio: Aggregate: 50,987 MBytes in 10.16 seconds @ 1700.88 MBytes/sec

    The output tells you that your average write performance for a 4K frame-based sequence is 1.7GB/s. Since a 4K 10bit DPX frame averages around 50MB, the required performance for 24 fps playback would be about 1.2GB/s.

  • Read back the created sequence:
    # vidio -r -c -n 1000 ft1
  • Additional options such as queue_depth and system buffered I/O are available to push even more data:
    # vidio -q 5 -B -c -n 1000 ft1
  • If you want to test how many 4K streams your storage can handle from all the connected clients (concurrent access), you first want to set a limit on the number of frames per second, so you can see when a frame has dropped. You would then run the command below from each client, each reading from a different directory.
    # vidio -r -F 24 -c -n 1000 ft1
  • In case you want to run multiple reads or writes from the same client, vidio offers a feature which makes multiple terminals or screen sessions obsolete. As an example, we create 3 parallel writes into 3 different folders.
    # cd /mnt/snfs1
    # mkdir ft2 ft3 ft4
    # vidio -w -c -f fa2k -n 1000 ft2 ft3 ft4
  •  The same is true for reading data back.
    # vidio -r -c -f fa2k -n 1000 ft2 ft3 ft4
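
As a back-of-the-envelope check for the numbers above, the required bandwidth is simply frame size times frame rate. Here is a minimal sketch; the 50,987,008-byte figure matches vidio's fa4k frame size from the output above, and required_mb_s is a hypothetical helper name of my own:

```shell
#!/bin/sh
# required_mb_s FRAME_BYTES FPS -> required throughput in MBytes/sec
required_mb_s() {
    echo $(( $1 * $2 / 1000000 ))
}

# A full aperture 4K frame (fa4k) is 50,987,008 bytes; at 24 fps the
# storage must sustain roughly 1.2 GB/s per stream:
required_mb_s 50987008 24   # prints 1223
```

Multiply that per-stream figure by the number of concurrent streams to estimate how many clients an aggregate bandwidth such as the 1700 MBytes/sec measured above can realistically serve.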

In conclusion, this tool will help you performance test your StorNext based file system. It will certainly help to identify slow clients, or those with connection issues, when you run the same test on each machine. As the file system ages, performance will most likely not be as fast as it was initially, so keep in mind that you will come across fragmented sections as well.

While vidio shares the same frame size limitation, topping out at a 4K 10bit DPX, I hope it will be maintained and gain more options.
