StorNext cvfsck free space fragmentation
Situation
Every StorNext installation, whether a basic command-line setup or a full-blown install with the graphical interface, ships with a handful of useful tools.
One of those tools that can assist in troubleshooting is cvfsck. Its most common use is to verify the file system itself, and it is a simple yet powerful command-line tool when used for the right reasons.
Details
Checking the health of the file system can be done while the file system is active, but it is not really recommended because the journal is updated frequently. It is basically a sanity check, which may lead you to the decision to take the file system offline. If you cannot stop the file system due to production pressure but have to check it because something is odd, flush the journal first before doing a read-only check. Depending on the size of the file system, cvfsck needs a certain number of GB of scratch space on the local disk. The temporary files are written to /tmp, but if local space is low they can be redirected with the -T option to a disk with more room.
cvfsck -j <FS>
cvfsck -n -v <FS> [-T /path_to_more_space/tmp]
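For example, on a hypothetical file system named snfs1 where /tmp is short on space, the read-only pass could look like this (the name and the scratch path are placeholders):

cvfsck -j snfs1
cvfsck -n -v snfs1 -T /stornext/scratch/tmp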
Most likely you will see some errors that are simply caused by the active state of the file system. If cvfsck reports severe errors such as orphaned inodes, you should consider taking the file system off-line and performing a read-write check. The recommendation here is to run another cvfsck pass after the first one has repaired the issues, to make sure the volume is clean and healthy before bringing it on-line again.
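A minimal sketch of such a repair sequence is shown below; "snfs1" is a placeholder, clients should be unmounted first, and the exact flags (in particular the read-write mode) should be verified against the cvfsck man page of your release:

cvadmin -e "stop snfs1"     (stop the file system)
cvfsck -j snfs1             (replay the journal)
cvfsck -vw snfs1            (read-write check, repairs what it finds)
cvfsck -n -v snfs1          (second, read-only pass to confirm the volume is clean)
cvadmin -e "start snfs1"    (bring it back on-line)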
You do not have to run the check in verbose mode; by default a complete output is written to /usr/cvfs/data/<FS>/trace/
Checking or fixing the file system is the most common use for cvfsck, but it can also report the free space fragmentation status. What is that, you may ask: you have run snfsdefrag against a directory, the defragmented files look hunky-dory, and yet you still notice some sort of slowness. That is where the "-f" option for the free space fragmentation report comes in handy.
Free space fragmentation: this is about the available blocks on your file system, not the data itself, and those blocks might be scattered over the entire volume. If you have, let's say, 20% free space on your volume, but only as a collection of thousands of tiny blocks, every new file you create will be fragmented and split into multiple extents. And as you might know, every additional extent means a head seek, hence no I/O while the head is looking for the next extent. This might be the reason for the slowness. So even if you never go above 80% usage of your file system, you can still suffer from free space fragmentation!
You can run the free space fragmentation report on the active file system as it’s a report function only.
cvfsck -f <FS>
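On a busy volume the report can run to many pages, so it may be worth capturing it in a file for later review or parsing (assuming the report goes to stdout as shown below; the name and path are just examples):

cvfsck -f snfs1 > /tmp/snfs1_freefrag.txt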
If you have an aged file system where many files have been written and deleted over time, the report might be a few pages long. Maybe you run this report once and, after looking at pages full of numbers, think "ermmm ... next tool". To bring some light into the dark, look at the example output below, taken from a recently created file system. Unfortunately the report has never been simplified or made human-readable over the years, but as said before this is a great reporting tool; it just needs some explanation.
Pct. (sum)   Chunk Size   Chunk Count
-----------  -----------  -----------
 <1% ( <1%)          182            1
 <1% ( <1%)          256            3
 <1% ( <1%)          768            2
     .                 .            .
 <1% ( 11%)      7495213            1
 <1% ( 12%)      7903563            1
The formula behind this is nothing magical: the number in the Chunk Size column multiplied by the file system block size (FsBlockSize). If you have "FsBlockSize 4K" in your configuration file, the first line means you have 1 chunk of 182 * 4096 / 1024 = 728 KB of contiguous free space. The second line translates to 1 MB, and there are 3 chunks of that size.
Pct. (sum)   Chunk Size   Chunk Count   FsBlockSize 4K
-----------  -----------  -----------   --------------
 <1% ( <1%)          182            1   728 KB
 <1% ( <1%)          256            3   1024 KB - 3 times
 <1% ( <1%)          768            2   3072 KB - 2 times
     .                 .            .   .
 <1% ( 11%)      7495213            1   28.591 GB
 <1% ( 12%)      7903563            1   30.149 GB
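If you prefer to let a few lines of Python do that arithmetic, the following sketch reproduces the conversion for the sample rows above; the block size of 4096 bytes comes from the "FsBlockSize 4K" example and has to be adjusted to your own configuration:

# convert "Chunk Size" (in file system blocks) into KB and GB
fs_block_size = 4096  # bytes, from "FsBlockSize 4K"
sample_rows = [(182, 1), (256, 3), (768, 2), (7495213, 1), (7903563, 1)]  # (Chunk Size, Chunk Count)
for chunk_blocks, count in sample_rows:
    size_bytes = chunk_blocks * fs_block_size
    print("%10d blocks = %12.1f KB = %8.3f GB (count %d)"
          % (chunk_blocks, size_bytes / 1024.0, size_bytes / 1024.0 ** 3, count))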
The smallest contiguous chunk in this example is 728 KB, while the largest chunk is 30 GB. What you'd like to see is a few huge chunks rather than hundreds of small and tiny ones. Imagine your largest chunk were only 10 MB and you copied a movie file with a size of 500 MB into that file system: the file would be split into at least 50 extents. The file system does not look for the biggest chunk of blocks by default, as it is more space-optimized than performance-optimized. Running "snfsdefrag -e moviefile" against that freshly copied movie file will tell you the exact number of extents the file has been chopped into. Certainly AS (allocation sessions) will help and extend the lifespan of a file system, but only for a certain time.
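For instance, to list the extents of that freshly copied movie file (the path is just a placeholder):

snfsdefrag -e /stornext/snfs1/media/moviefile.mov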
Let's say your biggest chunk is only 8 MB (and this is not even the worst possible case): you are dealing with Swiss cheese. Walking through the whole volume with snfsdefrag most likely won't get you anywhere, as there is not enough contiguous free space available to optimize anything. You have to bite the bullet: offload the data, wipe the file system and start over if you want your performance back.
However, the only way to condense the huge number of lines is to write a wrapper or parser and apply your own logic to make the report human-readable.
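As a starting point, here is a minimal sketch of such a parser in Python. It assumes the report layout shown above (a percentage column ending in "%)" followed by the Chunk Size and Chunk Count columns) and a 4 KB FsBlockSize, both of which need to be adapted to your environment:

# free_frag_summary.py - summarize a captured "cvfsck -f" report
import re
import sys

FS_BLOCK_SIZE = 4096  # bytes; must match FsBlockSize in the file system configuration

def human(nbytes):
    # render a byte count in KB/MB/GB/TB
    for unit in ("KB", "MB", "GB", "TB"):
        nbytes /= 1024.0
        if nbytes < 1024:
            return "%.1f %s" % (nbytes, unit)
    return "%.1f PB" % (nbytes / 1024.0)

def main(path):
    chunks = []  # list of (chunk size in bytes, chunk count)
    row = re.compile(r"%\)\s+(\d+)\s+(\d+)\s*$")  # "... %)  <Chunk Size>  <Chunk Count>"
    with open(path) as report:
        for line in report:
            match = row.search(line)
            if match:
                size_blocks, count = int(match.group(1)), int(match.group(2))
                chunks.append((size_blocks * FS_BLOCK_SIZE, count))
    if not chunks:
        print("no chunk lines recognized - check the report format")
        return
    total_free = sum(size * count for size, count in chunks)
    total_chunks = sum(count for _, count in chunks)
    largest = max(size for size, _ in chunks)
    tiny = sum(count for size, count in chunks if size < 16 * 1024 * 1024)
    print("free space         :", human(total_free), "in", total_chunks, "chunks")
    print("largest free chunk :", human(largest))
    print("chunks below 16 MB :", tiny)

if __name__ == "__main__":
    main(sys.argv[1])

Run against the file captured earlier, for example "python3 free_frag_summary.py /tmp/snfs1_freefrag.txt", it condenses the pages of numbers into the figures you actually care about: how much free space there is, how big the largest contiguous chunk is, and how much of it consists of tiny chunks.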