THE BLOG

Never Mix Up Throughput or IOPS with Stream Performance

What to Know and What to Ask When You Shop for Storage at NAB, or How to Evaluate Different Storage Solutions for Post, Where Real-Time Access Is Crucial

 

The most important mistake to avoid when evaluating a SAN (or NAS) storage solution is also the most common one: confusing throughput or IOPS with stream performance. High throughput or high IOPS alone will not give you the performance you need in a workflow with real-time access requirements.

And this is where it becomes tricky. Unfortunately, not all storage vendors have the expertise or experience to really understand the difference between the requirements of their standard industrial customers and those of their customers in the M&E or post-production sector. While industrial customers consider a storage setup with high throughput and IOPS to be high performance, customers in M&E need high (real-time) streaming performance; bandwidth and IOPS are secondary, if not entirely irrelevant. Post-production applications working with sequence-based raw (single-frame) material write to and read from the storage sequentially, as opposed to unstructured IT environments, where data is read and written randomly.

A common and recurring situation when it comes to buying storage looks like this: a customer expects to have eight attached clients in total, six of which need real-time read/play in 2K raw, while two clients require 4K raw access. Most customers do the math this way: 2 clients x 4K = 2400MB/s + 6 clients x 2K = 1800MB/s; therefore, a total throughput of 4200MB/s is required.
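
For illustration, here is that back-of-the-envelope calculation as a minimal Python sketch. The per-stream rates (roughly 1200MB/s for a 4K raw stream and 300MB/s for a 2K raw stream) are simply the figures implied by the numbers above, not vendor specifications.

    # Naive sizing: add up the per-stream data rates of all clients.
    # Rates are assumptions implied by the example above (MB/s per client).
    stream_rate = {"4K": 1200, "2K": 300}
    clients = {"4K": 2, "2K": 6}

    total = sum(clients[fmt] * stream_rate[fmt] for fmt in clients)
    print(f"Naive aggregate requirement: {total} MB/s")  # -> 4200 MB/s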

Since all storage vendors know what their products can do in terms of IOPS or random throughput, a vendor may tell the customer that they need, for example, two RAIDs, each providing 2500MB/s. Combined, that will provide the requested 4200MB/s, plus 800MB/s of headroom. For a standard IT-based environment, this calculation might be correct, and the SAN would probably be perfectly suitable. But when it comes to sequential and concurrent data access by more than one client at a time, the entire setup becomes a whole different beast. With a setup like this, the post-production facility will realize a couple of weeks after the SAN has been deployed that the performance falls far short of the eight concurrent real-time streams they actually need.
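
A toy model makes the gap visible. All numbers below are assumptions for illustration (a RAID that sustains 2500MB/s for a single sequential reader, an average seek-plus-latency penalty of 8ms, and 12MB read per stream before the heads have to jump to the next one); they are not measurements of any particular product.

    # Illustrative model only: once several clients force the RAID to jump
    # between frame sequences, every chunk of data costs a seek, and the
    # aggregate rate collapses well below the single-stream figure.
    RAID_SEQ_RATE = 2500.0   # MB/s with one purely sequential reader (assumed)
    SEEK_TIME = 0.008        # s per head repositioning (assumed)
    IO_SIZE = 12.0           # MB read per stream before switching (assumed)

    def effective_rate(num_streams: int) -> float:
        """Aggregate MB/s once the RAID interleaves num_streams readers."""
        transfer_time = IO_SIZE / RAID_SEQ_RATE
        seeks = num_streams if num_streams > 1 else 0
        cycle = num_streams * transfer_time + seeks * SEEK_TIME
        return num_streams * IO_SIZE / cycle

    for n in (1, 4, 8):
        print(f"{n} concurrent stream(s): {effective_rate(n):.0f} MB/s aggregate")

Under these assumed numbers, a single stream sees the full 2500MB/s, but eight interleaved streams share well under 1000MB/s per RAID, so even two such RAIDs never come close to eight real-time streams, despite the 5000MB/s on the data sheet.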

What went wrong? Our example customer XYZ’s requirements were not accounted for or, even worse, were not fully understood. To put it in the form of an analogy: customer XYZ needed a setup that could play two songs from a record at the same time, and vendor ZYX provided a single turntable. Now the customer has to lift the arm of the record player (seeking) and move it to the new position on the record (latency), then lift the arm again to return it to the original position (seeking and latency again) and start over from the beginning. Every time the arm is lifted, there is no music playing, no matter how fast the arm is moved.

 

Read full article at studiodaily.com

Hello!

  • " “Hello and welcome to my personal blog.

    In 20+ years working in diversified IT environments, I’ve gathered some wisdom that I’m happy to pass on and share some (hopefully) helpful and interesting tips and tricks around SAN/NAS solutions. "

    Roger L Beck
  • How to manage permissions on a StorNext filesystem

    Situation:

    In a shared environment such as a Linux-based NAS, permissions can be set and enforced via the exports file, the Samba configuration, or even through yp (Yellow Pages, a.k.a. NIS). Moving to a SAN-based solution, you will most likely run into permission issues you haven’t experienced before: previously, each client wrote into the shared volume and all the others could collaborate as expected, but with a SAN you face the problem that content created by client A can’t be modified by client B or the others. A generic sketch of one common workaround follows this excerpt.
    Continue reading
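
    As mentioned above, here is a minimal Python sketch of one generic POSIX-style workaround: put every SAN client’s users into a shared group, make directories setgid so new content inherits that group, and keep files group-writable. The group name and mount point are hypothetical, and this is a general approach rather than the specific fix described in the full article.

    # Generic sketch (run as root on one client). Group and path are examples.
    import grp
    import os

    SHARED_GROUP = "post_editors"        # assumed group known to all clients
    VOLUME_ROOT = "/stornext/data_vol1"  # assumed StorNext mount point

    gid = grp.getgrnam(SHARED_GROUP).gr_gid

    for dirpath, dirnames, filenames in os.walk(VOLUME_ROOT):
        os.chown(dirpath, -1, gid)       # keep owner, set the shared group
        os.chmod(dirpath, 0o2775)        # setgid so new files inherit the group
        for name in filenames:
            path = os.path.join(dirpath, name)
            os.chown(path, -1, gid)
            os.chmod(path, 0o664)        # group-writable files

    # Clients also need a cooperative umask (e.g. 002) so newly created
    # files come out group-writable in the first place.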

    StorNext – cvadmin tricks

    Cvadmin is a powerful command-line tool for querying the running file systems on the MDC (metadata controller).

    Common cvadmin usage: cvadmin (no arguments)

    If it is run on a client and shows the available file systems, the metadata connection is working correctly. On an HA system, you can determine which MDC is the primary by which entries have an asterisk (*) next to them.

    StorNext Administrator
    
    Enter command(s)
    For command help, enter "help" or "?".
    
    List FSS
    
    File System Services (* indicates service is in control of FS): 
     1>*data_vol1[0]              located on mdc1:32892 (pid 17650) 
     2>*data_vol2[0]              located on mdc1:32900 (pid 17649) 
     3> data_vol1[1]              located on mdc2:32825 (pid 9623) 
     4> data_vol2[1]              located on mdc2:32826 (pid 9624) 
    Select FSM "none"

    Continue reading

    Cloud Computing in Post-Production

    Balancing Cost Advantages in the Cloud with the Performance Requirements of Real-Time Workflow

    “The cloud” has existed for decades. Remember when the official graphic symbol for the internet was an “evil” cloud? Today, however, cloud is one of the most frequently used buzzwords. So many companies seem to offer a solution with or in the cloud, and it appears that at least one part of many software solutions or hardware set-ups has to be in the cloud somewhere and somehow.

    This is an interesting development, considering that, compared to just a few decades ago, the way we collaborate and utilize both hardware and software has changed fundamentally, moving away from the large mainframes of former market leaders such as IBM, SGI, Cray, and others to personal computers. Although PCs have become more powerful and affordable, this approach has two major downsides: post-production facilities require high-end workstations that are powerful enough to handle all the applications as well as provide sufficient render power, and the maintenance effort for all those individual workstations is enormous; keeping every workstation up to date with the latest software version is a costly endeavor.

    At the end of the 90s, a few companies recognized an opportunity to cut expenses for hardware and software updates by centralizing their applications and moving them onto the internet. Without sacrificing functionality, new web-based versions of certain applications became accessible via a generic internet connection, and the idea of the application service provider (ASP) was born. Similar to a terminal server-based infrastructure, the maintenance burden for every application was taken away from the individual workstations and shifted to a central location – to the cloud.

    In 2001, software as a service (SaaS) arrived. Based on the same idea as ASP, SaaS provides a rental model for software whereby the software lives in the cloud, and post-production facilities can access any desired application via the internet. There are some differences between ASP and SaaS, but basically both solutions go back to the mainframe idea of the 1960s, where large vendors provide the computing or render power and users benefit from the cost savings.

    The trend is obvious, and the intention to pursue it becomes even clearer when you look at the hiring of organizations that are already big players in the field: these vendors primarily seek personnel who are familiar with, and can contribute to, SaaS-based solutions.

    Read full article at studiodaily.com