On Sun, Nov 13, 2016 at 10:35 AM, Charlie Zender <zender@xxxxxxx> wrote:

> Does the system blocksize differ on the new and old systems?
> To find out, use "stat <any file>".
> If the blocksizes differ, then "ncks in.nc out.nc"
> reads/writes different numbers of times.
> To adjust this behavior, read how NCO uses the system blocksize
> to determine the chunksize of datasets:
> http://nco.sf.net/nco.html#blocksize
> You can override the defaults with different chunking policies,
> as described in the manual, and examine the chunksizes with
> ncks --cdl --hdn -m in.nc.

Charlie,

That makes sense to me, especially since I'm using different versions of the packages in my comparisons.

Based on the stat output from each system, it looks like the blocksizes are the same. On the Lustre filesystem I'm getting 2097152 for the IO block, the same file size, and the same number of blocks. In /dev/shm, I'm getting 4096 for the IO block, the same file size, and the same number of blocks.

When I try to examine the chunksizes, though, I get the ncks help message. I think --hdn is not a valid option in either of the NCO versions I'm using. What is that option meant to be?

I've been reading the chunking section of the NCO 4.6.2-beta03 user guide you referenced. Based on that, I think there are significant enough differences between the older versions of NCO (4.1.0), netCDF (4.2.0), and HDF5 (1.8.8) on our Cray and the versions I'm installing on our new cluster (4.6.1, 4.4.1, 1.8.17) that comparing performance between them is not apples to apples.

--
Regards,
-liam

-There are uncountably more irrational fears than rational ones. -P. Dolan
Liam Forbes loforbes@xxxxxxxxxx ph: 907-450-8618 fax: 907-450-8601
UAF Research Computing Systems Senior HPC Engineer LPIC1, CISSP
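To make the suggested checks concrete, here is a minimal sketch; the filename in.nc is a placeholder, and ncdump -s is offered as an assumed fallback for NCO builds whose ncks lacks --hdn:

    # Report the filesystem blocksize: look for the "IO Block:" field
    # (e.g. 2097152 on the Lustre filesystem vs. 4096 in /dev/shm,
    # the values quoted in this thread).
    stat in.nc

    # Print metadata, including hidden attributes such as _ChunkSizes,
    # with an NCO release whose ncks supports --hdn:
    ncks --cdl --hdn -m in.nc

    # On builds lacking --hdn, ncdump's -s flag (from the netCDF
    # utilities) prints the same special virtual attributes:
    ncdump -h -s in.nc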
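The chunking defaults can likewise be overridden on the ncks command line when copying, per the manual section linked above; the policy and dimension values below are illustrative examples, not recommendations:

    # Rechunk during the copy with an explicit policy and a
    # per-dimension chunksize ("time" and 1000 are example values):
    ncks -O -4 --cnk_plc=g2d --cnk_dmn time,1000 in.nc out.nc

    # Or store variables contiguously (unchunked):
    ncks -O -4 --cnk_plc=uck in.nc out.nc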