Phil,

Some people run lengthy processing scripts that trigger file recopying because, as you describe, they expand the metadata. The recopying can be expensive when the (netCDF3) files are large.

A possible workaround is to pad the metadata header the first time the file is processed (or, better yet, created). If you use an NCO operator that deals mainly with metadata (ncrename, ncatted, or ncks) and you know that more metadata will be added later, you can invoke the --hdr_pad option, whose argument is the amount of extra padding in bytes:

http://nco.sf.net/nco.html#hdr

This may save time (but not space :) later on. It is not a well-known option, so I thought it worth mentioning.

cz
--
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(
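[Editor's note: a minimal sketch of the suggestion above. The file names and the 10000-byte pad size are hypothetical; --hdr_pad is the documented NCO option, accepted by ncrename, ncatted, and ncks.]

```shell
# First pass over the file: reserve 10000 extra bytes in the netCDF3
# header so future attribute edits fit without recopying the data.
# (in.nc / out.nc are placeholder names.)
ncatted --hdr_pad 10000 -a history,global,a,c,"header padded" in.nc out.nc

# A later metadata-only edit now lands inside the reserved padding,
# so the (possibly large) data section is not rewritten.
ncatted -a units,tpt,c,c,"kelvin" out.nc
```

The padding trades a little disk space up front for avoiding a full file rewrite on every subsequent metadata change.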