Hi Bill:

I haven't tried using this, but it looks really cool! My only comment is that when I was thinking about this same problem a few years back, I realized that the amount of compression you can get is very data-type dependent. In particular, if you require that your data be stored as floating point, many datasets won't compress at all, because the low-order bits of the mantissa are essentially random, unless there is some repeating pattern such as a constant field or missing data. (I was working with gridded model output.)

It might be worth collecting compression ratios from your library on various data files and types, so that people get a sense of what to expect in the different cases.

Regards,
John Caron
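A minimal sketch of the effect described above, using zlib as a stand-in general-purpose compressor (not Bill's library) and made-up array sizes and values: a field of noisy floats barely compresses because the mantissa bits look random, while a constant field compresses enormously.

```python
import random
import struct
import zlib

def ratio(values):
    """Compression ratio of the IEEE-754 float32 bytes under zlib."""
    raw = struct.pack(f"{len(values)}f", *values)
    return len(raw) / len(zlib.compress(raw))

n = 100_000
random.seed(42)
# Noisy field: low-order mantissa bits are essentially random.
noisy = [random.uniform(280.0, 300.0) for _ in range(n)]
# Constant field: a repeating 4-byte pattern, highly compressible.
constant = [288.15] * n

print(f"noisy field:    {ratio(noisy):.2f}x")
print(f"constant field: {ratio(constant):.2f}x")
```

The noisy field typically lands near 1x (only the shared sign/exponent bytes compress), while the constant field compresses by orders of magnitude, which is why reporting ratios per data type and per case would be informative.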