Hi

Mark Rivers (rivers@xxxxxxxxxxxxxxxxxxx) asks:

> Are there any plans to add data compression to netCDF?  We are strongly
> leaning towards switching from our present (local) data file format to
> netCDF.  The only feature which we will have to give up is data
> compression.  We are presently using either run-length encoding or a
> simple form of linear predictive coding.  Both of these are loss-free.
> Linear predictive coding typically reduces the size of our 32 bit integer
> data files by a factor of 3, which is significant.
>
> It seems like it could be a very worthwhile addition.

We have no plans to add data compression to netCDF (although we do plan to
eventually add a form of data packing previously described on this mailing
list).  Implementing hyperslab access and direct access to individual array
values becomes considerably more complicated if compression is to be
supported.  Consider how you might devise an effective compression scheme
when the elements of an array variable can be filled in any order or as
cross-sections in any direction.  NetCDF permits writing elements in one
order and reading them later in different orders.  Some compression methods
require that all the data to be compressed be known before compression
starts.  Techniques like run-length encoding, or anything else that depends
on exploiting similarities among nearby values, can't be used if nearby
values aren't all known at the time some of the data are written.

An alternative that can be implemented above the netCDF library is to adopt
a convention for compressed data that uses a "compression" attribute to
encode the method of compression, e.g.

    x:compression = "rle" ;

for run-length encoding of the data in a variable x.  Then when you write
the data, compress them into an opaque array of bytes and write all the
bytes.  Note that it would be difficult to define the size of such a
variable in advance, since its compressed size depends on its values.  You
would also have to give up hyperslab access for such variables, and instead
read the whole compressed array in at once and uncompress it before using
it.

--Russ
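A minimal sketch of the attribute convention described above, assuming the
netCDF C API and a trivial byte-oriented run-length encoder.  The file name,
dimension and variable names, the encoder, and the "rle" attribute value are
all illustrative assumptions, not an established convention:

/* Illustrative sketch: store run-length-encoded bytes in a netCDF variable
 * and mark the variable with a "compression" attribute. */
#include <stdio.h>
#include <netcdf.h>

/* Trivial run-length encoder producing (count, value) byte pairs.
 * Counts are capped at 127 so every output byte fits in NC_BYTE.
 * Returns the number of output bytes; out must hold up to 2*n bytes. */
static size_t rle_encode(const unsigned char *in, size_t n, unsigned char *out)
{
    size_t i = 0, o = 0;
    while (i < n) {
        unsigned char v = in[i];
        size_t run = 1;
        while (i + run < n && in[i + run] == v && run < 127)
            run++;
        out[o++] = (unsigned char)run;
        out[o++] = v;
        i += run;
    }
    return o;
}

int main(void)
{
    unsigned char raw[1000], packed[2000];
    size_t i, packed_len;
    int ncid, dimid, varid, status;

    for (i = 0; i < sizeof raw; i++)          /* sample data with long runs */
        raw[i] = (unsigned char)(i / 100);
    packed_len = rle_encode(raw, sizeof raw, packed);

    /* The compressed size is only known after encoding, so the dimension is
     * sized to fit the packed bytes rather than the logical values. */
    status = nc_create("rle_example.nc", NC_CLOBBER, &ncid);
    if (status != NC_NOERR) {
        fprintf(stderr, "%s\n", nc_strerror(status));
        return 1;
    }
    nc_def_dim(ncid, "x_packed_len", packed_len, &dimid);
    nc_def_var(ncid, "x", NC_BYTE, 1, &dimid, &varid);
    nc_put_att_text(ncid, varid, "compression", 3, "rle");  /* the convention */
    nc_enddef(ncid);

    nc_put_var_uchar(ncid, varid, packed);    /* write all the bytes at once */
    nc_close(ncid);
    return 0;
}

Because the variable holds opaque compressed bytes, a reader following this
convention would check the "compression" attribute, read the whole byte
array, and decode it before use; hyperslab access into the logical values is
given up, as noted above.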