Hi John,

> >>If we have the ability to store variable length "compressed formats",
> >>then it seems like we can define some simple format for this, say a sequence
> >>
> >>    n (int)
> >>    nbits (byte)
> >>    scale (float/double)
> >>    offset (float/double)
> >>    n nbit integers
> >>
> >>so that the compressed data is self contained, with no need for auxiliary
> >>variables or info stored outside the chunk.
> >
> > I thought you wanted the predefined scale/offset values to vary according
> > to the chunk's location in the dataspace?
>
> The above format would describe the compressed data format for one
> chunk. So the scale, offset could vary for each chunk.
>
> I was proposing that we start with the case where the scale and offset
> are automatically calculated by the compression filter. I think that
> will be the most common case.
>
> Not sure if I'm missing something.

This is fine for the "adaptive" case, but it's not sufficient for the
"predefined" scale/offset that varies according to the coordinates of the
chunk. If that's OK for now, then we won't have a problem.

    Quincey
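For concreteness, below is a minimal C sketch of the chunk layout discussed
above, for the "adaptive" case where the filter derives scale and offset from
each chunk's own min/max. The helper names (pack_chunk, put_bits) and the
choice of 32-bit int and float for the header fields are illustrative
assumptions for this sketch, not part of any HDF5 or netCDF API:

    /*
     * Sketch of the self-contained chunk layout from the thread:
     *
     *     n       (int)    -- number of packed values
     *     nbits   (byte)   -- bits per packed integer (assumed 1..32 here)
     *     scale   (float)  -- adaptive scale, computed per chunk
     *     offset  (float)  -- adaptive offset, computed per chunk
     *     n nbit integers  -- the packed data, MSB first
     */
    #include <math.h>
    #include <stdint.h>
    #include <string.h>

    /* Write one value of "nbits" bits at bit position "pos" in buf. */
    static void put_bits(unsigned char *buf, size_t pos, int nbits, uint32_t v)
    {
        for (int i = 0; i < nbits; i++) {
            size_t bit = pos + (size_t)i;
            if (v & (1u << (nbits - 1 - i)))
                buf[bit / 8] |= (unsigned char)(1u << (7 - bit % 8));
        }
    }

    /* Pack n floats into buf using the header + nbit-integer layout.
     * Scale and offset are derived from the chunk's own min/max (the
     * "adaptive" case in the thread). Returns bytes written; buf must
     * be large enough for the header plus (n*nbits+7)/8 packed bytes. */
    static size_t pack_chunk(const float *data, int n, int nbits,
                             unsigned char *buf)
    {
        float min = data[0], max = data[0];
        for (int i = 1; i < n; i++) {
            if (data[i] < min) min = data[i];
            if (data[i] > max) max = data[i];
        }
        uint32_t levels = (nbits >= 32) ? UINT32_MAX : ((1u << nbits) - 1u);
        float offset = min;
        float scale  = (max > min) ? (max - min) / (float)levels : 1.0f;

        size_t pos = 0;                    /* header, unaligned for brevity */
        memcpy(buf + pos, &n,      sizeof n);      pos += sizeof n;
        buf[pos++] = (unsigned char)nbits;
        memcpy(buf + pos, &scale,  sizeof scale);  pos += sizeof scale;
        memcpy(buf + pos, &offset, sizeof offset); pos += sizeof offset;

        size_t packed = (size_t)(((long)n * nbits + 7) / 8);
        memset(buf + pos, 0, packed);
        for (int i = 0; i < n; i++) {
            /* Quantize, then clamp any float rounding overflow. */
            uint32_t q = (uint32_t)lrintf((data[i] - offset) / scale);
            if (q > levels) q = levels;
            put_bits(buf + pos, (size_t)i * (size_t)nbits, nbits, q);
        }
        return pos + packed;
    }

Because the header travels with the packed bits, a reader needs nothing
outside the chunk to reverse the operation, which is the "self contained"
property described above. The "predefined" case raised at the end would
additionally require the filter to know the chunk's coordinates, which this
layout alone does not carry.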