Quincey Koziol <koziol@xxxxxxxxxxxx> writes:

> The problem is in your computation of the chunk size for the
> dataset, in libsrc4/nc4hdf.c, around lines 1059-1084. The current
> computations end up with a chunk of size equal to the dimension size
> (2147483644/4 in the code below), i.e. a single 4GB chunk for the
> entire dataset. This is not going to work well, since HDF5 always
> reads an entire chunk into memory, updates it and then writes the
> entire chunk back out to disk. ;-)
>
> That section of code looks like it has the beginning of some
> heuristics for automatically tuning the chunk size, but it would
> probably be better to let the application set a particular chunk
> size, if possible.
>

Ah ha! Well, that's not going to work! What would be a good chunksize
for this (admittedly weird) test case: writing one value at a time to a
huge array? Would a chunksize of one be crazy? Or the right size?

Thanks!

Ed
--
Ed Hartnett  -- ed@xxxxxxxxxxxxxxxx
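As a concrete illustration of "let the application set a particular
chunk size": the netCDF-4 API exposes this through
nc_def_var_chunking(), which lets the application request its own
chunk sizes instead of whatever the library's heuristic picks. Below
is a minimal sketch only, assuming a netCDF build with netCDF-4/HDF5
support; the dimension length matches the test case above, while the
file name, variable name, and the chunk size of 1024 are placeholders
chosen for illustration, not a recommendation.

    /* Sketch: define a variable with an explicit chunk size instead of
     * relying on the library's default chunking heuristic.
     * Assumes netCDF built with netCDF-4/HDF5 support. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    #define ERR(e) do { if (e) { fprintf(stderr, "%s\n", nc_strerror(e)); exit(1); } } while (0)

    int
    main(void)
    {
        int ncid, dimid, varid, e;
        size_t dimlen = 2147483644 / 4;   /* the huge dimension from the test case */
        size_t chunksize[1] = { 1024 };   /* modest chunk instead of one 4GB chunk */

        e = nc_create("tst_chunks.nc", NC_NETCDF4 | NC_CLOBBER, &ncid); ERR(e);
        e = nc_def_dim(ncid, "d", dimlen, &dimid); ERR(e);
        e = nc_def_var(ncid, "v", NC_INT, 1, &dimid, &varid); ERR(e);

        /* Request chunked storage with our own chunk size. */
        e = nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunksize); ERR(e);

        e = nc_enddef(ncid); ERR(e);
        e = nc_close(ncid); ERR(e);
        return 0;
    }

With something like this, the single-value-at-a-time write pattern
touches one 1024-element chunk per write rather than pulling the whole
4GB dataset through memory on every update.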