> On Oct 23, 2014, at 4:25 AM, Heiko Klein <Heiko.Klein@xxxxxx> wrote:
> when chunking files with unlimited dimension, the unlimited dimension must be
> given explicitly in nccopy, and will usually be set to one.

Careful with "usually"....

> In a file with time as unlimited dimension, and X and Y as dimensions, it is
> currently required to use
>
> $ nccopy -k 4 -c "time/1,X/100,Y/100" in.nc out.nc
>
> When running without time, it does not work:
>
> $ nccopy -k 4 -c "X/100,Y/100" in.nc out.nc
> NetCDF: Invalid argument
> Location: file nccopy.c; line 637
>
> Only for unlimited dimensions, one needs to give the dimension explicitly;
> for all other dimensions a useful default (full dim-size) is used. I think a
> useful default for unlimited dimensions is 1.

It does make sense to have a useful default, yes. And I thought the netcdf lib
itself does have defaults, so this may be a bug only in nccopy.

And indeed, the lib used to default to 1 for unlimited dimensions. However, 1
is a very bad default if you have a 1-d array or if the other dimensions are
small. I think this has been fixed in the lib itself, but we do want to make
sure 1 isn't always used as the default.

Actually, a better way to think about it is that the total size of the chunks
should not be too small. Even your 1x100x100 example above is pretty small on
modern hardware.

-Chris

>
> Heiko
>
> --
> Dr. Heiko Klein                       Tel. + 47 22 96 32 58
> Development Section / IT Department   Fax. + 47 22 69 63 55
> Norwegian Meteorological Institute    http://www.met.no
> P.O. Box 43 Blindern                  0313 Oslo NORWAY
>
> _______________________________________________
> netcdfgroup mailing list
> netcdfgroup@xxxxxxxxxxxxxxxx
> For list information or to unsubscribe, visit:
> http://www.unidata.ucar.edu/mailing_lists/
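As a rough sketch of the library-level alternative Chris alludes to ("the
netcdf lib itself does have defaults"): chunk sizes can be set explicitly
through the netCDF C API when the file is written, so nccopy's defaults never
come into play. The example below is illustrative, not from the thread: the
file name, variable name, dimension sizes, and chunk sizes are assumptions,
and it assumes a netCDF-4/HDF5 file with an unlimited time dimension.

    /* Sketch: set chunk sizes explicitly with the netCDF C API
     * (names and sizes are illustrative). */
    #include <stdio.h>
    #include <stdlib.h>
    #include <netcdf.h>

    #define CHECK(e) do { int s = (e); if (s != NC_NOERR) { \
        fprintf(stderr, "%s\n", nc_strerror(s)); exit(1); } } while (0)

    int main(void)
    {
        int ncid, t_dim, y_dim, x_dim, varid;
        int dimids[3];
        /* Chunk the unlimited (time) dimension by more than one record when
         * the per-record slab is small: 10 x 100 x 100 4-byte floats is
         * about 400 KB per chunk, versus ~40 KB for 1 x 100 x 100. */
        size_t chunks[3] = {10, 100, 100};

        CHECK(nc_create("out.nc", NC_NETCDF4 | NC_CLOBBER, &ncid));
        CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &t_dim));
        CHECK(nc_def_dim(ncid, "Y", 1000, &y_dim));
        CHECK(nc_def_dim(ncid, "X", 1000, &x_dim));
        dimids[0] = t_dim; dimids[1] = y_dim; dimids[2] = x_dim;

        CHECK(nc_def_var(ncid, "field", NC_FLOAT, 3, dimids, &varid));
        CHECK(nc_def_var_chunking(ncid, varid, NC_CHUNKED, chunks));
        CHECK(nc_enddef(ncid));
        CHECK(nc_close(ncid));
        return 0;
    }

The chunk sizes chosen here just illustrate Chris's point about total chunk
size: 1x100x100 floats is roughly 40 KB, so chunking the unlimited dimension
by 1 can leave chunks much smaller than is efficient on modern hardware.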