Appreciate your answer. Since I'm running into an access performance
problem, I need to know the details of my netCDF file. Is it possible to
display the compression level and chunk sizes with some existing tool?
Since the file I use was not generated by me, I need to check that
detailed information myself.

Thanks

On Tue, Jul 19, 2011 at 6:04 PM, Russ Rew <russ@xxxxxxxxxxxxxxxx> wrote:
> Hi Chris,
>
> > Will the performance of access to NetCDF degrade if I set every
> > dimension unlimited?
>
> Short answer: yes.
>
> Explanation: when you make a dimension unlimited, variables that use
> that dimension use chunked storage rather than contiguous storage, which
> incurs some storage overhead for B-tree indexing of the resulting chunks
> and for partially written chunks.
>
> Also, variables that use unlimited dimensions use 1, by default, for the
> chunk length along unlimited dimension axes. That's a reasonable
> default for multidimensional variables that have only one unlimited
> dimension, but if all dimensions are unlimited, the default would make
> chunk sizes for all variables only big enough to hold 1 value, which
> would be a very inefficient use of chunking.
>
> If you specify chunk lengths explicitly for each variable, and if you
> intend to append an unknown amount of data along every dimension, it may
> make sense to set every dimension unlimited. Otherwise, it would be
> better to declare only a few dimensions unlimited, those along which
> data will be appended.
>
> --Russ
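To make the cost of the default concrete, here is a minimal sketch (not from the thread; the dimension sizes and chunk lengths are hypothetical) of how many chunks result from the default length-1 chunking along unlimited axes versus explicitly chosen chunk lengths:

```python
import math

def n_chunks(dims, chunk):
    """Number of chunks needed to cover a variable of the given
    dimension sizes with the given per-dimension chunk lengths."""
    return math.prod(math.ceil(d / c) for d, c in zip(dims, chunk))

# Hypothetical 3-D variable whose three dimensions are all unlimited,
# with current sizes 100 x 200 x 300.
dims = (100, 200, 300)

# Default: chunk length 1 along every unlimited axis -> one value per chunk.
print(n_chunks(dims, (1, 1, 1)))      # 6000000 single-value chunks

# Explicit chunk lengths chosen by the writer -> far fewer, larger chunks.
print(n_chunks(dims, (10, 20, 30)))   # 1000 chunks of 6000 values each
```

Each of the six million single-value chunks carries its own B-tree indexing overhead, which is why Russ recommends specifying chunk lengths explicitly when many dimensions are unlimited. As for inspecting an existing file, the `ncdump -hs` utility (the `-s` flag adds "special" virtual attributes), if your netCDF build provides it, prints per-variable storage settings such as `_ChunkSizes` and `_DeflateLevel`.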