
Re: [netcdfgroup] unlimited dimensions and chunking??

  • To: Chris Barker <chris.barker@xxxxxxxx>
  • Subject: Re: [netcdfgroup] unlimited dimensions and chunking??
  • From: Charlie Zender <zender@xxxxxxx>
  • Date: Thu, 09 Jan 2014 16:22:33 -0800
Hi Chris,

We use record and unlimited interchangeably.
A record dimension is an unlimited dimension.
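
For example (a sketch; the file names and the dimension name "time"
are placeholders), ncks can promote a fixed dimension to a record,
i.e., unlimited, dimension in the output file:

# "time", in.nc, and out.nc are placeholders
ncks -O --mk_rec_dmn time in.nc out.nc

(--fix_rec_dmn does the inverse.)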

> if you had a 1-d variable of records, would that mean chunks equal the
> record size? 'cause that would be way too small in the common case.

Yes, the rd1 map is not good for access speed on 1-D variables.
Use the --cnk_dmn=time,sz option to override this, or use the lfp map...
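
For example (a sketch; the chunksize 8192 and the file names are
placeholders):

# 8192 is a placeholder chunksize; adjust to your access pattern
ncks -O -4 --cnk_plc=all --cnk_dmn=time,8192 in.nc out.nc

This chunks the time dimension in blocks of 8192 elements rather than
the one record per chunk that the rd1 map would choose.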

> Not so clear from the amount of time I've spent reading that, but what
> would be the default chunking for a 1-d unlimited variable? or a 2-d,
> with one dimension very small (Nx3, for instance)?

For the lfp map these are [N] and [N/3,3], where N is the total
chunksize, as you can see with these commands:
ncks -O -4 --cnk_plc=all --cnk_map=lfp ~/nco/data/in.nc ~/foo.nc
ncks -m --cdl --hdn ~/foo.nc
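
In the resulting CDL, the hidden attribute _ChunkSizes (printed
because of --hdn) reports the chunk shape chosen for each variable.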

> Those were the use cases where the default chunking in netcdf4
> killed us.

We may change the NCO default map from rd1 to lfp or something
similar in 4.4.1.

Your feedback is helpful!
c
-- 
Charlie Zender, Earth System Sci. & Computer Sci.
University of California, Irvine 949-891-2429 )'(


