
Re: [netcdfgroup] NF90_SYNC question

It turns out the real cause wasn't too exciting: someone else was hogging
all the bandwidth!

Thanks for mentioning chunk sizing; that's not something I had thought
about. I've got one unlimited dimension, and it sounds like that means an
inefficient default chunk size
<http://www.unidata.ucar.edu/software/netcdf/docs/netcdf/Default-Chunking.html#Default-Chunking>.
("For unlimited dimensions, a chunk size of one is always used." What's the
unit? One DEFAULT_CHUNK_SIZE? Maybe it'll become clear as I read more.)
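
Something I'll try once I've done that reading: setting the chunk sizes
explicitly with nf90_def_var_chunking when the variable is defined, so
chunks along the unlimited dimension hold many records instead of one.
A minimal sketch (the file name, dimension sizes, and the 100-record
chunk shape below are made-up placeholders, not tuned values):

    program chunking_sketch
      use netcdf
      implicit none
      integer :: ncid, t_dimid, x_dimid, varid, status

      ! netCDF-4 file with one unlimited (record) dimension
      status = nf90_create("example.nc", nf90_netcdf4, ncid)
      status = nf90_def_dim(ncid, "time", nf90_unlimited, t_dimid)
      status = nf90_def_dim(ncid, "x", 1000, x_dimid)
      status = nf90_def_var(ncid, "data", nf90_float, &
                            (/ x_dimid, t_dimid /), varid)

      ! Override the default: 100 records per chunk along the
      ! unlimited dimension, the whole x dimension in each chunk.
      status = nf90_def_var_chunking(ncid, varid, nf90_chunked, &
                                     (/ 1000, 100 /))

      status = nf90_close(ncid)
    end program chunking_sketch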

I guess I've got some reading ahead of me. For resources, I see the
PowerPoint presentation
<http://hdfeos.org/workshops/ws13/presentations/day1/HDF5-EOSXIII-Advanced-Chunking.ppt>
that's linked to and the HDF5 page on chunking
<http://www.hdfgroup.org/HDF5/doc/Advanced/Chunking/>. Do you have any
other recommendations?

Thanks.
-Leon

On Wed, Feb 20, 2013 at 4:31 PM, Russ Rew <russ@xxxxxxxxxxxxxxxx> wrote:
>
> Large chunk sizes might mean a lot of extra I/O, as well as extra CPU
> for uncompressing the same data chunks repeatedly.  You might see if
> lowering your chunk size significantly improves network usage ...
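
If it helps anyone searching the archives later: nf90_inq_var_chunking
reports the chunk sizes a variable actually got, in elements along each
dimension (which I think also answers my unit question above). A sketch,
assuming a 2-D variable named "data" in "example.nc":

    program inspect_chunking
      use netcdf
      implicit none
      integer :: ncid, varid, storage, status
      integer :: chunksizes(2)   ! assumes a 2-D variable

      status = nf90_open("example.nc", nf90_nowrite, ncid)
      status = nf90_inq_varid(ncid, "data", varid)
      status = nf90_inq_var_chunking(ncid, varid, storage, chunksizes)

      if (storage == nf90_chunked) then
        print *, "chunk sizes (elements per dimension):", chunksizes
      else
        print *, "variable is stored contiguously"
      end if

      status = nf90_close(ncid)
    end program inspect_chunking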