If I understand the question, this is about read times -- is this in fact with exactly the same files? In which case the chunking is already set. Less performant decompression? Sounds like a binary search is needed. Is EVERYTHING else the same? Disk system, OS, etc?

-CHB

On Tue, Dec 13, 2016 at 9:21 AM, Charlie Zender <zender@xxxxxxx> wrote:
> Hello Simon,
>
> Since both files are netCDF4 compressed that
> means they use chunking. My wild guess is that
> different chunking defaults cause the observed
> change in dumping time. You can see the
> chunk sizes employed with ncdump -s or ncks --hdn,
> and you can play with the chunk sizes/policy
> with either.
>
> Charlie
> --
> Charlie Zender, Earth System Sci. & Computer Sci.
> University of California, Irvine 949-891-2429 )'(

--
Christopher Barker, Ph.D.
Oceanographer

Emergency Response Division
NOAA/NOS/OR&R            (206) 526-6959  voice
7600 Sand Point Way NE   (206) 526-6329  fax
Seattle, WA 98115        (206) 526-6317  main reception

Chris.Barker@xxxxxxxx
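[For readers following up: a minimal sketch of the inspection and rechunking steps Charlie describes. The file name (slow.nc) and dimension names (time, lat, lon) below are hypothetical placeholders; substitute your own.]

    # Show per-variable chunk sizes (_ChunkSizes) and compression settings
    ncdump -s -h slow.nc

    # Roughly the same information via NCO's hidden-attribute flag
    ncks --hdn -m slow.nc

    # Rewrite the file with explicit chunk sizes using nccopy
    nccopy -c time/1,lat/180,lon/360 slow.nc rechunked.nc

    # Or set per-dimension chunk sizes with ncks
    ncks --cnk_dmn time,1 --cnk_dmn lat,180 --cnk_dmn lon,360 slow.nc rechunked.nc

If the two files show different _ChunkSizes for the same variables, that difference alone can explain a large change in dump/read time, since reads that cross many small chunks decompress far more data than reads aligned with the chunk layout.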