NOTICE: This version of the NSF Unidata web site (archive.unidata.ucar.edu) is no longer being updated.
Current content can be found at unidata.ucar.edu.
To learn about what's going on, see About the Archive Site.
I have about 1700 files totaling about 2 Mbytes of data. If I load and process them individually, it takes about 7 seconds. If I concatenate them along the unlimited dimension into one big file, it takes about 35 seconds to load and process. Is this expected? Wouldn't a single file have less overhead? I got exactly the same results in both cases, so I don't think I did anything terribly wrong. Thanks, Charles
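[One plausible explanation, offered as a guess: in the netCDF classic format, variables defined along the unlimited (record) dimension are stored interleaved, one record at a time. Reading a single variable across many records in one big concatenated file therefore means many small, non-contiguous reads, whereas each small per-file variable is contiguous. The pure-Python sketch below (it does not use the netCDF library at all; the layout constants are illustrative) contrasts the two access patterns:]

```python
import io
import struct

NUM_RECORDS = 1000  # illustrative; stands in for concatenated records
NUM_VARS = 4        # illustrative; several record variables per record

# Interleaved "record" layout, as in netCDF classic record variables:
# [v0 v1 v2 v3][v0 v1 v2 v3]... -- one block per record.
interleaved = io.BytesIO()
for rec in range(NUM_RECORDS):
    for var in range(NUM_VARS):
        interleaved.write(struct.pack("<d", float(rec * NUM_VARS + var)))

def read_var_interleaved(buf, var_index):
    """Read one variable's values: one seek + small read per record."""
    values = []
    for rec in range(NUM_RECORDS):
        buf.seek((rec * NUM_VARS + var_index) * 8)
        values.append(struct.unpack("<d", buf.read(8))[0])
    return values

# Contiguous layout (what you effectively get with many small files, or
# with a fixed-size dimension): one sequential read per variable.
contiguous = io.BytesIO()
for var in range(NUM_VARS):
    for rec in range(NUM_RECORDS):
        contiguous.write(struct.pack("<d", float(rec * NUM_VARS + var)))

def read_var_contiguous(buf, var_index):
    """Read one variable's values in a single sequential read."""
    buf.seek(var_index * NUM_RECORDS * 8)
    data = buf.read(NUM_RECORDS * 8)
    return list(struct.unpack("<%dd" % NUM_RECORDS, data))

# Both layouts hold the same data; only the access pattern differs.
assert read_var_interleaved(interleaved, 2) == read_var_contiguous(contiguous, 2)
```

The interleaved path issues NUM_RECORDS seek/read pairs per variable versus one for the contiguous path, which could account for the single concatenated file being slower despite holding the same data.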