Hi Jennifer,

I have the same result when handling a complex file. This is a netCDF property: nc_open scans the whole file and builds internal structures describing every dimension and variable in it. That is the price for the luxury of not having to keep track of open objects yourself, as you do with the HDF API. If you are not happy with that, use the HDF API directly: it does exactly what you ask for, with no extra work.

Regards,
Sergei

-----Original Message-----
From: netcdfgroup-bounces@xxxxxxxxxxxxxxxx [mailto:netcdfgroup-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Jennifer Adams
Sent: 10 September 2010 01:42
To: netCDF Mail List
Subject: Re: [netcdfgroup] nc_open takes a long time to open a big file

I have tried it again with netcdf-4.1.2-beta1; same results. I also tried calling nc_set_chunk_cache before nc_open, with args size=409600 (and also 4096000), nelems=51203, and preemption=0.75. That didn't seem to change anything for nc_open. Always 20 seconds for a file the OS hasn't touched in a while.
--Jennifer

p.s. from config.log:
configure:4219: checking whether a default chunk size in bytes was specified
configure:4229: result: 4194304
configure:4238: checking whether a maximum per-variable cache size for HDF5 was specified
configure:4248: result: 67108864
configure:4257: checking whether a number of chunks for the default per-variable cache was specified
configure:4267: result: 10
configure:4276: checking whether a default file cache size for HDF5 was specified
configure:4286: result: 4194304
configure:4295: checking whether a default file cache maximum number of elements for HDF5 was specified
configure:4305: result: 1009
configure:4314: checking whether a default cache preemption for HDF5 was specified
configure:4324: result: 0.75

On Sep 9, 2010, at 8:06 PM, Denis Nadeau wrote:

Hi Jennifer,
Did you call nc_set_chunk_cache? If not, do you know what the default cache size was set to? (Look in the config.log created after you ran configure.) I must admit that 20 seconds is quite long.
Denis

From: netcdfgroup-bounces@xxxxxxxxxxxxxxxx [mailto:netcdfgroup-bounces@xxxxxxxxxxxxxxxx] On Behalf Of Jennifer Adams
Sent: Thursday, September 09, 2010 7:34 PM
To: netCDF Mail List
Subject: [netcdfgroup] nc_open takes a long time to open a big file

Dear Experts,
I'm using netcdf-4.1.1-rc1 and hdf5-1.8.4-patch1 on a 64-bit Linux server running CentOS-5.5. I have a netCDF-4 file that is 18404502496 bytes in size. My file's dimensions look like this:
  lon = 320 ;
  lat = 160 ;
  lev = 11 ;
  time = 1581 ;
It has 7 variables that look like this:
  float temp(time, lev, lat, lon) ;
    temp:_Storage = "chunked" ;
    temp:_ChunkSizes = 1, 1, 160, 320 ;
    temp:_DeflateLevel = 1 ;
    temp:_Shuffle = "true" ;
and 1 variable that looks like this:
  float sfp(time, lat, lon) ;
    sfp:_Storage = "chunked" ;
    sfp:_ChunkSizes = 1, 160, 320 ;
    sfp:_DeflateLevel = 1 ;
    sfp:_Shuffle = "true" ;
Is it normal for nc_open to take 20 seconds to open this file before returning control to my C program?
--Jennifer

--
Jennifer M. Adams
IGES/COLA
4041 Powder Mill Road, Suite 302
Calverton, MD 20705
jma@xxxxxxxxxxxxx
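For reference, a minimal sketch of the call sequence discussed in this thread: setting the default chunk cache before nc_open and timing how long the open takes. The file name is a hypothetical placeholder, and the cache parameters are the values Jennifer reports trying; this is not code from the thread itself.

  /* Minimal sketch: set the HDF5 chunk cache defaults, then time nc_open.
     The file name below is hypothetical; cache values are those mentioned
     in the thread (size=4096000, nelems=51203, preemption=0.75). */
  #include <stdio.h>
  #include <stdlib.h>
  #include <time.h>
  #include <netcdf.h>

  int main(void)
  {
      const char *path = "big_file.nc4";  /* hypothetical ~18 GB netCDF-4 file */
      int ncid, status;

      /* Call before nc_open so the new defaults apply to this file. */
      status = nc_set_chunk_cache(4096000, 51203, 0.75f);
      if (status != NC_NOERR) {
          fprintf(stderr, "nc_set_chunk_cache: %s\n", nc_strerror(status));
          return EXIT_FAILURE;
      }

      /* time() has one-second resolution, enough to see a ~20-second open. */
      time_t start = time(NULL);
      status = nc_open(path, NC_NOWRITE, &ncid);
      time_t end = time(NULL);
      if (status != NC_NOERR) {
          fprintf(stderr, "nc_open: %s\n", nc_strerror(status));
          return EXIT_FAILURE;
      }
      printf("nc_open took about %.0f seconds\n", difftime(end, start));

      nc_close(ncid);
      return EXIT_SUCCESS;
  }

As Jennifer notes above, adjusting the chunk cache did not change the nc_open time in her case; per Sergei's explanation, the open cost is dominated by nc_open scanning the file's metadata for all dimensions and variables.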