On Mon, 9 Sep 2013 11:16:39 +0200, Heiko Klein <Heiko.Klein@xxxxxx> wrote:

> Reader and Writer work concurrently on the same file, eventually on
> different machines (NFS). This worked well with netcdf3 files, but
> I'm not sure if this is safe with netcdf4. Do we need to take some
> precautions concerning compression/chunking?

I assume you are talking about the HDF5-based format as opposed to classic NetCDF3/4. Feel free to ignore this if that assumption is wrong.

My experience is that the HDF5-based format is fragile under such usage. Appending to a record dimension in a classic file (NetCDF4 library without the HDF5 format) does not bother a reader; in the worst case it reads some rubbish numbers if the data has not been flushed quickly enough. The new format, by contrast, triggers errors on reading while the file is being modified to add records. The HDF5 format is "smarter": more happens in the file structure, whereas in old NetCDF just a lump of data is appended. So with the new format you will want to make sure you manage your concurrent access explicitly.

For my purposes, I kept using the old format, so my model code does not even need to open/close the file on each time step. It just keeps the file open, writes and flushes after each step (there was an fsync() change in recent versions to better support that), and the reader can repeatedly open the file in a busy-wait loop and check the record count.

Alrighty then,

Thomas

--
Dipl. Phys. Thomas Orgis
Atmospheric Modelling
Alfred-Wegener-Institute for Polar and Marine Research
Office phone: 049 331 288 2164
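A minimal sketch of the pattern described above, using the netCDF C API: the writer keeps a classic-format file open, appends one record per time step and flushes it; the reader polls the file and checks the length of the unlimited dimension. The file name, variable name, NC_SHARE flag, and polling interval are illustrative assumptions, not from the original post; nc_sync() stands in for the flush the post mentions.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <netcdf.h>

#define CHECK(e) do { int _s = (e); if (_s != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(_s)); exit(1); } } while (0)

/* Writer: create a classic-format file once, keep it open, append one
 * record per time step and flush so a concurrent reader can see it. */
static void writer(int nsteps)
{
    int ncid, dim_t, var_x;
    CHECK(nc_create("model_output.nc", NC_CLOBBER | NC_SHARE, &ncid));
    CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &dim_t));
    CHECK(nc_def_var(ncid, "x", NC_DOUBLE, 1, &dim_t, &var_x));
    CHECK(nc_enddef(ncid));

    for (int step = 0; step < nsteps; ++step) {
        size_t start[1] = { (size_t)step }, count[1] = { 1 };
        double value = 42.0 + step;        /* placeholder model result */
        CHECK(nc_put_vara_double(ncid, var_x, start, count, &value));
        CHECK(nc_sync(ncid));              /* flush the new record to disk */
    }
    CHECK(nc_close(ncid));
}

/* Reader: busy-wait loop that reopens the file and checks how many
 * records have appeared so far. */
static void reader(size_t wanted_records)
{
    size_t nrec = 0;
    while (nrec < wanted_records) {
        int ncid, unlimdim;
        if (nc_open("model_output.nc", NC_NOWRITE | NC_SHARE, &ncid) == NC_NOERR) {
            CHECK(nc_inq_unlimdim(ncid, &unlimdim));
            CHECK(nc_inq_dimlen(ncid, unlimdim, &nrec));
            CHECK(nc_close(ncid));
        }
        printf("records so far: %zu\n", nrec);
        sleep(1);                          /* poll interval: 1 s (assumed) */
    }
}

int main(int argc, char **argv)
{
    /* Run "./demo w" in one process and "./demo r" in another. */
    if (argc > 1 && argv[1][0] == 'w')
        writer(100);
    else
        reader(100);
    return 0;
}
```

This only illustrates the open-once/flush/poll protocol for the classic format; with the HDF5-based format the same pattern would still need additional coordination between writer and reader, as noted above.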