On Tue, Apr 24, 2007 at 02:30:39PM -0600, Ed Hartnett wrote:
> However, another question: have you considered using netCDF-4? It does
> not have the limits of the 64-bit offset format, and supports parallel
> I/O, as well as a number of other features (groups, compound data
> types) which might be helpful in organizing really large data sets.

Hi Ed

The CDF-1 and CDF-2 file formats appear to be quite robust in the face
of client failures. Greg S at least has observed file corruption with
the HDF5 file format during parallel I/O if a client dies at a
particular time. As I understand it, it's hard to devise a solution for
the HDF5 file format that is both rock-solid robust *and* delivers
high performance.

In parallel-netcdf land we also like the CDF-1 and CDF-2 file formats:
they are easy to work with from an MPI-IO perspective. Also, because all
the metadata is written out in define mode, there is very little chance
of file corruption from any failure in data mode.

Thanks
==rob

--
Rob Latham
Mathematics and Computer Science Division    A215 0178 EA2D B059 8CDF
Argonne National Lab, IL USA                 B29D F333 664A 4280 315B
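
To illustrate the define-mode/data-mode split described above, here is a
minimal sketch using the PnetCDF C API over MPI-IO. The file, dimension,
and variable names are illustrative only (not taken from the message):
the point is that the header containing all metadata is committed at
ncmpi_enddef(), so a client that dies later can at worst leave data
regions unwritten rather than corrupt the file layout.

/* Sketch: define-mode metadata commit, then collective data-mode writes
 * to a CDF-2 (64-bit offset) file via PnetCDF.  Names are illustrative. */
#include <stdio.h>
#include <mpi.h>
#include <pnetcdf.h>

#define CHECK(err) do { \
    if ((err) != NC_NOERR) { \
        fprintf(stderr, "pnetcdf error: %s\n", ncmpi_strerror(err)); \
        MPI_Abort(MPI_COMM_WORLD, 1); \
    } \
} while (0)

int main(int argc, char **argv)
{
    int rank, nprocs, ncid, dimid, varid;
    MPI_Offset start[1], count[1];
    double value;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    /* Define mode: create the CDF-2 file and declare all metadata
     * (dimensions, variables) up front.  No data has been written yet. */
    CHECK(ncmpi_create(MPI_COMM_WORLD, "example.nc",
                       NC_CLOBBER | NC_64BIT_OFFSET, MPI_INFO_NULL, &ncid));
    CHECK(ncmpi_def_dim(ncid, "n", (MPI_Offset)nprocs, &dimid));
    CHECK(ncmpi_def_var(ncid, "values", NC_DOUBLE, 1, &dimid, &varid));

    /* Leaving define mode writes the complete header; from here on only
     * fixed-offset data regions are touched. */
    CHECK(ncmpi_enddef(ncid));

    /* Data mode: each rank writes its own element collectively. */
    start[0] = rank;
    count[0] = 1;
    value    = (double)rank;
    CHECK(ncmpi_put_vara_double_all(ncid, varid, start, count, &value));

    CHECK(ncmpi_close(ncid));
    MPI_Finalize();
    return 0;
}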