Jeff Whitaker <jswhit@xxxxxxxxxxx> writes:

> Hi: I need to create hdf5 files (using the hdf5 API, not netcdf4)
> that will be readable by netCDF 4 clients. Is there a document
> somewhere that describes the structure of the HDF5 files produced by
> netCDF4? I found
> http://www.unidata.ucar.edu/software/netcdf/netcdf-4/arch.html, but it
> seems a bit old - is it still relevant? Are there any sample netCDF 4
> files available to look at?
>
> -Jeff
>
> --
> Jeffrey S. Whitaker          Phone  : (303)497-6313
> Meteorologist                FAX    : (303)497-6449
> NOAA/OAR/CDC R/CDC1          Email  : Jeffrey.S.Whitaker@xxxxxxxx
> 325 Broadway                 Office : Skaggs Research Cntr 1D-124
> Boulder, CO, USA 80303-3328  Web    : http://tinyurl.com/5telg

Howdy Jeff,

To create HDF5 files that can be read by netCDF-4, you need to use HDF5 1.8, which is not yet released. However, most (but not all) of the necessary features can be found in the latest HDF5 development snapshot, so you must use that.

Keep in mind that netCDF-4 does not have a reference type, so any references you create in HDF5 will be ignored. Also, netCDF-4 requires that your group structure be strictly hierarchical: no group may contain one of its own parents!

These requirements are fairly easy to meet, but there is one relating to shared dimensions which is a little more challenging: if you want to interoperate with netCDF-4, every HDF5 dataset must have a dimension scale attached to each of its dimensions. Dimension scales are a new feature in HDF5 1.8 which allow shared dimensions to be specified. Unfortunately, I can't find anywhere on the web where the HDF5 team describes dimension scales, since they are not in a release yet.
Here's some sample code which attaches dimension scales for lat and lon
to a 2D pressure dataset:

   printf("*** Creating a HDF5 file with one var with two dimension scales...");
   {
      hid_t fileid, lat_spaceid, lon_spaceid, pres_spaceid;
      hid_t pres_datasetid, lat_dimscaleid, lon_dimscaleid;
      hsize_t dims[DIMS_2];

      /* Create file. */
      if ((fileid = H5Fcreate(FILE_NAME, H5F_ACC_TRUNC, H5P_DEFAULT,
                              H5P_DEFAULT)) < 0) ERR;

      /* Create the spaces that will be used for the dimscales. */
      dims[0] = LAT_LEN;
      if ((lat_spaceid = H5Screate_simple(1, dims, dims)) < 0) ERR;
      dims[0] = LON_LEN;
      if ((lon_spaceid = H5Screate_simple(1, dims, dims)) < 0) ERR;

      /* Create the space for the dataset. */
      dims[0] = LAT_LEN;
      dims[1] = LON_LEN;
      if ((pres_spaceid = H5Screate_simple(DIMS_2, dims, dims)) < 0) ERR;

      /* Create our dimension scales. */
      if ((lat_dimscaleid = H5Dcreate(fileid, LAT_NAME, H5T_NATIVE_INT,
                                      lat_spaceid, H5P_DEFAULT)) < 0) ERR;
      if (H5DSset_scale(lat_dimscaleid, NULL) < 0) ERR;
      if ((lon_dimscaleid = H5Dcreate(fileid, LON_NAME, H5T_NATIVE_INT,
                                      lon_spaceid, H5P_DEFAULT)) < 0) ERR;
      if (H5DSset_scale(lon_dimscaleid, NULL) < 0) ERR;

      /* Create a variable which uses these two dimscales. */
      if ((pres_datasetid = H5Dcreate(fileid, PRES_NAME, H5T_NATIVE_FLOAT,
                                      pres_spaceid, H5P_DEFAULT)) < 0) ERR;
      if (H5DSattach_scale(pres_datasetid, lat_dimscaleid, 0) < 0) ERR;
      if (H5DSattach_scale(pres_datasetid, lon_dimscaleid, 1) < 0) ERR;

      /* Fold up our tents. */
      if (H5Dclose(lat_dimscaleid) < 0 ||
          H5Dclose(lon_dimscaleid) < 0 ||
          H5Dclose(pres_datasetid) < 0 ||
          H5Sclose(lat_spaceid) < 0 ||
          H5Sclose(lon_spaceid) < 0 ||
          H5Sclose(pres_spaceid) < 0 ||
          H5Fclose(fileid) < 0) ERR;
   }

I hope that in the future netCDF-4 will be able to deal with HDF5 files which do not have dimension scales, but this is not expected before netCDF 4.1.

Finally, there is one feature which is missing from all current HDF5 releases, but which will be in 1.8: the ability to track object creation order.
As you may know, netCDF keeps track of the creation order of variables, dimensions, etc.; HDF5 currently does not. I have a bit of a hack in place in netCDF-4 files for this, but that hack will go away when HDF5 1.8 comes out. It's not clear to me that this will matter much to you: the files will still be readable by netCDF-4, it's just that netCDF-4 will number the variables in alphabetical, rather than creation, order.

Interoperability is a complex task, and all of this is at the alpha release stage. However, I do test interoperability in libsrc4/tst_interops.c, and you can see some examples there of how to create HDF5 files, modify them in netCDF-4, and then verify them in HDF5 (and vice versa).

Please let me know if you have any further questions.

Thanks!

Ed

--
Ed Hartnett -- ed@xxxxxxxxxxxxxxxx