Ed: Attached is an HDF5 version of the netcdf program I sent before. If you run this and look at the output with h5dump, you will see that the output is the same whether the __packed__ attribute is set or not. This is evidence that you should be able to align (i.e., pack) your structs however you want, regardless of what packing HDF5 detected at compile time. If this weren't the case, then why even require the user to pass the offsets to nc_insert_compound?

It seems to me that somehow netcdf is not using the offsets the user passed to nc_insert_compound correctly.

-Jeff

Ed Hartnett wrote:
> Jeff Whitaker <jswhit@xxxxxxxxxxx> writes:
>
>> Concerning packing of structs, one of us is very confused about how HDF5 compound types work (and it's probably me). I thought that you could specify arbitrary offsets that do not necessarily correspond to the default alignment of your C compiler, and HDF5 would take care of everything when you read the data back in.
>
> No, sorry, this turns out not to be the case.
>
>> Otherwise, how would you read a file created with HDF5 on a platform with a different default alignment than the one it was written on? Isn't the whole point of the HDF5 layer that you don't have to worry about the default alignment of structs for the C compiler?
>
> HDF5 can handle it, but not if you change the alignment of your struct with a compiler directive! HDF5 figures out packing when it is built on your machine, in the HDF5 configure script. Using any other packing than the one HDF5 figured out at its build time will result in confusion. So if you want a different packing of your struct, you must specify the packing options you want with compiler flags, and make sure you use those flags when building HDF5 (and netCDF-4, and your own program).
>
> I have forwarded your question to the HDF5 team to ensure that I am telling you the correct answer, and to see if they can help explain this any more clearly. Thanks!
>
> Ed
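The netCDF program referred to above is not included in this message. A minimal sketch of what such a program might look like (the file, dimension, and variable names here are invented; it assumes the netCDF-4 compound-type calls nc_def_compound and nc_insert_compound, with offsets taken from the packed struct via offsetof) is:

#include <netcdf.h>
#include <stddef.h>

int main()
{
   /* Same packed struct as in the attached HDF5 program below. */
   struct s1
   {
      short i;
      long long j;
   } __attribute__ ((__packed__));
   struct s1 data[1] = {{20000, 300000}};

   int ncid, dimid, varid;
   nc_type xtype;

   nc_create("test.nc", NC_NETCDF4 | NC_CLOBBER, &ncid);

   /* Define the compound type; the offsets passed to nc_insert_compound
      are the offsets of the fields in the (packed) struct in memory. */
   nc_def_compound(ncid, sizeof(struct s1), "s1", &xtype);
   nc_insert_compound(ncid, xtype, "i", offsetof(struct s1, i), NC_SHORT);
   nc_insert_compound(ncid, xtype, "j", offsetof(struct s1, j), NC_INT64);

   nc_def_dim(ncid, "n", 1, &dimid);
   nc_def_var(ncid, "phony_var", xtype, 1, &dimid, &varid);
   nc_put_var(ncid, varid, data);

   nc_close(ncid);
   return 0;
}

The point being argued in the message is that, since these offsets describe the struct as it actually sits in memory, the library should be able to write and read it correctly whatever packing the compiler applied.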
#include "hdf5.h" int main() { hid_t s1_tid; /* File datatype identifier */ hid_t file, dataset, space; /* Handles */ herr_t status; hsize_t dim[] = {1}; /* Dataspace dimensions */ struct s1 { short i; long long j; /*};*/ } __attribute__ ((__packed__)); struct s1 data[1]; /* Create some phony data. */ data[0].i = 20000; data[0].j = 300000; /* * Create the data space. */ space = H5Screate_simple(1, dim, NULL); /* * Create the file. */ file = H5Fcreate("test.h5", H5F_ACC_TRUNC, H5P_DEFAULT, H5P_DEFAULT); /* * Create the memory datatype. */ s1_tid = H5Tcreate (H5T_COMPOUND, sizeof(struct s1)); H5Tinsert(s1_tid, "i", HOFFSET(struct s1, i), H5T_NATIVE_SHORT); H5Tinsert(s1_tid, "j", HOFFSET(struct s1, j), H5T_NATIVE_LLONG); /* * Create the dataset. */ dataset = H5Dcreate (file, "phony_dataset", s1_tid, space, H5P_DEFAULT, H5P_DEFAULT, H5P_DEFAULT); /* * Write data to the dataset; */ status = H5Dwrite (dataset, s1_tid, H5S_ALL, H5S_ALL, H5P_DEFAULT, data); /* * Release resources */ H5Tclose(s1_tid); H5Sclose(space); H5Dclose(dataset); H5Fclose(file); }