Re: extensible datasets in every dimension?


Hi Ed,

> Greeting to HDF HQ!
> 
> Some questions:
> 
> 1 - If any dimension of a dataset is to be extensible, the dataset
> must be chunked, correct?
    Yes.
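To make the chunking requirement concrete, here is a minimal sketch using
the HDF5 1.6-era five-argument H5Dcreate seen elsewhere in this thread;
the file handle (fileid), dataset name, and sizes are hypothetical, and
error checking is omitted for brevity:

      hsize_t dims[2]    = {1, 100};              /* initial extents */
      hsize_t maxdims[2] = {H5S_UNLIMITED, 100};  /* first dim extensible */
      hsize_t chunk[2]   = {1, 100};              /* chunked layout required */
      hid_t spaceid, plistid, dsetid;

      spaceid = H5Screate_simple(2, dims, maxdims);
      plistid = H5Pcreate(H5P_DATASET_CREATE);
      /* Without H5Pset_chunk, H5Dcreate fails for an unlimited maxdims. */
      H5Pset_chunk(plistid, 2, chunk);
      dsetid = H5Dcreate(fileid, "var", H5T_NATIVE_FLOAT, spaceid, plistid);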

> 2 - If I have a dataset with one expandable dimension, and several
> fixed dimensions, I can create it using a space like this:
> 
>       if ((spaceid = H5Screate_simple(var->ndims, dimsize, maxdimsize)) < 0)
>        BAIL(NC_EHDFERR);
> 
> Where dimsize is an array of initial dimension sizes, and maxdimsize
> is an array of max dimension sizes, or -1 for an unlimited dimension.
> 
> The problem I have is this: if I try to create a space with one of
> the dimsizes as zero, and the corresponding maxdimsize as -1, HDF5
> hangs when I try to close the defined dataset (this seems like an HDF
> bug, BTW).
> 
> I can work around this by starting with a dimsize of 1 instead of 0,
> but that is not the netCDF way. After defining such a dataset,
> H5Sget_simple_extent_dims reports a size of one along that dimension,
> even though I haven't written any data to it yet, simply because I had
> to define it with a length of at least one.
> 
> Any comment or help would be appreciated...
    Hmm, an initial size of 0 should work.  Can you send me some test code
that fails?  Then I'll address the bug immediately.

    Quincey
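
For reference, a minimal sketch of the failing pattern described above,
again with the 1.6-era API; fileid is an assumed open file handle, the
dataset name is hypothetical, and H5S_UNLIMITED is the library constant
behind the -1 sentinel:

      hsize_t dims[1]    = {0};             /* zero records to start,
                                               netCDF-style */
      hsize_t maxdims[1] = {H5S_UNLIMITED};
      hsize_t chunk[1]   = {1};
      hid_t spaceid, plistid, dsetid;

      spaceid = H5Screate_simple(1, dims, maxdims);
      plistid = H5Pcreate(H5P_DATASET_CREATE);
      H5Pset_chunk(plistid, 1, chunk);
      dsetid = H5Dcreate(fileid, "recvar", H5T_NATIVE_INT, spaceid, plistid);
      /* Per the report above, closing this dataset hangs when dims[0]
         was 0; starting from 1 avoids the hang but makes
         H5Sget_simple_extent_dims report one record before any write. */
      H5Dclose(dsetid);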
From: Ed Hartnett <ed@xxxxxxxxxxxxxxxx>
To: netcdf-hdf@xxxxxxxxxxxxxxxx
Subject: HDF5 hangs...
Date: 04 Nov 2003 09:58:02 -0700
Organization: UCAR/Unidata


Quincey,

I am having a really weird problem. At a certain point in my code, the
following code hangs in the H5Dcreate call.

This code works a bunch of times, then, at some point (I'm doing lots
of other HDF stuff, but not with this file), it hangs.

I don't know what the deal is.

   {
      hid_t dsid = 0;
      hid_t typeid1 = H5Tcopy(H5T_NATIVE_CHAR);  /* copied but unused below */
      hid_t plistid1 = H5P_DEFAULT;
      hid_t spaceid1 = 0;
      hid_t hdfid = 0;

      /* Create the file, a scalar dataspace, and a one-character dataset. */
      if ((hdfid = H5Fcreate("ccc_test.h5", H5F_ACC_TRUNC, H5P_DEFAULT,
                             H5P_DEFAULT)) < 0)
         return (-1);
      if ((spaceid1 = H5Screate(H5S_SCALAR)) < 0)
         return (-1);
      /* The reported hang occurs inside this H5Dcreate call. */
      if ((dsid = H5Dcreate(hdfid, "scaley",
                            H5T_NATIVE_CHAR, spaceid1, plistid1)) < 0)
         return (-1);

      /* Release handles: dataset and dataspace first, then the copied
         type, then the file. */
      if (dsid > 0) H5Dclose(dsid);
      if (spaceid1 > 0) H5Sclose(spaceid1);
      if (typeid1 > 0) H5Tclose(typeid1);
      if (hdfid > 0) H5Fclose(hdfid);
      dsid = 0;
   }

When I use Ctrl-C to interrupt the program, I get this message:

Program received signal SIGINT, Interrupt.
0x400c997a in malloc_consolidate () from /lib/i686/libc.so.6

Somehow there is some malloc issue in H5Dcreate. Are you checking all
your malloc returns to see that the memory you are using is really
being allocated?

I can't reproduce this in a short program (yet), but I'll keep
trying...

Ed

