Tim,

My understanding is that you are required to use a record number to read records serially with the netCDF API; there is no function to read records anonymously until end of file. I think you are trying to carry a record-processing paradigm over into a subscript-oriented interface.

The conventional way to do what you ask is to use a few inquire functions to get the current size of the unlimited dimension. Then each process reads records, incrementing its own "record index number", until the dimension size is surpassed. This is equivalent to detecting end of file.

I suppose one could blindly keep incrementing the record index until one of the get_var calls returns an error code. It would probably be the specific code for "requested subscripts exceed the extent of the file", or however that's worded; see the table of error codes in the user manual or the include/header files. When the dimension is unlimited, I am not even sure that would always be a reliable test. There is a certain elegance to avoiding error returns when you have the information to do so. That would be my preference.

--Dave

On Tue, May 20, 2014 at 7:46 PM, Timothy Stitt <Timothy.Stitt.9@xxxxxx> wrote:
>
> Hi all,
>
> I was just wondering if there is an easy way to detect the EOF of a netcdf-4 data file? I’m trying to read in records (with an unlimited leading dimension) in parallel until there are no more records left. Is there an easy way to implement this without explicitly calculating the number of records (with an inquiry function) and dividing by the #MPI processes (and figuring out what happens when there isn’t a simple division)?
>
> Thanks in advance,
>
> Tim.
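[Editor's sketch, not part of the original thread.] The approach Dave describes — inquire the unlimited dimension's current length once, then have each process walk only its own slice of the records — reduces to plain arithmetic once the record count is known. The sketch below shows one way to split the records over MPI ranks, including the case Tim asks about where the count does not divide evenly; `partition_records` is a hypothetical helper name, and the inquiry itself (e.g. `nc_inq_dimlen` in the C API, or the Python netCDF4 equivalent) is assumed to have been done beforehand.

```python
def partition_records(nrec, nprocs, rank):
    """Split nrec records over nprocs ranks as evenly as possible.

    The first (nrec % nprocs) ranks each get one extra record, so an
    uneven division leaves no record skipped and none read twice.
    Returns (start, count): this rank's contiguous slice along the
    unlimited dimension.
    """
    base, rem = divmod(nrec, nprocs)
    count = base + (1 if rank < rem else 0)
    start = rank * base + min(rank, rem)
    return start, count

# Example: 10 records over 3 processes.
for rank in range(3):
    print(rank, partition_records(10, 3, rank))
# → 0 (0, 4)
#   1 (4, 3)
#   2 (7, 3)
```

Each rank would then issue its get_var calls for record indices start through start+count-1, never touching an index beyond the inquired dimension length — which is exactly the "avoid error returns when you have the information" route Dave prefers.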