
Re: Problems reading NetCDF data - get NaNs

We have found a solution to this problem, which I am posting to the
list for future reference.  Thanks very much to John Caron and Don
Murray for helping out with this.

The problem was that valid_min and valid_max were typed as shorts (i.e.
as if they applied to the packed data), but their numerical values were
in the range of the unpacked data:

       :valid_min = -3; // short
       :valid_max = 40; // short
       :add_offset = 20.0; // double
       :scale_factor = 0.0010; // double

They should have been:

       :valid_min = -23000; // short
       :valid_max = 20000; // short

i.e. expressed relative to the packed data - that way an application can
tell whether a value is out of range without unpacking it.  This would
also have been OK:

       :valid_min = -3.0; // double
       :valid_max = 40.0; // double

The incorrect attributes were causing nj22 (netCDF-Java 2.2) to treat
all the data as out of range and return NaN.
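The rule described in this thread - compare against the raw packed value when the valid_min/valid_max attributes have the same type as the variable, and against the unpacked value when they don't - can be sketched as follows. The helper methods and class name are hypothetical, not the actual nj22 code; the attribute values match the file discussed here.

```java
// Minimal sketch of the valid-range rule (hypothetical helpers, not nj22 itself).
public class ValidRangeSketch {
    static final double SCALE = 0.0010;   // scale_factor from the file
    static final double OFFSET = 20.0;    // add_offset from the file

    // valid_min/valid_max are shorts, same type as the variable:
    // compare against the RAW packed value, before scale/offset.
    static boolean inRangePacked(short raw, short validMin, short validMax) {
        return raw >= validMin && raw <= validMax;
    }

    // valid_min/valid_max are doubles, different type from the variable:
    // compare against the UNPACKED value, after scale/offset.
    static boolean inRangeUnpacked(short raw, double validMin, double validMax) {
        double unpacked = raw * SCALE + OFFSET;
        return unpacked >= validMin && unpacked <= validMax;
    }

    public static void main(String[] args) {
        short raw = -21072; // unpacks to -1.072 degrees C
        // Broken metadata: shorts -3..40 are read as a packed range,
        // so every real value falls outside it and becomes NaN.
        System.out.println(inRangePacked(raw, (short) -3, (short) 40));
        // Fix 1: a genuinely packed range, as shorts.
        System.out.println(inRangePacked(raw, (short) -23000, (short) 20000));
        // Fix 2: an unpacked range, as doubles.
        System.out.println(inRangeUnpacked(raw, -3.0, 40.0));
    }
}
```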

Jon

On 19/10/06, Jon Blower <jdb@xxxxxxxxxxxxxxxxxxxx> wrote:
Hi Don,

Thanks for your quick reply.  I switched to 2.2.17 and the problem is
still there.  The values of temperature from ncdump include (file is
large!):

32767, 32767, 32767, 32767, 32767, 32767, 32767, 32767, 32767, -20974,
    -20974, -20979, -20985, 32767, 32767, 32767, 32767, -21054, -21072,
    -21083, -21093, -21101, -21108, -21112, -21115, 32767, 32767, 32767,

32767 represents missing data (the metadata has :_FillValue = 32767;) and,
for example, -21072 represents (-21072 * 0.001 + 20 =) -1.072 degrees
C, which is within the valid range.
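The unpacking arithmetic above can be sketched as follows, using the scale_factor, add_offset, and _FillValue from the file's metadata (the class and method names are illustrative only):

```java
// Sketch of the unpack arithmetic: fill values map to NaN,
// everything else is raw * scale_factor + add_offset.
public class UnpackSketch {
    static final short FILL_VALUE = 32767; // _FillValue from the file
    static final double SCALE = 0.0010;    // scale_factor
    static final double OFFSET = 20.0;     // add_offset

    static double unpack(short raw) {
        if (raw == FILL_VALUE) return Double.NaN;
        return raw * SCALE + OFFSET;
    }

    public static void main(String[] args) {
        // Sample values from the ncdump output above.
        short[] samples = {32767, -20974, -21072, -21101};
        for (short s : samples) {
            System.out.println(s + " -> " + unpack(s) + " degC");
        }
        // e.g. -21072 unpacks to about -1.072 degC, inside the -3..40 range.
    }
}
```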

According to the file metadata, the convention is COARDS.  Also, as I
said in my original post, ncBrowse displays the data without any
problems.

Thanks again,
Jon

On 19/10/06, Don Murray <dmurray@xxxxxxxxxxxxxxxx> wrote:
> Hi Jon-
>
> I suspect the problem is with the valid_max and valid_min
> attributes.  The library will return NaN if the
> values are outside the range of the min/max.  If the
> type of the attribute is the same as the variable, they
> are compared before scale and offset.  If they are different
> they are compared after scaling and offset.  I recently
> found a bug with this that was fixed in the 2.2.17 pre-release,
> but I'm not sure if that is why you are seeing what you
> are seeing.
>
> What are the values for temperature in ncdump?
>
> Don
> *************************************************************
> Don Murray                               UCAR Unidata Program
> dmurray@xxxxxxxxxxxxxxxx                        P.O. Box 3000
> (303) 497-8628                              Boulder, CO 80307
> http://www.unidata.ucar.edu/staff/donm
> *************************************************************
>
>
>
> Jon Blower wrote:
> > Dear all,
> >
> > I'm having some problems reading data from a particular NetCDF file.
> > The code i'm using is:
> >
> >  NetcdfDataset nc =
> >      NetcdfDataset.openDataset("C:\\data\\OA_20060830.nc", true, null);
> >  GridDataset gd = new GridDataset(nc);
> >  GeoGrid gg = gd.findGridByName("temperature");
> >  Array arr = gg.readYXData(0, 0);
> >  IndexIterator it = arr.getIndexIteratorFast();
> >  while (it.hasNext()) {
> >      double val = it.getDoubleNext();
> >      System.out.println("" + val);
> >  }
> >  nc.close();
> >
> > I just get a load of NaNs, even though I know that there are valid
> > data in the file (ncBrowse displays the data perfectly).  The only
> > unusual thing about the data is that the data are stored as short
> > integers, with an offset and scale factor:
> >
> >     short temperature(time, depth, latitude, longitude);
> >        :long_name = "Temperature";
> >        :missing_value = 32767; // short
> >        :_FillValue = 32767; // short
> >        :units = "degree_Celsius";
> >        :valid_min = -3; // short
> >        :valid_max = 40; // short
> >        :add_offset = 20.0; // double
> >        :scale_factor = 0.0010; // double
> >        :comment = "Temperature estimate (by objective analysis)";
> >
> > I understood that the NetcdfDataset class automatically dealt with
> > offsets and scale factors in "enhanced" mode, which is what I am
> > using (or trying to use).  Hence I was expecting to "see" the data as
> > an array of doubles.  The same happens if I use getFloatNext() or
> > getShortNext().
> >
> > Can anyone see what I'm doing wrong?

--
--------------------------------------------------------------
Dr Jon Blower              Tel: +44 118 378 5213 (direct line)
Technical Director         Tel: +44 118 378 8741 (ESSC)
Reading e-Science Centre   Fax: +44 118 378 6413
ESSC                       Email: jdb@xxxxxxxxxxxxxxxxxxxx
University of Reading
3 Earley Gate
Reading RG6 6AL, UK
--------------------------------------------------------------





