
Re: [thredds] Question about FMRC time discrepancy

Thanks for the quick reply, and the Python Notebook, Rich!
The reason I was assuming that the index number equals an hour is
that when the aggregation is loaded using sdfopen in GrADS, it also
shows the latest time as 8/10 rather than 8/12.
Anyway, I understand things better now.
Thanks!
Tom


On Fri, Aug 12, 2016 at 12:11 PM, David Robertson
<robertson@xxxxxxxxxxxxxxxxxx> wrote:
> On 8/12/16 2:45 PM, tom cook wrote:
>>
>> Hi,
>> I have been serving gridded surface currents using FMRC aggregations
>> of hourly netcdf files. Yes, I know this is not supposed to work, but
>> it has been working fine (as far as I can tell) and much faster than
>> NcML aggregations. Anyway, when I go to the dataset's TDS page I see:
>>
>> TimeCoverage:
>> Start: 2011-10-01T00:00:00Z
>> End: 2016-08-12T18:37:04.088Z
>>
>> The end time here is obviously just the time the page was loaded.
>>
>> The end time on the NetCDF Subset page is shown as
>> 2016-08-12T16:00:00Z. The last time in WMS is also
>> 2016-08-12T16:00:00.000Z.
>>
>> Now, when I go to the OPeNDAP page, this is what I see:
>>
>>  time: Array of 64 bit Reals [time = 0..42606]
>> long_name: Forecast time for ForecastModelRunCollection
>> standard_name: time
>> units: hours since 2011-10-01T00:00:00Z
>> missing_value: NaN
>> _CoordinateAxisType: Time
>>
>> So, from this I can assume that the last time in the dataset is
>> 2011-10-01 0:00Z + 42606 hrs. However, this is not equal to
>> 2016-08-12T16:00:00.000Z, but rather 2016-08-10T07:00:00Z.
>
>
> No, you can't. [time = 0..42606] is not the range of time values in the
> dataset, but rather the range of indexes.
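
A minimal sketch of checking the actual time values (rather than the indexes)
over OPeNDAP, assuming the netCDF4-python library is available; the dataset
URL below is only a placeholder, not the real endpoint:

    import netCDF4

    # Placeholder URL -- substitute the aggregation's real OPeNDAP endpoint.
    url = "http://your-tds-server/thredds/dodsC/your/fmrc/best.ncd"

    ds = netCDF4.Dataset(url)
    time = ds.variables["time"]   # units: hours since 2011-10-01T00:00:00Z

    print("number of records:", len(time))        # indexes run 0..len-1
    print("last time value:  ", float(time[-1]))  # actual hours since the origin
    print("last timestamp:   ", netCDF4.num2date(time[-1], time.units))
    ds.close()

If hourly files are missing, the last time value will be larger than the last
index, which is exactly the discrepancy described above.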
>
>> My first instinct is that there are missing data files, and that the
>> aggregation is dumb and assumes that each file I add to it is an
>> additional hour from my dataset start time. I will run through and
>> look for missing data, but is this correct thinking? Is there a better
>> practice for using FMRC aggregations for large datasets with missing
>> times?
>
>
> It is likely that you are missing data files, or possibly some files are
> corrupted. As noted above, the numbers you see are indexes, not values,
> but you could figure out how many hourly records are missing by comparing
> this number to the expected number of records.
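
A rough sketch of that comparison, again assuming netCDF4-python and a
placeholder URL: because the time units are hours since the dataset start,
the last time value plus one is the record count a gap-free hourly series
would have.

    import netCDF4

    # Placeholder URL -- substitute the aggregation's real OPeNDAP endpoint.
    url = "http://your-tds-server/thredds/dodsC/your/fmrc/best.ncd"

    ds = netCDF4.Dataset(url)
    time = ds.variables["time"]               # hours since 2011-10-01T00:00:00Z

    last_hour = int(round(float(time[-1])))   # hours from start to the last record
    expected = last_hour + 1                  # gap-free hourly series, hour 0 included
    actual = len(time)

    print("expected hourly records:", expected)
    print("actual records:         ", actual)
    print("missing records:        ", expected - actual)
    ds.close()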
>
> Dave
>
>
> _______________________________________________
> NOTE: All exchanges posted to Unidata maintained email lists are
> recorded in the Unidata inquiry tracking system and made publicly
> available through the web.  Users who post to any of the lists we
> maintain are reminded to remove any personal information that they
> do not want to be made public.
>
>
> thredds mailing list
> thredds@xxxxxxxxxxxxxxxx
> For list information or to unsubscribe,  visit:
> http://www.unidata.ucar.edu/mailing_lists/


