On 8/12/16 2:45 PM, tom cook wrote:
Hi, I have been serving gridded surface currents using FMRC aggregations of hourly netCDF files. Yes, I know this is not supposed to work, but it has been working fine (as far as I can tell) and much faster than NcML aggregations.

Anyway, when I go to the dataset's TDS page I see:

    TimeCoverage:
      Start: 2011-10-01T00:00:00Z
      End:   2016-08-12T18:37:04.088Z

The end time here is obviously the time the page loaded. The end time on the NetCDF Subset page is 2016-08-12T16:00:00Z, and the last time in WMS is also 2016-08-12T16:00:00.000Z. Now, when I go to the OPeNDAP page, this is what I see:

    time: Array of 64 bit Reals [time = 0..42606]
      long_name: Forecast time for ForecastModelRunCollection
      standard_name: time
      units: hours since 2011-10-01T00:00:00Z
      missing_value: NaN
      _CoordinateAxisType: Time

So, from this I assume that the last time in the dataset is 2011-10-01 0:00Z + 42606 hrs. However, this is not equal to 2016-08-12T16:00:00.000Z, but rather 2016-08-10T07:00:00Z.
No, you can't. [time = 0..42606] gives the indexes, not the values in the dataset.
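For example, you could read the actual last value over OPeNDAP with netCDF4-python (a rough sketch; the URL below is a placeholder for your dataset's OPeNDAP endpoint):

    from netCDF4 import Dataset, num2date

    # Placeholder URL -- substitute your dataset's OPeNDAP endpoint.
    url = "http://your-tds-server/thredds/dodsC/your/fmrc/dataset"
    ds = Dataset(url)
    time = ds.variables["time"]

    # time[:] holds the coordinate values (hours since 2011-10-01T00:00:00Z);
    # the [time = 0..42606] on the OPeNDAP page is only the index range.
    last_value = float(time[-1])
    last_date = num2date(last_value, units=time.units)
    print(f"last index: {len(time) - 1}, value: {last_value}, date: {last_date}")
    ds.close()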
My first instinct is that there are missing data files, and that the aggregation is dumb and assumes each file I add is an additional hour from the dataset start time. I will run through and look for missing data, but is that correct thinking? Is there a better practice for using FMRC aggregations for large datasets with missing times?
It is likely that you are missing data files, or possibly some files are corrupted. As noted above, the numbers you see are indexes, not values, but you could figure out how many hourly records are missing by comparing this number with the expected number of records.
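As a rough sketch of that check with netCDF4-python and numpy (the URL is again a placeholder, and it assumes the time values are whole hours since the dataset start):

    import numpy as np
    from netCDF4 import Dataset

    # Placeholder URL -- substitute your dataset's OPeNDAP endpoint.
    url = "http://your-tds-server/thredds/dodsC/your/fmrc/dataset"
    ds = Dataset(url)
    hours = np.asarray(ds.variables["time"][:])

    # Records an unbroken hourly series would have, versus what is there.
    expected = int(hours[-1] - hours[0]) + 1
    print(f"expected {expected} hourly records, found {len(hours)}, "
          f"missing {expected - len(hours)}")

    # Steps larger than one hour mark the gaps.
    steps = np.diff(hours)
    for i in np.flatnonzero(steps > 1):
        print(f"gap of {int(steps[i]) - 1} hour(s) after t = {hours[i]}")
    ds.close()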
Dave