This will be a challenge for sure.  The NARR, for example, will be an
aggregation of ~75000 GRIB files, stored in a basic ./YYYYMM/YYYYMMDD
tree.  The recursive datasetScan tag added recently helps a ton with
this.  Some of our datasets have forecast hours, some don't.  Doing a
single-forecast-hour aggregation across the 00hr files will help
tremendously with all of them, however.  While it works wonderfully for
NetCDF, I cannot see the NcML aggregation working with this set of
data, mainly due to the changing reference times.

According to NCEP, our NAM & GFS will soon be forced into GRIB2, but
the NCDC-NOMADS NWP holdings are currently entirely a GRIB-1 archive.
Only recently have home-grown NCDC datasets been created in NetCDF.
For NAM & GFS, we have about 6 months online, which comes out to about
700 files when stripped to a 1-forecast-time (say 00hr) aggregation.
But there are 61 forecast times for GFS, and 21 for NAM.

-Dan

>> Especially for GRIB files, you likely need this new "Forecast Model
>> Run Collection Aggregation" capability.  We have been working with
>> our IDD NCEP GRIB files, and there are some complications, especially
>> non-homogeneity due to missing records and variable time and vertical
>> dimensions, that can't really be solved by the current (index-based)
>> aggregation.
>>
>> Ethan and I will work closely with you guys to get this working.  I'd
>> like to understand what you have in more detail: number and types of
>> files, how they are stored, etc.  Can you or someone summarize?
>>
>> John

--
Dan Swank <dan.swank@xxxxxxxx>
NOMADS Project: Software & Data Management
Contractor - STG, Incorporated
Veach-Baley Federal Building
151 Patton Avenue
Asheville, NC  28801-5001
Phone: 828-271-4007
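To illustrate the pieces discussed above: a minimal sketch of a
recursive datasetScan entry for a ./YYYYMM/YYYYMMDD GRIB tree might
look like the following.  The name, path, location, and service name
are placeholders for illustration only, not the actual NOMADS catalog
entries:

    <datasetScan name="NARR GRIB archive" path="narr"
                 location="/data/nomads/narr/">
      <metadata inherited="true">
        <serviceName>odap</serviceName>
      </metadata>
      <filter>
        <!-- pick up GRIB files throughout the YYYYMM/YYYYMMDD subdirectories -->
        <include wildcard="*.grb"/>
      </filter>
    </datasetScan>

And a minimal NcML sketch of the kind of single-forecast-time (00hr)
joinExisting aggregation described above, assuming the 00hr files can
be picked out by file name (the location and regExp below are
illustrative assumptions, not the real NOMADS layout):

    <netcdf xmlns="http://www.unidata.ucar.edu/namespaces/netcdf/ncml-2.2">
      <aggregation dimName="time" type="joinExisting">
        <!-- recursively scan the tree, keeping only the 00hr files;
             each file contributes its own time coordinate -->
        <scan location="/data/nomads/gfs/" subdirs="true"
              regExp=".*_00\.grb$"/>
      </aggregation>
    </netcdf>

The changing reference times between runs are what the Forecast Model
Run Collection aggregation mentioned by John is meant to handle; a
plain joinExisting such as the sketch above only applies once the
collection has been reduced to a single forecast hour per run.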