>>The requirement to store time to more than 53 bits of precision is separate
>>from data density over that time span.
>
>I'm not sure what your point is, but I'll assume it has something to
>do with the following paragraph.
>
>>[paragraph concerning integer/dble conversion deleted]

My comment referred to your statement about the size of datasets with
2**53 entries.  The need to specify time (or other variables) to great
precision is separate from the need to store a great deal of data in a
given dataset.

If what you say is true (and I don't doubt it) about our ability to
exactly recover integers from stored doubles, then using a double for
time does solve *our* problem.  Those who really need the extra
precision would still need another solution.
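(A small illustration of the point being discussed, not part of the original exchange: an IEEE 754 double has a 53-bit significand, so any integer of magnitude up to 2**53 survives a round trip through a double exactly, while 2**53 + 1 is the first integer that does not.)

```python
def roundtrips(n: int) -> bool:
    """Return True if n is exactly recoverable after storage as a double."""
    return int(float(n)) == n

# Integers up to 2**53 are exactly representable in a double.
assert roundtrips(2**53)

# 2**53 + 1 falls between representable doubles and rounds to 2**53.
assert not roundtrips(2**53 + 1)
```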