Hi,

Timothy Hume wrote:

> ... However, "growing" the unlimited dimension seemed to take a lot
> of computer time compared to slotting the data into an array where
> all the dimensions were fixed length. It may be that I wrote my
> software inefficiently, but I suspect the slowness of "growing" the
> unlimited dimension is partly due to the structure of a NetCDF
> file. If it is possible, it may be more efficient to predefine the
> length of your time dimension (making it large enough to hold all
> the data you receive), rather than using an unlimited dimension.

The time to add a new record to a netCDF file is independent of the number of records already written to the file. In other words, adding new records does not get slower as more records are added.

Writing a record variable can be slower than writing a non-record variable if it is the first record variable written in a new record, because initializing the record requires updating the record count (the current size of the unlimited dimension) in the file header. Also, unless you have explicitly set "no fill mode", the entire record is written with fill values the first time any variable in that record is written. Subsequent writes of record variables into an already initialized record are as efficient as writes of non-record variables.

In the rare case that you have not set "no fill mode" and you write a record far beyond the current last record, the intervening records are written with fill values for each record variable, but subsequent writes to those intervening records are again just as fast as writes to non-record netCDF variables.

In summary, I/O with record variables in netCDF is very similar to I/O with non-record variables, except that the first time any new record is written, some extra processing occurs to update the record count and write fill values. How significant this extra processing is depends on how many record variables there are and how large the records are.

John Thaden is correct in saying the file format is intended to handle appending new data gracefully, as in real-time acquisition.

--Russ
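
For illustration, here is a minimal C sketch of the pattern discussed above: appending records one at a time along an unlimited "time" dimension with the netCDF C API, with "no fill mode" enabled via nc_set_fill() so new records are not prefilled. The file name, variable name, and record count are hypothetical; the sketch assumes the classic netCDF API and trims error handling down to a single check macro.

/* Sketch (hypothetical names): append records along an unlimited
 * dimension without prefilling each new record with fill values. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(call) do { int s = (call); if (s != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(s)); exit(1); } } while (0)

int main(void)
{
    int ncid, time_dimid, temp_varid, old_fill;

    CHECK(nc_create("obs.nc", NC_CLOBBER, &ncid));

    /* Unlimited (record) dimension: the file can grow along "time". */
    CHECK(nc_def_dim(ncid, "time", NC_UNLIMITED, &time_dimid));
    CHECK(nc_def_var(ncid, "temperature", NC_FLOAT, 1, &time_dimid,
                     &temp_varid));

    /* "No fill mode": skip writing fill values into each new record. */
    CHECK(nc_set_fill(ncid, NC_NOFILL, &old_fill));

    CHECK(nc_enddef(ncid));

    /* Append one record at a time; the cost of each append does not
     * depend on how many records the file already holds. */
    for (size_t rec = 0; rec < 100; rec++) {
        size_t start[1] = { rec };
        size_t count[1] = { 1 };
        float value = 20.0f + (float)rec * 0.1f;
        CHECK(nc_put_vara_float(ncid, temp_varid, start, count, &value));
    }

    CHECK(nc_close(ncid));
    return 0;
}

With NC_NOFILL set, the per-record fill writes described above are skipped entirely; keep the default NC_FILL instead if you want unwritten parts of a record to remain detectable as fill values.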