Jennifer Adams <jma@xxxxxxxxxxxxx> writes:

> My nc_put_vara_double() call writes the entire array at once; in this
> case, it sounds like the library does more or less the same thing as my
> algorithm outlined in the original message.
>
> Breaking up the write into pieces would probably be a performance hit, and
> would make my code a bit more complicated, since deciding if splitting is
> necessary and how many splits to make is highly system dependent.
>
> I guess I will have to live with being a memory hog -- if I document this
> feature, then it becomes the user's problem, not mine. But it seems to me
> that it is still better to let the library do the type conversion. Will
> the library return a meaningful error if it is unable to allocate the
> required memory? Have I overlooked any other costs/benefits?
>
> --Jennifer

These difficulties have been grappled with by Russ in nccopy, which copies arbitrarily sized variables on any machine. The error NC_ENOMEM (-61) is returned when netCDF runs out of memory.

Thanks,

Ed

--
Ed Hartnett -- ed@xxxxxxxxxxxxxxxx