Hello,

I apologize ahead of time if this question has already been answered; I didn't know what to search for.

I'm currently working at NOHRSC on a project that involves ETL from one PostgreSQL 8.3.14 database holding NetCDF files to another PostgreSQL database that will hold the same information as SQL primitives. The program that will handle the ETL operations will be written with one of the C++ interfaces for NetCDF.

Looking at the older C++ interface, I see that it is designed to work only with files. I haven't installed Appel's version yet, but it appears to be built on a similar C interface, and sources seem mixed on whether it can read classic-format files at all (http://www.unidata.ucar.edu/software/netcdf/docs/cxx4/ says yes as of 2010/4; [netCDF #KLN-901251] says no as of 2010/11). The PostgreSQL large object functions can export an array of bytes or a file, but we would like to avoid creating temporary files on the system that will run the ETL program, if possible.

What is the cleanest way to alter an interface so that it can interpret NetCDF from memory instead of from a file?

Thank you very much.

Tom Niedzielski
NOHRSC
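For concreteness, here is a minimal sketch of the kind of path we are hoping for. It assumes a netCDF-C build that exposes nc_open_mem() from netcdf_mem.h (which our current installation may well not provide); the connection string and large-object OID are placeholders, and error handling is omitted:

    // Sketch: read a NetCDF image stored as a PostgreSQL large object into a
    // memory buffer with libpq, then hand that buffer to the netCDF library
    // without touching the filesystem.
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>
    #include <netcdf.h>
    #include <netcdf_mem.h>   // assumption: build provides nc_open_mem()
    #include <vector>

    int main() {
        PGconn *conn = PQconnectdb("dbname=nohrsc_src");   // placeholder connection string
        if (PQstatus(conn) != CONNECTION_OK) return 1;

        PQclear(PQexec(conn, "BEGIN"));                     // large-object calls need a transaction
        Oid lobj_oid = 12345;                               // placeholder OID of the stored NetCDF blob
        int fd = lo_open(conn, lobj_oid, INV_READ);

        std::vector<char> buf;                              // accumulate the whole object in memory
        char chunk[65536];
        int n;
        while ((n = lo_read(conn, fd, chunk, sizeof chunk)) > 0)
            buf.insert(buf.end(), chunk, chunk + n);

        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));

        int ncid;
        // Open the in-memory image; the path argument is only a label, nothing is read from disk.
        if (nc_open_mem("inmemory.nc", NC_NOWRITE, buf.size(), buf.data(), &ncid) != NC_NOERR)
            return 1;

        // ... read dimensions/variables and emit SQL primitives here ...

        nc_close(ncid);
        PQfinish(conn);
        return 0;
    }

The point is that the blob stays in process memory between lo_read() and nc_open_mem(), so no temporary file is ever written.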