Hi Ben,

> This is a terrific idea. One suggestion I have is to build it so the
> processing services can be set up in a brokering layer -- that is, so the
> input datasets can be accessed via web services and the output can be
> served via web services. I don't mean that this should be the only way to
> implement the nco processing, rather just keep it in mind so it's
> relatively easy to set up such a three tier architecture for the nco
> processing.

I just heard from Charlie Zender and have confirmed that the NCO routines can operate on OPeNDAP URLs. This opens up numerous possibilities. In the context of RAMADDA, one can have explicit OPeNDAP links, e.g.:

http://ramadda.org/repository/alias/brokerexample

All of the RAMADDA data services (cataloging, metadata ingest, subsetting, NCO (soon), grid visualizations, etc.) are available for that OPeNDAP link.

However, we have to keep the performance ramifications in mind: it still takes a long time to move gigabytes of data across a network. This underscores the importance of moving the computation to the data instead of moving the data to the computation. For some data sets and many use cases, remote access to data works very well, so approaches like brokering are tractable. For *big* data sets (e.g., climate model output), however, we need richer mechanisms (such as running NCO on local data) to bring the computation to the data.

-Jeff
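For concreteness, a minimal sketch of what running an NCO operator directly against an OPeNDAP URL looks like. The variable name (tas) and the time hyperslab are hypothetical, and this assumes an NCO build with DAP support and that the alias above resolves to a DAP endpoint:

  # Extract one variable and the first time step from the remote dataset;
  # only the requested subset moves across the network.
  ncks -O -v tas -d time,0 \
      http://ramadda.org/repository/alias/brokerexample subset.nc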