The netCDF operators NCO version 4.0.5 are ready.

http://nco.sf.net (Homepage)
http://dust.ess.uci.edu/nco (Homepage "mirror")

This is a "brown paper bag" release intended to deliver important
bugfixes affecting ncks and ncra:

http://nco.sf.net#bug_ncks_nc4_nc4_hyp_fix
http://nco.sf.net#bug_ncra_cf_crd_rec_crd

Also worth mentioning is that the latest Debian (Sid) and Ubuntu
(Maverick) packages finally support DAP, netCDF4, and UDUnits2:

http://nco.sf.net#debian
http://nco.sf.net#ubuntu

Those executables are now comparable to what the developers use.
(RedHat/Fedora RPMs have supported these features for many years.)

Work on NCO 4.0.6 is underway. Areas of improvement include DAP
transparency and more chunking rulesets.

"New stuff" in 4.0.5 details:

A. Versions 4.0.3--4.0.4 of ncks contain a bug that triggers a
core dump when hyperslabbing (along a non-record dimension) a
netCDF4-format input file into a netCDF4-format output file, e.g.,
ncks -d lat,0,1 in4.nc out4.nc
This bug does not affect netCDF3-format files. Two workarounds that
do not require an NCO upgrade (or downgrade) are to specify ncks
chunking explicitly, e.g.,
ncks --cnk_plc=all -d lat,0,1 in4.nc out4.nc
or to use ncea instead of ncks for hyperslabbing, e.g.,
ncea -d lat,0,1 in4.nc out4.nc
The latter works because ncea performs a no-op when there is only
one input file.

B. Fix bug where ncra incorrectly treats the record variable as a
fixed variable if it is specified in the "coordinates" attribute of
any variable in a file processed with CCM/CCSM/CF metadata
conventions. This bug caused core dumps, and even weirder behavior
such as creating imaginary time slices in the output. (A quick way
to check whether a file is susceptible is sketched after this list.)

C. There is a known problem triggered by using the stride argument
when accessing a file through the DAP protocol. We are working to
identify and fix the cause of this problem.
ncks -O -F -D 9 -v weasdsfc -d time,100,110,5 \
http://nomad3.ncep.noaa.gov:9090/dods/reanalyses/reanalysis-2/6hr/flx/flx ~/foo.nc
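Regarding item B, a quick way to see whether a file is susceptible
is to dump its metadata and look for the record coordinate inside
any "coordinates" attribute. This is only a sketch: the filename
in.nc and the record coordinate name "time" are assumptions, not
part of the release tests.

ncks -m in.nc | grep coordinates   # NCO metadata dump
ncdump -h in.nc | grep coordinates # Equivalent netCDF utility check

If the record coordinate appears in a "coordinates" attribute, ncra
4.0.3--4.0.4 could mishandle that file; 4.0.5 handles it correctly.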
"Sticky" reminders:

J. All operators support netCDF4 chunking options. These options can
improve performance on large datasets. Large file users: send us
suggestions on useful chunking patterns! More useful chunking
patterns may be implemented in NCO 4.0.6.
ncks -O -4 --cnk_plc=all in.nc out.nc
http://nco.sf.net/nco.html#chunking
(A sketch of verifying the resulting chunk sizes follows this list.)

K. Pre-built, up-to-date Debian Sid & Ubuntu Lucid packages:
http://nco.sf.net#debian

L. Pre-built Fedora and CentOS RPMs:
http://nco.sf.net#rpm

M. Have you tried SWAMP (Script Workflow Analysis for
MultiProcessing)? SWAMP efficiently schedules/executes NCO scripts
on remote servers: http://swamp.googlecode.com
SWAMP can work with command-line analysis scripts besides NCO. If
you must transfer lots of data from a server to your client before
you analyze it, then SWAMP will likely speed things up.

N. NCO support for netCDF4 features is tracked at
http://nco.sf.net/nco.html#nco4
NCO supports netCDF4 atomic data types, compression, and chunking.
NCO 4.0.5 was built and tested with HDF5 1.8.4-patch1 and netCDF
4.1.2-beta2-snapshot2010101108. NCO may not build with earlier, and
should build with later, netCDF4 releases. This is particularly true
since NCO 4.0.5 takes advantage of an internal change made to the
netCDF nc_def_var_chunking() API in June 2009.
export NETCDF4_ROOT=/usr/local/netcdf4 # Set netCDF4 location
cd ~/nco;./configure --enable-netcdf4 # Configure mechanism
-or-
cd ~/nco/bld;./make NETCDF4=Y allinone # Old Makefile mechanism

O. Have you seen the NCO logo candidates by Tony Freeman, Rich
Signell, Rob Hetland, and Andrea Cimatoribus? http://nco.sf.net
Tell us what you think...
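Regarding item J, one way to verify the chunk sizes that ncks
actually wrote is to dump the output file's hidden attributes with
ncdump's -s option (available in netCDF 4.1 and later). The
filenames below are just the assumed names from the example in
item J, not output from the release tests.

ncks -O -4 --cnk_plc=all in.nc out.nc  # Chunk every variable
ncdump -h -s out.nc | grep _ChunkSizes # Show per-variable chunk sizes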
Enjoy,
Charlie

GSL functions added to ncap2 in 4.0.5: none!
(Most desired GSL functions have already been merged.)

-- 
Charlie Zender, Department of Earth System Science
University of California, Irvine (949) 891-2429 :)