The netCDF operators NCO version 4.0.9 are ready.

http://nco.sf.net (Homepage)
http://dust.ess.uci.edu/nco (Homepage "mirror")

This release contains a number of minor improvements for corner cases
encountered with ncea, ncap2, and ncatted. This has been the longest
release cycle in many years, because no serious bugs were known, few
new features were added, and because...hey, we're volunteers, no
further explanations necessary. Fortunately, NASA has recently given
us indications (nothing binding yet) that they may fund NCO to develop
support for netCDF4 groups and to write some NCO wrappers for HDF-EOS
files. More on that next time.

4.0.9 also reverts the 4.0.8 workaround for the NOFILL bug recently
found to be present in all netCDF versions since 1999. NCO 4.0.9
assumes that the underlying netCDF library has been patched to fix the
NOFILL bug, i.e., NCO 4.0.9 requires netCDF version 4.1.3 or later.
http://nco.sf.net#bug_nofill

Work on NCO 4.1.0 is underway, focused on making a more robust
build/configuration environment and on simplifying issues described in
the KNOWN BUGS NOT YET FIXED section below.

Enjoy,
Charlie

Other "New stuff" in 4.0.9 summary (full details always in ChangeLog):

A. Revert netCDF NOFILL bug workaround from NCO 4.0.8.
   Assume netCDF 4.1.3+ is installed and use NOFILL mode as before.
   http://nco.sf.net#bug_nofill

B. Improve CF convention support for "bounds" attribute.
   http://nco.sf.net/nco.html#bnd

C. ncap2 array() function now works on multi-dimensional arrays.
   Pass array() a template the same shape as the desired output:
   var_out=array(1,2,three_dmn_rec_var); // 1,3,5,...155,157,159

D. Fix ncap2 print() function.

E. Warn when number of attributes exceeds NC_MAX_ATTRS.

F. Print helpful hint with NC_EVARSIZE errors.
   Suggest users try more capacious output file types.

G. Warn when concatenating files containing packed data.
   Users are now warned to make sure that the packing factors in all
   input files exactly match those in the first file, since the
   factors are copied from the first file only. Otherwise users should
   unpack, then concatenate, then pack.

H. Warn when appended variable has suspicious record length.
   NCO forbids appending variables if dimension sizes do not match
   between input and output files, except for the record dimension.
   For various reasons it should be possible, and so NCO allows,
   appending variables with differently sized record dimensions.
   Instead of doing this silently, NCO now issues a warning. Power
   users can still enjoy this "feature", while novice users will
   hopefully take heed of the warning.

I. Fix overly-zealous ncap2 packing propagation.
   Thanks to Ken5746 for reporting this problem. Previous versions of
   NCO treated packing attributes like all other attributes, in that
   the scale_factor and add_offset of the LHS-most variable on the RHS
   of an expression were propagated to the LHS result. This caused
   problems when multiple RHS variables had differing packing
   attributes. Hence scale_factor and add_offset are no longer
   propagated to the LHS. Thanks to Henry Butowsky for the fix.

J. Fix treatment of _FillValue in ncea.
   When invoked with the (non-default) summing and integration option
   -y ttl (= -N), ncea would incorrectly handle the corner case where
   the value of an array element in the first two input files was
   _FillValue. In this case ncea wrote a zero instead of _FillValue,
   as demonstrated by:
   ncea -O -y ttl -v mss_val_scl -p ~/nco/data in.nc in.nc ~/foo.nc
   ncks -H ~/foo.nc
   The correct output of this is the _FillValue, i.e., 1.0e36, not 0.

KNOWN BUGS NOT YET FIXED:

This section of the ANNOUNCE file is intended to make clear the
existence and severity of known, not yet fixed, problems.

A. NOT YET FIXED
   Correctly read netCDF4 input over DAP, write netCDF4 output, then
   read the resulting file. Replacing netCDF4 with netCDF3 in either
   location of the preceding sentence leads to success.
   DAP non-transparency: works locally, fails through DAP server.
   Demonstration:
   ncks -4 -O -v three_dmn_rec_var http://motherlode.ucar.edu:8080/thredds/dodsC/testdods/in_4.nc ~/foo.nc
   ncks ~/foo.nc # Breaks with "NetCDF: Invalid dimension ID or name"
   20120213: Verified problem still exists
   Bug report filed: netCDF #QUN-641037: dimension ID ordering assumptions

B. NOT YET FIXED
   The netCDF4 library fails when renaming a dimension and a variable
   using that dimension, in either order. Works fine with netCDF3.
   Problem is with the netCDF4 library implementation.
   Demonstration:
   ncks -O -4 -v lat_T42 ~/nco/data/in.nc ~/foo.nc
   ncrename -O -D 2 -d lat_T42,lat -v lat_T42,lat ~/foo.nc ~/foo2.nc # Breaks with "NetCDF: HDF error"
   ncks -m ~/foo.nc
   20120213: Verified problem still exists
   Bug report filed: netCDF #YQN-334036: problem renaming dimension and coordinate in netCDF4 file

C. NOT YET FIXED
   Unable to retrieve contents of variables with period '.' in name.
   Metadata is returned successfully, data are not.
   DAP non-transparency: works locally, fails through DAP server.
   Demonstration:
   ncks -O -C -D 3 -v var_nm.dot -p http://motherlode.ucar.edu:8080/thredds/dodsC/testdods in.nc # Breaks with "NetCDF: DAP server error"
   20110506: Verified problem still exists. Stopped testing because
   inclusion of var_nm.dot broke all test scripts.
   Bug report filed: https://www.unidata.ucar.edu/jira/browse/NCF-47

"Sticky" reminders:

A. All operators support netCDF4 chunking options.
   These options can improve performance on large datasets.
   Large file users: send us suggestions on useful chunking patterns!
   ncks -O -4 --cnk_plc=all in.nc out.nc
   http://nco.sf.net/nco.html#chunking

B. Pre-built, up-to-date Debian Sid & Ubuntu Maverick packages:
   http://nco.sf.net#debian

C. Pre-built Fedora and CentOS RPMs:
   http://nco.sf.net#rpm

D. Did you try SWAMP (Script Workflow Analysis for MultiProcessing)?
   SWAMP efficiently schedules/executes NCO scripts on remote servers:
   http://swamp.googlecode.com
   SWAMP can work with command-line analysis scripts besides NCO.
   If you must transfer lots of data from a server to your client
   before you analyze it, then SWAMP will likely speed things up.

E. NCO support for netCDF4 features is tracked at
   http://nco.sf.net/nco.html#nco4
   NCO supports netCDF4 atomic data types, compression, and chunking.
   NCO 4.0.9 was built and tested with netCDF 4.2-snapshot2012020522
   and HDF5 hdf5-1.8.7. NCO may not build with earlier, and should
   build with later, netCDF4 releases. This is particularly true since
   NCO 4.0.9 takes advantage of an internal change to the netCDF
   nc_def_var_chunking() API made in June 2009.
   export NETCDF4_ROOT=/usr/local/netcdf4 # Set netCDF4 location
   cd ~/nco;./configure --enable-netcdf4 # Configure mechanism
   -or-
   cd ~/nco/bld;make dir;make all;make ncap2 # Old Makefile mechanism

F. Have you seen the NCO logo candidates by Tony Freeman, Rich
   Signell, Rob Hetland, and Andrea Cimatoribus?
   http://nco.sf.net
   Tell us what you think...

--
Charlie Zender, Department of Earth System Science
University of California, Irvine 949-891-2429 )'(
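P.S. The unpack-then-concatenate-then-pack sequence recommended in item G
above might look like the following sketch. The file names are placeholders,
and the exact packing policy is an assumption: ncpdq -U unpacks, and a bare
ncpdq repacks with its default policy; consult the ncpdq documentation for
the policy/map options that fit your data.

```shell
# Hypothetical inputs in1.nc, in2.nc whose scale_factor/add_offset differ
ncpdq -U in1.nc unpacked1.nc              # unpack first input
ncpdq -U in2.nc unpacked2.nc              # unpack second input
ncrcat unpacked1.nc unpacked2.nc cat.nc   # concatenate along record dimension
ncpdq cat.nc out.nc                       # repack result (default packing policy)
```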