Ed Hartnett wrote:
Jeff Whitaker <jswhit@xxxxxxxxxxx> writes:

> Ed: Thanks!  beta1 works fine and passes all the tests with my python
> module.  However, ncdump appears to be broken - every netcdf-4 file I
> try results in something like this:
>
>   ncdump: test.nc: HDF error
>
> It seems to work fine with netcdf-3 files, however.

Howdy Jeff!

In this release, ncdump and ncgen can handle netCDF-4 files as long as they conform to the classic model. That is, as long as you don't use user-defined types or the new atomic types, it should work fine. (ncdump already handles groups correctly, but ncgen does not yet.)

This is tested by the ncdump tests when the script tst_netcdf4.sh is run in the ncdump directory. The script has ncgen create some netCDF-4/HDF5 files from the same CDL scripts that are used to test the netCDF-3 ncgen/ncdump code, then feeds the resulting data files back to ncdump to make sure the same CDL output results. For example:

bash-3.00$ ../ncgen/ncgen -v3 -b -o c0.nc ../ncgen/c0.cdl
bash-3.00$ ./ncdump c0.nc
netcdf c0 {
dimensions:
        Dr = UNLIMITED ; // (2 currently)
        D1 = 1 ;
        D2 = 2 ;
        D3 = 3 ;
        dim-name-dashes = 4 ;
        dim.name.dots = 5 ;
variables:
        char c ;
                c:att-name-dashes = 4 ;
                c:att.name.dots = 5 ;
<etc.>

Do your files conform to the classic model?
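[For context: a netCDF-4 file that conforms to the classic model is usually created by passing NC_NETCDF4 | NC_CLASSIC_MODEL to nc_create and sticking to the classic atomic types. The sketch below is a minimal illustration of that, not code from the thread; the file and variable names are made up.]

/* Minimal sketch: create a netCDF-4 (HDF5-backed) file restricted to the
 * classic data model, so it avoids groups, user-defined types, and the
 * new atomic types.  Compile with:  cc sketch.c -lnetcdf
 * File/variable names here are hypothetical. */
#include <stdio.h>
#include <stdlib.h>
#include <netcdf.h>

#define CHECK(e) do { int _s = (e); if (_s != NC_NOERR) { \
    fprintf(stderr, "netCDF error: %s\n", nc_strerror(_s)); exit(1); } } while (0)

int main(void)
{
    int ncid, dimid, varid;
    int vals[3] = {1, 2, 3};

    /* NC_CLASSIC_MODEL keeps the file within the classic data model even
     * though the on-disk format is netCDF-4/HDF5. */
    CHECK(nc_create("classic4.nc", NC_NETCDF4 | NC_CLASSIC_MODEL, &ncid));
    CHECK(nc_def_dim(ncid, "x", 3, &dimid));
    CHECK(nc_def_var(ncid, "data", NC_INT, 1, &dimid, &varid));
    CHECK(nc_enddef(ncid));

    CHECK(nc_put_var_int(ncid, varid, vals));
    CHECK(nc_close(ncid));
    return 0;
}

A file written this way stores its data in HDF5 but stays within the classic model, which is what the beta-1 ncdump is expected to handle.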
Ed: Yes, they do. ncdump crashes on every netcdf4_classic file I give it. I'll try running the test and send you the results.

-Jeff

--
Jeffrey S. Whitaker             Phone : (303) 497-6313
NOAA/OAR/CDC  R/PSD1            FAX   : (303) 497-6449
325 Broadway
Boulder, CO, USA 80305-3328
Date: Wed, 16 Sep 1992 22:38:37 EDT
From: RIVERS@xxxxxxxxxxxxxxxxxxx (Mark Rivers)
To: netcdfgroup@xxxxxxxxxxxxxxxx
Subject: Data compression

Are there any plans to add data compression to netCDF? We are strongly leaning towards switching from our present (local) data file format to netCDF. The only feature we would have to give up is data compression.

We are presently using either run-length encoding or a simple form of linear predictive coding. Both of these are loss-free. Linear predictive coding typically reduces the size of our 32-bit integer data files by a factor of 3, which is significant.
It seems like it could be a very worthwhile addition.

Mark Rivers                      (516) 282-7708 or 5626
Building 815                     rivers@xxxxxxxxxxxxxxxxxxx (Internet)
Brookhaven National Laboratory   rivers@bnl (Bitnet)
Upton, NY 11973                  BNLX26::RIVERS (Physnet)
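[The message doesn't spell out the predictive-coding scheme. As a rough illustration only, not the poster's actual codec: a first-order linear predictor stores the difference between each 32-bit sample and the previous one, so slowly varying data becomes mostly small residuals that a run-length or variable-length coder can then shrink. A minimal loss-free sketch in C:]

/* Illustration only -- not the codec described above.  A first-order
 * predictor replaces each 32-bit sample with its difference from the
 * previous sample; decoding rebuilds the running sum.  Slowly varying
 * detector data then consists mostly of small residuals, which a
 * downstream run-length or variable-length stage can compress well.
 * (Overflow of the residuals is ignored for clarity.) */
#include <stdio.h>
#include <stddef.h>
#include <inttypes.h>

static void delta_encode(const int32_t *in, int32_t *out, size_t n)
{
    int32_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        out[i] = in[i] - prev;   /* residual = sample - prediction */
        prev = in[i];
    }
}

static void delta_decode(const int32_t *in, int32_t *out, size_t n)
{
    int32_t prev = 0;
    for (size_t i = 0; i < n; i++) {
        prev += in[i];           /* rebuild the running sum */
        out[i] = prev;
    }
}

int main(void)
{
    int32_t data[] = {1000, 1003, 1001, 1006, 1010, 1009};
    size_t n = sizeof data / sizeof data[0];
    int32_t enc[6], dec[6];

    delta_encode(data, enc, n);
    delta_decode(enc, dec, n);

    for (size_t i = 0; i < n; i++)
        printf("%" PRId32 " -> %" PRId32 " -> %" PRId32 "\n",
               data[i], enc[i], dec[i]);
    return 0;
}

Decoding is the exact inverse of encoding, which is why a scheme like this is loss-free.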