
Re: [netcdfgroup] [EXTERNAL] file sizes: netCDF classic vs. netCDF-4

  • Subject: Re: [netcdfgroup] [EXTERNAL] file sizes: netCDF classic vs. netCDF-4
  • From: John Caron <caron@xxxxxxxx>
  • Date: Thu, 19 Feb 2015 13:22:11 +0000
More or less, there's a fixed amount of overhead per file and per variable,
so as files get larger the two formats approach the same size. If you enable
compression, the netCDF-4 file can be much smaller. However, there's also a
fixed amount of overhead per data chunk; if you use very small chunks on
incompressible data, the netCDF-4 file can be bigger. In sum, netCDF-3 file
sizes are easily calculable and netCDF-4 sizes are not.
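John's point that classic (netCDF-3) file sizes are easily calculable can be sketched in a few lines. This is a simplified estimate with illustrative variable shapes, not a byte-exact formula; it ignores the exact header layout and record-variable details:

```python
# Rough data-size estimate for a netCDF-3 (classic) file: a small header
# plus, for each variable, the product of its dimension lengths times the
# size of its external type, with each variable's data padded to a 4-byte
# boundary. The variable shapes below are made up for illustration.
from math import prod

TYPE_SIZES = {"byte": 1, "char": 1, "short": 2, "int": 4, "float": 4, "double": 8}

def classic_data_bytes(variables):
    """variables: list of (dims, type_name) tuples; returns padded data bytes."""
    total = 0
    for dims, tname in variables:
        nbytes = prod(dims) * TYPE_SIZES[tname]
        total += nbytes + (-nbytes % 4)  # each variable padded to 4 bytes
    return total

# Example: a 10x20 float variable plus a 7-element short variable.
print(classic_data_bytes([((10, 20), "float"), ((7,), "short")]))  # 800 + 16 = 816
```

No such closed-form estimate exists for netCDF-4/HDF5 files, whose size depends on chunking, compression ratio, and per-chunk B-tree overhead.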

John

On Wed, Feb 4, 2015 at 10:32 PM, Chris Barker <chris.barker@xxxxxxxx> wrote:

> On Wed, Feb 4, 2015 at 2:25 PM, Sjaardema, Gregory D <gdsjaar@xxxxxxxxxx>
> wrote:
>
>> It isn't usually a constant file-size multiplier; it is typically an
>> offset in file size.  In other words, typically the file will be somewhat
>> larger for the HDF5-based version (with no compression) than a
>> corresponding netCDF classic file, but the size differential will be
>> smaller as the overall file sizes increase.
>>
>
> unless chunk sizes are set poorly -- the defaults used to be
> horrible for 1-d arrays (or arrays with one or more very small
> dimensions) -- though I think recent versions fixed this. Might be worth
> checking, though.
>
> -CHB
>
>
>> Note that with the netcdf4 hdf5-based file, you can also enable
>> compression with the "-d #" and -s options.  '#' can range from 1 to 9,
>> but values of 1 or 2 along with the shuffle (-s) option typically give
>> good results.
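Why pairing a low deflate level with shuffle works can be illustrated with plain zlib: byte-shuffling (a byte transpose of fixed-width values) groups the slowly varying high-order bytes into long runs, which deflate compresses far better. This is a toy stand-in for HDF5's shuffle filter, not the actual nccopy code path:

```python
import struct
import zlib

# Slowly varying 4-byte ints: low bytes churn, high bytes barely change.
values = list(range(100000, 110000))
raw = struct.pack(f"<{len(values)}i", *values)

# Shuffle: collect byte i of every value together (byte transpose).
shuffled = b"".join(raw[i::4] for i in range(4))

# Deflate level 1 (the cheap setting Greg recommends) on both layouts.
print(len(zlib.compress(raw, 1)), len(zlib.compress(shuffled, 1)))
```

The shuffled layout compresses noticeably smaller, which is why `-d 1 -s` often matches higher deflate levels at a fraction of the CPU cost.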
>>
>> ..Greg
>>
>> On 2/4/15, 3:16 PM, "Nico Schlömer" <nico.schloemer@xxxxxxxxx> wrote:
>>
>> >Hi all,
>> >
>> >When converting classical netCDF files to their modern format (compare
>> >the thread starting at [1]) I noticed that the file size blows up
>> >considerably, e.g.
>> >```
>> >$ du -sh pacman-classical.e
>> >40K pacman-classical.e
>> >$ nccopy -k hdf5 pacman-classical.e pacman.e
>> >$ du -sh pacman.e
>> >4.1M pacman.e
>> >```
>> >with `pacman-classical.e` from [2]. I'm not too worried about this
>> >now, but is this something you would expect?
>> >
>> >Cheers,
>> >Nico
>> >
>> >
>> >[1]
>> >
>> http://www.unidata.ucar.edu/mailing_lists/archives/netcdfgroup/2015/msg00019.html
>> >[2] http://win.ua.ac.be/~nschloe/other/pacman.e
>> >
>> >_______________________________________________
>> >netcdfgroup mailing list
>> >netcdfgroup@xxxxxxxxxxxxxxxx
>> >For list information or to unsubscribe,  visit:
>> >http://www.unidata.ucar.edu/mailing_lists/
>>
>>
>
>
>
> --
>
> Christopher Barker, Ph.D.
> Oceanographer
>
> Emergency Response Division
> NOAA/NOS/OR&R            (206) 526-6959   voice
> 7600 Sand Point Way NE   (206) 526-6329   fax
> Seattle, WA  98115       (206) 526-6317   main reception
>
> Chris.Barker@xxxxxxxx
>
>