On Jun 5, 2012, at 12:06 PM, Ted Mansell wrote:
If you are only using 15 processors, I would suggest a 'round robin' approach with non-parallel, chunked, and compressed output. Essentially, each processor writes to the file in succession (open, write, close, next processor). This works really well for me for smallish numbers of processors (fewer than 60, say). If the chunking is set up such that each processor writes just its own chunks, this method works well.

Good luck!

-- Ted

On Jun 5, 2012, at 10:23 AM, Kristopher Bedka wrote:

I'm not quite sure what you mean by the "application layer". My goal was to have 15 different processors process 15 segments of a satellite orbit, where each processor would write to the same netCDF file in the most disk-space-efficient manner possible, without any problems from simultaneous netCDF writes. I had previously done the compression with the "nf_def_var_chunking" function call in non-parallel netCDF. As this function does not seem to be available in parallel netCDF, I'd be interested in alternative suggestions to accomplish my goal. Sorry, I am more of the scientist type than a software engineer, so I may absorb some of these concepts a little more slowly than others.

Thanks for the help,
Kris

On Jun 5, 2012, at 11:13 AM, Rob Latham wrote:

On Tue, May 29, 2012 at 02:29:12PM -0600, Russ Rew wrote:

Hi Kristopher,

I am processing a large volume of satellite data where multiple processes could be simultaneously writing data to the same netCDF file. This has not been supported in previous netCDF versions, and I've gotten fatal errors when two simultaneous writes conflicted. I now understand that recent netCDF versions do support this functionality. Could someone tell me, or provide an example of, what I need to do (i.e., new function calls, options in netCDF open, etc.) to make this work for me? I've tried the pnetcdf package, but it does not support chunking, which I need to internally compress these files.

No, sorry, it's not supported in current netCDF versions either.
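Ted's round-robin idea above can be sketched roughly as follows. This is a toy, stdlib-only Python illustration in which the "ranks" are simulated by a serial loop and the output is a plain text file; in a real MPI code each iteration would instead be gated by a token passed between ranks (e.g. with MPI_Send/MPI_Recv), and the open/write/close calls would go through the netCDF API. The function name `round_robin_write` is made up for this sketch:

```python
import os
import tempfile

def round_robin_write(nranks, data_for_rank, path):
    """Each 'rank' opens the shared file in turn, appends only its
    own records, and closes the file before the next rank starts.
    Serialising the opens is what avoids conflicting simultaneous
    writes to the same file."""
    for rank in range(nranks):       # in MPI this order is enforced by a token
        with open(path, "a") as f:   # open ... write ... close, then hand off
            for record in data_for_rank(rank):
                f.write(f"rank {rank}: {record}\n")

# Toy usage: 3 'ranks', each writing its own two records.
path = os.path.join(tempfile.mkdtemp(), "shared.txt")
round_robin_write(3, lambda r: [f"chunk{r}a", f"chunk{r}b"], path)
lines = open(path).read().splitlines()
```

The key property, as Ted notes, is that only one writer ever has the file open, so no parallel-I/O support from the library is needed; the cost is that the writes happen sequentially, which is acceptable for modest processor counts.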
NetCDF-4 uses HDF5 as its storage layer, and HDF5 does not support compression with parallel access, as explained here:

Is there any chance you can compress at the application layer? Each processor takes its local hunk of data, compresses it, then writes to the file. I admit, you will quickly find out why parallel writes with compression are not already implemented in these parallel I/O libraries! However, it's possible that at your application level there may be ways to simplify the parallel, compressed-writes problem that a general-purpose library cannot use.

==rob

--
Rob Latham
Mathematics and Computer Science Division
Argonne National Lab, IL USA

=========================================================
Kristopher Bedka
Science Systems & Applications, Inc. @ NASA Langley Research Center
Climate Science Branch
1 Enterprise Parkway, Suite 200
Hampton, VA 23666
Phone: (757) 951-1920
Fax: (757) 951-1902
Kristopher.m.bedka@xxxxxxxx
=========================================================

_______________________________________________
netcdfgroup mailing list
netcdfgroup@xxxxxxxxxxxxxxxx
For list information or to unsubscribe, visit: http://www.unidata.ucar.edu/mailing_lists/
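Rob's application-layer compression suggestion can be illustrated with a small stdlib-only Python sketch (not netCDF: this is the general idea only). Each "processor" deflates its own hunk with zlib and prefixes it with the compressed length, so the hunks can be concatenated into one stream and later located and inflated independently. The helper names `compress_hunk` and `read_hunks` are invented for this example:

```python
import struct
import zlib

def compress_hunk(data: bytes, level: int = 6) -> bytes:
    """Deflate one process's local hunk and prefix it with a 4-byte
    big-endian length, so hunks can be concatenated in one file."""
    comp = zlib.compress(data, level)
    return struct.pack(">I", len(comp)) + comp

def read_hunks(blob: bytes):
    """Walk the concatenated (length, payload) records and inflate each."""
    out, off = [], 0
    while off < len(blob):
        (n,) = struct.unpack_from(">I", blob, off)
        off += 4
        out.append(zlib.decompress(blob[off:off + n]))
        off += n
    return out

# Two 'processes' each compress their own hunk; the actual writes to the
# shared file would still need to be serialised (e.g. round-robin).
blob = (compress_hunk(b"orbit segment 0" * 100)
        + compress_hunk(b"orbit segment 1" * 100))
hunks = read_hunks(blob)
```

Note the trade-off Rob alludes to: because compressed hunks have unpredictable sizes, writers cannot compute their file offsets in advance, which is exactly why libraries find parallel compressed writes hard; the length-prefix framing here sidesteps that by making the file append-only.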