Hi Kevin:

You need to use time partitions, as a rule of thumb, when the total number of GRIB records is greater than 3-5 million.
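For reference, a time-partitioned GRIB featureCollection in a TDS 4.3 catalog looks roughly like the sketch below; the name, path, and collection spec are placeholders, not your actual setup:

    <featureCollection name="Example GRIB1" featureType="GRIB" path="grib/example">
      <!-- timePartition="directory" builds one partition index per directory
           instead of a single monolithic index for the whole collection -->
      <collection spec="/path/to/files/**/.*grib$" timePartition="directory"/>
    </featureCollection>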
Once the ncx file is created, it takes much less heap space to actually use / read from it. It's just the creation process that's so memory intensive.
Can you estimate the number of records and files you have?

Also, I'm doing a big refactor of partitions in 4.5 to deal with the case of millions of files, which I hope will then be possible to handle. I'll let you know when that's ready for testing.
John

On 12/5/2013 2:00 PM, Kevin Manross wrote:
Greetings!

I have a relatively large static dataset of GRIB files that I'm trying to get set up in my TDS. I currently have my Tomcat running with 8 GB of heap, but run into heap space problems when trying to create the .ncx files. I think part of my problem is that I'm not partitioning my collection (I'm trying this next).

I have been trying to create the indices "offline" with the following:

    java -Xmx512m -classpath netcdfAll-4.3.jar ucar.nc2.grib.grib1.Grib1Index $gribfile

then

    java -Xmx12288m -verbose -classpath netcdfAll-4.3.jar ucar.nc2.grib.grib1.Grib1Collection -make collection_name.ncx "/path/to/files/*.grib"

But even with 12 GB of heap allocated, I am still getting heap dumps. Even if I do succeed in creating the collection index file (collection_name.ncx), will THREDDS require massive amounts of heap space to read in the collection? Is partitioning my best bet?

Thanks!

-kevin.

--
Kevin Manross
NCAR/CISL/Data Support Section
Phone: (303) 497-1218
Email: manross@xxxxxxxx
Web: http://rda.ucar.edu
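Note that $gribfile in the first command is a single file; to index a whole collection offline, that step is typically wrapped in a shell loop along the lines of the sketch below (paths are placeholders):

    # write a per-file GRIB1 index next to each data file
    for gribfile in /path/to/files/*.grib; do
        java -Xmx512m -classpath netcdfAll-4.3.jar ucar.nc2.grib.grib1.Grib1Index "$gribfile"
    done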