Unidata Support wrote:
------- Forwarded Message
To: support-netcdf-java@xxxxxxxxxxxxxxxx
From: "Peyush Jain" <peyush.jain@xxxxxxxx>
Subject: netCDF Java - Out Of Memory Error
Organization: UCAR/Unidata
Keywords: 200503291926.j2TJQ3Qk025344
Institution: NASA
Package Version: Version 2.1
Operating System: Win XP Pro
Hardware Information: P4 3GHz, 2GB RAM
Inquiry:

Hello,

I was able to figure out how to archive streaming data. In my previous example, I created 5 one-dimensional arrays of "unlimited" length. Then I created origin[1], which stored the offset for incoming data, and used write(name, origin, doubleArray) to write the data. So the file size is now increasing.

Now I am running into another problem. By default, all the incoming data are stored in the variables and are not written to the file until I stop the incoming data and close the file. As a result, after a few minutes I get a "java.lang.OutOfMemoryError". Is there any way to force netCDF to write to the file each time data is received, so that I don't run out of memory? flush() didn't do the job.

Peyush
Looks like our emails crossed. Have a look at the example and see if that solves your problem. The trick is to not store more than one time step at a time.
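For the archives, here is a minimal sketch of that pattern: write each record at its origin as it arrives and flush, so only the current time step is ever held in memory. It is written against the NetcdfFileWriteable API from later netCDF-Java releases, so method names may differ slightly in version 2.1; the file name "stream.nc" and readNextSample() are hypothetical stand-ins for the real output file and incoming data source.

import java.io.IOException;
import ucar.ma2.ArrayDouble;
import ucar.ma2.DataType;
import ucar.ma2.InvalidRangeException;
import ucar.nc2.Dimension;
import ucar.nc2.NetcdfFileWriteable;

public class StreamingWriteExample {
    public static void main(String[] args) throws IOException, InvalidRangeException {
        // Define one unlimited (record) dimension and a variable along it.
        NetcdfFileWriteable ncfile = NetcdfFileWriteable.createNew("stream.nc", false);
        Dimension timeDim = ncfile.addUnlimitedDimension("time");
        ncfile.addVariable("data", DataType.DOUBLE, new Dimension[] {timeDim});
        ncfile.create();

        // Reuse a single one-element array so only the current
        // time step is ever kept in memory.
        ArrayDouble.D1 record = new ArrayDouble.D1(1);
        int[] origin = new int[1];

        for (int t = 0; t < 1000; t++) {        // stands in for the incoming-data loop
            record.set(0, readNextSample());    // hypothetical data source
            origin[0] = t;                      // advance the offset one record at a time
            ncfile.write("data", origin, record);
            ncfile.flush();                     // force the record out to disk now
        }
        ncfile.close();
    }

    // Hypothetical stand-in for whatever delivers the streaming values.
    private static double readNextSample() {
        return Math.random();
    }
}

The key difference from the buffering approach described above is that nothing accumulates between iterations: each incoming value is written at its offset and flushed immediately, so memory use stays constant no matter how long the stream runs.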