NOTICE: This version of the NSF Unidata web site (archive.unidata.ucar.edu) is no longer being updated.
Current content can be found at unidata.ucar.edu.
To learn about what's going on, see About the Archive Site.
Hello JongKwan,

I'm guessing a bit, since I don't see the internals of FloatMatrix(), but it might be that you have 100 arrays of 100 elements at 100 different locations in memory. The array you feed into NcVar::put() has to be one contiguous block of memory. When using standard C++ memory allocation, you must therefore avoid the following:

```cpp
// Not correct: the 10000 cells are not contiguous.
float** snow_we = new float*[tcount];
for (int t = 0; t < tcount; t++)
    snow_we[t] = new float[npix];
```

Instead, allocate one big vector:

```cpp
float* snow_we = new float[tcount * npix];  // Safe: one contiguous block.
```

This means you have to calculate the index yourself, instead of letting the compiler do it:

```cpp
for (int t = 0; t < tcount; t++)
    for (int j = 0; j < npix; j++)
        snow_we[t * npix + j] = we_val[t * npix + j];  // The array is t-major.
```

Precede all put and get statements with set_cur:

```cpp
// Writing the entire table:
data->set_cur(0, 0);
data->put(snow_we, 100, 100);

// Or, writing one line of pixels (row t starts at offset t*npix):
data->set_cur(t, 0);
data->put(snow_we + t * npix, 1, 100);

// Or, writing one column, if you for some reason get your data
// ready in a j-major order (column j then starts at offset j*tcount):
data->set_cur(0, j);
data->put(snow_we + j * tcount, 100, 1);
// Just as easy -- just don't lose track of your array dimension order!
```

And as always, when you're done:

```cpp
delete [] snow_we;
```

Actually, you can live with the 100 arrays of 100 if you make sure to write only one line at a time to netCDF. The need to keep track of dimension order and to use set_cur is no less, though.

If you're on Windows and your matrix is really huge, say 500 MB, you might be forced to split it into several smaller ones (don't be fooled by the task manager claiming you have plenty of free memory). In that pointer-to-pointer case, you must:

```cpp
for (int t = 0; t < tcount; t++)
    delete [] snow_we[t];  // ...before you...
delete [] snow_we;
```

Good luck!
Sjur K. :-)

________________________________
From: netcdfgroup-bounces@xxxxxxxxxxxxxxxx [mailto:netcdfgroup-bounces@xxxxxxxxxxxxxxxx] On Behalf Of JongKwan Kim
Sent: 24 March 2010 07:23
To: netcdfgroup@xxxxxxxxxxxxxxxx
Subject: [netcdfgroup] netCDF C++ Library : Different Data type

Hello guys,

I have a problem with the netCDF C++ library. I am using "malloc" to assign the memory like below:

```cpp
float **snow_we;
snow_we = FloatMatrix(tcount, npix);  // assigns the memory using "malloc"
```

Then I save the data into snow_we like below:

```cpp
for (int j = 0; j < npix; j++) {
    snow_we[count - 1][j] = we_val[j];
}
```

Then, using the netCDF C++ library, I create the netCDF file. However, the data is not saved properly, and I think it is because the "snow_we[0][0]" in the line marked below comes from "malloc". If I use "float snow[100][100]" instead of "malloc" when assigning the memory, it is saved properly. But I must use "malloc" because of the many iterations. Do you know how I can solve this problem?

```cpp
NcFile dataFile("netcdfname", NcFile::Replace);
if (!dataFile.is_valid()) {
    cout << "Could not open the netcdf file!\n" << endl;
}
NcDim* xDim = dataFile.add_dim("x", tcount);
NcDim* yDim = dataFile.add_dim("y", npix);
NcVar* data = dataFile.add_var("data", ncFloat, xDim, yDim);
data->put(&snow_we[0][0], tcount, npix);  // <-- the problematic line
FreeFloatMatrix(snow_we, tcount);
```

Thanks,