Hi, good day! I am attaching a Fortran code in which I am trying parallel I/O. It writes approximately 56 GB of data, but the call to nf90_def_var takes a long time, approximately 90 seconds. Also, NF90_UNLIMITED does not work: when I replace NZ with NF90_UNLIMITED, it apparently fails during the write operation with an "HDF error". Could you please suggest how I can improve parallel I/O performance? The code is attached below.

Thanks a lot in advance,
Alok
use netcdf
use mpi
character (len = *), parameter :: FILE_NAME = "a_xy.nc"
integer, parameter :: NDIMS = 3, NX = 10000, NY = 880, NZ = 1600
real(4) A(NX,NY)
integer I, J
integer ncid, varid, dimids(NDIMS), chunksizes(NDIMS)
integer x_dimid, y_dimid, z_dimid
integer start(NDIMS), count(NDIMS)
integer rank, nprocs, J1, J2, ierr
double precision t1, t2, t3, t4

call MPI_Init(ierr)
call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)

J1 = rank*NZ/nprocs + 1
J2 = (rank+1)*NZ/nprocs

DO J = 1, NY
   DO I = 1, NX
      A(I,J) = rand()
   ENDDO
ENDDO

T3 = MPI_WTIME()
call check( nf90_create(FILE_NAME, NF90_NETCDF4, ncid, &
                        comm = MPI_COMM_WORLD, info = MPI_INFO_NULL) )
call check( nf90_def_dim(ncid, "y", NY, y_dimid) )
call check( nf90_def_dim(ncid, "x", NX, x_dimid) )
call check( nf90_def_dim(ncid, "z", NZ, z_dimid) )
dimids = (/ x_dimid, y_dimid, z_dimid /)
T4 = MPI_WTIME()
call check( nf90_def_var(ncid, "data", NF90_REAL4, dimids, varid) )
call check( nf90_enddef(ncid) )
T1 = MPI_WTIME()
DO J = J1, J2
   start = (/ 1, 1, J /)
   count = (/ NX, NY, 1 /)
   call check( nf90_put_var(ncid, varid, A, start = start, &
                            count = count) )
ENDDO
call check( nf90_close(ncid) )
T2 = MPI_WTIME()
print *, 'TIME', T2-T1, T1-T3, T1-T4, T1, rank
call mpi_finalize(ierr)

contains
   subroutine check(status)
      integer, intent(in) :: status
      if (status /= nf90_noerr) then
         print *, trim(nf90_strerror(status))
         stop 2
      end if
   end subroutine
end
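[Editor's note] Two changes are worth trying here, sketched below against the definition section of the code above (this is an assumption-laden sketch, not a confirmed fix; it assumes netCDF-Fortran 4.x built against parallel HDF5). First, disabling pre-fill with nf90_def_var_fill: by default the library may pre-fill the variable with fill values, and for a ~56 GB variable that work lands in nf90_def_var/nf90_enddef, which could explain the ~90 seconds. Second, setting collective parallel access on the variable with nf90_var_par_access: collective access is required when writing along an NF90_UNLIMITED dimension, and may also help throughput for large regular writes.

```fortran
! Sketch: replaces the nf90_def_var .. nf90_enddef lines in the code above.

call check( nf90_def_var(ncid, "data", NF90_REAL4, dimids, varid) )

! Skip writing fill values; otherwise the full variable may be
! pre-filled at definition time, which can dominate the runtime.
call check( nf90_def_var_fill(ncid, varid, 1, 0) )   ! no_fill = 1

! Collective access is required for writes involving an
! NF90_UNLIMITED dimension under parallel I/O.
call check( nf90_var_par_access(ncid, varid, nf90_collective) )

call check( nf90_enddef(ncid) )
```

If the unlimited-dimension "HDF error" persists after switching to collective access, it would also be worth checking that every rank participates in each collective write, since collective operations must be called by all processes in the communicator.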