I am not a Linux expert, but I believe Linux caches disk I/O in memory. The first read of a data block comes from disk; if there is unused memory, successive reads of the same block are served from memory until some other read ages it out. If I am correct on this, and your students are mostly accessing the same data, going to SSD drives may not get you much. I've played around with putting the LDM product queue (PQ) file in memory and have seen only marginal improvement in performance.
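If you want to see that caching effect directly, here is a minimal sketch in Python (the file path is made up; point it at any large file under $GEMDATA) that times two back-to-back reads of the same file. On Linux the second pass is normally served from the page cache and finishes much faster:

#!/usr/bin/env python3
"""Rough sketch: time two successive full reads of the same file to
observe the Linux page cache.  The first read usually comes from disk;
the second is typically answered from memory."""

import time

PATH = "/data/gempak/model/sample.gem"  # hypothetical path; use any large file
CHUNK = 1024 * 1024                     # read in 1 MiB chunks

def read_all(path):
    """Read the whole file and return elapsed seconds."""
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(CHUNK):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    cold = read_all(PATH)   # likely from disk (unless already cached)
    warm = read_all(PATH)   # likely from the page cache
    print(f"first read:  {cold:.2f} s")
    print(f"second read: {warm:.2f} s")

To get a genuinely cold first read, you can drop the page cache beforehand (as root: echo 3 > /proc/sys/vm/drop_caches).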
Neil Smith wrote:

Was wondering if anyone has considered or made use of the speed advantages of solid-state drives (SSDs) for serving decoded LDM data to gempak, garp, and to-be AWIPS2 processes running on network clients -- particularly in a classroom environment where visualization tools on 20+ network clients are hitting the same $GEMDATA/models/<model> collection at the same time. When would SSDs be worthwhile?

If the (NFS) clients are on a 100 Mbps subnet and the server is on a separate 1000 Mbps subnet, is the network the bottleneck, leaving the difference between modern drives and SSDs negligible?

-Neil

---
Neil Smith  neils@xxxxxxxx
Comp. Sys. Mngr., Atmospheric Sciences
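For the bandwidth part of the question, a rough back-of-the-envelope sketch (the drive figures below are assumed nominal rates, not measurements) compares a 100 Mbps client link against typical drive throughput:

# Back-of-the-envelope throughput comparison.
# Drive rates are assumed ballpark numbers, not benchmarks.

link_mbps = 100            # client subnet speed, megabits per second
link_MBps = link_mbps / 8  # ~12.5 MB/s theoretical maximum, before protocol overhead

hdd_MBps = 100             # rough sequential read rate for a SATA disk
ssd_MBps = 250             # rough read rate for an SSD

print(f"100 Mbps link:  ~{link_MBps:.1f} MB/s")
print(f"HDD vs link:    ~{hdd_MBps / link_MBps:.0f}x the link rate")
print(f"SSD vs link:    ~{ssd_MBps / link_MBps:.0f}x the link rate")

By that arithmetic a single 100 Mbps client link tops out around 12.5 MB/s, so for any one client the drive type is unlikely to be the limiting factor.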
--
Larry Oolman
Department of Atmospheric Science
University of Wyoming
Dept. 3038, 1000 E. University Ave.
Laramie, WY 82071
ldoolman@xxxxxxxx
http://www-das.uwyo.edu