If the server has sufficient RAM, then the first disk access will put the model data into the kernel cache, and subsequent file accesses will read from RAM. It is more likely that the network connection will be the bottleneck (a 100 Mbps client link tops out around 12 MB/s, well below the sequential read rate of a modern spinning disk). Of course, the first disk access may be faster with an SSD, and you have fewer concerns about media errors over the long haul. On the other hand, if you have many different files being accessed (in which case the kernel disk cache doesn't provide much advantage), perhaps an SSD could improve throughput. (Be careful about the type of drive, however, as NAND-type storage has a limited number of write cycles over the life of the drive.)

(We run a classroom of 30 Linux workstations accessing NFS-mounted data from a NAS-type file server. We are generally pleased with performance.)

Bret Whissel
SysAdmin
Earth, Ocean and Atmospheric Science Department
Florida State University

On Tue, 2011-03-29 at 09:43 -0500, Neil Smith wrote:
> Was wondering if anyone has considered or made use of the speed advantages
> of solid state drives (SSD) for serving decoded ldm data to gempak,
> garp, and to-be AWIPS2 processes running on network clients?
>
> -- particularly in the classroom environment where visualization tools
> from 20+ network clients are hitting the same $GEMDATA/models/<model>
> collection at the same time.
>
> When would SSDs be worthwhile? If the (NFS) clients are on a 100 Mbps
> subnet and the server is on a separate 1000 Mbps subnet, is the network the
> bottleneck, leaving modern drives or SSDs of negligible difference?
>
> -Neil
>
> ---
> Neil Smith    neils@xxxxxxxx
> Comp. Sys. Mngr., Atmospheric Sciences
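The page-cache point above is easy to check empirically: time a full read of a model file once with cold caches (disk- or network-limited) and again immediately afterward (RAM-limited). Below is a minimal Python sketch of such a test; the file path is hypothetical and should be replaced with a real $GEMDATA file, and dropping the caches between runs (sync; echo 3 > /proc/sys/vm/drop_caches) requires root.

    #!/usr/bin/env python
    """Rough cold- vs. warm-cache read benchmark (path is hypothetical)."""
    import time

    def read_throughput(path, block_size=1 << 20):
        """Read the whole file in 1 MB chunks and return the rate in MB/s."""
        total = 0
        start = time.time()
        with open(path, "rb") as f:
            while True:
                chunk = f.read(block_size)
                if not chunk:
                    break
                total += len(chunk)
        elapsed = time.time() - start
        return total / (1024.0 * 1024.0) / elapsed

    if __name__ == "__main__":
        # Hypothetical GEMPAK model file; substitute a real path on your server.
        path = "/data/gempak/models/gfs/2011032912_gfs211.gem"
        print("read rate: %.1f MB/s" % read_throughput(path))

Running it twice in a row on the NFS server shows the gap between disk and page-cache throughput; running it on a client instead shows whether the 100 Mbps link, rather than the drive, is the ceiling.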