Joe,

Just coming back from spring break, I read the LDM performance comments:

> This morning, I was benchmarking file transfers using LDM vs copying the
> same file with 'scp'.  On the two files I checked, (1237KB and 2446KB),
> LDM took 3 times as long to send the file compared to scp.
  ...
> Actually, I do want to use this feature of pqact, but I don't think I
> can afford the overhead of using LDM.

Just as background, the LDM was designed to optimize the delivery of a
large number of small data products, but it was later extended to deliver
large products as well by splitting them into smaller segments.  Each
small product, or each segment of a larger product, is delivered by a
remote procedure call (RPC) that requires a round-trip network
transaction.

If you have control of both the upstream and downstream LDMs, you can
tune the protocol for large products by changing the size of the segments
into which larger products are split.  But you may not have as much
control as you want, because TCP is still used to implement the RPC
calls, and it also splits large messages into smaller packets for
reliable delivery.  Nevertheless, you could try increasing the maximum
size of a chunk of data in the LDM by changing the definition of DBUFMAX
in protocol/ldm.h:

    #define DBUFMAX 16384

Be warned that if you do this, you have changed the protocol, so don't
expect your changed LDMs to interoperate with other LDMs.

The average size of data products delivered over the IDD has been
increasing over the last few years, but that size is still smaller than
16384 bytes:

    year   prod   prods  prods  mbytes  kbytes
           size    /hr    /sec    /hr    /sec
    1995   4971    6357    1.8     32       9
    1996   4486    9368    2.6     42      12
    1997   5742    9484    2.6     55      15
    1998   9680   15507    4.3    157      44
    1999  10328   33432    9.3    345      96
    2000  11070   42742   11.9    484     134
    2001  12505   34629    9.6    438     122
    2002  13926   45909   12.8    641     178

That table came from the long-term average of what the motherlode server
(and its predecessor, thelma) deliver to a downstream site.
I would be interested in ftp or scp performance at copying a large number
of small products over a long period of time.  The LDM can deliver over
50000 products per hour, comprising over 1 Gbyte/hour, to multiple
downstream sites for extended periods.  In LANs where RPC latency is
negligible, it can deliver up to 2000 products/second.

If the LDM's delivery performance for large products becomes more of an
issue, we may have to use different mechanisms for large products to
achieve something closer to optimum, but so far the number of products
has been more of a problem than their size ...

--Russ

_____________________________________________________________________
Russ Rew                                        UCAR Unidata Program
russ@xxxxxxxxxxxxxxxx                     http://www.unidata.ucar.edu