
20010619: NEXRAD server



>From:  Russ Dengel <address@hidden>
>Organization:  SSEC
>Keywords:  200106191811.f5JIB0110437 McIDAS-X NEXRAD server

Russ,

>        I just found an "interesting" problem in the nexrad server when
>doing an ID=LIST. Operations here at SSEC has set the number of products
>per ID to 250. When we issue the command:
>
>           IMGLIST NEXRAD/BREF1 ID=LIST
>
>We usually get a server timeout failure from the NEXRAD server. I assume
>that is because the machine is too slow to wade through 250 products
>for ~150 stations.  The timeout is 2 minutes and on a rare occasion, the
>server does complete before the 2 minutes expires.

This was a common problem before one of my rewrites of the server, but
it should be fixed now IF one files the data in a logical manner AND
then tells the server that the filing uses the stations' IDs in its
hierarchy.
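
For illustration, the kind of hierarchy I have in mind puts the station
ID in as its own directory level under each product type (the paths
below are made up; substitute whatever is appropriate on your system):

   /data/nexrad/BREF1/FTG/BREF1_20010619_1200
   /data/nexrad/BREF1/FTG/BREF1_20010619_1205
   /data/nexrad/BREF1/MKX/BREF1_20010619_1200

With a layout like this, the server can build the ID=LIST reply just by
scanning the station directories instead of opening every product file.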

I would venture to guess that the problem on your system lies in how
the products are being filed or in how access to the products has been
defined.  I say this because we save a full day's worth of data for
each station for each product on motherlode.ucar.edu.  This means that
for a full day there can be up to 288 files for each station (one
product roughly every 5 minutes), and the time to get a list of
stations back from motherlode is typically one to several seconds.

If one does not set up the filing to be in a logical hierarchy, or if
one doesn't define the dataset access to explicitly tell the server
that the station ID is part of the hierarchy and/or name, then the
server has to open each file to find out which station the file
represents.  This would fit your observation that a timeout occurs:
the server is trying to open, read to extract ID information from, and
then sort 150*250 (roughly 37,500) files within the 2 minute limit.

So, the first place to look is in how the files are being saved.  The
second is in how the dataset is defined.
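
For the dataset definition, I have something along these lines in mind
(the dataset name and paths are purely illustrative, and \ID below is
just my shorthand for the level of the hierarchy that holds the station
ID; check the DSSERVE documentation for the exact form your server
expects):

   DSSERVE ADD NEXRAD/BREF1 NEXR TYPE=IMAGE DIRFILE=/data/nexrad/BREF1/\ID/BREF1_* "NEXRAD Base Reflectivity Tilt 1

When the station ID is encoded in the file mask this way, an ID=LIST
request only has to look at directory names, which is why the listings
from motherlode come back in seconds.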

BTW, I _did_ find a bug in the listing of available stations when all
of the files and product types are put into a single directory (i.e.,
no hierarchy).  This problem results in a failure to list the available
stations, not in the slowness of listing them.  I will be updating the
CVS repository with my fix for this as soon as I get a chance to pretty
up some other code that irks me right at the moment.

Tom
--
+-----------------------------------------------------------------------------+
* Tom Yoksas                                             UCAR Unidata Program *
* (303) 497-8642 (last resort)                                  P.O. Box 3000 *
* address@hidden                                   Boulder, CO 80307 *
* Unidata WWW Service                             http://www.unidata.ucar.edu/*
+-----------------------------------------------------------------------------+