Re: [gembud] batch product generation

Hi Neil,

As Mike's counterpart in GEMPAK crime, I should also mention that you will see a 
*significant* performance increase by using gdplot_gf rather than gdplot. 

Best, 

--
Dr. Vittorio (Victor) A. Gensini
Associate Professor
Meteorology
College of DuPage
425 Fawell Blvd.
Glen Ellyn, IL 60137
Office: Berg Instructional Center 3503
ph: +1 (630) 942-3496 
http://weather.cod.edu/~vgensini

> On Jan 4, 2017, at 1:14 PM, Mike Zuranski <zuranski@xxxxxxxxxxxxxxx> wrote:
> 
> Hi Neil,
> 
> What I've found works well is to have two scripts: a product script and a 
> runner.  The product script is mostly the code you have there, minus the 
> loop.  Have it accept the forecast hour, and possibly other variables (e.g. 
> initialization time), as command-line arguments.  Then take that foreach loop 
> and put it into another script, which I call the runner.  Inside the loop, 
> execute the product script, passing the necessary variables.  You can then 
> have multiple product scripts if you want to plot different things.  
> 
> The important part is to add an ampersand '&' to the end of the command.  
> This will run the product script in the background, and move on with the rest 
> of the code and/or the next iteration of the loop.  If you think it 
> necessary, you can include a 'wait' command after the loop to pause the rest 
> of the runner until the background product scripts are finished.
> 
> Then just execute the runner, kick back with some coffee, and watch it go.
> 
> Hope this helps,
> 
> -Mike
> 
> 
> ======================
> Mike Zuranski
> Meteorology Support Analyst
> College of DuPage - Nexlab
> Weather.cod.edu
> ======================
> 
>> On Wed, Jan 4, 2017 at 12:44 PM, Smith, Neil R <n-smith2@xxxxxxxxxxxxx> 
>> wrote:
>> One can get hold of some pretty impressive horsepower these days: high CPU 
>> core counts and massive system RAM.
>> 
>> I have experimental access to one such machine and want to test batch 
>> submission of forecast maps to see just what I can get away with. Could I 
>> get 20 gdplot2 jobs for 20 GFS forecast hours running on 20 cores 
>> simultaneously?
>> 
>> I think I’m asking what’s a good way to submit gdplot2 image production in 
>> the background?
>> 
>> e.g., if I currently plot GFS 250-mb heights, winds, and isotachs by running 
>> each forecast hour successively with:
>> 
>> ————
>> #!/bin/csh
>> 
>> # restore file gfs.215.nts has appropriate GDFILE specification
>> 
>> foreach fcst ( `seq -w 000 006 120` )
>> 
>> set outfile = 250wnd_gfs_f${fcst}.gif
>> 
>>   gdplot2<<END_INPUT
>>    restore gfs.215.nts
>>    GDATTIM  = f${fcst}
>>   \$MAPFIL  = TPPOWO.GSF
>>    GLEVEL   = 250
>>    GVCORD   = pres
>>    GDPFUN   = knts(mag(wnd)) ! hght ! kntv(wnd)
>>    CINT     = ! 120 !
>>    TITLE    = 31/-3/GFS FORECAST INIT ^ ! 31/-2/${fcst}-HR FCST VALID ?~ ! 31/-1/250-HPA HEIGHTS, WINDS, ISOTACHS (KT)
>>    DEVICE   = GIF|$outfile|1880;1010
>>    FINT     = 70;90;110;130;150;170
>>    FLINE    = 0;5;10;17;13;15;30
>>    TYPE     = f ! c ! b
>>   r
>> 
>>   exit
>> END_INPUT
>> gpend
>> 
>> end
>> ————
>> 
>> how could I modify this to submit each forecast-hour job to the system 
>> simultaneously?
>> 
>> And I’m not averse to bash shell. If it’s much easier with bash, I’ll take 
>> any suggestions.
>> 
>> Neil
> 
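
A minimal sketch of the two-script setup Mike describes above, written in csh to match Neil's original script. The file names plot_250wnd.csh and run_250wnd.csh are only illustrative, not from the thread. The product script is Neil's gdplot2 block with the foreach loop removed and the forecast hour taken from the command line:

————
#!/bin/csh
# plot_250wnd.csh -- product script (illustrative name)
# Plots one forecast hour, passed as the first command-line argument,
# e.g.:  ./plot_250wnd.csh 024

if ( $#argv < 1 ) then
  echo "usage: $0 <fcst>  (e.g. 024)"
  exit 1
endif
set fcst = $argv[1]
set outfile = 250wnd_gfs_f${fcst}.gif

# restore file gfs.215.nts has appropriate GDFILE specification
gdplot2 << END_INPUT
 restore gfs.215.nts
 GDATTIM  = f${fcst}
 \$MAPFIL  = TPPOWO.GSF
 GLEVEL   = 250
 GVCORD   = pres
 GDPFUN   = knts(mag(wnd)) ! hght ! kntv(wnd)
 CINT     = ! 120 !
 TITLE    = 31/-3/GFS FORECAST INIT ^ ! 31/-2/${fcst}-HR FCST VALID ?~ ! 31/-1/250-HPA HEIGHTS, WINDS, ISOTACHS (KT)
 DEVICE   = GIF|$outfile|1880;1010
 FINT     = 70;90;110;130;150;170
 FLINE    = 0;5;10;17;13;15;30
 TYPE     = f ! c ! b
 r

 exit
END_INPUT
gpend
————

The runner then launches one product job per forecast hour in the background with '&' and, if desired, blocks with 'wait' until all of them are done:

————
#!/bin/csh
# run_250wnd.csh -- runner script (illustrative name)
# Assumes plot_250wnd.csh is in the current directory and executable.

foreach fcst ( `seq -w 000 006 120` )
  # The trailing '&' puts each product job in the background so the
  # loop moves straight on to the next forecast hour.
  ./plot_250wnd.csh $fcst &
end

# Optional: pause here until every background product job has finished.
wait
————

This starts all 21 forecast hours (f000 through f120, every 6 hours) at once. Victor's gdplot_gf suggestion would slot into the product script in place of gdplot2, assuming it accepts the same parameter input.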
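
Neil's question about keeping roughly 20 jobs on 20 cores can also be handled in the runner itself. One simple way to throttle, again in csh and using the illustrative names above, is to launch jobs in batches and wait between batches:

————
#!/bin/csh
# run_250wnd_batched.csh -- runner with a simple batch throttle (illustrative)
# Launches at most $maxjobs background product jobs before waiting for
# the current batch to finish.

set maxjobs = 20
set n = 0

foreach fcst ( `seq -w 000 006 120` )
  ./plot_250wnd.csh $fcst &
  @ n++
  if ( $n >= $maxjobs ) then
    # Pause until the whole batch completes, then start the next batch.
    wait
    set n = 0
  endif
end

# Catch any jobs still running from the final, partial batch.
wait
————

This is coarser than a true job pool, since each batch waits on its slowest member, but it keeps the number of concurrent gdplot2 processes at or below maxjobs without leaving csh.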