Re: [mcidas-x] [ldm-users] Canadian radar data on IDS|DDPLUS channel

  • To: Kevin C Baggett <kevinb@xxxxxxxxxxxxx>
  • Subject: Re: [mcidas-x] [ldm-users] Canadian radar data on IDS|DDPLUS channel
  • From: Stonie Cooper <cooper@xxxxxxxx>
  • Date: Sat, 23 Aug 2025 06:44:54 -0500
Kevin - for us, Tom did this:

DDPLUS|IDS      ^([^S]|S[^D]|SD[^C]|SDC[^N])    PIPE
        util/xcd_run DDS
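
For context: pqact matches every product against every entry in pqact.conf, so
FILEing the SDCN products to /dev/null in one entry does not stop another entry
from PIPEing them; the exclusion has to live in the pattern itself. The extended
regular expression ^([^S]|S[^D]|SD[^C]|SDC[^N]) matches any product ID that does
not begin with "SDCN", so the Canadian radar SDCN01 HDF products never reach the
action while everything else on IDS|DDPLUS still does. As a minimal sketch
(assuming the ingetext.k path and NTXT argument from the original message below),
the same pattern could be applied to the text PIPE entry:

IDS|DDPLUS      ^([^S]|S[^D]|SD[^C]|SDC[^N])    PIPE
        /home/oper/mcidas/bin/ingetext.k NTXT

The pattern can also be sanity-checked against the live feed before editing
pqact.conf, e.g. with

notifyme -v -h <upstream host> -f 'IDS|DDPLUS' -p '^([^S]|S[^D]|SD[^C]|SDC[^N])' -o 3600

where <upstream host> is a placeholder for your upstream LDM.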
-
Stonie Cooper, PhD
Software Engineer III
NSF Unidata
cooper@xxxxxxxx


On Fri, Aug 22, 2025 at 10:08 PM Kevin C Baggett <kevinb@xxxxxxxxxxxxx>
wrote:

> Hi,
> I was wondering if any other users were having trouble with the recent
> channel changes.
> We discovered that there are now fairly large .hdf files coming across the
> IDS|DDPLUS feed - Canadian radar files.
> These files have been filling up our text data files and overwriting data
> when the maximum size is reached (unfortunately an old -XCD build with a
> 32-bit, 4.2 GB addressable binary; we have a new version that does not have
> this limit).
> Ideally, upstream would never have added these files to IDS|DDPLUS.
> But we are trying to filter out these SDCN01 WMO header messages with
> pqact.conf.
> After finagling a bit on my own, the ChatGPT-suggested (bless its heart)
> pqact.conf entry is:
>
> IDS|DDPLUS      "^SD.*" FILE    -overwrite      -close  /dev/null
>
> IDS|DDPLUS      ^.*     PIPE    /home/oper/mcidas/bin/ingetext.k
> NTXT
>
>
> But the SD products still come through the PIPE entry, as the FILE action
> doesn't remove them from the product queue.
>
> There are some other things we have tried with awk, but the SDCN01
> readings are still there.
>
>
>
> Any suggestions?
>
> Thanks!
>
> Kevin Baggett
>
> UW-SSEC
> _______________________________________________
> ldm-users mailing list
> ldm-users@xxxxxxxxxxxxxxxx
> To subscribe: ldm-users-join@xxxxxxxxxxxxxxxx
> To unsubscribe: ldm-users-leave@xxxxxxxxxxxxxxxx
>