Well Howdy,

This sounds like a queue that is not holding an hour's worth of data and is thus accepting duplicated / triplicated product transmissions. My questions:

1) Is your LDM queue holding an hour's worth of data (check with `pqmon`)? Do you know if your upstream sources are holding an hour's worth of data? Is your LDM logging anything exciting about switching upstreams, etc.?

2) Do the write timestamps on the files make sense, such that the write time coincides with the end of the VCP transmission?

3) Do your file byte sizes match those of the files at AWS / my realtime site?
   https://mesonet-nexrad.agron.iastate.edu/level2/raw/KDVN/

Do you have a recent example (today / yesterday) of a 2x / 3x file? Can you share that file for others to interrogate!

daryl

--
/**
 * daryl herzmann
 * Systems Analyst III -- Iowa Environmental Mesonet
 * https://mesonet.agron.iastate.edu
 */

________________________________________
From: ldm-users <ldm-users-bounces@xxxxxxxxxxxxxxxx> on behalf of Victor Gensini <vgensini@xxxxxxx>
Sent: Tuesday, September 29, 2020 11:30 AM
To: ldm-users@xxxxxxxxxxxxxxxx
Subject: [ldm-users] level2 via LDM

Tom/Steve and LDM junkies,

Have there been any recent changes to the format of LEVEL2 radar files coming over LDM that I perhaps missed? A couple of weeks ago, a couple of my scripts that use gpnexr2_gf started to break with:

Program received signal SIGSEGV: Segmentation fault - invalid memory reference.

Backtrace for this error:
#0  0x7FD1DF3C1697
#1  0x7FD1DF3C1CDE
#2  0x7FD1DE8BC3AF
#3  0x7FD1DE9F0B54
#4  0x4BCFCB
#5  0x4B8E59
#6  0x4B93C7
#7  0x42A510
#8  0x4288FE
#9  0x40F5A7
#10 0x40903D
#11 0x40554A
#12 0x405588
#13 0x7FD1DE8A8504
#14 0x4046D7
#15 0xFFFFFFFFFFFFFFFF
Segmentation fault (core dumped)

A quick peek at the file sizes for these stations (relative to others) indicates nearly a doubling (in some cases a tripling) of the file size versus other stations that are still working.
As an example, compare your file sizes today for KDVN and KILX, both of which are in VCP 35. I'm using the standard hhmmssRadarII perl script to create the volumes off of LDM. Again, no issues for some sites, and then this memory error is popping up for others. I sense a change in file contents or format... Any/all ideas welcome.

Victor

-----------------------------------------------
Vittorio (Victor) A. Gensini, Ph.D., CCM
Associate Professor
Department of Geographic and Atmospheric Sciences
Northern Illinois University
DeKalb, IL 60115
https://atlas.niu.edu
https://wcs.niu.edu
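[Editor's note: the 2x / 3x volumes daryl asks about can be screened for automatically before handing files to gpnexr2_gf. Below is a minimal sketch of one way to do that; the station IDs, byte counts, and the `flag_oversized` helper are illustrative, not taken from the thread. It compares each station's latest volume size to the median across stations and flags large outliers, a rough signature of duplicated / triplicated product transmissions.]

```python
from statistics import median

def flag_oversized(sizes, factor=1.8):
    """Return station IDs whose latest volume file is at least
    `factor` times the median size across all stations -- a rough
    signature of duplicated/triplicated product transmissions.

    sizes: dict mapping station ID -> latest volume size in bytes.
    """
    med = median(sizes.values())
    return sorted(stid for stid, nbytes in sizes.items()
                  if nbytes >= factor * med)

# Hypothetical byte counts for a handful of volumes; KDVN is
# roughly 2x the median of its peers, as described in the thread.
sizes = {"KDVN": 9_800_000, "KILX": 4_900_000,
         "KLOT": 5_100_000, "KARX": 4_700_000}
print(flag_oversized(sizes))  # prints ['KDVN']
```

Flagged files could then be set aside and compared byte-for-byte against the copies on AWS or the IEM realtime site, per daryl's question 3.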