Blog from October, 2010

Installed the latest version of nidas (revision 5771M) today, with the new process running at 19:49 UTC.

The new nidas has some improvements in the serial handling efficiency. Don't see any effect on the number of "spurious interrupts" though.

Also restarted ntp daemon. Added a "server ral" entry in /etc/ntp.conf so that we can compare our local GPS time source with the ral server.
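For reference, the comparison entry in /etc/ntp.conf looks roughly like this (a sketch; the refclock address is an assumption based on driver type 20 being the standard NMEA driver, matching the "GPS_NMEA(0)" entry in the ntpq billboard below):

```
# comparison server added today; "ral" resolves on the local network
server ral

# local GPS NMEA reference clock, driver type 20, unit 0 -- assumed
# from the "GPS_NMEA(0)" line in the ntpq output
server 127.127.20.0 prefer
```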

ntpq -p shows good agreement (-8.285 millisecond offset) with the ral server:

ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
xral             208.75.88.4      3 u    7   64  377    0.368   -8.285   1.249
oGPS_NMEA(0)     .GPS.            2 l   15   16  377    0.000   -0.028   0.031
(delay, offset and jitter are in milliseconds; the leading tally character marks how ntpd is using each peer, e.g. "o" for the selected PPS/refclock peer and "x" for a peer rejected as a falseticker.)

Querying the ral ntp server with ntpq -p ral shows that it has larger offsets to its own upstream servers, probably related to the big delays over its wifi connection:

ntpq -p ral
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+64.6.144.6      128.252.19.1     2 u  300 1024  177   43.800  -32.309  12.143
*208.75.88.4     192.12.19.20     2 u  997 1024  377   56.151   16.212   0.321
+64.73.32.134    192.5.41.41      2 u  502 1024  377   42.897    7.307 110.558

Later, Oct 16, 15:24 MDT, saw smaller offsets all around:

root@manitou root# ntpq -p
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
xral             208.75.88.4      3 u   39   64  377    0.354    1.108   0.390
oGPS_NMEA(0)     .GPS.            2 l    -   16  377    0.000    0.003   0.031
root@manitou root# ntpq -p ral
     remote           refid      st t when poll reach   delay   offset  jitter
==============================================================================
+64.6.144.6      128.252.19.1     2 u  135 1024  373   41.912    0.464  74.268
*208.75.88.4     192.12.19.20     2 u  840 1024  357   56.112    2.963   1.735
+64.73.32.134    192.36.143.150   2 u  317 1024  377   39.507    1.365  64.459
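To keep an eye on these offsets over time, a small awk filter can pull the peer name and offset column out of the ntpq -p billboard. This is a sketch; the field positions are taken from the listings above, and the log path in the usage comment is hypothetical:

```shell
# Print "peer offset_ms" for each line of an "ntpq -p" billboard on stdin,
# skipping the two header lines.  If the line starts with a tally character
# (x, o, +, * etc.) rather than a space, strip it from the peer name.
parse_offsets() {
    awk 'NR > 2 {
        name = $1
        if (substr($0, 1, 1) != " ") sub(/^./, "", name)
        print name, $9
    }'
}

# e.g. log the offsets once an hour from cron:
#   ntpq -p | parse_offsets >> /var/log/ntp_offsets.log   # hypothetical path
```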

7m TRH intermittent

Data from the 7m TRH has been intermittent for more than a month.

Here's a typical dropout, where the unit quits reporting, then 15
hours later comes alive, with startup messages:

2010 10 02 14:07:54.4656       0      40 \x00\x00\r Sensor ID16    data rate: 1 (secs)\n
2010 10 03 05:28:49.2103 5.525e+04      29 \rcalibration coefficients:\r\n
2010 10 03 05:28:49.2517 0.04144      21 Ta0 = -4.042937E+1\r\n
2010 10 03 05:28:49.2827 0.03095      21 Ta1 =  1.022852E-2\r\n
2010 10 03 05:28:49.3134  0.0307      21 Ta2 = -2.096747E-8\r\n
2010 10 03 05:28:49.3445  0.0311      21 Ha0 = -1.479133E+0\r\n
2010 10 03 05:28:49.3773 0.03286      21 Ha1 =  3.554063E-2\r\n
2010 10 03 05:28:49.4057 0.02836      21 Ha2 = -1.382833E-6\r\n
2010 10 03 05:28:49.4390 0.03329      21 Ha3 =  3.354407E-2\r\n
2010 10 03 05:28:49.4693 0.03031      21 Ha4 =  3.666422E-5\r\n
2010 10 03 05:28:49.8050  0.3357      29 TRH16 4.91 94.96 4474 3057\r\n 
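The second column of this listing appears to be the time since the previous sample in seconds (5.525e+04 s matches the roughly 15 hour gap), so dropouts like this can be flagged by scanning for large values in that field. A minimal sketch, assuming that column meaning:

```shell
# Print records whose fifth whitespace-separated field (seconds since the
# previous sample, per the listing above) exceeds a threshold, default 60 s.
find_dropouts() {
    awk -v max="${1:-60}" '$5 + 0 > max'
}

# e.g.:  find_dropouts 3600 < trh_listing.txt    # gaps longer than an hour
```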

Email from Ned:
the TRH at 7m now seems to be re-appearing during the night time and then at about 8am it drops out again...

http://www.eol.ucar.edu/isf/projects/BEACHON_SRM/isfs/qcdata/plots/20101001/Tprof_20101001.png

My guess is that it is a power problem, either in the cable or from corrosion in the unit. This unit and
its cable were replaced on 7/22. The problem before was different, looking like an RS232 problem, where
I don't think we saw the bootup messages we're seeing now. https://wiki.ucar.edu/x/vBWdAw