Blog from September, 2018

Status 30 Sep

IOP ran last night.  Haven't heard the outcome.  We had relt run out of battery at 0100, so we would have lost data from the 2 dsms there.  In other news today:

  • Swapped out batteries in rel to give it good voltages.  It helps that today was mostly sunny for charging.  We expect things to stay up tonight (finally!)
  • Half implemented cable sleeving using PVC pipe.  rel and uconv are now in pretty good shape.  Will work on lconv tomorrow.
  • relt's (cabled) ethernet died again today.  Nothing was found wrong with the cable.  Swapped ethernet switch ports with rel2 and both came back.  Strange.  Will keep on monitoring.
  • Gary noticed that my naming of the new Ubiquiti access point conflicted with a dsm.  He's changed it.
  • Did a nids_merge and part of a statsproc_redo to get more complete statistics, but had lots of sample time errors.  Gary and I think this may be due to the corrupt messages from uconv2's GPS.  (BTW, I note that at least part of the messages are decipherable if bit 7 of each character is forced to 0.)  I've just changed the config on uconv2 to remove the GPS messages from the future data archive.  It still isn't clear how to handle the past...
  • Also removed legacy daily plot R code to get more plots going.  Still have one group of plots to work on...
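On the bit-7 note above: a minimal sketch of the decoding trick, assuming the raw GPS bytes are available in Python (the function name and sample bytes are hypothetical, for illustration only):

```python
def strip_bit7(raw: bytes) -> str:
    """Force bit 7 of each character to 0, recovering 7-bit ASCII
    from messages whose high bit has been corrupted."""
    return bytes(b & 0x7F for b in raw).decode("ascii", errors="replace")

# Hypothetical garbled fragment: "$GPGGA" with bit 7 set on some characters
garbled = bytes([0xA4, 0xC7, 0x50, 0xC7, 0x47, 0xC1])
print(strip_bit7(garbled))  # -> $GPGGA
```

This only recovers characters whose sole corruption is the high bit; anything else will still come out as garbage.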

Plan for tomorrow:

  • Add USB cables to monitor power on both branches of init and the other branches at rel and uconv.
  • Start on the guy-wire flagging project


Status 29 Sep

Several fix-it tasks today:

  • found a loose screw inside the solar panel junction box supplying power to lconvm.  Now Icharge at lconvt and lconvm are the same, though still pretty marginal during today's totally overcast sky.  We may yet want to swap/add batteries.
  • added Ubiquiti to lconv.a1 as access point for P4.  Thought we had prepped by telling P4 to use the new SSID (now savant4, rather than savant, for this link), but the Ubiquiti firmware wanted us to enter a confirmation command, so the change didn't take.  Later did the command at P4 and got it back on the network.  Repointed P4's Ubiquiti azimuth a bit.  Note that this link doesn't have line-of-sight, since the AP is only at 10' on lconv.a1 and P4 can only see above 7m on lconv; nevertheless, we got a link of about 67dB and 30Mb/s.  No error messages now display on the ustar console (they had been very distracting).
  • added usb cable to dustrack.6m.rel
  • reseated usb cable to dustrack.6m.lconv
  • replaced TRH housing at 6m.rel, with a very noisy fan
  • found chewed-up ethernet cable causing outages on relt, replaced with consumer grade, so this one will get eaten as well.  We are now on the search for better cable sleeving
  • various cable bundling and protection at lconv and rel

I note that the PIs have added AC power to rel, though AC would be live only during IOPs when the generator is turned on.  We definitely will NOT CLIMB when AC is live.  The connections are not weather-tight.  I just taped up the ones that I noticed, but this should be improved.

Status 28 Sep

Shift change Kate→Hendrik, so spent most of the time transferring information.  Took a break for a trip to Allerton this morning.

Outstanding task list (many mentioned a few days ago):

  • debug lconvm power shutdowns (high priority)
  • redo dustrack.6m.lconv USB connection (high priority: this will be Hendrik's first climb tomorrow)
  • flag guy anchor points and radiometers in crop
  • still some cable bundling to do
  • replace grey PVC cables that are getting chewed up
  • sleeve other PVC (USB, etc.) cables on ground
  • add ubiquiti link P4→lconv2 (med priority, but just about ready to do)
  • software
    • add R code for more webplots
    • get statsproc computing correct statistics (I think I got this running last night with a new stats_5min.xml and savant/../check_processes..., but haven't checked)
    • restart ncharts to use new statistics and new noqc_geo dataset

charge/load currents

I don't yet have a good way to view these on qctables, so I'll make my own table:

dsm     dsmID  Iload (A)  Icharge (A)  Panels  Batteries
p1      41     0.6        0.8          1       1
p2      42     0.7        0.8          1       1
p3      43     -          -            1       1
p4      44     -          -            1       1
p5      45     0.7        0.9          1       1
p6      46     0.7        0.7          1       1
relt    14     2.5        5.0          3       2
uconv1  21     -          -            2       2
lconvm  33     2.7        1.9          3       2
lconvt  34     1.6        4.9          3       2

Clearly, lconvm is only operating on one solar panel, so the short cable between panels is probably not completely connected.
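A quick arithmetic check of that conclusion, using the numbers in the table above (a back-of-the-envelope sketch; it assumes lconvt's three panels are contributing equally and that both sites saw the same sky):

```python
# Per-panel charge current inferred from lconvt (known-good, 3 panels)
lconvt_icharge = 4.9  # A
lconvm_icharge = 1.9  # A (also wired for 3 panels)

per_panel = lconvt_icharge / 3
effective_panels = lconvm_icharge / per_panel

print(f"~{per_panel:.2f} A/panel; lconvm is charging like {effective_panels:.1f} panels")
```

which comes out to roughly 1.2 effective panels, consistent with only one of lconvm's panels actually being connected.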

Status 27 Sep

This morning, a quick trip to the field:

  • reinstalled csat3a.1.5m.a1.rel (the same head and box that was removed yesterday, but now after changing AA to 50, as suggested by Larry at CSI).  Green Lights!
  • swapped in a fresh battery to lconvm/lconv2
  • connected power monitor to lconvm
  • swapped out Gill.0.2m.uconv1 and its cable that had some blue corrosion and had died the night before last


Status 26 Sep

Quick update before Francina brings her class here for a tour....

lconvm (and thus all lconv networking) died again last night, along with lconv2, at 22:00 and revived just before I arrived this morning.  Presumably, this is due to power loss, despite us adding an extra solar panel and swapping in 2 new batteries yesterday, with an entire afternoon of sun coming through scattered/broken skies.  lconv1, lconvt, and all rel seem to have stayed up.

Campbell reports that all 5 sonics that we sent to them as dead were working fine as received.  This is really annoying. So far, we have 3 dead CSAT3As here – both of our spares and one deployed (now at 1.5m.a1.rel).

Given the guy tensions measured yesterday, every tower will need to be retensioned. <heavy sigh>


Status 25 Sep

Overcast, rain off and on last night and more expected today.  No IOP tonight.

  • lconv down again, as expected, since there would have been no charging after I reoriented solar panels last evening
  • I think relm stayed up (since cockpit is still running), presumably because I swapped in the battery last night
  • csat3a.1.5m.rel still bad.  We probably should swap in the .a1 head.

Actions done today:

  • swapped csat3aw.1.5m.a1.rel with csat3aw.1.5m.rel.  a1 is now bad, as expected, but a lower priority
  • taped up the cable end of the other 3 Gill WindObservers (but upon inspection, they all seemed dry and clean, despite rain overnight)
  • added a pmon usb cable to relt (still need to add it to the config)
  • added a third solar panel to each of the lconv and rel power systems, i.e. 3 panels now go into 1 charger with 2 batteries, duplicated at each site
  • swapped a charged battery into each bank of lconv (though the old lconv batteries seemed to have charged up in the broken-to-scattered skies today).  In the process, momentarily shorted the charge controller at lconv.a1, causing one of the ground traces to burn up!  Rewired the controller to make it work again.  Drat.
  • measured all guy wire tensions.  Kate will have updated the wiki table by now.
  • found that lconv2's Pi had locked up – perhaps associated with the power work I did earlier.  Cycling power brought it back.

FYI, just after we returned to the hotel after dinner (about 9pm), a gust front with short-lived but intense rain went through.  Radar showed that it went through the field site perhaps 30min earlier, with winds up to 10 m/s.

Boom Angles

Location  Angle (deg)  Type
Init      38           M (Magnetic)
Rel1      110          M
Rel       108          M
Rel2      106          M
Uconv1    132          M
Uconv     132          M
Uconv2    132          M
Lconv1    54           M
Lconv     54           M
Lconv2    54           M

Status AM 24 Sep

I left last night before things really got going for the dry run.  From co2 data, it seems like conditions were good for the first ~2 hours, and CTEMPS was running (for the bit of it that was deployed).  April still doesn't have the Doppler lidar, but they were going to try smoke and 2 aerosol lidars.  Kate reports that the one radiosonde worked, but had a very low rise rate – they cut it off after 45min at 700mb.

  • visited P5, found battery at 10.33V, so the Pi was continuously rebooting (and chrony was not happy).  Swapped in a good battery from the base.  I <think> the solar input connector to the sunsaver wasn't quite seated properly.  In the meantime, noticed that the config still didn't have a pmon entry.  Back at the sodar trailer, tried to use ansible to reload the config, but got a huge error message suggesting that I use "--limit @/home/daq/isfs/projects/SAVANT/ISFS/ansible/update-projects.retry", which didn't work.  Logged into P5 and added pmon manually.  Now seems happy.
  • decided to check the other pressure sites, to see if low power was systematic.  P4 was at 12.44V.  P6 was at 12.59V.  So power availability should be okay, even though today is overcast with drizzle.  BTW, P4 could ping ustar, and ustar can ping P4, but I can't ssh, and noticed (from lsu) that rsync isn't working to P4.
  • csat3a.1.5m.rel died about 6pm last night, right when I was playing with 232/422 jumpers inside this DSM to get csat3.1.5m.a2 working.  However, the EC150 data are still coming in, so it isn't as simple as power or comms.  I note that the co2/h2o statsproc values also aren't being created, due to the missing sonic data.  Guess I'll be wandering down there soon, in the mosquito-filled mist.
  • all of lconv seemed to be down a bit ago, but is up now.
  • and still have to do:
    • check guy tensions
    • grab serial numbers from soybeans?
    • shoot boom angles
    • clean up several sets of cables
    • flag tower anchor points in the field
  • and April wants to figure out how to attach the CTEMPS fiber vertically between the tall and aux towers....
  • Also noticed that several serial cables are grey PVC.  I don't know why green ones weren't used.  The PVC is already being chewed on – I taped up the one bit that I noticed.


Our co2 levels have been about 700 mg/m3 during the day, but are now going above 900.  I assume that the corn is trapping air from the respiring soil/grass (the corn itself is pretty much dead, I think).


Status 23 Sep

More fixes today:

  • reseated Bulgin/Bulgin connection (missing a purple ring) for rad.20m.rel
  • changed trh.8.5m.rel to RS232 mode (rather than RS422)
  • replaced CSAT3A at 6m.init with CSAT3A from 1.5m.a2.rel
  • replaced CSAT3A at 1.5m.a2.rel with a CSAT3, requiring a new cable, new mount to be invented, new serial port in RS232 mode.  Note that the new mount puts the path about 5cm higher and 3cm further north.
  • installed a new sensor for Tsoil.grass.uconv
  • also tried to organize cables at rel.a2 and grass.a2, but am not really happy with the job I did.  I really should have started by disconnecting the ethernet cables.

I <thought> that fixed all sensor issues, but now see a dead CSAT3A at 1.5m.rel.  I can't win – the stupid CSAT3As keep on dying, and Campbell hasn't fixed the batch of 5 or 6 bad ones that I shipped to them a month ago....

I still don't know why pmons at uconv, P3 and P5 aren't working.

gps.uconv2 is a known problem that Gary has worked around.

Dry run IOP just starting now (9pm).  The PI team is pretty excited, and so are the mosquitos.

@Gary: I'm not sure I did the best change-over of the sonic.1.5m.rel2.  I changed the port (usb1 → usb5), id (1010 → 1016), and name (.a2 → .a2n), which made ck_xml happy, but now we have another set of variable names.  I <thought> there was a way that we could, with one config, process both the data up to now and the data from now on into the same variable names.

gps.uconv2

This hasn't been working since I've started checking data.  I assume that it is the old I2C issue on the Pi.

@Gary: should we replace this DSM?  If not, is it set up to use another NTP server?


Morning status 23 Sep

Heavy dew this morning.

Took most of yesterday off.  Still a few sensor issues to deal with today:

  • grass Tsoil – probably replace
  • init.6m - probably need to replace, but don't have a spare (still at least 6 back at Campbell being repaired).  Confirmed a red sonic LED in the EC100, and all aspects of the installation look fine.
  • TRH.8.5m.rel - unknown issue (climbed and replaced SHT sensor, but turned out to be RS422/232 jumpering. Now fixed.)
  • mote.20m.rel not sending data (mote console cable Bulgin to standard serial Bulgin connection at 20m doesn't have a purple ring. Retaped and got mote LEDs alive.)

Electric fence is mostly up (to protect CTEMPS fiber).

Dry run IOP is scheduled for tonight: 2200-0000.

Data flow was horrible this morning – cockpit wouldn't connect to data streams, and nagios showed something critical on almost all dsms.  Did an ansible restart_dsm on everything, which got most going (P4 still won't connect, as usual with morning dew).



Replaced corn HFT

Installed HF14 to replace the chewed HFT.  I note that the yellow shrink-wrap over the TP01 wisard board was also chewed, so taped it up.  All soil cables now have a plastic sleeve.

Took a photo of half-dollar-sized footprints in the mud, directly on top of the soil sensor installation patch, with footpads and sharp points – looks like a small fox footprint to me.



  • Pirga not using P_RANGE
  • Vbatt not using VIN_RANGE
  • Tbatt not using T_RANGE
  • need to redefine Ixx_RANGE in mA, not A
  • need different dTsoil/dt range
  • T.8.5m.rel dead
  • relt dead (oops, had done "ddn" when using minicom and hadn't brought back up)
  • need Alt range
  • co2 offscale (why?)
  • lconv - no pmon data (got working, but ansible didn't update the config, and the config needed a suffix defined to work)
  • csat.6m.init dead
  • gps.uconv2 dead
  • pmon.p3, p4, p5 dead
  • Tsoil.grass not right, but others now okay!