Blog

Lidar networking 6/30

Went to Marshall today to get the lidar on the instrument network in iss1 via a ubiquiti link.

At the lidar, I installed a waterproof box with built-in plugs that Bill found. When I plugged the lidar into the outlet in the box, the fans in the lidar didn't turn back on, and after a few minutes I smelled burning and saw some smoke, so I don't think we should be using the outlets in that box. Instead, I added a power strip I found in MISS to the box and plugged the lidar and the ubiquiti POE injector into that. I didn't find any good spots on the neck of the trailer to mount the ubiquiti (the hose clamps they come with for mounting are very small), so I mounted it on a separate pole near the neck of the trailer.

At iss1, I mounted the ubiquiti directly below the existing ubiquiti that connects to the Marshall network. I was worried about interference between the two nanobeams based on some info I had found on the internet, but it doesn't seem to be a problem. The lidar link is configured as point-to-point and I set a different SSID, so the lidar link shouldn't be able to connect to the Marshall network.


With the link working I still couldn't connect to the lidar, so I went back to troubleshoot. At one point while closing up the waterproof box I had bumped the reset button on the power strip, and I think that tripped something on the computer in the lidar, because the lidar didn't come up again after that. The fans were working and the external power light was on, but the head wasn't moving and I couldn't ping or browse to the lidar. With some help from Bill (thanks Bill!) we found that pushing the reset and power buttons on the computer box in the lidar brought it back up. After that I verified that I could ping the lidar from iss1, as well as connect from the lidar laptop (now set up on the instrument network in iss1) and get live data.

As a side note, the UPS in the lidar doesn't seem to be working. Presumably the battery is dead.

Though now, as I write this blog post, I see that the lidar and the ubiquiti at the lidar are no longer reachable. I suspect the power strip tripped again, so I'm heading out to try replacing it with another one.
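For keeping an eye on this, here is a minimal sketch of a reachability check that could be run (or cron'd) on iss1 to tell whether the lidar, the lidar-side ubiquiti, or the whole link is down. The IP addresses below are placeholders, not the actual instrument-network addresses.

```python
#!/usr/bin/env python3
"""Quick reachability check for the lidar link, run from iss1.

The addresses are hypothetical placeholders -- substitute the real
instrument-network IPs of the lidar and the two Ubiquiti radios.
"""
import subprocess

HOSTS = {
    "lidar":          "192.168.1.50",  # placeholder
    "ubiquiti-lidar": "192.168.1.51",  # placeholder
    "ubiquiti-iss1":  "192.168.1.52",  # placeholder
}

def is_up(addr: str) -> bool:
    """Return True if a single ICMP ping to addr succeeds within 2 seconds."""
    result = subprocess.run(
        ["ping", "-c", "1", "-W", "2", addr],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for name, addr in HOSTS.items():
        print(f"{name:15s} {addr:15s} {'up' if is_up(addr) else 'DOWN'}")
```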



ISFS Data Check

Reviewed the measurements on NCharts since Friday 25 June. 

  • TRH - ok
  • P - ok
  • Radiometer/Wetness - It has been raining every day since Friday, 25 June. I’m surprised the wetness sensor remains flatlined. Shouldn’t there be spikes or a saturated level recorded? Was a disdrometer installed? 
  • Soils
    • Gsoil - ok, range -40 to 40 W/m^2 
    • Lamdasoil - ok, range 0.25 - 0.325 W/(m K)
    • Qsoil - now positive, but values are very low (< 1% vol) given all the precipitation we’ve been getting. Compare with RELAMPAGO and CHEESEHEAD, where values were > 14% vol
    • Tau63 - ok, range 6-10 s 
    • Tsoil - Note very high values (> 45 degC on 25 June) that relax to more representative values (25-28 degC max)
    • Vheat - ok 
  • Sonic - ok - w range +/- 0.5 m/s, u & v range -10 to +5 m/s
  • co2/h2o
    • Crazy co2.27m values >> 1500 g/m3
    • h2o - ok-ish; h2o.27m also shows high-ish values (> 40 g/m3)


  • iload - range on the high end, 1820 - 1950 mA
  • icharge range -190 to -80 mA (shouldn’t this be positive and in the thousands??)
  • Vbatt - ok
  • l3mote - without the spikes range is 50-62 mA
  • lmote - with the spikes range is 15-18 mA
  • Lamda - no values on Ncharts
  • Pirga - ok
  • Tirga - ok
  • Rfan - ok
  • Vcharge - ~27.93 V
Quick data scan

Just some data notes:

  • About 3 June, co2.27m became very large (and h2o.27m became negative) and spiky.  On 23 June, the levels of both magically became much more normal.  We didn't do anything, and I don't think it even rained on 23 June.  Could be related to birds liking to perch on this top level.
  • Differences in co2 levels still range over mg/m^3, which is horrible.  (P.S. units are wrong – just fixed the config)
  • Soils are happier now that they are back in the dirt, though Qsoil is still negative
  • Pirgas have biases of a few mb – not unexpected.  They show the tower raising yesterday (just before losing the data connection).
ISS networking setup 6/24

I got both ISS sites at Marshall on the net today.

For the existing ISFS ubiquiti setup, I had to pivot the pedestal ubiquiti a bit so it was pointed more toward the ISS sites. That doesn't seem to have any effect on the link to the ISFS tower. I also changed the link type of the two existing ubiquitis from 'point to point' to 'point to multipoint', so I could add new links to the same network.

At MISS I mounted the ubiquiti on the rail on the roof of the trailer:

I asked John about interference with the sounding antenna and he thinks they're far enough apart it should be fine. I guess we'll know better once we turn the sounding system on.

At ISS1 I happened to have a pole that fit into one of the railing mounts on the trailer platform (the railings aren't currently set up). Feel free to improve upon my mounting techniques...

 

Both ubiquitis are aimed at the ubiquiti on the mar-m05ped pedestal, between the RAL trailers (behind MISS) and some of the bigger snow gauges:

The distance is short and line-of-sight is good, so there's no need for them to be as high up as they are. Anything above 4 or 5 feet and with line of sight should work.

Both data managers are directly on the Marshall network on their WAN interfaces; there's no router anymore. Details on how to access them are on the networking page. It just occurred to me that I didn't remove or turn off the cell modem at MISS, but hopefully that won't matter if we're not sending any data over the cell link.

While I was still at Marshall, Steve told me tt was no longer on the net, so I went to investigate. Lights were on and fans were running, but I couldn't log in over ethernet, and I didn't have the console cable to try that, so I rebooted. It seems to have come back up OK (except that now it's down again with the same PIO problem we had last week). I suspect the cause may have been that I unplugged and replugged the power side of the POE injector in the dsm, to power cycle the ubiquiti so I could get to its wifi interface; it seems like unplugging/replugging other cables sometimes causes similar dsm problems. I haven't had a chance to check the logs about it, but I will when it's back up again.


Site visit today

Many tasks:

  • Met with Electrician Tom and showed him the plug we need for CLAMPS.  He should be able to get it installed before CLAMPS arrives on 12 July.
  • Fully extended TT around 11:00 and left it (guyed) that way.  Noted that the tower near the balloon inflation shed is the same height, so not worried about aircraft
  • Reinstalled soil sensors about 11:30 – also training for Chris
  • Removed the previous extension cords that initially were providing power to TT
  • Matt did Leica scans of all the sensors, using a GPS-based coordinate system!!!
  • Isabel added ISS to the Ubiquiti network, which broke the ISFS link temporarily.  She'll probably be adding a separate blog post about this.

TODOS:

  • shoot boom angle the old way with a compass, just as a check on the Leica data
  • work up the Leica data
  • recheck soil data coming in
  • at some point, take some soil cores?
  • keep on thinking about whether to lower TT
  • place a warning label on the power supply in the job box

John S and Liz were also on site working to get the 449 running.  They thought they'd have it running before they left today.

P.S. Since tt wasn't up, Isabel rebooted, but data weren't coming in.  When typing "pio -v" to see the setting, pio killed power to the Ubiquiti, so it is off the net again.  I'll return to the site at the end of the day to restore the pio settings - we really need to fix this command...

Site visit yesterday

I had found a few small mistakes in the LOTOS configuration and asked Isabel to fix them.  While this was happening, some sensors were turned off and, in the process of powering them back up, we lost the internet connection to the site.  Thus, I made a last-minute site visit...

  • Indeed, pio reported that power was off for 28V (the Ubiquiti), bank1, bank2 (the FTDI board and sensor front-panel), and aux.  I logged in and used pio to enable all of these (except aux).  The DSM came back online.
  • Also noticed that Facilities had installed a power drop to our trailer.  Switched the power to our power supply to this new outlet.
  • Checked on why soil data were all strange, including Tsoils that were way offscale.  Found that the maintenance staff had disinterred all soil sensors (disconnecting the Tsoil probe from its mote in the process) during their weed control operations 2 weeks ago.  Unfortunate, since the soil is now quite dry.  Connected the Tsoil probe, but didn't rebury since I didn't have the soil tools.
  • Also looked for a power drop for CLAMPS by the CP3 pad.  There isn't a 14-50 outlet available; indeed I saw no free 220V outlets.  We'll need Facilities to install one.

FYI - noticing that the GPS and CHRONY variables are missing in the DSM dashboard and NCharts. I've been told the data are being collected but not monitored by NIDAS (?)

LOTOS data is now on Ncharts on datavis: http://datavis.eol.ucar.edu/ncharts/projects/LOTOS/noqc_instrument

datavis is accessible from everywhere, while datavis-dev is internal only.

Data look okay

Didn't do a complete look at everything, but did look at perhaps half of the variables, both with ncharts and dashboard today.  Everything looked fine, though a few notes:

  • The sonics didn't appreciate the weekend's rain, with several level shifts
  • Even so, the sonic tc values have some biases with respect to their colocated TRHs.  Made me wonder if we could characterize these biases by serial number, not just array position, to track them through different projects (see the sketch after this list)...
  • The batteries look like they <finally> charged up about 11AM today
  • The EC150 barometers (Pirga) have biases of a few mb with respect to each other and with respect to the (one) nanobarometer
  • The EC150 thermometers (Tirga) have a radiation error of up to 2 degrees.  The error only tracks Rsw.in less than half the time, but likely also depends on wind speed (that I didn't look at).
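
A rough sketch of what that per-serial bookkeeping could look like (the file name, column names, and serial-number mapping below are all made up for illustration): compute a mean tc minus TRH T difference per level, attach the sonic serial number, and carry the resulting table between projects.

```python
import pandas as pd

# Hypothetical averaged time series with one column per variable, e.g. "tc.7m" and "T.7m".
data = pd.read_csv("lotos_5min_means.csv", parse_dates=["time"], index_col="time")

# Hypothetical mapping from array position (height) to sonic serial number.
serial_by_height = {"2m": "SN1234", "7m": "SN1240", "17m": "SN1251", "27m": "SN1263"}

rows = []
for height, serial in serial_by_height.items():
    # Mean sonic tc minus TRH T bias at this level over the whole record.
    bias = (data[f"tc.{height}"] - data[f"T.{height}"]).mean()
    rows.append({"serial": serial, "height": height, "tc_minus_T_degC": round(bias, 3)})

bias_table = pd.DataFrame(rows).set_index("serial")
print(bias_table)
```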

None of this is especially surprising...

ISFS setup 5/28

Steve and I paid a last visit to Marshall today.

Steve replaced the power supply we were using and changed some settings on the Victron, which restored power to the DSM. The new power supply is in the corner of the job box to protect it since some parts aren't shrink wrapped.

I updated the static IP configuration on the dsm and the tower ubiquiti once they were back on. The dsm can now connect to the outside world, and the dsm and both ubiquitis are accessible from the ucar network (see the post about networking for more details). I still need to check whether these are accessible from everywhere or just on the ucar network.

I am working on getting dsm dashboard running, which should eventually be accessible at tt.ucar.edu. For now ncharts is running on datavis-dev.

ISFS setup 5/27

Went out to Marshall this morning to finish setup (we thought). Took care of all the things we had noted still had to be done and tried to set up networking. See Steve's comment on the last blog post for more details. Here's a pic of our ubiquiti and POE injector waterproofing setup at the pedestal:

Back at FL I noticed that when I'm not on the same subnet of the UCAR network I can't get to the dsm or ubiquitis, because I had guessed wrong on the gateway IP and default DNS settings.

Went out again this afternoon to finish setup (we thought). Upon getting there we discovered there was no power to the dsm; the most likely culprit is the power supply we're using. We had noticed the battery voltage was just 11.4 V when we were out there this morning, so the batteries must be all the way dead. We didn't have any job box keys (oops) and we were running short on time, so we didn't do much troubleshooting. The pedestal ubiquiti was still powered, so I tried setting the gateway IP and DNS to the gateway I got from my laptop's DHCP connection to UCAR Internal at the site (still haven't heard anything from NETS about what I should be using). Once that was done the pedestal ubiquiti could access the net, and back at foothills I can get to it over ssh and even browse to the web interface!
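
For next time, one quick way to verify the gateway and DNS guesses from the dsm itself is a check along these lines (the hostname and port are just examples): if name resolution fails, the DNS setting is wrong; if the outbound connection fails, the default gateway is the likely problem.

```python
#!/usr/bin/env python3
"""Sanity check of DNS and the default route after setting a static IP."""
import socket

def check_dns(name="eol.ucar.edu"):
    """Try to resolve a known hostname; return its address or None."""
    try:
        addr = socket.gethostbyname(name)
        print(f"DNS ok: {name} -> {addr}")
        return addr
    except socket.gaierror as err:
        print(f"DNS failed for {name}: {err} (check nameserver setting)")
        return None

def check_route(addr, port=443, timeout=5):
    """Try an outbound TCP connection to confirm the default gateway works."""
    try:
        with socket.create_connection((addr, port), timeout=timeout):
            print(f"Outbound connection to {addr}:{port} ok (gateway looks right)")
    except OSError as err:
        print(f"Could not reach {addr}:{port}: {err} (check gateway setting)")

if __name__ == "__main__":
    addr = check_dns()
    if addr:
        check_route(addr)
```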

We're heading back to Marshall tomorrow morning to figure out a new power supply situation and make the IP configuration changes to the dsm and tower ubiquiti once they're back on...


ISFS setup 5/25

Steve and I took a short trip to Marshall today, primarily to meet with NETS about networking for the ISFS tower. We worked out that they will activate a port for us in the pedestal farther out than the one we were looking at, and will leave a cable for us to set up the ubiquiti with on Thursday. We also discussed networking for ISS some, and decided the solution that will probably work best for the MISS trailer and iss5 is to set up ubiquiti links to the pedestal near the HAO trailer. Once we're ready to do that we'll let them know and they can turn on a port.

After that we took a quick stop at the tower. We brought out a good battery for the generator, but haven't run it yet since we still haven't gotten gas. Steve added waterproofing to the connections between extension cords and I got the MAC address of the dsm so we can get it a static IP. We tried a new USB cable to the power monitor, which now shows up as a USB device on the dsm and is giving data. However, now we're having problems with the serial board blowing fuses, so I couldn't check the rest of the data. We will need to bring fuses on Thursday so we can do more troubleshooting. We mounted the ubiquiti at the tower, pointing toward the pedestal, but haven't tested that yet (waiting on other ubiquiti and static IPs). 

To do on Thursday:

  • Gas for generator
  • Mount ubiquiti at pedestal, mount waterproofing box for POE injector
  • Set static IPs for ubiquitis and DSM (assuming I have heard back from CISL)
  • Check ubiquiti connection (looks like line-of-sight won't be a problem)
  • Lower tower (once we have gas in generator) and:
    • Replace tape w/ bulgin caps on EC100 boxes
    • Remove tennis ball
  • DSM serial board troubleshooting. Bring some fuses and maybe try unplugging sensors one by one to see when it fails.
ISFS setup 5/21

Steve and I went back to Marshall today to address some of the problems we found yesterday and fully raise the tower. 

We started by pivoting the tower back down. The battery on the generator still wasn't charged enough to start it, so we used one of the batteries from the job box. Steve brought the generator battery back to the lab to charge it. When we went to undo the bolts holding the tower vertical, the tower kept tilting as we loosened the bolts, and when we removed the bolts entirely the tower settled a couple inches out past where the bolts had been holding it. Pretty unsettling:

Steve was able to pivot it back more upright, which meant the limit switch was no longer engaged, and then we pivoted it down.

We switched the barometer cabling so it's measuring at 7m, added a quad pressure port, and covered the pressure ports and bulgin ports for the barometers on the other EC100 boxes. I swapped out a 15m bulgin for a 5m bulgin for the EC100 at 27m, and we rerouted the sonic and barometer cabling to run across and down through the rings instead of down the leg of the tower. We also flipped the plate the lowest EC100 is on, since the hook on the back looked like it was touching some of the cables in the tower. Currently that plate is just being held on with zip ties:

With that done we pivoted back to vertical and raised the tower all the way, successfully this time. Steve adjusted and tensioned the guy wires and I tidied up the bulgins now that the tower's fully extended. Everything save the power monitor was giving good data. 

I took a look at some networking options since we were still waiting to hear back from NETS. The pedestal by the HAO trailer has 4 open ethernet ports, but none of them seem to be active. The pedestal near the snow gauges (that Steve looked at earlier) had 4 or 5 open ports, and when I connected my laptop to one of them I could get an IP address using DHCP, so the port must be active.

We also plugged in the USB cable from the power monitor, but couldn't get it to show up as a USB device on the DSM, though we tried a bunch of different ports. I can use the Victron app to connect over bluetooth, so hopefully it's just something wrong with the USB cable. 
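
A simple way to check for the power monitor on the DSM, assuming the Victron USB cable presents itself as a USB-serial device (that assumption and the device-node patterns below are mine, not verified against this particular cable): list the candidate device nodes before and after plugging it in and compare.

```python
#!/usr/bin/env python3
"""List candidate USB-serial device nodes on the DSM."""
import glob

# Typical Linux device nodes for USB-serial adapters; run before and after
# plugging in the power monitor cable and compare the output.
candidates = sorted(glob.glob("/dev/ttyUSB*") + glob.glob("/dev/ttyACM*"))
if candidates:
    for dev in candidates:
        print(dev)
else:
    print("No /dev/ttyUSB* or /dev/ttyACM* devices found -- cable or adapter not detected")
```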

Steve flagged the power cable as well.

We lowered the tower before we left, but left it vertical. We loosened each of the guy wire turnbuckles by 3 turns before we lowered the tower.  They probably would be fine left this way, so we installed anti-twist cable ties on the turnbuckles.


Misc notes/to do:

  • Job box is now locked, bring keys if you want to get into it.
  • Generator is almost completely out of gas! It'll need to be refilled before we can run it again.
  • Bring the (charged) generator battery back to the trailer and reinstall it.
  • Remember to take the tennis ball off the lightning protection next time we pivot the tower down (oops!)
  • While removing the tennis ball, cover the 2 unused nanobarometer Bulgin connectors with caps (we didn't have any – just used tape)
  • Try a new USB cable for the power monitor
  • Note that the guy turnbuckles fall off of the large D rings, and may bind when next raised.  CHECK THEM when near the top of raising.  Probably should secure them in place with cable ties.
  • Networking!


ISFS setup 5/20

Yesterday Steve, Dan, and Chris sited and leveled the tower. Today we added the generator, instrumented the tower and radiation stand, pivoted it to vertical, and did a partial test of raising the tower. When we had raised the tower partway we realized that some of the zip ties we used to run cables down the tower got in the way of raising it completely:

(A little dark, but the cable ties around the leg of the tower get snagged by the next section when raising.) We left the tower partially extended temporarily and powered up the DSM to check sensors. Everything was giving good data except the power monitor, which we haven't set up yet. After that test we retracted the tower and left it vertical until our next visit. The DSM is still running and saving data locally, but is not on the network.


Still to do/notes:

  • redo cabling from 7m EC100, send it through the rings instead of down the leg of the tower.
  • The config says the barometer is at 7m but we installed it at 27m, so either change the config or switch which EC100 box the barometer cable goes to.
  • Need to add a pressure port to whichever EC100 we decide to use, and cover the pressure ports on the other EC100s.
  • I pinched the cable to the 17m TRH (I think) while raising it. Still seems to work fine, but I marked the point that got pinched on the cable with white tape.
  • If we remember next time, we could replace the 15m bulgins with 5m bulgins for the cables to the top level, since they already have 25m bulgins on them.
  • DSM dashboard only shows a blank screen. 








The MISS (Mobile Integrated Sounding System) was deployed at Marshall on May 13.  


It is positioned approximately 30 meters south of the sounding building and is oriented at about 285 degrees.  Power is being supplied from the sounding building, but the network is not yet connected.  The 915 MHz wind profiler was started around 19:30 UTC (May 13) and is working well, currently recording winds up to about the 4 km level, although that will vary considerably depending on conditions.  A WXT weather sensor, currently positioned about 6 meters south of the trailer, is also collecting data.  The system includes a GAUS sounding rack capable of tracking RS92 radiosondes, although that is not yet running.