Blog from March, 2020

Status - Mar 20

s1: not reporting.  Last message (18 Mar) missing P, Vbatt was okay.

s3: all working

s4: all working

s8: ec150 not installed

s10: all working

s14: P, TRH, mote all down (mote has a lot of 0x00 characters before message)

s15: Qsoil needed power cycle

s17: not reporting, suspect DSM USB issue.  Last message (13 Mar) had a bad TRH fan, questionable RH, missing Ott, and missing TP01 (might just have been timing, since the prior message was okay); Vbatt was okay


Wind directions

So... we want to offer a dataset to the PIs in geo coordinates.  Speaking with Kurt, I learned that he is confident that the tripods at each site were oriented with a compass so that the CSAT points out from the mast at an azimuth of 315 deg (NW), to within about 2 degrees.  I have thus entered Vazimuth = 315 - 180 - 90 = 45 into the cal files for s1, s3, s4, s8, s10, s14, s15, and s17.
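
For the record, here is that arithmetic as a tiny sketch; the helper name and the wrap to 0-360 are just illustration, not the actual cal-file machinery:

# Vazimuth from the CSAT boom azimuth, following the 315 - 180 - 90 arithmetic above
def vazimuth_from_boom(boom_azimuth_deg: float) -> float:
    return (boom_azimuth_deg - 180.0 - 90.0) % 360.0

print(vazimuth_from_boom(315.0))  # 45.0, the value entered for all of the sites above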

Dan told me that the orientation of the Gill 2D could be any multiple of 90 degrees from the CSAT orientation.  By making a scatterplot of each site's CSAT vs. Gill wind directions, I verified this and also entered the appropriate multiple + 45 into the cal files.  Running statsproc with noqc_geo now produces dir=Dir, so I think we're close enough for an unsupported project.
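
For anyone repeating that Gill check, here is one programmatic variant of it, shown only as a hypothetical sketch with placeholder inputs rather than the real processing code: for each candidate multiple of 90 degrees, wrap the Gill-minus-CSAT direction difference and keep the offset with the smallest spread.

import numpy as np

def best_gill_offset(dir_csat_deg, dir_gill_deg):
    """Return the 0/90/180/270 deg offset that best aligns Gill with CSAT directions."""
    best, best_spread = 0, np.inf
    for offset in (0, 90, 180, 270):
        # signed angular difference wrapped to (-180, 180]
        diff = (np.asarray(dir_gill_deg) + offset - np.asarray(dir_csat_deg) + 180.0) % 360.0 - 180.0
        spread = np.nanstd(diff)
        if spread < best_spread:
            best, best_spread = offset, spread
    return best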

IF the teardown crew has nothing better to do, it would be nice to actually measure these angles...

Status 19 Mar

I guess we can't leave this blog up in perpetuity without some explanation of what has happened in the last week!

Due to the world-wide Covid-19 coronavirus pandemic, all staff were recalled from the field.  On 3/12, s13, which had been partially assembled but had never transmitted data, was removed, and the field crew started securing the base and Pod.  On 3/13, Dan left and Kurt and Clayton serviced TRHs at s8 and s10.  On 3/14, Kurt and Clayton left the site as well.

This left s1, s3, s4, s8, s10, s14, s15, and s17 installed.  The EC150 was never installed at s8.  The barometer at s1 seems to be flaky.  s17 connects very intermittently, presumably due to a USB issue in the DSM that is causing it to reboot frequently – the last data came through 13 Mar.

We will continue to let these run, perhaps with a bit of servicing by UCSB, until we are next cleared for travel.  At that point, we will send out a tear-down crew to pull everything and wait for SWEX2021...

s17 temporarily up

Logged in to see what's up.  Steve fixed the udev rules, so the pwrmon is now reporting.  I noticed in the DSM logs that statsproc from RELAMPAGO was still trying to run, so I disabled it and turned off the service.  Steve rsynced the data files to barolo.

Looked at logs to see if I could figure out why it's been off the net so much. Looks like it's rebooting frequently due to USB problems:

Mar  8 15:15:10 s17 kernel: [   42.553988] usb 1-1.5-port1: cannot disable (err = -71)
Mar  8 15:15:10 s17 kernel: [   42.555830] usb 1-1.5: Failed to suspend device, error -71
Mar  8 15:15:10 s17 kernel: [   42.562454] usb 1-1.5: USB disconnect, device number 118
^@^@^@^@^@^@^@^@ [... long run of NUL (0x00) bytes where the log was cut off by the reboot ...]
Mar  8 01:17:06 s17 kernel: [    0.000000] Linux version 4.9.35-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611) ) #1014 SMP Fri Jun 30 14:47:43 BST 2017
Mar  8 01:17:06 s17 kernel: [    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c5387d
Mar  8 01:17:06 s17 kernel: [    0.000000] CPU: div instructions available: patching division code
Mar  8 01:17:06 s17 kernel: [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
Mar  8 01:17:06 s17 kernel: [    0.000000] OF: fdt:Machine model: Raspberry Pi 2 Model B Rev 1.1
Mar  8 01:17:06 s17 kernel: [    0.000000] cma: Reserved 8 MiB at 0x3a800000
Mar  8 01:17:06 s17 kernel: [    0.000000] Memory policy: Data cache writealloc
Mar  8 01:17:06 s17 kernel: [    0.000000] On node 0 totalpages: 241664
Mar  9 01:17:14 s17 kernel: [   16.952991] usb 1-1.5: Product: USB 2.0 Hub
Mar  9 01:17:14 s17 kernel: [   16.954917] hub 1-1.5:1.0: USB hub found
Mar  9 01:17:14 s17 kernel: [   16.955420] hub 1-1.5:1.0: 4 ports detected
Mar  9 01:17:14 s17 kernel: [   17.171205] hub 1-1.5:1.0: hub_ext_port_status failed (err = -71)
Mar  9 01:17:14 s17 kernel: [   17.172380] usb 1-1.5: Failed to suspend device, error -71
Mar  9 01:17:14 s17 kernel: [   17.226561] usb 1-1.5: USB disconnect, device number 28
Mar  9 01:17:14 s17 kernel: [   17.520553] usb 1-1.5: new full-speed USB device number 29 using dwc_otg
^@^@^@^@^@^@^@^@ [... another run of NUL bytes from the abrupt reboot ...]
Mar  9 01:17:06 s17 kernel: [    0.000000] Linux version 4.9.35-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611) ) #1014 SMP Fri Jun 30 14:47:43 BST 2017
Mar  9 01:17:06 s17 kernel: [    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c5387d
Mar  9 01:17:06 s17 kernel: [    0.000000] CPU: div instructions available: patching division code
Mar  9 01:17:06 s17 kernel: [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
Mar  9 01:17:06 s17 kernel: [    0.000000] OF: fdt:Machine model: Raspberry Pi 2 Model B Rev 1.1
Mar  9 01:17:06 s17 kernel: [    0.000000] cma: Reserved 8 MiB at 0x3a800000
Mar  9 01:17:06 s17 kernel: [    0.000000] Memory policy: Data cache writealloc
Mar  8 15:19:10 s17 kernel: [   42.911045] 
Mar  8 15:19:10 s17 kernel: [   42.911078] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 3, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000021
Mar  8 15:19:10 s17 kernel: [   42.911078] 
Mar  8 15:19:10 s17 kernel: [   42.911112] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 4, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000021
Mar  8 15:19:10 s17 kernel: [   42.911112] 
Mar  8 15:19:10 s17 kernel: [   42.911181] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 7, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000021
Mar  8 15:19:10 s17 kernel: [   42.911181] 
Mar  8 15:19:10 s17 kernel: [   42.911214] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 5, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000021
Mar  8 15:19:10 s17 kernel: [   42.911214] 
Mar  8 15:19:10 s17 kernel: [   42.911247] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 2, DMA Mode -- ChHltd set, but reason for h[... log cut off mid-message by the reboot ...]
Mar  8 01:17:06 s17 kernel: [    0.000000] Booting Linux on physical CPU 0xf00
Mar  8 01:17:06 s17 kernel: [    0.000000] Linux version 4.9.35-v7+ (dc4@dc4-XPS13-9333) (gcc version 4.9.3 (crosstool-NG crosstool-ng-1.22.0-88-g8460611) ) #1014 SMP Fri Jun 30 14:47:43 BST 2017
Mar  8 01:17:06 s17 kernel: [    0.000000] CPU: ARMv7 Processor [410fc075] revision 5 (ARMv7), cr=10c5387d
Mar  8 01:17:06 s17 kernel: [    0.000000] CPU: div instructions available: patching division code
Mar  8 01:17:06 s17 kernel: [    0.000000] CPU: PIPT / VIPT nonaliasing data cache, VIPT aliasing instruction cache
Mar  8 01:17:06 s17 kernel: [    0.000000] OF: fdt:Machine model: Raspberry Pi 2 Model B Rev 1.1

Lots of these reboots in the logs.  Interestingly, when the system reboots it seems to always come up with a time right around 01:17:05 of the current day, even if that means jumping back in time by minutes or hours.
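
A quick way to tally these reboots from the saved logs is to count the boot banner.  This is only a sketch: the log path and the search string are taken from the excerpts here, not from the DSM's actual syslog setup.

boots = []
with open("/var/log/messages", errors="replace") as log:  # path is an assumption
    for line in log:
        if "Booting Linux on physical CPU" in line:
            boots.append(" ".join(line.split()[:3]))  # syslog month/day/time fields

print(len(boots), "boots found; most recent:", boots[-5:])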

There were some other USB messages in the logs that didn't seem to trigger a reboot, but were still notable:

Mar  8 01:17:10 s17 kernel: [   13.663002] usb 1-1.5: Product: USB 2.0 Hub
Mar  8 01:17:10 s17 kernel: [   13.664314] hub 1-1.5:1.0: USB hub found
Mar  8 01:17:10 s17 kernel: [   13.664799] hub 1-1.5:1.0: 4 ports detected
Mar  8 01:17:11 s17 kernel: [   13.980630] usb 1-1.5.1: new full-speed USB device number 35 using dwc_otg
Mar  8 01:17:11 s17 kernel: [   13.982863] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.983375] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.983959] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.984481] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.984980] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.984988] usb 1-1.5-port1: Cannot enable. Maybe the USB cable is bad?
Mar  8 01:17:11 s17 kernel: [   13.985569] usb 1-1.5-port1: cannot disable (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.986115] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.986665] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.987144] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.987727] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.988208] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.988215] usb 1-1.5-port1: Cannot enable. Maybe the USB cable is bad?
Mar  8 01:17:11 s17 kernel: [   13.988765] usb 1-1.5-port1: cannot disable (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.989280] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.989862] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.990342] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.990997] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.991627] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.991637] usb 1-1.5-port1: Cannot enable. Maybe the USB cable is bad?
Mar  8 01:17:11 s17 kernel: [   13.992182] usb 1-1.5-port1: cannot disable (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.992779] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.993295] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.993872] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.994386] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.994945] usb 1-1.5-port1: cannot reset (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.994952] usb 1-1.5-port1: Cannot enable. Maybe the USB cable is bad?
Mar  8 01:17:11 s17 kernel: [   13.995477] usb 1-1.5-port1: cannot disable (err = -71)
Mar  8 01:17:11 s17 kernel: [   13.995515] usb 1-1.5-port1: unable to enumerate USB device
Mar  8 01:17:11 s17 kernel: [   13.996019] usb 1-1.5-port1: cannot disable (err = -71)

I saw these messages for both port 1 and port 2 of usb 1-1.5.

Mar  8 01:17:40 s17 kernel: [   43.600792] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 0, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.600792] 
Mar  8 01:17:40 s17 kernel: [   43.600848] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 7, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.600848] 
Mar  8 01:17:40 s17 kernel: [   43.600907] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 1, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000021
Mar  8 01:17:40 s17 kernel: [   43.600907] 
Mar  8 01:17:40 s17 kernel: [   43.600983] hub 1-1:1.0: hub_ext_port_status failed (err = -71)
Mar  8 01:17:40 s17 kernel: [   43.601058] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 4, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.601058] 
Mar  8 01:17:40 s17 kernel: [   43.601100] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 6, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.601100] 
Mar  8 01:17:40 s17 kernel: [   43.601144] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 0, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.601144] 
Mar  8 01:17:40 s17 kernel: [   43.601192] usb 1-1-port5: cannot reset (err = -71)
Mar  8 01:17:40 s17 kernel: [   43.601254] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 7, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.601254] 
Mar  8 01:17:40 s17 kernel: [   43.601294] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 1, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.601294] 
Mar  8 01:17:40 s17 kernel: [   43.601336] ERROR::handle_hc_chhltd_intr_dma:2215: handle_hc_chhltd_intr_dma: Channel 4, DMA Mode -- ChHltd set, but reason for halting is unknown, hcint 0x00000002, intsts 0x06000001
Mar  8 01:17:40 s17 kernel: [   43.601336] 
Mar  8 01:17:40 s17 kernel: [   43.601379] usb 1-1-port5: cannot reset (err = -71)


Do we think this is the fault of one bad USB device, like the USB stick or cell modem?  Is usb 1-1 the external hub?  If Kurt and Dan paid a visit to try to get s17 more reliably online, would it be better to swap out the whole DSM so we can troubleshoot this one back in Boulder, or are we fairly confident that swapping out one component would fix it?
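
Next time it connects, one way to answer the "which device is usb 1-1" question is to walk sysfs.  Just a sketch, using the standard Linux layout and nothing project-specific:

import pathlib

# every USB device on bus 1 exposes a "product" file; interfaces (e.g. 1-1:1.0) do not
for dev in sorted(pathlib.Path("/sys/bus/usb/devices").glob("1-*")):
    product = dev / "product"
    if product.exists():
        print(dev.name, "->", product.read_text().strip())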

s17 is back down now, so I can't keep looking. Steve copied some of /var/log into /scr/tmp/oncley, but it didn't seem to get very far before the connection went down.

Also, Steve noted that ssh to isfs17.dyndns.org connects to s3 right now because the dyndns names haven't been updated, which is confusing.


Quickie status - Mar 11

s3 now up.  All sensors okay.

Quickie status - 10 Mar

The field crew waited out the rain this morning and then rushed to install s1 in the afternoon.  Everything appears to be running except the barometer, no doubt due to the 7E1 problem.

Thus, we now have s1, s4, s8, s10, s14, and s15 reporting.

Also, s17 briefly came in this afternoon (22:29 - 22:37 UTC). Vmote values were generally reasonably high, indicating that the station has power.

Ott: all okay

TRH: s10 died today, s8 reporting fan not working, others okay

P: s1 not reporting – probably 7E1 issue (but can't log in to fix), others okay

CSAT: all okay

EC150: s8 bad, others okay

Gill 2D: all okay (but need to add to qctables)

Rad: all okay 

Soils: all okay

Victron: s17 not reporting – probably usb rules setting (but can't log in to fix), others okay

mote: all okay, changed "sn" setting on s4 to report serial numbers

Webplots/qctables now up

Isabel, Jacquie, and I have all worked to get the R/json-based webplots and qctables working.  I just added the usual link to these in the top wiki page.  Note the different qctables colors, which are hopefully easier to read!


Some things that still need work:

  • labels for the 2D plot panels 
  • winds in geo coordinates
  • reordering of plots and qctable data to get the station sequence 1-18 DONE (plots and qctable)
  • placeholders for totally missing data in qctables DONE
  • add Spd, Rfan to qctables DONE


...to pick up my removal of ".tip" in the name of Rainr...

s14 status

Today, the crew appears to have installed s14.  From the data:

Ott: working, but for some reason the data aren't being parsed by barolo.  The message seems to start with a 0x00 character, sometimes followed by 0xff, before the good message.  Other Otts don't have this.  (A possible cleanup sketch follows this status list.)

TRH: okay

P: okay

CSAT/EC150: okay

Gill 2D: okay

Rad: okay

Soils: okay

Victron: not reporting
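
About the Ott parsing note above: a possible cleanup, shown only as a hypothetical sketch and not part of the actual barolo/NIDAS parsing chain, would be to strip the leading junk bytes before the message is parsed:

def strip_leading_junk(raw: bytes) -> bytes:
    # drop any leading 0x00 / 0xff bytes ahead of the good Ott message
    return raw.lstrip(b"\x00\xff")

assert strip_leading_junk(b"\x00\xff<good Ott message>") == b"<good Ott message>"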


s8 solar panel

Kurt tells me that the bottom panel in the solar panel rack here has corrosion damage (presumably from being submerged during either CHEESEHEAD or VERTEX) and is presently unusable.  Thus, this station is running on only one panel.  The power-estimate spreadsheet says that it will now take 11 days, rather than 3 days, to fully charge a dead station – clearly longer than we want.  So we can expect this station to lose power in cloudy conditions.  Obviously, we can replace this panel or rack if there is a spare; I don't know from Kurt's description whether it will be possible to fix on-site.
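
Presumably the reason losing one panel more than triples the charge time is that the station load gets subtracted from generation before anything goes into the battery.  A toy version of that calculation, with made-up placeholder numbers (not the actual SWEX panel/battery specs, so it won't reproduce the spreadsheet's 3 and 11 days exactly):

BATTERY_WH = 1200.0   # usable battery capacity to refill (placeholder)
PANEL_W    = 100.0    # per-panel rating (placeholder)
SUN_HOURS  = 5.0      # equivalent full-sun hours per day (placeholder)
LOAD_W     = 15.0     # continuous station draw (placeholder)

def days_to_recharge(n_panels):
    net_wh_per_day = n_panels * PANEL_W * SUN_HOURS - LOAD_W * 24.0
    return BATTERY_WH / net_wh_per_day if net_wh_per_day > 0 else float("inf")

print(days_to_recharge(2), days_to_recharge(1))  # charging slows by much more than 2x with one panel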


Quickie status

s4, s8, s10, s15 all coming in.  s17 had a few samples come through Sat afternoon, but nothing recent.

s8 IP address: 166.255.144.44

Ott: all okay, reporting 0.

TRH: s8 TRH is reporting -99 for Rfan, which is supposed to mean that the fan is not turning.  Others okay.

P: all okay

CSAT: all okay

EC150: s8 all bad values (including Tirga and Pirga)

Victron: all okay (need to look at parsed DIDs 61-65 to get all fields)

Gill 2D: all okay

NR01: all okay

Tsoil: all okay

Gsoil: all okay (all same sign, so far)

Qsoil: all okay

TP01: all okay

** Need boom directions for CSATs and Gills to process directions properly


NCharts up for SWEX

The SWEX project and noqc_instrument dataset have been added to ncharts:

http://datavis.eol.ucar.edu/ncharts/projects/SWEX/noqc_instrument

The data should update in real time, now that statsproc is working to generate the 5-minute data.


The s4 tunnel was not connecting, and when I logged into it after discovering its IP address, I found a very strange problem.  According to git, there were about 50 modified files in the field project configuration, ~daq/isfs/projects.  It turns out all those files had zero length, and some of them were python files needed by the tunnel script.  As near as I can tell, none of those changes were intentional, so I did a hard git reset to restore the configuration.  Now the s4 tunnel is working, same as s10 and s15.

I can guess that maybe the project config was updated at some point when the filesystem was full, since that can cause files to be created but not written.  That's just a guess, though.  Right now the /home partition is only 2% full.
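
For reference, the check and fix amount to something like the following sketch (the zero-length scan is my own illustration, run as daq on the DSM; the hard reset is what was actually run):

import pathlib, subprocess

proj = pathlib.Path.home() / "isfs" / "projects"
empty = [p for p in proj.rglob("*") if p.is_file() and p.stat().st_size == 0]
print(len(empty), "zero-length files under", proj)

# discard the bogus zero-length modifications and restore the last committed configuration
subprocess.run(["git", "-C", str(proj), "reset", "--hard"], check=True)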

Besides the tunnels, it is possible to log into the DSMs with their IP addresses.  These are the ones I know so far:

DSM    IP
s4     166.255.144.46
s10    166.167.71.190
s15    166.255.153.9

Victron problem

Dan tells me that the s10 and s15 Victrons are hooked up and talking to their Bluetooth app.  I tried logging into s10 and got the message that the port "isn't open yet".  In a rash attempt to reset this (USB) port, I rebooted.  s10 is sending data again (still not from the Victron), though I've now also lost the ability to log in via the tunnel.  I note that the tunnel just didn't respond for about 5 min, then finally started giving "connection refused" messages...


daq@s10:~/isfs/projects/SWEX/ISFS/config $ rs /dev/ttyPWRMONV
connecting to inet:s10:30002
connected to inet:s10:30002
sent:"/dev/ttyPWRMONV
"
line="OK"
parameters: 19200 none 8 1 "\n" 1 0 prompted=false
:Take it easy Stumpy! s10:/dev/ttyPWRMONV is not open (yet)\n


If it helps, I also did "lsusb":

daq@s10:~/isfs/projects/SWEX/ISFS/config $ lsusb
Bus 001 Device 008: ID 0781:5575 SanDisk Corp.
Bus 001 Device 007: ID 0403:6015 Future Technology Devices International, Ltd Bridge(I2C/SPI/UART/FIFO)
Bus 001 Device 012: ID 1410:9030 Novatel Wireless
Bus 001 Device 006: ID 0bda:5411 Realtek Semiconductor Corp.
Bus 001 Device 005: ID 0403:6011 Future Technology Devices International, Ltd FT4232H Quad HS USB-UART/FIFO IC
Bus 001 Device 004: ID 0403:6011 Future Technology Devices International, Ltd FT4232H Quad HS USB-UART/FIFO IC
Bus 001 Device 003: ID 0424:ec00 Standard Microsystems Corp. SMSC9512/9514 Fast Ethernet Adapter
Bus 001 Device 002: ID 0424:9514 Standard Microsystems Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub



Quick Status

From the data, everything coming in appears to be okay.  Just 2 notes:

  • s4: the PTB210 is probably set to 7E1.  As soon as it allows ssh in, we can change this.
  • s10 and s15: no Victron data coming in.