
These are the serial numbers of the Diamond GPIO-MM-12 cards used in each pressure system, along with the clock rates determined for each card. A discussion of the clock rate calibration procedure follows the tables. Plots of the clock calibrations are viewable as attachments.

Pressure1

board | SN     | clock rate measurements (Hz)  | clock rate used (Hz)
0     | 302859 | 20,000,292.88                 | 20,000,293
1     | 302828 | 20,000,450.86                 | 20,000,451
2     | 302832 | 20,000,284.54                 | 20,000,285
3     | 302871 | 20,000,072.36, 20,000,073.21  | 20,000,073
4     | 303835 | 20,000,241.98, 20,000,242.41  | 20,000,242

Pressure2

board | SN | clock rate measurements (Hz)   | clock rate used (Hz)
0     | ?  | 20,000,312.39, 20,000,315.54   | 20,000,314
1     | ?  | 20,000,363.83, 20,000,364.94   | 20,000,364
2     | ?  | 20,000,352.76, 20,000,355.89   | 20,000,354
3     | ?  | 20,000,342.3, 20,000,342.521   | 20,000,342
4     | ?  | 20,000,347.18                  | 20,000,347

Clock Calibrations

Prior to AHATS, a corrected, more precise rate for the 20 MHz clock on each of the Diamond GPIO-MM-12 cards was determined using the rubidium-referenced signal generator in the NCAR EOL sounding lab.

The reference frequency was stepped in 10 kHz increments from 110 to 300 kHz, approximately every 10 seconds. The GPIO cards were configured to count the number of on-board 20 MHz clock ticks elapsed while counting 10,000 input cycles, and these counts were repeated every 0.1 seconds. The resulting frequency measurements were then compared against the reference frequencies.
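As a rough sketch of that reduction (the function and constant names below are illustrative, not taken from the actual acquisition software), each measurement amounts to dividing the number of input cycles by the elapsed time implied by the 20 MHz tick count:

    # Sketch of how each frequency measurement follows from the GPIO-MM counts.
    # ASSUMED_CLOCK_HZ and the example tick count are illustrative values.

    ASSUMED_CLOCK_HZ = 20_000_000   # nominal on-board clock rate
    INPUT_CYCLES = 10_000           # input cycles counted per measurement

    def measured_frequency(clock_ticks, clock_hz=ASSUMED_CLOCK_HZ):
        """Frequency implied by counting INPUT_CYCLES input cycles
        over `clock_ticks` ticks of the on-board clock."""
        elapsed_seconds = clock_ticks / clock_hz
        return INPUT_CYCLES / elapsed_seconds

    # Example: a 200 kHz reference counted with a perfect 20 MHz clock
    # accumulates 1,000,000 ticks over 10,000 cycles (0.05 s).
    print(measured_frequency(1_000_000))   # -> 200000.0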

See the attachments for plots of the calibrations.

The upper left plot shows the linear relation between the error in the frequency measurement and the reference frequency.

The slope of the line is a correction factor for the 20 MHz clock on the GPIO-MM:

corrected clock = 20 MHz / (1 + slope)
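A minimal sketch of that correction, assuming the per-point errors (measured minus reference) and the reference frequencies are available as arrays; the variable names and the error values shown are illustrative only:

    import numpy as np

    # Hypothetical calibration arrays: reference frequencies (Hz) and the
    # corresponding measurement errors (measured - reference, Hz).
    f_ref = np.array([110e3, 150e3, 200e3, 250e3, 300e3])
    f_err = np.array([-1.61, -2.20, -2.93, -3.66, -4.39])  # illustrative values

    # The error grows linearly with the reference frequency; the slope of the
    # fit is the fractional error of the nominal 20 MHz clock.
    slope, intercept = np.polyfit(f_ref, f_err, 1)

    corrected_clock = 20e6 / (1 + slope)
    print(f"corrected clock rate: {corrected_clock:.2f} Hz")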

The upper right plot shows the measurement error after applying the corrected on-board clock rate to the data.

The lower left plot shows various measurement errors. The top black points are the frequency discretization due to a difference of one 20 MHz clock tick. As the plot shows, above a 200 kHz input frequency the resolution is coarser than 1 part in 10^6.
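That discretization follows directly from the counting scheme: a one-tick change in the 20 MHz count shifts the inferred frequency by roughly f^2 / (20 MHz x 10,000). A quick check of that bound (the constants simply restate the setup described above):

    ASSUMED_CLOCK_HZ = 20e6
    INPUT_CYCLES = 1e4

    def one_tick_resolution(f_hz):
        """Approximate frequency change caused by a one-tick difference
        in the 20 MHz count at input frequency f_hz (Hz)."""
        ticks = ASSUMED_CLOCK_HZ * INPUT_CYCLES / f_hz
        return f_hz / ticks   # = f_hz**2 / (ASSUMED_CLOCK_HZ * INPUT_CYCLES)

    for f in (110e3, 200e3, 300e3):
        df = one_tick_resolution(f)
        print(f"{f/1e3:.0f} kHz: {df:.3f} Hz  ({df/f:.2e} relative)")

    # At 200 kHz the relative resolution is 1.0e-6, so above that frequency
    # the discretization is coarser than 1 part in 10^6, as in the plot.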

max(abs(fcor-fref)) is the maximum absolute difference of the measured, clock-corrected frequency from the reference frequency over the 300 points at each reference. The maximum error seen in the test stayed below the 1:10^6 line except for one point at 290 kHz. The maximum error also stayed below the frequency discretization level, suggesting that after the clock is corrected, the remaining error is due to the discretization.

mean(fcor-fref) is the mean measurement error at each reference frequency.

The lower right plot shows the number of samples at each frequency. The tests were sometimes performed more than once, or with more than one input pulse counter, so there are usually more than the 100 samples expected over 10 seconds.
