Here is a sample from the CSV log my detector recorded during a lightning storm that passed through my area on August 2, 2011 at around 5:30 PM (fields are pipe-delimited):
1312198123.414 | 8 | 187.975 | 8 |
1312198123.416 | 0 | 0.002 | 8 |
1312198123.430 | 16 | 0.014 | 23.8 |
1312198123.433 | 0 | 0.003 | 23.8 |
1312198123.437 | 6 | 0.004 | 29.8 |
1312198123.439 | 13 | 0.002 | 42.7 |
1312198123.441 | 0 | 0.002 | 42.7 |
1312198123.447 | 23 | 0.006 | 65.7 |
1312198123.450 | 0 | 0.003 | 65.6 |
1312198123.454 | 11 | 0.004 | 76.6 |
1312198123.456 | 40 | 0.002 | 116.6 |
1312198123.458 | 0 | 0.002 | 116.6 |
The first column is the timestamp in Unix time (seconds since January 1, 1970), as returned by Python's time.time().
The second column is the raw reading from the detector.
The third column is the deltaTime: the time elapsed since the previous reading.
The last column is the result of the following algorithm: subtract a gain (10) times the deltaTime from the running sum, not letting the sum drop below zero, then add the new raw value to it.
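A minimal sketch of that update step in Python, assuming the decay is applied and clamped at zero before the new raw reading is added (this ordering reproduces the last column of the sample rows above):

```python
def update_sum(running_sum, raw, delta_t, gain=10.0):
    """One step of the decaying running sum.

    Decay the sum by gain * delta_t, clamp it at zero,
    then add the new raw detector reading.
    """
    return max(0.0, running_sum - gain * delta_t) + raw

# Replaying the first few sample rows as (raw, deltaTime) pairs:
rows = [(8, 187.975), (0, 0.002), (16, 0.014), (0, 0.003), (6, 0.004)]
s = 0.0
for raw, dt in rows:
    s = update_sum(s, raw, dt)
    print(round(s, 1))  # 8.0, 8.0, 23.8, 23.8, 29.8 -- matches the last column
```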
You can see large blocks of time (deltaTime in the hundreds or thousands of seconds) where nothing happens. Then you get the occasional spark that is detected but washed away by the decay part of the algorithm. Only when a large number of sparks come in at one time does the sum reach a warning threshold (2000) and then a danger threshold (3000). From this data I plan on refining the algorithm. Stay tuned!
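The two thresholds can be sketched as a simple classifier on the running sum; the threshold values are from above, while the function and level names are just placeholders of my choosing:

```python
WARNING_THRESHOLD = 2000
DANGER_THRESHOLD = 3000

def classify(running_sum):
    # Map the decaying running sum to an alert level.
    if running_sum >= DANGER_THRESHOLD:
        return "danger"
    if running_sum >= WARNING_THRESHOLD:
        return "warning"
    return "ok"

print(classify(116.6))   # "ok" -- a few isolated sparks
print(classify(2500.0))  # "warning"
print(classify(3500.0))  # "danger"
```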