tetsujin

03-04-2011, 08:08 AM

http://lbp.me/v/yhysx8

I guess building analog signal probes has become something of an obsession with me. Normally I take a pragmatic approach - I initially scoffed at the idea of providing even four digits of precision - but gradually I've begun including more digits. (For what it's worth, though I was skeptical at times, I do respect what others have accomplished with LBP2 logic. So, shouts out to Balorn, Phort, Tygers, zermatten and rtm223.)

So, here's the thing: all the logic probes I've seen previously, and all the ones I've built myself, have used some kind of direct base-ten decoding in the sequencer. To get more digits of precision, the input signal is decremented by the value of the digits read by the first sequencer, multiplied by a power of ten, and run through another sequencer. This introduces precision error, since tenths and hundredths can't be represented exactly as floating-point numbers. So I created a signal probe that measures the signal strength in binary and then converts the binary result to decimal. For the conversion I used a variation of the double dabble method used by zermatten in his 8-bit digital readout. (The regular double dabble algorithm is for integers - some of the steps had to be reversed to apply the method to fractional values.)
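To illustrate the "reversed" double dabble: the integer version shifts bits in from the right and adds 3 to any BCD digit of 5 or more before each left shift. For a binary fraction you go the other way - shift the register right, feeding bits in least-significant-first above the radix point, and subtract 3 from any BCD digit of 8 or more after each shift (a shifted-in binary half is worth 8 in the nibble but only 5 in decimal). Here's a rough Python sketch of that idea - the function name and digit layout are my own, not anything from the level itself:

```python
def frac_bits_to_decimal(bits):
    """Convert a binary fraction to its exact decimal digit string
    using a reversed ("fractional") double dabble.

    bits: list of 0/1, most significant first (bits[0] is the 2^-1 place).
    An n-bit fraction can need up to n decimal digits (2^-n has exactly n),
    so we keep n BCD digits.
    """
    n = len(bits)
    digits = [0] * n  # BCD fraction digits; digits[0] is the 10^-1 place
    for b in reversed(bits):  # least-significant bit first
        # Shift the whole BCD register right one bit, injecting the
        # incoming data bit just above the radix point.
        carry = b
        for i in range(n):
            low = digits[i] & 1
            digits[i] = (digits[i] >> 1) | (carry << 3)
            carry = low
        # Dabble step, reversed: a nibble >= 8 holds a shifted-in binary
        # half (worth 8) that should be worth 5 in decimal, so subtract 3.
        for i in range(n):
            if digits[i] >= 8:
                digits[i] -= 3
    return "0." + "".join(map(str, digits))

print(frac_bits_to_decimal([1, 1]))  # binary 0.11 -> 0.75
```

In hardware (or LBP logic) each BCD digit is four gates' worth of state and the subtract-3 adjustment is a small combinational block per digit per step, which is where the gate count comes from.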

Of course, using three sequencers and two signal scalers to produce 24 bits of data, and then using about 2500 logic gates to turn that into BCD (and then more logic to turn that into 7-segment) does carry a cost. The meter takes up about 1/2 of the thermometer. This isn't a very practical thing. :)

But it does appear to be quite accurate: its reading off a 7-step counter with a count of 1 was 0.1428571343421936: exactly what you'd expect for 1/7 represented in a 24-bit significand.*
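As a sanity check of that reading, here's the arithmetic in Python (outside LBP, obviously). I'm assuming the A-to-D quantizes to the nearest 24-bit step - for 1/7, truncation lands on the same value, so the reading doesn't distinguish the two:

```python
from fractions import Fraction

BITS = 24
SCALE = 1 << BITS  # 2**24 = 16777216

# Quantize 1/7 to a 24-bit fixed-point fraction, as the probe's
# A-to-D stage effectively does.
raw = round(Fraction(1, 7) * SCALE)  # 2396745, binary 001001...001 repeating
quantized = Fraction(raw, SCALE)

print(float(quantized))  # 0.1428571343421936 - the reading above
```

The exact decimal expansion of 2396745/2^24 is 0.142857134342193603515625, so the 16 digits the meter shows are correct as far as they go.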

I don't know if there's a better way to convert binary floats to decimal. Obviously, in a practical sense this thing isn't worth the thermo cost, and a more direct decimal sampling on the sequencer is a lot more generally useful. It's exceedingly rare that you'd need more than a few digits of precision in practice, and of course it's nice to be able to instantiate a couple of probes without blowing the thermo... And seeing the exact value is often not as useful as seeing the value the way it's represented elsewhere in the game. But after seeing zermatten's 8-bit integer display I had to try it, just to see if it would work...

(EDIT):

* That's "exactly what you'd expect for 1/7 represented in a 24-bit significand" only if the most-significant bit is 2^-1. In floating point, the most-significant bit of the value 1/7 would be 2^-3 - so the value I get from my 24-bit A-to-D is actually not the full 24-bit significand of the source, it's a fixed-point representation of the source value, which is accurate to within 2^-24. So for any input less than 2^-n, (n) bits of the source value are lost in the A-to-D conversion... There's more on this matter further down in the thread.
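The bit-loss bookkeeping in that footnote can be sketched in a few lines of Python. This is just my own illustration of the fixed-point-vs-significand point, with a made-up helper name:

```python
import math

BITS = 24  # the A-to-D captures the input down to the 2^-24 place

def significant_bits_kept(x):
    """For an input 0 < x < 1, a fixed-point capture keeps bits from the
    input's leading bit down to 2^-24, so an input whose leading bit is
    at 2^-(n+1) retains only 24 - n bits of its significand."""
    exp = math.floor(math.log2(x))  # x lies in [2^exp, 2^(exp+1))
    return BITS + exp + 1

print(significant_bits_kept(0.5))    # leading bit at 2^-1: all 24 bits
print(significant_bits_kept(1 / 7))  # leading bit at 2^-3: only 22 bits
```

So the 1/7 reading above is accurate to within 2^-24 as a fixed-point value, but two bits short of the full floating-point significand.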
