PDA

View Full Version : Big Dumb Probe



tetsujin
03-04-2011, 08:08 AM
http://lbp.me/v/yhysx8

I guess building analog signal probes has become something of an obsession with me. Normally I take kind of a pragmatic approach - I initially scoffed at the idea of providing even four digits of precision - but gradually I've begun including more digits. (And for what it's worth, though I was skeptical at times, I do respect what others have accomplished with LBP2 logic. So, shouts out to Balorn, Phort, Tygers, zermatten and rtm223.)

So, here's the thing: All the logic probes I've seen previously, and all the ones I've built myself, they've all used some kind of direct base-ten decoding in the sequencer - and then to get more digits of precision, the input signal would be decremented by the value of the digits read by the first sequencer, multiplied by a power of ten, and run through another sequencer. This creates precision error, since tenths and hundredths can't be represented exactly as floating-point numbers. So I created a signal probe that measures the signal strength in binary, and then converts the binary result to decimal. To do the conversion I used a variation of the double dabble method used by zermatten in his 8-bit digital readout. (The regular double dabble algorithm is for integers - some of the steps had to be reversed in order to use the method for floating point values.)
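For reference, the fractional variant can be sketched in Python (this is an illustration of the reversed algorithm, not the in-game wiring): instead of shifting left and adding 3 to digits of 5 or more, you feed the bits in from the low end, shift the BCD register right, and subtract 3 from any digit that ends up at 8 or more - a carry of ten, halved, is 5, not 8.

```python
def frac_double_dabble(bits, digits=16):
    """Convert a binary fraction to decimal digits.

    bits: the fraction's bits, most significant (2^-1) first.
    Returns `digits` decimal digits, tenths digit first.
    """
    bcd = [0] * digits  # bcd[0] is the tenths digit
    for bit in reversed(bits):  # feed the bits in LSB-first
        carry = bit
        for i in range(digits):
            # shift this digit right; the bit falling in from above enters as 8
            v = (carry << 3) | (bcd[i] >> 1)
            carry = bcd[i] & 1
            if v >= 8:
                v -= 3  # a carried ten, halved, is 5 - not 8
            bcd[i] = v
    return bcd

n = 2 ** 24 // 7  # 1/7 truncated to 24-bit fixed point (2396745)
bits = [(n >> (23 - i)) & 1 for i in range(24)]  # MSB (2^-1) first
digits = frac_double_dabble(bits, digits=24)  # 24 digits covers the value exactly
print("0." + "".join(map(str, digits))[:16])  # 0.1428571343421936, as on the meter
```

With 24 output digits the halving is exact (any 24-bit fraction terminates within 24 decimal digits), and the first sixteen digits reproduce the 1/7 reading quoted above.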

Of course, using three sequencers and two signal scalers to produce 24 bits of data, and then using about 2500 logic gates to turn that into BCD (and then more logic to turn that into 7-segment) does carry a cost. The meter takes up about 1/2 of the thermometer. This isn't a very practical thing. :)

But it does appear to be quite accurate: its reading off a 7-step counter with a count of 1 was 0.1428571343421936: exactly what you'd expect for 1/7 represented in a 24-bit significand.*

I don't know if there's a better way to convert binary floats to decimal - obviously in a practical sense this thing isn't worth the thermo cost, and a more direct decimal sampling on the sequencer is a lot more generally useful. It's exceedingly rare that you'd need more than a few digits of precision in practice, and of course it's nice to be able to instantiate a couple probes without blowing the thermo... And seeing the exact value is often not as useful as seeing the value the way it's represented elsewhere in the game. But after seeing zermatten's 8-bit integer display I had to try it, just to see if it would work...

(EDIT):
* That's "exactly what you'd expect for 1/7 represented in a 24-bit significand" only if the most-significant bit is 2^-1. In floating-point, the most-significant bit of the value 1/7 would be 2^-3 - so the value I get from my 24-bit A-to-D is actually not the full 24-bit significand of the source, it's a fixed-point representation of the source value, which is accurate to within 2^-24. So for any input less than 2^-n, (n) bits of the source value are lost in the A-to-D conversion... There's more on this matter further down in the thread.
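The difference is easy to see in Python (using `struct` to round to IEEE-754 single precision, on the assumption - per this thread - that LBP2 signals behave like 32-bit floats):

```python
import struct

def f32(x):
    # round a Python float to the nearest 32-bit (single precision) float
    return struct.unpack('f', struct.pack('f', x))[0]

def fixed24(x):
    # truncate to 24-bit fixed point, as the probe's A-to-D effectively does
    return int(x * 2 ** 24) / 2 ** 24

print(f32(1 / 7))      # the true single-precision value, ~0.142857149...
print(fixed24(1 / 7))  # what the probe reads: 0.1428571343421936
```

The single-precision value keeps bits down to 2^-27 (its top bit is 2^-3); the fixed-point capture cuts off at 2^-24, which is where the two printed values diverge.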

rtm223
03-04-2011, 09:38 AM
obviously in a practical sense this thing isn't worth the thermo cost

Yeah, but it's kewl :hero: Not to mention a very nice solution to the problem at hand ;)

Ayneh
03-04-2011, 12:15 PM
Jesus christ.







I love you.

tetsujin
03-04-2011, 03:06 PM
I love you.

My heart belongs to another. Ours would be a forbidden love...

Tygers
03-06-2011, 11:39 AM
Very clever, and cool! I had no idea the double dabble could be modified in such a way to produce results that aren't a whole number.

A couple things that could cut down the thermo though... With binary representing fractional values, each bit of input requires roughly one decimal digit of output to represent its exact decimal value. Since you already aren't showing all of those, you could cut back further to, say, 8-12 digits, and still get enough coverage for all the unique values the input could be.

Second, in an attempt to make a decimal display using your method with fewer digits, I decided it would be easier to try and reproduce it myself rather than to try and place yours in a level. (It just takes too long to load it in.) In producing the logic for the 3-subtracter, I accidentally cut out 2 gates from your design. I was able to use most of the bit 2 calculation as a partial result for bit 3; replacing the final XOR from bit 2 with an AND with the same inputs produces the same output as yours for all valid inputs. (It produces a different output for the input of 15, but that's not valid.) It probably won't be a HUGE thermo saving, but it's not nothing.

tetsujin
03-06-2011, 04:36 PM
Cutting from 16 to 12 digits wouldn't really save much logic: you'd be cutting from the top of the logic "pyramid", where the rows are shorter. For a number of input bits (b) and a number of output digits (d), the bottom row will have (b) stages and the top row will have (b - (d - 1)) stages. The leftmost stage of any row is a dummy, since it only takes one bit in (the only cases we care about for the dummy are input = 0 and input = 8, so the dummy stages contain no logic) - and the top row could omit bit zero at a savings of one XOR gate per stage...

So the total number of real stages s(b,d) for a meter of (b) bits and (d) digits is ((b - 1) + (b - 1 - (d - 1))) * (d/2) = (2b - d - 1) * (d/2)

s(24, 16) = 248
s(24, 12) = 210
s(24, 8) = 156
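A quick check of the formula (plain Python, just arithmetic):

```python
def stages(b, d):
    # real double-dabble stages for b input bits and d output digits:
    # d rows, from (b - 1) real stages on the bottom row down to (b - d) on top
    return (2 * b - d - 1) * d // 2

for d in (16, 12, 8):
    print(d, stages(24, d))  # 248, 210, 156
```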

So assuming the double dabble logic is the main reason for the thermo-heavy nature of the meter, an 8-digit readout with this approach would still take over a quarter of the thermo, unfortunately.

The problem is that the low-order digits that would be cut off aren't affected by as many input bits, and don't have to go through as many iterations as the higher-order digits. So you have to cut pretty deep into the size of the output in order to significantly cut the thermo cost.

I think the most promising route for optimizing this thing with the current algorithm would be to integrate neighboring stages together and try to optimize that logic. I haven't worked out the logic tables for that, though, so I haven't determined if there's a good moderately-sized case where this would work out to a significant savings in logic.

One limitation of this approach that I realized after playing around with this thing some more: reading 24 bits from the signal isn't actually enough for full precision. Consider a value of 0.3: the highest-order bit in the binary version would be 2^-2, so the lowest-order bit of a single float representation would be 2^-25. But all the doubler stages take (1 - ((1-x) - x)) - so it may not be possible to get more bits, since (1-x) will likely be on the order of 2^-1...

Python code for verifying Big Dumb Probe outputs:


import math

def single(x):
    # Round x down and up to the nearest multiple of 2^-24 (24-bit fixed point)
    return (math.floor(x * 2 ** 24) / 2 ** 24,
            math.ceil(x * 2 ** 24) / 2 ** 24)


(That yields the rounded-down and rounded-up versions of x.)

I don't know what other floating-point-to-decimal algorithms exist, or whether any of them might be well suited to LBP2... I guess one possibility would be to approach it kind of like the base-ten sequencer methods: compare the 24-bit input to 0.1, 0.2, 0.3, etc. represented in 24-bit fixed point, and subtract the highest value of those that fits in the input. But that would probably be a lot of logic, too... A whole lot of big binary adders, plus logic for each digit to recombine the ten different logic branches over however many bits of input are still being considered...
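That alternative could be sketched like this (a numeric model of the idea only, not of any in-game wiring; note the constants 0.1..0.9 have to be truncated to 24-bit fixed point themselves, so the low digits inherit that truncation error):

```python
BITS = 24
# 0.0 .. 0.9 truncated to 24-bit fixed point, stored as integers (value * 2^24)
TENTHS = [k * 2 ** BITS // 10 for k in range(10)]

def probe_digits(n, digits=8):
    """n: the input signal as a 24-bit fixed-point integer (value * 2^24)."""
    out = []
    for _ in range(digits):
        # subtract the largest tenth that fits, then scale the remainder by ten
        d = max(k for k in range(10) if TENTHS[k] <= n)
        out.append(d)
        n = (n - TENTHS[d]) * 10
    return out

print(probe_digits(2 ** 23))  # 0.5 -> [5, 0, 0, 0, 0, 0, 0, 0]
```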

Tygers
03-06-2011, 05:19 PM
I see your point on cutting out digits.

I was thinking about the precision problem as well last night after noting that 1/7 is, in fact, 0.1428571492433547 when done natively as a single precision floating point number. You are correct, however, that it's the NOT (computing 1.0 - x) that is killing the precision. (I had more in my post about this topic before, but the forum ate it, so I figured I'd stick mainly to saving 2 gates in the 3-subtract logic.)

One idea I had which would hopefully get at least one more bit of precision out of it, would be to cut the NOT out of the direct path. Really, the NOT is serving the place of (100% battery - x) in the equation. If you replace (100% battery) with some other value, it's subtracting from something smaller which won't force as high order a bit on the floating point number. The problem is figuring out what value to use. Ideally it should be close to 2x in size, without being over. I tried taking the output from the normal doubler, and then using that as the basis for the actual doubler... (w = 1 - (1 - x - x); x' = w - (w - x - x)) but the problem with that (Confirmed by a small test doing it base 10, and my simulation program) is that the resulting value is actually smaller than the true answer, which means the math goes negative.

tetsujin
03-06-2011, 06:53 PM
I see your point on cutting out digits.

I was thinking about the precision problem as well last night after noting that 1/7 is, in fact, 0.1428571492433547 when done natively as a single precision floating point number.


Yeah, my initial post was incorrect on that point: the value I get for 1/7 corresponds to (float(0.5 + 1/7) - 0.5) - that is, a fixed-point 24-bit representation of the value, with the highest-order bit as 2^-1. But if the phrasing was misleading, this much at least is true:
- The output of the 24-bit A-D is what is expected, given that it's 24-bit fixed point rather than floating point
- The result of the decimal representation of that 24-bit value is what is expected


As for performing the doubling with a smaller value as the inversion constant: it only gains you one extra bit, but you could use 0.5. Since 0.5 was just subtracted if a corresponding bit was set in the A-D result, it's guaranteed that the result of that subtraction is less than 0.5. One could preserve more bits of the source value by subtracting multiple bits before doubling. For instance (doing 4 bits at a time):

x = x - (b3? 0.5:0) - (b2? 0.25:0) - (b1? 0.125:0) - (b0? 0.0625:0)
x = 0.0625 - ((0.0625 - diff) - diff)
x = 0.125 - ((0.125 - diff) - diff)
x = 0.25 - ((0.25 - diff) - diff)
x = 0.5 - ((0.5 - diff) - diff)
(x is then ready to be the input for another 4-bit A-D)

I hadn't actually thought of all this when I built the A-to-D: I had considered making each battery on the sequencer put out an analog value corresponding to its bit value (this would have required extra logic, of course) - but it was easier to just make them all output 0.5 and then scale up after subtracting each bit away...

You could take this quite far, of course. But you'd have to provide analog constants for powers of two less than 2^-2 - which can't be specified by batteries... So you'd need AND gates and counters for the constants... Going this route you could protect precision out to values as small as will show up on your first sequencer...

Tygers
03-07-2011, 06:46 AM
You can't just go the cheap way to reclaim 1 bit and use 0.5, because the value you need will need to be larger than the result of the multiplication. For example, say that you have a value of 0.4; this is smaller than 0.5, so the initial subtraction would be 0.5 - 0.4, resulting in 0.1. But to double it you need to subtract it again, so you end up with -0.3. Fed back into the subtraction to get the result out, keeping in mind that combiners use the absolute value, you get 0.5 - 0.3, resulting in 0.2, rather than the correct answer 0.8.
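The failure mode is easy to reproduce numerically (assuming, per the above, that a combiner outputs the absolute difference of its inputs; `doubler` here is just a model of that wiring, not of anything in-game):

```python
def combiner(a, b):
    # model of an LBP2 combiner: |a - b|, per the absolute-value behavior
    # described above
    return abs(a - b)

def doubler(x, base):
    # base - ((base - x) - x), with every subtraction going through a combiner
    return combiner(base, combiner(combiner(base, x), x))

print(round(doubler(0.4, 1.0), 9))  # 0.8 - correct
print(round(doubler(0.4, 0.5), 9))  # 0.2 - wrong: the middle stage went "negative"
```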

So the value used as the base for subtraction should be close to the expected answer, but not below it. That's why I tried first calculating based on the result from using 1.0, but that resulted in a value smaller than desired meaning that the math went negative and screwed up.

However, if a way to come up with a number that meets the needed criteria exists, it would have the bonus of being able to add more than just one bit precision; applied correctly, theoretically it could get the full precision of a single precision floating point, plus or minus one bit. (But what a pain to wire all that up, thermo nightmare!)

tetsujin
03-07-2011, 01:32 PM
You can't just go the cheap way to reclaim 1 bit and use 0.5, because the value you need will need to be larger than the result of the multiplication. For example, say that you have a value of 0.4; this is smaller than 0.5, so the initial subtraction would be 0.5 - 0.4, resulting in 0.1. But to double it you need to subtract it again, so you end up with -0.3. Fed back into the subtraction to get the result out, keeping in mind that combiners use the absolute value, you get 0.5 - 0.3, resulting in 0.2, rather than the correct answer 0.8.

Ah, whoops... Clumsy me...

Well, I almost had it: Let's try it again:

(Four-bit A-D case...)
x = x - (b3? 0.5:0) - (b2? 0.25:0) - (b1? 0.125:0) - (b0? 0.0625:0) (x is now less than 0.0625)
x = 0.125 - ((0.125 - diff) - diff) (doubled x is less than 0.125, so this is safe...)
x = 0.25 - ((0.25 - diff) - diff) (doubled x is less than 0.25...)
x = 0.5 - ((0.5 - diff) - diff) (doubled x is less than 0.5...)
x = 1 - ((1 - diff) - diff) (doubled x is less than 1.0)

So with this approach (applied with as many bits as your first sequencer A-D) you can preserve all the source bits if the value you're examining shows up as nonzero on the first sequencer. (In the example above - let's say the four-bit A-D yielded 0001 - the high bit of the source value was 2^-4 and the smallest bit of the source value is 2^-27.) When you strip away the bits you've detected and then subtract from 2^-3, you get a value whose highest-order bit is 2^-4 (unless x was zero, in which case there's no data being lost) - which means the low-order bit 2^-27 is retained...

So you can pull out 23+b bits of fractional source value, where (b) is the number of bits on your first A-D sequencer.
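Putting the corrected sequence together (a Python model of the idea only; `x` is an ordinary float standing in for the analog signal, and each pass of the doubling loop corresponds to one subtract-from-constant stage above):

```python
def strip_and_rescale(x, bits=4):
    """Model of a `bits`-wide A-D stage followed by the doubling chain."""
    # Detect the top `bits` bits, subtracting each one away as it's found
    detected = []
    c = 0.5
    for _ in range(bits):
        bit = 1 if x >= c else 0
        detected.append(bit)
        x -= bit * c
        c /= 2
    # x is now < 2^-bits. Double it back up one stage at a time, always
    # subtracting from the NEXT power of two so 2x never exceeds the constant
    c *= 4  # first constant: 2^-(bits-1)
    for _ in range(bits):
        x = c - ((c - x) - x)
        c *= 2
    return detected, x

print(strip_and_rescale(0.3125))  # ([0, 1, 0, 1], 0.0): all bits detected
```

The rescaled remainder is then ready to feed a second A-D stage, with its low-order bits intact.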

(EDIT):As for determining the approximate magnitude of a number: you can do that with just a bunch of (n-x) subtracters, see how small (n) has to be before the result is negative. Of course, you wouldn't want to do too much of that because you need either sequencers or sackbot controllinators to turn the analog results into a digital condition...