Analogue Logic 2 - Sorting & Addition
10-11-2010 at 11:46 PM
Analogue Signal Processing
All Your Sums Are Belong To Us
Part 2 - Sorting and Addition
Analogue processing is all about arithmetic - carrying out mathematical operations on numbers in a continuous spectrum. Last installment we briefly (heh) covered the basic processing units provided by the in-game tools. These boil down to min(), max(), subtraction (via the combiner) and inversion (via the NOT gate).
This isn't a very wide toolset for arithmetic, and some concepts are clearly missing - most notably addition, probably the most basic operation you might want to carry out on numbers.
In this article, we will cover "sorting" functions as an extension of min() and max() (possibly not that useful, but an interesting design exercise nonetheless, and it gives me the excuse to use the word "minimax",* which is somewhat satisfying) and then follow on with quite an in-depth look at addition, which is rather more complex than you might think. Or at least it is when I get my hands on it....
1. Sorting Functions
One fundamental ability when presented with a set of numbers (read: analogue inputs) is to order them by value - otherwise known as sorting. max() and min() enable us to find the highest and lowest values from a set of numbers, which is a first step in sorting, but what about everything in between?
1.1. Finding the Second Largest Value:
The method of finding the second largest value involves the following process:
- Find all pairs of signals
- Get the min() value for each pair.
- Take the max() of these min()s.
The idea behind this is that if you take all possible pairs, you are guaranteed to have one pair that consists of the highest value and the second highest value. This particular pair will return a min() value equal to the second largest value in the set and none of the other min()s can return a value greater than that.
For the example of 3 inputs, the circuit below is the implementation:
fig 1.1.1. Obtain the Second Largest Value of Three Inputs
First we identify the pairs of signals and their minimum values, giving the three internal signals:

s1 = min ( i1, i2 )
s2 = min ( i1, i3 )
s3 = min ( i2, i3 )

Then we find the maximum of those signals:

o = max ( s1, s2, s3 )
Just to demonstrate how this works, the following table is the result of plugging some values into the system and showing the internal signal values and the overall output:
Code:
 input           || internals       || output
   1 |   2 |   3 ||   1 |   2 |   3 ||
-----|-----|-----||-----|-----|-----||--------
  30 |  60 |  20 ||  30 |  20 |  20 ||   30
  50 |  50 |  60 ||  50 |  50 |  50 ||   50
  60 |  30 |  60 ||  30 |  60 |  30 ||   60
 -50 |  70 | -40 || -50 | -40 | -40 ||  -50
Note that when duplicate values are input, the system does not actually find the second largest value. It's more accurate to say that the signals are ordered by descending magnitude and the output is equal to the value of the second index of that ordered list. Most of the time this is fine and is the correct response for enabling sorting. However, simply note that the term "second largest value" is slightly misleading.
Signed values are handled in the standard LBP2 min() and max() manner - comparison is carried out on absolute values, but it is the full signed value that is output.
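For anyone who prefers code to circuit diagrams, the whole procedure can be sketched in Python. This is just a model of the gates, not anything in-game: lbp_min() and lbp_max() are hypothetical helpers standing in for the in-game min/max, using the compare-on-magnitude, output-the-signed-value rule described above.

```python
from itertools import combinations

def lbp_min(*vals):
    # LBP2-style min(): compares absolute values, outputs the signed value
    return min(vals, key=abs)

def lbp_max(*vals):
    # LBP2-style max(): likewise compares on magnitude
    return max(vals, key=abs)

def second_largest(signals):
    # max() of the min() of every possible pair of inputs
    return lbp_max(*(lbp_min(a, b) for a, b in combinations(signals, 2)))
```

Running this against the rows of the table above reproduces each output column, including the signed case.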
1.3. Finding the Second Smallest Value:
Much in the same way that we can find the second largest value, we can also find the second smallest by replacing the max() function with a min() and the min()s with max()s, as shown below:
fig 1.3.1. Second Smallest Value of Three Inputs
Of course, with three inputs this is redundant - the second largest value will be the same as the second smallest - but this does become more useful if we up the number of inputs.
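The mirror image of the earlier sketch, with the same hypothetical lbp_min/lbp_max helpers modelling the gates:

```python
from itertools import combinations

def lbp_min(*vals):
    # compares absolute values, outputs the signed value (LBP2 behaviour)
    return min(vals, key=abs)

def lbp_max(*vals):
    return max(vals, key=abs)

def second_smallest(signals):
    # min() of the max() of every possible pair of inputs
    return lbp_min(*(lbp_max(a, b) for a, b in combinations(signals, 2)))
```

With three inputs this agrees with second-largest, as noted; with four it picks out the third entry of the descending order.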
1.4. Extending for More Inputs
The method works fine for any number of inputs; the only thing to bear in mind is how you generate the sets of pairs. In the three-input version there were, conveniently, just 3 pairs to deal with. However, with four inputs we have 6 pairs:

( i1, i2 ), ( i1, i3 ), ( i1, i4 ), ( i2, i3 ), ( i2, i4 ), ( i3, i4 )

Indeed, this progression goes up as the number of inputs goes up, following the formula:

P = [ N ( N-1 ) ] / 2

where
P is the number of pairs required
N is the number of inputs

Which is an order-2 polynomial (related to the square of N) - so the method becomes significantly more complex as you increase the value of N.
Combining this with our ability to find the minimum value, maximum value, second largest and second smallest, we can create a microchip to take 4 analogue inputs and then output the same four values in descending order. Woop (or no woop maybe, IDK why we want this TBH).
1.5. Finding the nth Largest / Smallest Value
Yep, in typical switchGeek manner, we don't stop at a sensible point, and it's also possible to select the nth largest / smallest value at will. For finding the 2nd largest value we grouped the values into pairs; for the 3rd largest we group them into triplets; and for the nth largest we group into n-tuplets (groups of size n).
Again, this bumps up the complexity significantly, but it is possible to obtain a complete ordered list of values or simply pick out a specific value from that list. The obvious application of the more extended versions of this is obtaining the Median value from a set.
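The generalisation is a one-line change to the earlier sketch: swap the pairs for n-tuplets. The median() helper here is my own addition, shown for odd-sized sets only.

```python
from itertools import combinations

def lbp_min(*vals):
    # compare on magnitude, output the signed value (LBP2 behaviour)
    return min(vals, key=abs)

def lbp_max(*vals):
    return max(vals, key=abs)

def nth_largest(signals, n):
    # group inputs into every n-tuplet, min() each group, max() the results
    return lbp_max(*(lbp_min(*grp) for grp in combinations(signals, n)))

def median(signals):
    # middle entry of the descending order (odd-sized sets only)
    return nth_largest(signals, (len(signals) + 1) // 2)
```

Note that nth_largest(signals, 1) is just max() and nth_largest(signals, len(signals)) is just min(), so the earlier circuits fall out as special cases.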
A more gameplay-oriented application could be displaying a leaderboard from custom logic scorekeeping (maybe displaying how much health each player has, or some notion of kills : deaths), though this would require extra systems to match specific values back to players - but it's certainly not impossible and it would be kinda cool.
1.6. Alternative Methods
It's almost certainly possible to order a set of inputs using an electronic implementation of the Bubble Sort or Quick Sort algorithms (or one of any number of other sorting algorithms). These might actually work out to be more efficient than what I've presented above; I'm not 100% sure. Ironically, I didn't think of using them until after writing out all of the above, which is gonna be a complete kick in the teeth and make me look foolish if one does turn out to be a better method....
For reference, the methods rely on a "compare and swap" sort of algorithm, which is easily implemented as follows using a min and max function. Technically you aren't comparing or swapping - just forcing the ordering on pairs using min() and max():
fig 1.6.1. Simple Compare and Swap Operation
However, these alternatives do not lend themselves to finding the value at a specific index in the list without actually ordering the entire thing - so the earlier method is preferable if there is only a specific element, such as the Median, that is of interest to you.
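The compare-and-swap element, and a bubble-sort network built out of it, can be modelled like so (Python model only; lbp_min/lbp_max are hypothetical stand-ins for the in-game gates):

```python
def lbp_min(*vals):
    # compare on magnitude, output the signed value (LBP2 behaviour)
    return min(vals, key=abs)

def lbp_max(*vals):
    return max(vals, key=abs)

def compare_and_swap(a, b):
    # fig 1.6.1: force an ordering on the pair - no real comparison needed
    return lbp_max(a, b), lbp_min(a, b)

def bubble_order(signals):
    # repeated compare-and-swap passes, bubble-sort style, as a fixed network
    s = list(signals)
    for _ in range(len(s) - 1):
        for i in range(len(s) - 1):
            s[i], s[i + 1] = compare_and_swap(s[i], s[i + 1])
    return s   # descending by magnitude
```

Because the network is fixed (no data-dependent branching), it maps directly onto wiring: each compare_and_swap is one min() and one max() gate.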
2. Analogue Addition
We saw last time that the combiner tools give us a neat method of subtracting unsigned values from one another, but there is no built-in method for analogue addition in LBP2, or even for signed subtraction (which would inherently open up signed and unsigned addition, of course). However, we can calculate addition using the functions provided to us, if we do a bit of strange algebra.
2.1. Basic Addition
Given that the following is true:

o = i1 + i2 = 100 - ( 100 - i1 - i2 )

It's clear we can add numbers together using nothing but the in-built subtraction function. To clarify, rather than adding both numbers together, we subtract them both from 100, then subtract the answer from 100. So given values of 23 and 34, normally our calculation would be something like:

o = i1 + i2
o = 23 + 34
o = 57

Using the equation above:

o = 100 - ( 100 - i1 - i2 )
o = 100 - ( 100 - 23 - 34 )
o = 100 - ( 100 - 57 )
o = 100 - ( 43 )
o = 57

Obviously this looks long-winded when written down like that, but this is what happens when your toolset is limited. You have to be a little less direct with your solutions. The circuit for this equation is shown below (note that the battery in the image is set to 100%):
fig 2.1.1. Basic Adder using 100% Battery Offsets
However, if you remember that NOT gates perform the analogue function ( 100 - |i| ), we can simplify this to give the following circuit for basic addition with 2 inputs:
fig 2.1.2. Basic Adder Using Inverters
Which can then be extended for any number of inputs. Note that the inversions only need to be carried out at the beginning and end of the calculation - everything in between is simple subtraction:
fig 2.1.3. Basic Adder Extended to Sum 4 Inputs
This circuit will return the arithmetically correct answer for the sum of its inputs if the following conditions are met:
- All inputs are greater than zero.
- The sum of the inputs is smaller than or equal to 100.
The first condition is forced by the nature of subtraction using combiners in LBP2 - it just doesn't handle subtraction of negatives. The second is due to the fact that values over 100 are out of range.
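The whole chain - invert, subtract repeatedly, invert back - can be modelled in a few lines of Python. lbp_not() is a hypothetical helper standing in for the NOT gate's analogue function; this sketch assumes the two conditions above hold.

```python
def lbp_not(x):
    # NOT gate on an analogue signal: 100 - |x|
    return 100 - abs(x)

def basic_add(*inputs):
    # fig 2.1.3 pattern: NOT the first input, subtract the rest, NOT the result
    acc = lbp_not(inputs[0])
    for i in inputs[1:]:
        acc -= i          # each combiner performs one subtraction
    return lbp_not(acc)
```

So basic_add(23, 34) walks through exactly the 100 - (100 - 23 - 34) calculation shown earlier.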
2.2. Dealing with Signed Values
Assuming you can be certain (due to knowledge of the system) that both of the above conditions can be met, the simple adder is a perfectly fine circuit to use. However, it is possible to extend the technique to handle signed addition. Suppose we have two signed inputs and wish to add them together, remembering that our basic addition function can only work with unsigned values:

o = i1 + i2
o = ( i1+ + i2+ ) - ( i1- + i2- )

Spoiler - Unnecessary Scariness Within

Or, if you want to describe the entire circuit in terms of raw functions:

o = [ 100 - ( 100 - i1+ - i2+ ) ] - [ 100 - ( 100 - i1- - i2- ) ]

Note that all operations are subtraction again, which is kind of how we make it work. But it's probably best to ignore this expression; it's not really going to help, and the previous expression, with the addition operations, is sufficient.
So all we need to do is to split the values, add up all of the positive components, add up all of the negative components and then subtract the latter from the former:
fig 2.2.1. Signed Adder With Splitting of Input Components
As you can see in the diagram above, there are two adder networks and a final subtraction at the end. Much like with the simple addition above, this is extensible to as many inputs as you want, but retains the caveat that the sums must be smaller than or equal to 100. In this case, that isn't the final answer - both the sum of the positive components and the sum of the negative components must also be smaller than or equal to 100, else the individual adders will break.
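In the Python model, the split into positive and negative components is just a pair of max() calls against zero; everything else reuses the unsigned adder from before (hypothetical helpers, same caveats about staying in range):

```python
def lbp_not(x):
    # NOT gate: 100 - |x|
    return 100 - abs(x)

def unsigned_add(*inputs):
    # the basic adder chain from fig 2.1.3
    acc = lbp_not(inputs[0])
    for i in inputs[1:]:
        acc -= i
    return lbp_not(acc)

def signed_add(*inputs):
    # split every input into positive and negative components,
    # run each set through its own adder network, subtract at the end
    pos = unsigned_add(*(max(i, 0) for i in inputs))
    neg = unsigned_add(*(max(-i, 0) for i in inputs))
    return pos - neg
```

As in the circuit, both component sums must individually stay at or below 100 for the answer to be valid.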
2.4. Handling Overflow
The assumption we made above is that the correct output of our additions would always be in the range 0-100. If the sum is greater than this, we hit an overflow condition, which can cause some serious problems. Here we will look at how overflow manifests itself, why it is a problem and tactics for dealing with it.
Simplifying things by going back to the basic (unsigned, two-input) adder of fig 2.1.2 for a moment: we know that the first input is always in the range 0-100, so the first step in the calculation (the NOT gate) will always return a value that is in range. It is at the second step, in the combiner device, that we may hit trouble.
If the second subtraction takes us below zero, then we hit an issue as we do the final stage of the calculation. The final NOT gate will take the absolute value and subtract it from 100. If this value is negative, then our little formula for addition breaks down. To clarify, the following graph demonstrates the relationship between correct result and output result:
fig 2.4.1. Erroneous Signals Caused by Adder Overflow
As you can see, up until we hit the 100% max value, the output value is equal to the correct answer, but above 100%, the output goes a bit mental and actually starts to reduce. The main problem here is that it is an erroneous answer but looks like a valid response.
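The fold-back is easy to see in the Python model of the two-input adder: once the combiner output goes negative, the final NOT's absolute value flips its sign, so the output becomes 200 - (i1 + i2) rather than i1 + i2.

```python
def lbp_not(x):
    return 100 - abs(x)

def basic_add(i1, i2):
    # o = 100 - (100 - i1 - i2)
    return lbp_not(lbp_not(i1) - i2)

# within range the answer is exact:
#   basic_add(40, 50) -> 90
# past 100 the output folds back to 200 - (i1 + i2):
#   basic_add(60, 70) -> 70, where the correct sum is 130
```

Note that 70 is a perfectly plausible-looking signal value, which is exactly why this failure mode is so dangerous.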
2.5. Basic Overflow Handling
Often, the fact that we can't represent numbers above 100 isn't such a problem; we can just limit the answer to 100 if it goes over (as shown by the dotted line in fig 2.4.1). To achieve this, we add an extra stage before the NOT, to split out the positive signal value and the negative signal (overflow), giving the following:
fig 2.5.1. Unsigned Adder with Prevention of Overflow Error
Note that once the maximum value of 100 is reached, the output will fix at 100 (there will be no positive component coming out of (x), and 100 - 0 = 100). This gives us a far more reasonable answer from our adder when it goes out of range. In addition, we can actually recover a numeric value for the overflow, if desired, by simply taking the value of the negative output as shown in the circuit diagram above. The graph below shows the improved response of the sum output (black line) and the captured overflow value (red):
fig 2.5.2. Capturing of Overflow Value
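In the Python model, splitting before the final NOT amounts to separating the combiner output into its positive part (the in-range sum) and negative part (the overflow):

```python
def lbp_not(x):
    return 100 - abs(x)

def add_with_overflow(i1, i2):
    # fig 2.5.1 pattern: split the combiner output before the final NOT
    raw = lbp_not(i1) - i2
    positive = max(raw, 0)       # in-range part of the calculation
    overflow = max(-raw, 0)      # amount by which the true sum exceeds 100
    return lbp_not(positive), overflow
```

The two outputs always satisfy sum + overflow = i1 + i2, so no information is lost.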
2.6. Overflow with More Inputs
Note that if you have more inputs, then you could overflow at each stage, so remember to strip out the positive values after each subtraction. For the example of four inputs, we have three overflow signals, which should follow a similar pattern to the main overflow:
fig 2.6.1. Capturing of Multiple Overflows
However, they don't. Each overflow output represents the overflow contribution at that stage of the calculation. If you wished to determine the overflow value in a useful manner, you would need to sum these 3 using an additional adder, which would lead to 2 further overflow signals, which would in turn need to be summed by a further adder.... Your answer would consist of 4 analogue signals (one answer and 3 overflows). Obviously, recovering all of this information is probably not that useful in most cases, but if you ever need to, that's how we do it.
fig 2.6.2. Unsigned 4-input Adder with Multiple Overflow
See what I mean about it being a little bit complex? Well apparently it's not nearly complex enough, so we shall continue...
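My reading of the 4-input circuit as a Python model: after each combiner, the positive part carries on down the chain and the negative part is captured as that stage's overflow.

```python
def lbp_not(x):
    return 100 - abs(x)

def add4_with_overflows(i1, i2, i3, i4):
    # strip out the positive value after each subtraction, capturing the
    # negative remainder as that stage's overflow contribution
    acc = lbp_not(i1)
    overflows = []
    for i in (i2, i3, i4):
        raw = acc - i
        acc = max(raw, 0)
        overflows.append(max(-raw, 0))
    return lbp_not(acc), overflows
```

The useful invariant is that the answer plus all three overflows always equals the true sum - which is why the overflows, inconvenient as they are, do carry the complete picture.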
2.7. Signed Overflow
Of course, so far we can have signed values with no possibility of overflow, or overflow with unsigned values. To handle both starts to bring in a whole world of hurt, as you need multiple positive and negative overflows combined with an answer value anywhere in the range of +/- 100.
The most correct answer would of course be one of the following:
- An answer in the range +/- 100 and no overflow.
- An answer of 100, with positive overflow.
- An answer of -100, with negative overflow.
Using our standard method for handling signed values, we would add all positives together and all negatives together, and we also know how to capture the overflow for both. But life isn't that simple, as demonstrated by the following example:

(-75) + 90 + 87 + 50 + (-70)

I know, as do you, that the answer is 82, because we have calculators. However, our adder circuit doesn't; it produces positive and negative components of:

positive: 100 (+100 + 27)
negative: 100 (+45 + 0)

Note that in the above I've shown the overflow values in brackets - the numbers outside of brackets are the main answer / sum values.
Subtract the negative sum from the positive sum and you get 0, yet you've still got positive and negative overflow. Which isn't entirely useful.
Once again, recovery of data is possible - which is why we capture overflow in the first place - and the method is (conceptually) quite simple. Subtract each part of the negative answer from the corresponding part of the positive answer, and then add those 4 results together again as signed values. This will always result in the output being in one of the three valid states listed above. It's also a horrendous mess of circuitry, as you basically have to repeat the entire signed addition (which consists of 6 adders anyway), as shown below:
fig 2.7.1. Full, 2-stage Addition of Four Signed Analogue Values
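Here is my reading of the two-stage idea as a Python model, for the 4-input case in the figure. The hypothetical tree() helper plays the part of one adder network with per-stage overflow capture; this sketch assumes the second-stage sums themselves stay in range (the full circuit exposes second-stage overflows too).

```python
def lbp_not(x):
    return 100 - abs(x)

def tree(parts):
    # one unsigned adder network: returns [answer, overflow-per-stage...]
    acc = lbp_not(parts[0])
    overflows = []
    for p in parts[1:]:
        raw = acc - p
        acc = max(raw, 0)
        overflows.append(max(-raw, 0))
    return [lbp_not(acc)] + overflows

def signed_add_recovered(inputs):
    # stage 1: separate positive and negative component trees
    pos = tree([max(i, 0) for i in inputs])
    neg = tree([max(-i, 0) for i in inputs])
    # stage 2: subtract part-wise, then signed-add the differences
    diffs = [p - n for p, n in zip(pos, neg)]
    pos2 = tree([max(d, 0) for d in diffs])
    neg2 = tree([max(-d, 0) for d in diffs])
    return pos2[0] - neg2[0]
```

For example, signed_add_recovered([-75, 90, 87, -70]) overflows on both component trees in stage 1 (positive overflow 77, negative overflow 45) yet the second stage still recovers the correct answer of 32.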
Anyway, this is all getting a bit convoluted for what should be a nice simple function (addition of 4 values) and as I stated before, it's absolutely best if your system can completely avoid overflow. Hopefully this section has convinced you of the truth in that statement!
I would like to remind you of fig 2.1.2, where the addition circuit was nice and simple and that is really what we should be using most of the time.
That said, I think it's interesting that despite this hard limit on the range of analogue values, we can still implement methods whereby calculations can go well outside of these bounds and yet still recover the data to pull things back to a sensible result.
It's probably also worth noting that one of those uber-complex signed adders with overflow only requires 1/10 of a bar of thermo... So whilst it might be complex, it's not prohibitively so. I'm also pretty sure I can reproduce that full signed adder mechanic more efficiently, but currently I'm having an issue understanding exactly why the other method works (or even proving that it does for all cases, without iterating through all 1.6 billion test cases) - it's a bit weird, so I'll have to get back to you on that one.
2.8. Overflow Avoidance
One thing you can do to optimise, is to re-order the operations to remove the chance of overflow at early stages. If you are adding 4 inputs together and you know that i2 + i4 can never be greater than 100, then do this addition first - you won't have any need to capture overflow lines. Similarly, if you are doing signed addition, some of your inputs might be unsigned - don't bother splitting them and thus you can shorten the negative component adder. Remember as well that you can re-order the negative tree and the positive tree independently of one another.
3. Summary (Lame Puns FTW)
So that's a couple of extra functions and methods for you to play around with. Hopefully there is enough in there to show that analogue system design is quite a complex and involved thing. There are a lot of limitations to what exactly you can do, but we can work around them, with a decent understanding of how the components work and some creative re-arranging of equations. And with that in mind, I'll leave you with:
3.1. Subtraction with Signed Values
Now that we can achieve signed addition, we can use it to implement signed subtraction. This is quite simple and just involves reformatting the subtraction into a signed addition operation and using the techniques from above. So,

o = i1 - i2 - i3 = i1 + (-i2) + (-i3)

Problem solved - you simply route the negative components to the positive addition tree and vice versa. You can even do arbitrary combinations of addition and subtraction in the same device.
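In the Python model, the re-routing amounts to nothing more than negating the subtrahends before feeding them to the signed adder (same hypothetical helpers and in-range caveats as before):

```python
def lbp_not(x):
    return 100 - abs(x)

def unsigned_add(*inputs):
    acc = lbp_not(inputs[0])
    for i in inputs[1:]:
        acc -= i
    return lbp_not(acc)

def signed_add(*inputs):
    pos = unsigned_add(*(max(i, 0) for i in inputs))
    neg = unsigned_add(*(max(-i, 0) for i in inputs))
    return pos - neg

def signed_subtract(i1, *subtrahends):
    # o = i1 - i2 - i3 = i1 + (-i2) + (-i3): negate, then signed-add
    return signed_add(i1, *(-x for x in subtrahends))
```

Mixing additions and subtractions in one device is then just a matter of choosing which inputs to negate.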
*Actually, during the editing process I wrote out every other mention of the terms maximin and minimax.... However, I still got them into the intro... And this footnote... So I guess that ticks the "use the term minimax" box.