Two complements what?

29 January 2026

As we all know by now, in computers things must be stored as bits. Images, text, sound... whatever the type of data, someone had to come up with a way of converting it to 1s and 0s. It is equally well known that numbers are a central part of computing. And these too must be stored and manipulated as bits. This may seem trivial, but as we will see, it quickly gets tricky.

Let's start by looking at the natural numbers (i.e. $0, 1, 2, 3$...). In this case the solution seems simple: change to base $2$. In base $10$ we have ten symbols (i.e. $0, 1$...$8, 9$), and a symbol at position $i$ (counting from the right, starting at $1$) is multiplied by $10^{i-1}$. So $38$ is interpreted as $3 \cdot 10^{1} + 8 \cdot 10^{0}$. If we move to base $2$ we have two symbols (i.e. $0, 1$) and a symbol at position $i$ is multiplied by $2^{i-1}$. So $38$ becomes $100110$, interpreted as:

$$1\cdot2^5+0\cdot2^4 +0 \cdot 2^3 + 1 \cdot 2^2 + 1 \cdot 2^1 + 0 \cdot 2^0$$

The representation of the same number is much longer, but it contains only 1s and 0s. That is good progress, but we are missing one last piece. In computers, the number of bits a number representation occupies is generally fixed (e.g. 32 or 64 bits). To keep things simple here, let's say we decided that our natural numbers occupy $8$ bits. Then we represent them in base $2$ as before and add enough leading 0s. And that is it: $38$ would be stored as 00100110.
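This encoding step can be sketched in a few lines of Python (a toy illustration; `nat_to_bits` is a made-up helper name, and real hardware of course works on fixed-width registers directly):

```python
def nat_to_bits(x: int, n: int = 8) -> str:
    """Encode a natural number in n bits: base 2, padded with leading 0s."""
    if not 0 <= x < 2 ** n:
        raise ValueError(f"{x} does not fit in {n} bits")
    return format(x, f"0{n}b")

print(nat_to_bits(38))  # 00100110
```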

Representing integers

That was relatively easy. Let's move now to the integers (i.e. ...$-2, -1, 0, 1, 2$...). Here we encounter the symbol "$-$", which we will have to encode. The solution seems clear: for an $n$ bit integer representation, use the first bit to encode the sign and the remaining $n-1$ bits to encode the number's magnitude. With $8$ bit integers, $-1$ would be 10000001, $-38$ would be 10100110 and $38$ would remain 00100110. It is a bit of a sacrifice. Before, with $8$ bits we could store from $0$ (i.e. 00000000) to $255$ (i.e. 11111111). Now only $7$ of those bits encode the number's magnitude, so the range shrinks to $[-127, 127]$ and anything of larger magnitude overflows.
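A quick sketch of this scheme in Python (the `sign_magnitude` helper is a made-up name for illustration, not anything standard):

```python
def sign_magnitude(x: int, n: int = 8) -> str:
    """Sign bit first, then the magnitude in the remaining n-1 bits."""
    if abs(x) > 2 ** (n - 1) - 1:
        raise ValueError(f"{x} overflows {n}-bit sign-magnitude")
    sign = "1" if x < 0 else "0"
    return sign + format(abs(x), f"0{n - 1}b")

print(sign_magnitude(38))   # 00100110
print(sign_magnitude(-38))  # 10100110
print(sign_magnitude(-1))   # 10000001
```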

Ok, the method involves some sacrifices, but we know how to store integers. Right? Not quite. The intuitive solution presented above is valid. It is called sign-magnitude representation. But it is not the way computers store integers. Instead they use a method called two's complement. It works like this. Assume we have $n$ bit integer representations and, as before, roughly half of the representable numbers should be non-negative and half negative. The lower half (i.e. $[0, 2^{n-1}-1]$) represents the non-negative integers the same way we did for naturals or sign-magnitude integers (i.e. in base 2 with enough leading 0s). The upper half (i.e. $[2^{n-1}, 2^n-1]$) contains the negative integers: a negative integer of absolute value $x$, with $x$ always in the range $[1, 2^{n-1}]$, is stored as the binary representation of $2^n - x$. Since in binary only the upper half has the first bit set to 1 (you may ponder why), the first bit still encodes the sign. Let's see how this applies to our $8$ bit example. $38$, being a positive integer, remains unchanged. To represent $-38$ we first compute $2^8 - 38 = 218$ and write it in binary, which gives 11011010. Hmm... not very intuitive.
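The recipe above can be sketched like this (again a toy helper with a made-up name; Python's `%` conveniently maps a negative $x$ to $2^n - |x|$):

```python
def twos_complement(x: int, n: int = 8) -> str:
    """Non-negatives as plain binary; a negative of magnitude m as 2**n - m."""
    if not -(2 ** (n - 1)) <= x <= 2 ** (n - 1) - 1:
        raise ValueError(f"{x} overflows {n}-bit two's complement")
    # for x < 0, x % 2**n equals 2**n - |x|
    return format(x % 2 ** n, f"0{n}b")

print(twos_complement(38))   # 00100110
print(twos_complement(-38))  # 11011010
```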

So what is the reasoning behind this? In a way, we are representing negative integers as "holes" of the same magnitude. This becomes clearer if we apply the same idea in base 10. Let's say there is no "$-$" symbol and we have only 3 digits to represent numbers, the equivalent of the situation we face in computers but in decimal. Because of the latter limitation we are working modulo $10^3$. This should not be a foreign idea to anyone: the time of day works modulo $24$, starting back at $0$ whenever $24$ is reached (e.g. $23:00 + 8:00 = 7:00$ instead of $31:00$). So, how would you represent a negative number in this situation, let's say $-38$? The complements idea is to "carve a hole" of that magnitude out of $10^3$, so that when added to a number this magnitude will be "removed". Let's see it by representing $-38$ in this way and then computing $50 - 38 = 12$. First, $-38$ becomes $10^3 - 38 = 962$. Then we compute $50 + 962 = 1012$, which modulo $10^3$ is $12$. We have "removed" $38$ from $50$ by using it to "fill up" the $10^3$. Two questions may remain. First, how do we know that $962$ represents a negative number and not itself? Just like in binary, and because of this "hole" idea, any number in the upper half of the range (i.e. $[500, 999]$) is a negative number. Secondly, what if we add two negative numbers? Then the addition produces a "hole" (i.e. a negative number) of their combined magnitude. Let's see it by computing $-38 + (-2) = -40$. This becomes $962 + 998 = 1960$, which modulo $10^3$ is $960$: a hole of size $40$, or in other words $-40$.
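The whole base-10 experiment fits in a few lines of Python (the `encode`/`decode` names are made up for this sketch):

```python
MOD = 10 ** 3  # only three decimal digits, so we work modulo 10**3

def encode(x: int) -> int:
    """A negative x becomes the "hole" 10**3 - |x|; others stay as they are."""
    return x % MOD

def decode(r: int) -> int:
    """The upper half of the range, [500, 999], stands for negatives."""
    return r - MOD if r >= MOD // 2 else r

print(encode(-38))                               # 962
print(decode((50 + encode(-38)) % MOD))          # 12, i.e. 50 - 38
print(decode((encode(-38) + encode(-2)) % MOD))  # -40
```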

The advantages

"Alright", you may think, "that is very clever, but what are the advantages to the simpler sign-magnitude method?". There are two main advantages. Firstly, I invite you to encode $0$ in sign-magnitude using our $8$ bit representation. I presume you would say 00000000, and that would be correct. But what about 1‍0000000. The sign bit is on and the magnitude is $0$, so that is $-0$, which is also $0$. Oh man... we have two zeros. This is of course not ideal. For instance, the simple and ubiquitous operation of comparing numbers $A$ and $B$ must include an extra step. If $A$ is 00000000 or 1‍0000000, then it should output "equal" if $B$ is 00000000 or 1‍0000000. Only after this check could the operation move to comparing each bit. Not ideal.

There is a second advantage: arithmetic (i.e. addition and subtraction) is much simpler. This is slightly harder to grasp, because mathematically and psychologically integer addition is terribly simple and intuitive. $485 + 12 - 560$ can be computed through many different mental tricks or intuitions. And the addition $485 + 12$ is no different from the addition $497 - 560$, since subtraction is just the addition of a negative number. But computers do not have an "intuition" and require perfectly specified steps for every different scenario they may encounter. In this regard, a computer is closest to your four-year-old self who is just learning arithmetic. So let's see the scenarios the computer may face when adding or subtracting $A$ and $B$. In the table below, s$\textbf{A}$ and s$\textbf{B}$ stand for the signs of operands $A$ and $B$. We ignore the cases where the operation is "$-$", since they involve changing the sign of $B$ and performing one of the four "$+$" rows.

Operation   s$\textbf{A}$   s$\textbf{B}$
$+$         $+$             $+$
$+$         $-$             $-$
$+$         $+$             $-$
$+$         $-$             $+$
$-$         ...             ...

How would a computer that encodes sign-magnitude integers handle this? In the cases where the signs are equal, it can pass the operands to a logical circuit that performs a binary sum (which works similarly to how you added when you were four). The result will have the same sign as the operands. In the cases where the signs differ, it first identifies which operand is larger. Then it passes them to a logical circuit that performs a binary subtraction, subtracting the smaller from the larger (you won't remember, but when you were four you only knew how to subtract a smaller quantity from a bigger one). The result will have the sign of the larger operand. Finally, it checks whether the operation produced overflow: a result that cannot be represented.
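The branching involved can be sketched in Python (a conceptual sketch of the control flow, not of the actual circuitry; `add_sign_magnitude` is a made-up name):

```python
def add_sign_magnitude(a: int, b: int) -> int:
    """The case analysis a sign-magnitude machine must perform."""
    if (a < 0) == (b < 0):
        # same sign: add the magnitudes, keep the shared sign
        mag = abs(a) + abs(b)
        return -mag if a < 0 else mag
    # different signs: find the larger operand, subtract the smaller
    # magnitude from the larger one, keep the larger operand's sign
    big, small = (a, b) if abs(a) >= abs(b) else (b, a)
    mag = abs(big) - abs(small)
    return -mag if big < 0 else mag

print(add_sign_magnitude(50, -38))  # 12
print(add_sign_magnitude(-38, -2))  # -40
```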

Now, how would a computer that encodes two's complement integers handle it? It would pass the operands to a logical circuit that performs a binary sum. Then it would check for overflow. That's it! Revisit the paragraph where we explored the complements idea in base 10 if you are not convinced.
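We can imitate that single adder in Python by masking every result to 8 bits, the way a register would (a sketch; the `decode` helper is made up to read the patterns back):

```python
N = 8
MASK = (1 << N) - 1  # keep only the low 8 bits, like a real register

def decode(bits: int) -> int:
    """Read an 8-bit pattern as a two's complement integer."""
    return bits - (1 << N) if bits & (1 << (N - 1)) else bits

a = 50 & MASK   # 00110010
b = -38 & MASK  # 11011010, i.e. 2**8 - 38
print(decode((a + b) & MASK))                        # 12: one unsigned add
print(decode(((-38 & MASK) + (-2 & MASK)) & MASK))   # -40: same circuit
```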

To grasp the difference between the two methods, let's enumerate what we are "saving" in the two's complement scenario. Sign-magnitude involves two conditionals: whether $A$ and $B$ have the same sign and, if they do not, whether $A$ or $B$ is larger. It also involves 4 or 5 steps, for same and different signs respectively, to compute the operation. And finally, it requires a logical circuit (i.e. an extra piece of hardware) that performs binary subtraction. In contrast, two's complement requires no conditionals, takes 2 steps, and saves you the subtracting circuit altogether. Good deal.

Conclusion

We could now move on to the real numbers (e.g. $-1.253$, $4770.423524$), a very cool topic as well. But I think we have had enough.

Hopefully you saw how quickly representing numbers as bits went from trivial to tricky. Yes, it is simpler to convert numbers to 1s and 0s than it is to convert images, audio or video. But given their fundamental role (i.e. they are involved in manipulating every other data type), it is very important to choose the best possible representation. In any case, I hope you enjoyed this neat idea, and shout-out to the engineer or inventor that came up with it.