What if the mantissa was never discovered?

What if the mantissa had never been discovered? One might think it would be impossible to represent extremely large or small numbers (though even with a mantissa we are still bad at this), that without it we would be stuck with fixed-point arithmetic, and that this would be a major setback for computing. But that is a self-terminating thought.

In such a scenario, what alternatives might we have?

Let's temporarily set aside other known methods such as the logarithmic number system, the residue number system, and unums. These systems are fascinating and worth exploring, but today I want to try something else.

Boundaries ignite creativity; constraints are the foundation. My focus turns to numbers that lie strictly between 0 and 1 in absolute value, 0 < |n| < 1: the kind of numbers one sees in vector embeddings.

I wondered: what if a number isn't fixed on the number line but is instead an approximation? After all, it is well known that Achilles will eventually overtake the tortoise: 1/2 + 1/4 + 1/8 + ... = 1, right?

Say a number between 0 and 1 is represented like this:

011011 = 1/2 + 1/4 + 1/16 + 1/32 = 0.84375
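
For clarity, here is how I decode this form (a throwaway helper of my own; the example doesn't pin down which sign-bit value means minus, so I assume 0 is "+", since 011011 decodes to a positive number):

```python
def value_sign_magnitude(bits):
    """Decode the sign-magnitude form: the MSB is the sign (0 assumed to
    mean '+', matching the example), and the remaining bits select
    1/2, 1/4, 1/8, ... from left to right."""
    mag = sum(2.0 ** -i for i, b in enumerate(bits[1:], start=1) if b == '1')
    return -mag if bits[0] == '1' else mag

print(value_sign_magnitude("011011"))  # 0.84375
```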

Here, the most significant bit (MSB) indicates the sign, and the remaining bits mark which of 1/2, 1/4, 1/8, ... are included. Interestingly, if I instead encode a sign into every bit, where 0 means "-" and 1 means "+", the dedicated sign bit becomes redundant, and we get something like a signed geometric progression:

10011100 = + 1/2 - 1/4 - 1/8 + 1/16 + 1/32 + 1/64 - 1/128 - 1/256 = 0.22265625
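
This encoding is mechanical to evaluate. A small Python sketch (the helper name is mine), using exact fractions to avoid any rounding on my side:

```python
from fractions import Fraction

def value(bits):
    """Evaluate a sign-per-bit string: bit i contributes +2^-i for '1'
    and -2^-i for '0' (positions start at 1)."""
    return sum(Fraction(1 if b == '1' else -1, 2 ** i)
               for i, b in enumerate(bits, start=1))

print(float(value("10011100")))  # 0.22265625
print(float(value("01101001")))  # -0.17578125
```

Running it confirms the two 8-bit strings used in the examples below.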

A simple question: what are the properties of such numbers? What does it take to add or multiply them? The answer wasn't immediately obvious to me, and I had to write things down on a piece of paper.

Let's try to find the sum of two numbers: 10011100 and 01101001 (a negative number, but that is perfectly fine):

  10011100  |   0.22265625
+ 01101001  | - 0.17578125
  --------  |   ----------
  10000110  |   0.04687500

Simple addition rules:

A) 1 + 0 = x (Ex. 1/2 - 1/2 = x)
B) 0 + 1 = x
C) 1 + 1 = Carry 1 + x (Ex. 1/4 + 1/4 = 1/2)
D) 0 + 0 = Carry 0 + x (Ex. -1/4 - 1/4 = -1/2)

Here, "x" represents a "hole".

Holes are quite interesting. It is impossible to represent them in such binary notation, so we need a few more rules:

Back propagation:
E) x1 = 10 (Ex. 1/4 = 1/2 - 1/4)
F) x0 = 01

Forward propagation:
G) 1 = 10(1) or 11(0) (Ex. 1/2 = 1/2 - 1/4 + 1/8 + ...)
H) 0 = 01(0) or 00(1)

Compact:
I) 10... = 1...
J) 01... = 0...
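
To convince myself that rules E and F really preserve value, I sketched the back-propagation sweep in Python (function names are mine; a trailing hole would need the infinite forward-propagation rules G-H, so it is deliberately left alone here):

```python
from fractions import Fraction

def value(digits):
    """Evaluate a digit string: '1' contributes +2^-i, '0' contributes
    -2^-i, and a hole 'x' contributes nothing (positions start at 1)."""
    return sum(Fraction(1 if d == '1' else -1, 2 ** i)
               for i, d in enumerate(digits, start=1) if d != 'x')

def resolve_holes(digits):
    """Back propagation only: sweep right to left, rewriting x1 -> 10
    (rule E) and x0 -> 01 (rule F). Each rewrite borrows from the digit
    to the right, so the value never changes."""
    d = list(digits)
    for i in range(len(d) - 2, -1, -1):
        if d[i] == 'x' and d[i + 1] in '01':
            d[i], d[i + 1] = ('1', '0') if d[i + 1] == '1' else ('0', '1')
    return ''.join(d)

print(resolve_holes("x1"))  # 10, i.e. 1/4 = 1/2 - 1/4
assert value("x1") == value(resolve_holes("x1"))
```

The right-to-left order matters: resolving a hole creates a real digit that the next hole to its left can then borrow from.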

Let's break down an example from above:

Step 1, rules A-D:

   10011100
 + 01101001
   --------
  1111100
  1000001x1

Step 2, rules E-H:

  1000001x1 = 100000110

Step 3, rules I-J:

  100000110 = 10000110

Answer:

  10000110
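
A quick cross-check with exact fractions is revealing (a sketch of mine, not part of the scheme): every finite string in this notation is an odd multiple of its last place, so the sum of two 8-bit numbers, which is an even multiple, can never be hit exactly. The answer 10000110 lands one unit in the last place (1/256) away from the exact 0.046875, a small precision loss just like the one discussed for multiplication below:

```python
from fractions import Fraction

def value(bits):
    """'1' contributes +2^-i, '0' contributes -2^-i (positions start at 1)."""
    return sum(Fraction(1 if b == '1' else -1, 2 ** i)
               for i, b in enumerate(bits, start=1))

exact = value("10011100") + value("01101001")
print(float(exact))                # 0.046875  (the exact sum)
print(float(value("10000110")))    # 0.05078125, off by exactly 1/256
```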

Now let's consider what it takes to multiply two numbers:

  10011100  |   0.22265625
* 01101001  | - 0.17578125
  --------  |   ----------
  01111010  | - 0.0391387939453125

Multiplication rules:

K) 1 * 1 = x1 (Ex. 1/2 * 1/2 = 1/4)
L) 1 * 0 = x0
M) 0 * 1 = x0
N) 0 * 0 = x1

Let's break this down:

Step 1, rules K-N:

(multiplication goes from left to right)

   10011100
 * 01101001
   --------   precision loss
   x0110100 | 1
 + xx100101 | 10
 + xxx10010 | 110
 + xxxx0110 | 1001
 + xxxxx011 | 01001
 + xxxxxx01 | 101001
 + xxxxxxx1 | 0010110
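
Generating these rows mechanically is straightforward: a '1' digit of the multiplicand copies the multiplier, a '0' digit flips every digit of it (that is rules K-N in bulk), and the row shifts right by the digit's position. A sketch with my own naming; note that it also emits an eighth row that falls entirely into the precision-loss column, which is why the listing above shows only seven:

```python
def partial_products(a, b):
    """For each digit of `a` (rules K-N): '1' copies `b`, '0' flips every
    digit of `b`; the row is shifted right by the digit's position, with
    holes on the left and the overflow kept separately as precision loss."""
    flip = str.maketrans("01", "10")
    rows = []
    for i, d in enumerate(a, start=1):
        row = b if d == '1' else b.translate(flip)
        rows.append(('x' * i + row[:len(a) - i], row[len(a) - i:]))
    return rows

rows = partial_products("10011100", "01101001")
for kept, lost in rows:
    print(kept, '|', lost)
```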

Step 2, rules A-D:

   x0110100
 + xx100101
 + xxx10010
 + xxxx0110
 + xxxxx011
 + xxxxxx01
 + xxxxxxx1
   --------
      0
    1 0
   xxx0x1x1

Step 3, rules E-H:

  xxx0x1x1 = xxx0x110
  xxx0x110 = xxx01010
  xxx01010 = xx011010
  xx011010 = x0111010
  x0111010 = 01111010

Answer:

  01111010

The multiplication process is akin to an art form, as if multiple canceling and amplifying waves are performing a dance in the moonlight.

However, some information is inevitably lost along the way, as we are working with only 8 bits:

10011100 * 01101001 = 01111010 = -0.04296875

0.22265625 * -0.17578125 = -0.0391387939453125

-0.04296875 - -0.0391387939453125 = -0.0038299560546875
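
Was 01111010 actually the best answer available in 8 bits? A brute-force sweep over all 256 strings (a quick sketch, helper names mine) says yes, and reproduces the residual above exactly:

```python
from fractions import Fraction
from itertools import product

def value(bits):
    """'1' contributes +2^-i, '0' contributes -2^-i (positions start at 1)."""
    return sum(Fraction(1 if b == '1' else -1, 2 ** i)
               for i, b in enumerate(bits, start=1))

exact = value("10011100") * value("01101001")   # -2565/65536, the true product
best = min((''.join(bits) for bits in product('01', repeat=8)),
           key=lambda s: abs(value(s) - exact))
print(best, float(value(best) - exact))  # 01111010 -0.0038299560546875
```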

My bold claim is that the level of information loss is comparable to that of regular binary notation without an exponent. I didn't bother figuring out what this means in terms of density. However, an interesting property of this system is that the sign is a built-in feature and doesn't require a dedicated bit; we get it for "free".

After reaching this point, I found myself attempting to generalize algorithms and ended up with an equation that doesn't make sense until it does:

1 + 2 + 2^2 + 2^3 + 2^4 + ... = -1

It seems that I am about a century late in rediscovering p-adic numbers. While this revelation feels useful, I am wishfully continuing to investigate its full impact.
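
That identity is less exotic than it looks once you notice hardware has been using it forever: in two's complement, a word of all ones encodes -1, and each partial sum of 1 + 2 + 4 + ... agrees with -1 on one more low-order bit, which is exactly the 2-adic sense of convergence. A quick check:

```python
# Partial sums of 1 + 2 + 4 + ... match -1 modulo ever larger powers of two,
# i.e. on ever more low-order bits; in hardware this is two's complement.
for n in (8, 16, 32, 64):
    s = sum(2 ** k for k in range(n))   # equals 2**n - 1, a run of n ones
    assert s % 2 ** n == (-1) % 2 ** n  # same residue as -1 modulo 2**n
print("1 + 2 + 4 + ... = -1 (mod 2**n) for n = 8, 16, 32, 64")
```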