Bit patterns and precision

Something you usually do not need to think about, but which can bite you if you're not aware of it, is the way integers and floats work internally and what numbers they can express.


An integer is stored as a bit pattern which represents, well, an integer in binary notation. You can look at the bits and convert them to decimal for a direct readout. "Bit" means, after all, "BInary digiT".

The highest bit is used as the sign, with negative numbers stored in two's complement form.

So with 32 bits, that gives you 31 binary digits to express your number, for a maximum of 2147483647 (and a minimum of -2147483648; see the sidebar).
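LSL offers no way to peek at the raw bits, but the layout can be sketched in Python (used here purely for illustration); the standard `struct` module lets us pack a 32-bit pattern and reinterpret it as a signed integer:

```python
import struct

# The largest 32-bit signed integer: 31 value bits, all set.
max_int = 2**31 - 1
print(max_int)            # 2147483647

# The pattern 0x80000000 (only the sign bit set), reinterpreted
# as a signed 32-bit integer, gives the minimum value.
min_int = struct.unpack('<i', struct.pack('<I', 0x80000000))[0]
print(min_int)            # -2147483648
```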

Going above that simply blows up. LSL stores an out-of-range integer literal as -1, but that is merely a quirk of LSL; other languages behave differently.

Floating points

Whereas integers represent a specific number inside the possible range, floating points use an entirely different approach.

The best way to think about them is to compare them to the scientific notation of large numbers, such as 5.3 * 10^12; that is, a combination of a value and an order of magnitude.

Floats use 1 bit for the sign, 8 bits for the order of magnitude (the exponent), and 23 bits for the value (the significand). This is the standard 32-bit IEEE 754 single-precision layout.

Notice how this gives a floating point fewer bits than an integer to express the value. That is, a float is less precise than an integer, only carrying what roughly amounts to 7 significant digits in decimal, but in return, it can move the decimal point to represent very small or very large numbers.
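The three fields can be pulled apart with a small Python sketch (again just an illustration; Python's `struct` module rounds a number to 32-bit float precision and exposes its bit pattern):

```python
import struct

def float32_fields(x):
    """Split a number's 32-bit float representation into its
    sign, exponent, and significand fields."""
    bits = struct.unpack('<I', struct.pack('<f', x))[0]
    sign = bits >> 31                 # 1 bit
    exponent = (bits >> 23) & 0xFF    # 8 bits, biased by 127
    significand = bits & 0x7FFFFF     # 23 bits, implicit leading 1
    return sign, exponent, significand

print(float32_fields(1.0))    # (0, 127, 0): exponent 127 means 2^0
print(float32_fields(-2.0))   # (1, 128, 0): sign set, exponent means 2^1
```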

This has a couple of important implications:

If you try to assign a number with more significant digits than a float can handle, it will be rounded off.

Try this script:

        integer i = 1234567890;
        float f = i;
        llOwnerSay((string)f);

This'll output 1234567936.000000. We're in the vicinity of the intended number, but some precision is lost.

Notice also that it is the binary representation which is rounded off, so it ends up looking like a fairly random number when shown in decimal.
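You can reproduce this outside LSL with a Python sketch. Python's own floats are double precision, so the trick is to round through a 32-bit pack/unpack round-trip, which mimics LSL's single-precision float:

```python
import struct

def to_float32(x):
    """Round x to the nearest 32-bit float, the precision LSL uses."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# Near 2^30, adjacent 32-bit floats are 2^7 = 128 apart, so integers
# in this range get rounded to the nearest multiple of 128.
print(to_float32(1234567890))   # 1234567936.0
```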

One consequence is that if you try to add a very small number to a very large one, the small one will simply disappear in the round-off. This is usually not much of a problem in LSL, where floats typically represent a position in a sim. But if you are trying to, say, simulate planetary systems by calculating the force and the small steps to add as a planet orbits the sun, this problem will show up.
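The disappearing act is easy to demonstrate with the same single-precision round-trip sketch in Python (illustration only; the pack/unpack mimics LSL's 32-bit float):

```python
import struct

def to_float32(x):
    """Round x to the nearest 32-bit float, the precision LSL uses."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

big = to_float32(100000000.0)   # exactly representable (a multiple of 8)
small = to_float32(1.0)

# Near 10^8, adjacent 32-bit floats are 8 apart, so adding 1 changes
# nothing: the sum rounds straight back to the original value.
print(to_float32(big + small) == big)   # True
```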

It has, however, a more insidious effect. What we consider a "nice round" number in decimal may in fact require more precision than we think, because the binary representation does not line up with the decimal one. A floating point only carries the closest approximation it can express, and the round-off may happen at rather surprising times for humans used to thinking in decimal.

In fact, something as simple as 0.1 cannot be represented exactly in binary at all, no matter the precision. So adding 0.1 to a variable ten times will not give exactly 1.0, only something very close to it.
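Here is that accumulation in a Python sketch, rounding to 32-bit float precision after every operation to mimic single-precision arithmetic:

```python
import struct

def to_float32(x):
    """Round x to the nearest 32-bit float after every operation,
    mimicking single-precision arithmetic."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

total = 0.0
for _ in range(10):
    total = to_float32(total + to_float32(0.1))

# The round-off error in each step accumulates and never cancels out.
print(total == 1.0)   # False
```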

So practically, you cannot rely on a float being exactly what you think it is after some calculations; only "very close". Direct comparisons, like if (myFloat == 100), are dangerously error-prone, and it is good practice to work with a margin of tolerance instead, like if (llFabs(myFloat - 100) < 0.0001).
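The tolerance pattern translates directly to other languages. A Python sketch of the same idea (the helper name is invented for illustration):

```python
def roughly_equals(a, b, tolerance=0.0001):
    """Compare two floats within a margin of tolerance, the same
    pattern as LSL's llFabs(a - b) < tolerance."""
    return abs(a - b) < tolerance

print(roughly_equals(99.99999, 100.0))   # True
print(99.99999 == 100.0)                 # False: exact comparison fails
```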