Many years ago, one of my first tasks as a developer was to make some changes to an application that my company's salesmen used to estimate the price for customer projects.
After I started digging around, the second thing I noticed (the first being that all the comments were in German) was that every price (hardware, software, engineering time) was stored as an integer and divided by 10,000 when presented, to show the price in £ (British pounds).
I asked my boss about this, and I was told that "this was how sales and accounting software deals with money".
It didn't take long to work out why.
Why not float (or double, or real)?
Almost all modern computers store data as binary information, and perform binary mathematics. This is fine for whole numbers (up to a limit), since any number can be represented as a combination of various powers of 2. For example, the value 23 is stored in 8-bit memory as
00010111, which is
16 + 4 + 2 + 1 or
2^4 + 2^2 + 2^1 + 2^0.
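You can check this round trip for yourself in most languages; in JavaScript, for example:

```javascript
// Whole numbers convert to and from binary exactly.
console.log((23).toString(2));        // "10111" (leading zeros omitted)
console.log(parseInt("00010111", 2)); // 23
```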
This would be great if all numbers were whole numbers. However, we need to be able to perform mathematics on numbers that have fractional components, such as 0.1, 1/3, sin(45°), or the square root of 2. There have been a few different approaches to this problem, but since the 1980s almost every computer and programming language has used the same solution - the IEEE 754 standard - and this is the basis for the float and double types used in various programming languages, also known as floating-point types.
Floating-point types are not perfect: the limitations of binary storage create some minor issues, but for most applications they are good enough.
Consider this, though:
echo 0.1 + 0.2; // 0.3
echo (0.1 + 0.2) === 0.3; // nothing
console.log(0.1 + 0.2); // 0.30000000000000004
These errors seem harmless enough, and could be fixed by using rounding functions, although this can be a little cumbersome:
echo round(0.1 + 0.2, 1) === 0.3; // 1
console.log(Math.round(10*(0.1 + 0.2))/10); // 0.3
There can also be problems with rounding some numbers:
echo round(0.4999999999999999); // 0
echo round(0.49999999999999999); // 1
echo 0.4999999999999999; // 0.5
echo 0.49999999999999999; // 0.5
console.log(Math.round(0.4999999999999999)); // 0
console.log(Math.round(0.49999999999999999)); // 1
These discrepancies may seem small, but do enough financial mathematics (adding and subtracting values, applying discounts, calculating tax) and eventually you'll get tripped up, and receive a call or email from your head of finance, your clients, or your customers (or maybe all three).
Some languages, including SQL, have a fixed-point type. These allow decimal values to be stored with the same exactness as integers.
However, once these values reach a language without a native fixed-point type (JavaScript's Number.toFixed() method is for presentation and outputs a string), they will be cast to floating-point types, and you end up with the same precision errors you are trying to avoid.
They also have limitations in that you have to specify the size and precision of the fixed-point numbers you want to use. For example, in MySQL, setting a column to
DECIMAL(5,2) will let you store values from -999.99 to 999.99, but you will encounter errors if you try to set the field to 10,000 or 0.001. This means that you need to know the range and precision required before using it.
Money as an integer
So, to avoid all these floating-point precision issues, you should store and work with money values as integers.
There are a couple of ways to do this, such as having separate dollars and cents columns (or pounds/pence, or whatever). However, the simplest approach is to have a single column/value and use a precision factor to convert the number to a money value for presentation.
Depending on how you're accessing the database, you can cast the value into a simple Money object, which handles the presentation and so on. You could even wrap or extend this with a Currency object that handles currency symbols and localised formatting (sometimes the currency symbol precedes the value, sometimes it comes after; some countries write numbers like
1 234.56, and others