Programming

Floating-point number

In programming, a floating-point number, or simply float, represents a number with a fractional part. It carries a fixed number of significant digits, but the position of the point can vary, so the precision of the fractional part depends on the number's magnitude.

The term 'floating-point' comes from the fact that the decimal point is not fixed at a particular position — it 'floats'. One number might have five digits after the point while another has ten. This flexibility arises from the way a floating-point number is typically represented internally: in a form of scientific notation, as a significand scaled by an exponent.
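As a sketch of that internal representation, Python's standard-library `math.frexp` splits a float into its significand and power-of-two exponent, mirroring the significand-times-exponent form described above (binary rather than decimal, since hardware floats use base 2):

```python
import math

x = 6625.5
# Decompose x into m * 2**e, where m is the significand in [0.5, 1)
# and e is an integer exponent — the "floating" part of the format.
m, e = math.frexp(x)
print(m, e)            # significand and exponent
print(m * 2**e == x)   # recombining them recovers x exactly: True
```

Because the exponent moves the point, the same number of significand bits serves both very large and very small magnitudes.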

This contrasts with a fixed-point number, in which the point always sits at the same position, giving a fixed number of digits both before and after it.
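The contrast can be sketched in Python (which has no built-in fixed-point type, so the fixed-point side is emulated here with scaled integers — a common convention, e.g. storing money as cents):

```python
# Fixed-point sketch: an integer count of cents, i.e. exactly two
# digits after the (implicit) point, always.
price_cents = 19_99            # represents 19.99 exactly
total_cents = price_cents * 3  # integer arithmetic stays exact
print(total_cents)             # 5997, i.e. 59.97

# Floating point: the point floats, but values like 0.1 have no
# exact binary representation, so small rounding errors appear.
print(0.1 + 0.2 == 0.3)        # False
```

The fixed-point version trades range for uniform precision; the float trades exactness of decimal fractions for a point that adapts to the number's magnitude.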
