A method of representing numbers inside the computer in which the decimal point (more correctly, the binary point) is permitted to “float” to different positions within the number. A dedicated group of bits within the number itself (the exponent) keeps track of the point’s position, while the remaining bits (the significand, or mantissa) hold the number’s significant digits. Compare fixed-point notation.
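
As an illustration, here is a minimal Python sketch (not part of the original entry) that splits a number into those bit fields, assuming the IEEE 754 double-precision layout used by virtually all modern hardware: 1 sign bit, 11 exponent bits, and 52 fraction bits.

```python
import struct

def decompose(x: float) -> tuple[int, int, int]:
    """Split an IEEE 754 double into its sign, exponent, and fraction fields."""
    # Reinterpret the 8-byte double as a 64-bit unsigned integer.
    bits = struct.unpack(">Q", struct.pack(">d", x))[0]
    sign = bits >> 63                  # 1 bit: sign
    exponent = (bits >> 52) & 0x7FF    # 11 bits: biased exponent -- tracks the point's position
    fraction = bits & ((1 << 52) - 1)  # 52 bits: fraction (significand minus the implicit leading 1)
    return sign, exponent, fraction

sign, exp, frac = decompose(6.5)
# 6.5 = 1.101 (binary) x 2^2, so the unbiased exponent is 2 (stored as 2 + 1023 = 1025)
print(sign, exp - 1023, frac)  # -> 0 2 2814749767106560
```

Because the exponent field moves the binary point, the same fixed number of bits can represent both very large and very small magnitudes, at the cost of limited precision.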