Normal number (computing)
In computing, a normal number is a non-zero number in a floating-point representation which is within the balanced range supported by a given floating-point format.

The magnitude of the smallest normal number in a format is given by b^emin, where b is the base (radix) of the format (usually 2 or 10) and emin depends on the size and layout of the format.
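For binary64, for instance, this works out to 2^−1022. As a minimal sketch, assuming the platform's float is IEEE 754 binary64 (true on virtually all modern platforms), the value can be checked against Python's sys.float_info:

```python
import sys

b, emin = 2, -1022                    # binary64 parameters (see the table below)
smallest_normal = float(b) ** emin    # b^emin = 2^-1022

# sys.float_info.min is the smallest positive *normal* float
assert smallest_normal == sys.float_info.min
print(smallest_normal)                # 2.2250738585072014e-308
```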

Similarly, the magnitude of the largest normal number in a format is given by

b^emax × (b − b^(1−p)),

where p is the precision of the format in digits and emax is −emin + 1.
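Instantiating the formula for binary64 (p = 53, emax = 1023) gives the familiar largest double. A minimal Python check, again assuming IEEE 754 binary64 floats:

```python
import sys

b, p, emax = 2, 53, 1023                                   # binary64 parameters
largest_normal = (b - float(b) ** (1 - p)) * float(b) ** emax   # b^emax × (b − b^(1−p))

# sys.float_info.max is the largest finite (and normal) float
assert largest_normal == sys.float_info.max
print(largest_normal)                                      # 1.7976931348623157e+308
```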

In the IEEE 754 binary and decimal formats, p, emin, and emax have the following values:
Format       p     emin     emax
binary16     11    −14      15
binary32     24    −126     127
binary64     53    −1022    1023
binary128    113   −16382   16383
decimal32    7     −95      96
decimal64    16    −383     384
decimal128   34    −6143    6144
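These parameters can also be read back from a language runtime. As a small sketch, Python's sys.float_info reports the binary64 row of the table, though with an off-by-one convention: min_exp and max_exp store emin + 1 and emax + 1.

```python
import sys

# Recover the binary64 row of the table from the runtime.
# Python stores min_exp = emin + 1 and max_exp = emax + 1.
p    = sys.float_info.mant_dig      # 53
emin = sys.float_info.min_exp - 1   # -1022
emax = sys.float_info.max_exp - 1   # 1023
print(p, emin, emax)
```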


For example, in the smallest decimal format (decimal32), the range of positive normal numbers is 10^−95 through 9.999999 × 10^96.
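As a rough sketch, Python's decimal module can model a decimal32-like arithmetic: it is not IEEE 754 decimal32 itself, but its prec, Emin, and Emax parameters line up with p, emin, and emax from the table above.

```python
from decimal import Decimal, Context, Overflow

# A decimal32-like context (not IEEE decimal32, but the parameters match)
ctx = Context(prec=7, Emin=-95, Emax=96)

largest = ctx.plus(Decimal("9.999999E+96"))   # largest positive normal; fits
print(largest)

try:
    ctx.plus(Decimal("1E+97"))                # exponent one beyond emax
except Overflow:
    print("overflow: beyond the decimal32 range")
```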

Non-zero numbers smaller in magnitude than the smallest normal number are called denormal (or subnormal) numbers. Zero is neither normal nor subnormal.
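A value can therefore be classified by comparing its magnitude against the smallest normal value. A minimal sketch for binary64; the classify helper is purely illustrative:

```python
import math
import sys

def classify(x: float) -> str:
    """Classify a binary64 value as zero, subnormal, normal, or non-finite."""
    if x == 0.0:
        return "zero"
    if not math.isfinite(x):
        return "non-finite"
    return "normal" if abs(x) >= sys.float_info.min else "subnormal"

print(classify(1.0))                     # normal
print(classify(sys.float_info.min / 2))  # subnormal: 2^-1023 < 2^-1022
print(classify(0.0))                     # zero
```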