Oracle Internals Notes

Numeric datatypes

Oracle allows several datatype specifications for numeric columns. The following table illustrates most of the alternatives, except that DEC may be used as an abbreviation for DECIMAL, and INT may be used as an abbreviation for INTEGER.

SQL> create table numbers (
  2   number_u       number,
  3   numeric_u      numeric,
  4   decimal_u      decimal,
  5   integer_u      integer,
  6   smallint_u     smallint,
  7   number_p       number(9),
  8   numeric_p      numeric(9),
  9   decimal_p      decimal(9),
 10   number_ps      number(9,2),
 11   numeric_ps     numeric(9,2),
 12   decimal_ps     decimal(9,2),
 13   number_s       number(*,2),
 14   numeric_s      numeric(*,2),
 15   decimal_s      decimal(*,2),
 16   float_u        float,
 17   float_p        float(30),
 18   real_u         real,
 19   double_u       double precision); 

Table created.

SQL> select column_name, data_type, data_precision, data_scale
  2  from user_tab_columns where table_name = 'NUMBERS';

COLUMN_NAME                DATA_TYPE DATA_PRECISION DATA_SCALE
-------------------------- --------- -------------- ---------- 
NUMBER_U                   NUMBER
NUMERIC_U                  NUMBER                            0
DECIMAL_U                  NUMBER                            0
INTEGER_U                  NUMBER                            0
SMALLINT_U                 NUMBER                            0
NUMBER_P                   NUMBER                 9          0
NUMERIC_P                  NUMBER                 9          0
DECIMAL_P                  NUMBER                 9          0
NUMBER_PS                  NUMBER                 9          2
NUMERIC_PS                 NUMBER                 9          2
DECIMAL_PS                 NUMBER                 9          2
NUMBER_S                   NUMBER                            2
NUMERIC_S                  NUMBER                            2
DECIMAL_S                  NUMBER                            2
FLOAT_U                    FLOAT                126
FLOAT_P                    FLOAT                 30
REAL_U                     FLOAT                 63
DOUBLE_U                   FLOAT                126

18 rows selected.

For NUMBER, NUMERIC and DECIMAL columns the scale fixes the position of the least significant digit. If specified, it must be in the range of -84 (84 digits to the left of the decimal point) to 127 (127 digits to the right of the decimal point). Digits beyond the scale are rounded prior to storage. Of course, the scale is implicitly zero for INTEGER and SMALLINT columns. Similarly, it defaults to zero for NUMBER, NUMERIC and DECIMAL columns for which a precision has been specified and for unconstrained NUMERIC and DECIMAL columns, but it remains unset for unconstrained NUMBER columns.
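The effect of a scale can be sketched outside the database with Python's decimal module. This is a hypothetical illustration, not Oracle's code: `apply_scale` is an invented helper, and ROUND_HALF_UP is used because it matches Oracle's rounding of ties away from zero.

```python
# Illustration only: mimic how a declared scale rounds values before
# storage. ROUND_HALF_UP rounds ties away from zero, as Oracle does.
from decimal import Decimal, ROUND_HALF_UP

def apply_scale(value, scale):
    """Round value to the given scale. A positive scale counts digits to
    the right of the decimal point; a negative scale rounds to the left
    of it."""
    quantum = Decimal(1).scaleb(-scale)   # scale 2 -> 0.01, scale -2 -> 1E+2
    return Decimal(str(value)).quantize(quantum, rounding=ROUND_HALF_UP)

print(apply_scale("123.456", 2))        # 123.46, as for NUMBER(9,2)
print(apply_scale("123.455", 2))        # 123.46: the tie rounds away from zero
print(int(apply_scale("123.456", -2)))  # 100: scale -2 rounds to hundreds
```

The same effect can be seen in the database by inserting 123.456 into the decimal_ps column above and selecting it back as 123.46.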

For NUMBER, NUMERIC and DECIMAL columns the precision is the maximum number of significant decimal digits permitted. If it is specified, the precision must be in the range of 1 to 38 decimal digits. An ORA-01438 error is returned if an attempt is made to store a value that needs more than precision - scale digits to the left of the (possibly implied) scale position. If a precision is not specified, stored values are truncated to the maximum precision supported by the implementation. Oracle's implementation allows for up to 40 decimal digits of precision. This is the implicit precision of INTEGER and SMALLINT columns, as well as that of unconstrained NUMBER, NUMERIC and DECIMAL columns.
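The condition behind ORA-01438 can be sketched in Python: after rounding to the scale, the value must fit in precision - scale digits to the left of the decimal point. This is a hypothetical illustration (`fits` is an invented name), not Oracle's actual check.

```python
# Illustration only: the condition behind ORA-01438. After rounding to
# the scale, the value must fit in (precision - scale) digits to the
# left of the decimal point.
from decimal import Decimal, ROUND_HALF_UP

def fits(value, precision, scale):
    """True if value can be stored in NUMBER(precision, scale)."""
    rounded = Decimal(str(value)).quantize(Decimal(1).scaleb(-scale),
                                           rounding=ROUND_HALF_UP)
    return abs(rounded) < Decimal(10) ** (precision - scale)

print(fits("1234567.89", 9, 2))    # True:  7 digits left of the point
print(fits("12345678.9", 9, 2))    # False: 8 digits -> ORA-01438
print(fits("9999999.999", 9, 2))   # False: rounds up to 10000000.00
```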

As is evident from the USER_TAB_COLUMNS query above, Oracle uses its NUMBER datatype internally to represent all of these datatype specifications.

FLOAT, REAL and DOUBLE PRECISION columns differ only in that ANSI requires their precision to be specified in terms of binary bits, not decimal digits. The precision of FLOAT columns defaults to 126 bits, and the precision of REAL and DOUBLE PRECISION columns is fixed at 63 and 126 bits respectively. Oracle uses its NUMBER datatype internally to represent these datatype specifications as well. This can be demonstrated by inserting the same value into sample NUMBER and FLOAT columns and then using the dump function to verify that the datatype number and the bytes stored are identical.

SQL> insert into numbers (number_u, float_u) values (99.99, 99.99);

1 row created.

SQL> select dump(number_u), dump(float_u) from numbers;

DUMP(NUMBER_U)                 DUMP(FLOAT_U)
------------------------------ ------------------------------
Typ=2 Len=3: 193,100,100       Typ=2 Len=3: 193,100,100
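Those bytes follow from Oracle's base-100 representation: for a positive value, an exponent byte of 193 plus the base-100 exponent, followed by mantissa digits stored as digit + 1. The following Python sketch (an illustration of that layout, not Oracle's code) reproduces the dump output above.

```python
# Illustration only: encode a positive value in Oracle's internal NUMBER
# layout - one exponent byte (193 + base-100 exponent), then base-100
# mantissa digits each stored with an offset of 1.
from decimal import Decimal

def encode_positive_number(value):
    v = Decimal(str(value))
    assert v > 0
    # Normalise so that 1 <= v < 100, tracking the base-100 exponent.
    e = 0
    while v >= 100:
        v /= 100
        e += 1
    while v < 1:
        v *= 100
        e -= 1
    # Peel off base-100 digits; Oracle stores at most 20 mantissa bytes.
    digits = []
    while v != 0 and len(digits) < 20:
        d = int(v)
        digits.append(d + 1)
        v = (v - d) * 100
    return [193 + e] + digits

print(encode_positive_number("99.99"))   # [193, 100, 100]
print(encode_positive_number("1"))       # [193, 2]
print(encode_positive_number("100"))     # [194, 2]
```

Reading the dump backwards: byte 193 is base-100 exponent 0, and each byte 100 is the base-100 digit 99, so 193,100,100 decodes to 99 + 99/100 = 99.99.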

Binary precision columns are reported as FLOAT in the USER_TAB_COLUMNS family of views only to preserve the semantics of their precision. This relies on the fact that these are the only columns that have a precision but no scale, as can be seen in the decode expression used to translate datatype number 2 into a datatype name in the view text for, say, USER_TAB_COLUMNS in catalog.sql.

decode(c.scale,
       null, decode(c.precision#, null, 'NUMBER', 'FLOAT'),
       'NUMBER'),

Although the binary precision is preserved in this way, Oracle actually uses the next greater decimal precision when manipulating such numbers. Because Oracle's implementation stores one pair of decimal digits per byte, the formula for converting a binary precision into the next greater decimal precision is as follows.

decimal_precision = ceil(binary_precision * log(10, 2))
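The formula can be checked against the precisions reported by USER_TAB_COLUMNS above with a quick sketch in Python.

```python
# Check the binary-to-decimal precision conversion against the FLOAT
# precisions reported in USER_TAB_COLUMNS: 126 -> 38, 63 -> 19, 30 -> 10.
import math

def decimal_precision(binary_precision):
    return math.ceil(binary_precision * math.log10(2))

for bits in (126, 63, 30, 2):
    print(bits, "->", decimal_precision(bits))   # 38, 19, 10 and 1
```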

For example, a FLOAT column with a binary precision of 2 bits is implemented as a NUMBER column with 1 decimal digit of precision and no fixed scale. Thus the number 7, which requires 3 binary bits, will nevertheless be stored exactly, whereas the number 11, which has two decimal digits, will be rounded to 10 because only one decimal digit of precision is allowed.

SQL> create table float_check (f float(2));

Table created.

SQL> insert into float_check values (7);

1 row created.

SQL> insert into float_check values (11);

1 row created.

SQL> select * from float_check;

         F
----------
         7
        10

It is legitimate for a database implementation to use higher precision than requested in this way. Therefore, database applications should always round data values explicitly when required, and should not rely on the precision of the datatype to round data implicitly.
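Such rounding to a number of significant digits can be sketched with a decimal context in Python. This is an illustration only: `round_to_precision` is an invented helper, and ROUND_HALF_UP again matches Oracle's rounding of ties away from zero.

```python
# Illustration only: round to a decimal precision with no fixed scale,
# as Oracle does for FLOAT(2) (1 decimal digit): 7 stays 7, 11 -> 10.
from decimal import Context, ROUND_HALF_UP

def round_to_precision(value, digits):
    """Round to the given number of significant decimal digits."""
    ctx = Context(prec=digits, rounding=ROUND_HALF_UP)
    return ctx.create_decimal(str(value))

print(round_to_precision(7, 1))    # 7
print(round_to_precision(11, 1))   # 1E+1, i.e. 10
```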


Ixora Pty Ltd.   All rights reserved.
12-Oct-2007 22:22