# Interview Question of the Week #058 – What is the Difference Among DECIMAL, FLOAT and NUMERIC Datatype?

Question: What is the Difference Among DECIMAL, FLOAT and NUMERIC Datatype?

The basic difference between DECIMAL and NUMERIC:
There is none. They are exactly the same data type with two different names.

The basic difference between DECIMAL/NUMERIC and FLOAT:
FLOAT is an approximate-number data type, which means that not all values in its range can be represented exactly.
DECIMAL/NUMERIC is a fixed-precision data type, which means that every value in its range can be represented exactly with the specified precision and scale.
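A quick way to see the difference is the classic 0.1 + 0.2 case. This is a minimal sketch in T-SQL; the exact digits FLOAT displays can vary slightly with the client and server version:

```sql
-- FLOAT is approximate: 0.1 and 0.2 have no exact binary representation,
-- so their sum carries a tiny error.
SELECT CAST(0.1 AS FLOAT) + CAST(0.2 AS FLOAT) AS float_sum;
-- typically returns something like 0.30000000000000004, not 0.3

-- DECIMAL is exact: the same arithmetic at a fixed precision and scale
-- yields exactly 0.30.
SELECT CAST(0.1 AS DECIMAL(10, 2)) + CAST(0.2 AS DECIMAL(10, 2)) AS decimal_sum;
```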

Converting from DECIMAL or NUMERIC to FLOAT can cause some loss of precision. For the DECIMAL and NUMERIC data types, SQL Server considers each specific combination of precision and scale a different data type: DECIMAL(4,2) and DECIMAL(6,4) are different data types. This means that 11.22 stored as DECIMAL(4,2) and 11.2222 stored as DECIMAL(6,4) have different types, whereas this is not the case for FLOAT: both 11.22 and 11.2222 stored as FLOAT have the same data type.
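To see that SQL Server really does track precision and scale per value, `SQL_VARIANT_PROPERTY` can report them; the second statement sketches the possible loss when converting a high-precision DECIMAL to FLOAT (the variable names here are made up for illustration):

```sql
-- Each (precision, scale) combination is its own data type:
-- 11.22 fits in DECIMAL(4,2), 11.2222 needs DECIMAL(6,4).
DECLARE @a DECIMAL(4, 2) = 11.22;
DECLARE @b DECIMAL(6, 4) = 11.2222;

SELECT
    SQL_VARIANT_PROPERTY(CAST(@a AS sql_variant), 'BaseType')  AS a_type,   -- decimal
    SQL_VARIANT_PROPERTY(CAST(@a AS sql_variant), 'Precision') AS a_prec,   -- 4
    SQL_VARIANT_PROPERTY(CAST(@a AS sql_variant), 'Scale')     AS a_scale,  -- 2
    SQL_VARIANT_PROPERTY(CAST(@b AS sql_variant), 'Precision') AS b_prec,   -- 6
    SQL_VARIANT_PROPERTY(CAST(@b AS sql_variant), 'Scale')     AS b_scale;  -- 4

-- Converting DECIMAL to FLOAT can lose precision: FLOAT holds at most
-- roughly 15-17 significant decimal digits, so an 18-digit DECIMAL
-- value cannot round-trip exactly.
DECLARE @exact DECIMAL(20, 9) = 123456789.123456789;
SELECT CAST(@exact AS FLOAT) AS lossy_float;  -- trailing digits are rounded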

Well, this is a very simple interview question, and I honestly expect everyone to know the answer, as it is more a computer-science basic than a SQL Server-specific question. With that said, I am also of the opinion that this kind of question does not judge a person's SQL skills.

Reference: Pinal Dave (https://blog.sqlauthority.com)


#### 1 Comment

• Actually, in the ANSI/ISO standards this is a little more complicated. There is technically a difference between FLOAT and REAL because, when we were defining the standards in the late 1980s, vendors had various implementations; the IEEE floating-point standard did not exist at the time.

There is also a difference between NUMERIC(p,s) and DECIMAL(p,s). This had to do with the existence of BCD numbers versus COBOL-style picture representations in computer hardware. The DECIMAL data type has to have at least the specified number of decimal places, but it can have more; this was for BCD. NUMERIC, on the other hand, has to have exactly the specified number of decimal places, to match COBOL.

Microsoft simply picked a subset of the allowable implementations. This is why NUMERIC and DECIMAL are the same in SQL Server, and why there is no practical difference between REAL and FLOAT now that the IEEE standards are implemented in hardware.