Select the appropriate difference(s) between the decimal, float and double data types in C#:
i. float and double are floating binary point types, while decimal is a floating decimal point type.
ii. Precision is about 7 digits for float, 15-16 digits for double, and 28-29 digits for decimal.
iii. Some values, such as 0.1, cannot be exactly represented in binary floating point; for those values decimal is more appropriate.
A. i
B. i, iii
C. i, ii, iii
D. ii, iii
Answer: Option C
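
To make the three statements concrete, here is a minimal C# sketch (not part of the original question); the printed values assume a modern .NET runtime's round-trip formatting, so the exact output text may vary slightly:

using System;

class PrecisionDemo
{
    static void Main()
    {
        // float keeps roughly 7 significant digits.
        float f = 1.234567890123456789f;
        Console.WriteLine(f);            // ~1.2345679

        // double keeps roughly 15-16 significant digits.
        double d = 1.234567890123456789;
        Console.WriteLine(d);            // ~1.2345678901234568

        // decimal keeps 28-29 significant digits.
        decimal m = 1.2345678901234567890123456789m;
        Console.WriteLine(m);            // 1.2345678901234567890123456789

        // 0.1 has no exact binary representation, so ten additions
        // of 0.1 as a double do not sum to exactly 1.0 ...
        double dsum = 0.0;
        for (int i = 0; i < 10; i++) dsum += 0.1;
        Console.WriteLine(dsum == 1.0);  // False

        // ... while decimal stores 0.1 exactly.
        decimal msum = 0.0m;
        for (int i = 0; i < 10; i++) msum += 0.1m;
        Console.WriteLine(msum == 1.0m); // True
    }
}

The last two comparisons are why decimal is the usual choice for money: base-10 fractions round-trip exactly, whereas float and double trade exactness for speed and range.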
Related Questions on Basic Syntax and Data Types in C#
What is the correct syntax to declare a variable in C#?
A. num = int;
B. var num;
C. num int;
D. int num;
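
As a quick illustration, a C# declaration puts the type before the name; a minimal runnable sketch using top-level statements (the names and values here are illustrative):

using System;

int num;          // declaration only: <type> <name>;
num = 42;         // assigned later
int count = 10;   // declaration combined with initialization
Console.WriteLine(num + count);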
What is the purpose of the 'var' keyword in C#?
A. Declares a constant
B. Converts a variable to string
C. Implicitly declares a variable
D. Defines a method
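
For comparison, a short sketch of implicit typing with var (again with illustrative names); the variable is still statically typed, since the compiler infers the type from the required initializer:

using System;

var text = "hello";   // inferred as string
var number = 3 + 4;   // inferred as int
Console.WriteLine($"{text} {number}");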
