Discuss the difference between the precision of a measurement and the terms single and double precision, as they are used in computer science, typically to represent floating-point numbers that require 32 and 64 bits, respectively.

What will be an ideal response?

The precision of floating-point numbers is a maximum precision. More explicitly, precision is often expressed in terms of the number of significant digits used to represent a value. Thus, a single-precision number can only represent values with up to 32 bits, roughly 9 decimal digits of precision. However, the precision of a measured value stored using 32 bits (or 64 bits) is often far less than 32 bits (or 64 bits), because a measurement's precision is determined by how the value was obtained, not by how many bits are used to store it.
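As a minimal illustration (not part of the original answer, and assuming a standard C compiler), the sketch below prints the guaranteed decimal precision of float and double reported by <float.h> and shows that even a simple value such as 0.1 is stored only approximately; a measured quantity's own precision may be far lower than either bound.

#include <stdio.h>
#include <float.h>

int main(void) {
    /* FLT_DIG and DBL_DIG: decimal digits guaranteed to survive a
       round trip through float (32 bits) and double (64 bits). */
    printf("float  (32 bits): %d guaranteed decimal digits\n", FLT_DIG);
    printf("double (64 bits): %d guaranteed decimal digits\n", DBL_DIG);

    /* 0.1 has no exact binary representation, so both formats store
       an approximation; double simply carries more significant digits. */
    float  f = 0.1f;
    double d = 0.1;
    printf("0.1 as float : %.20f\n", f);
    printf("0.1 as double: %.20f\n", d);
    return 0;
}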
