The precision of the result of an operation depends on the precision with which we know the data.

*Example:* Suppose we know the dimensions of a rectangle with a precision of only
one decimal digit after the point. Then the area of the rectangle cannot have
a higher precision, and hence it makes no sense to consider the second decimal
digit as significant:

9.2 * 5.3 = 48.76 (the second decimal digit is not significant)

9.25 * 5.35 = 49.4875 ≈ 49.49 (here it is)
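This can be checked with a simple interval calculation. The sketch below assumes each stated figure is correctly rounded, so the true value lies within ±0.05 of it; the function name `area_bounds` and the `half_ulp` parameter are illustrative choices, not from the original text:

```python
# Propagate the input uncertainty through the multiplication:
# each side known to one decimal digit means a true value within
# ±0.05 of the stated figure (assuming correct rounding).
def area_bounds(width, height, half_ulp=0.05):
    """Return the smallest and largest areas consistent with the
    stated one-decimal-digit precision of width and height."""
    lo = (width - half_ulp) * (height - half_ulp)
    hi = (width + half_ulp) * (height + half_ulp)
    return lo, hi

lo, hi = area_bounds(9.2, 5.3)
print(f"the area lies between {lo:.4f} and {hi:.4f}")
```

For the first example the interval is roughly [48.04, 49.49]: it is more than a full unit wide, so not even the first decimal digit of 48.76 is pinned down by the data, let alone the second.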

This is not caused by the representation of numbers in a programming language, but by the limits of our knowledge of a problem's input values.