Well, clearly we have good consensus among scientists... but I'm curious about the mathematician's perspective.

Scientists are trained that 3.2 is 3.2, not 3.2000 or anything else. Depending on how much you know about the next digit (via statistical analysis) you might know that 3.2 is actually somewhere between 3.18 and 3.22, or something like that. Personally I prefer +/- notation for that, but it does get you back to why significant figures are important.
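
If it helps to see that concretely, here is a minimal sketch in Python using the common "half a unit in the last written digit" reading of implied precision (the helper name and the convention choice are mine, not any standard library):

def implied_interval(text):
    """(value, half_unit) for a decimal string, reading its precision as
    half a unit in the last written digit."""
    decimals = len(text.split(".")[1]) if "." in text else 0
    half_unit = 0.5 * 10 ** (-decimals)
    return float(text), half_unit

for s in ("3.2", "3.2000"):
    value, half = implied_interval(s)
    print(f"{s:>6} -> {value} +/- {half:g} (i.e. {value - half:g} to {value + half:g})")
    #    3.2 -> 3.2 +/- 0.05  (i.e. 3.15 to 3.25)
    # 3.2000 -> 3.2 +/- 5e-05 (i.e. 3.19995 to 3.20005)

Same numeral, very different claims about what you actually know.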


My experience is also that one doesn't always wind up with precisely the same result, depending upon WHEN you round, which is why one carries an additional (and sometimes two additional) significant figures through calculations, rounding at the end so that you don't accidentally introduce rounding errors as you go. So that is WHY you add that zero, basically. Kind of. It's certainly why I taught college students in STEM to use that subscript designation. In my classes, students would have been using the following notation:

  3.1416₀
+ 2.71828
---------
  5.8598₈

because neither the subscripted zero (artificially used as a placeholder) nor the subscripted 8 in the resulting sum is significant.
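
For anyone who would rather see that bookkeeping as code, here is a rough sketch (the helper is mine, made up purely for illustration): the sum is only good to as many decimal places as the least precise addend, so the padded zero and the trailing 8 are guard digits, carried along and only rounded away at the very end.

def decimal_places(text):
    """Digits written after the decimal point."""
    return len(text.split(".")[1]) if "." in text else 0

a, b = "3.1416", "2.71828"                        # pi to 5 sig figs, e to 6
keep = min(decimal_places(a), decimal_places(b))  # limited to 4 decimal places

raw_sum = float(a) + float(b)
print(f"{raw_sum:.5f}")        # 5.85988 -- carries one guard digit
print(f"{raw_sum:.{keep}f}")   # 5.8599  -- rounded once, at the end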





This method does mean that you have to understand how to manage significant digits through all kinds of calculations and transforms, so that you don't wind up confused about how many are legit in a final result. Addition is the easiest case, clearly.
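
To make that a bit more concrete, here is a toy sketch contrasting the two most common rules (the helper names are my own, not from any library): addition and subtraction are limited by decimal places, multiplication and division by the count of significant figures.

def decimal_places(text):
    return len(text.split(".")[1]) if "." in text else 0

def sig_figs(text):
    # crude count, good enough for simple decimals like these
    return len(text.replace("-", "").replace(".", "").lstrip("0"))

a, b = "3.1416", "2.71828"
places = min(decimal_places(a), decimal_places(b))   # 4 decimal places
figs = min(sig_figs(a), sig_figs(b))                 # 5 significant figures

print(f"{float(a) + float(b):.{places}f}")      # 5.8599
print(f"{float(a) * float(b):.{figs - 1}e}")    # 8.5397e+00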

It just seems better to introduce some ideas early on-- like the notion that unknown values aren't necessarily zero if they haven't been determined.





Schrödinger's cat walks into a bar. And doesn't.