Python's decimal.Decimal defaults to scientific notation for representing very small float values. Since the docs suggest using Decimal for retaining float precision, I think it makes sense to also mention this aspect, because depending on the developer's use case this may or may not be a problem.
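For illustration, the decimal module's default string conversion switches to scientific notation once a value's adjusted exponent drops below -6:

```python
from decimal import Decimal

print(Decimal("0.000001"))   # 0.000001  -- adjusted exponent is -6, plain notation
print(Decimal("0.0000001"))  # 1E-7      -- adjusted exponent is -7, scientific notation
```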
I believe that has to do with display of the value and not specifically to do with how it is represented internally. We can add a note to the documentation, however, to clarify.
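To illustrate the distinction, the stored value is identical either way; only the default string conversion differs:

```python
from decimal import Decimal

a = Decimal("0.0000001")
b = Decimal("1E-7")
print(a == b)  # True -- same exact value internally
print(str(a))  # 1E-7 -- scientific notation appears only in the display
```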
Sorry, yes, I actually meant to refer to the string representation.
Thanks, a note would be great!
A bit more background: when I extract data from a database (which wasn't Oracle until recently), e.g. to load into other systems, I rarely fiddle with how the data is written out, because Python and/or the DB client takes care of it. In this case, using Decimal may set the developer up for an unexpected failure, but I can also see why it's a great solution for retaining precision. I just noticed there was an attempt to use Decimal as the default, which was reverted for the same reason :)
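As a sketch of that failure mode, assuming a hypothetical export loop that writes fetched values using their default str() form; fixed-point formatting via the "f" format spec avoids it:

```python
from decimal import Decimal

# Hypothetical rows fetched as Decimal to retain precision
rows = [("item-1", Decimal("0.0000001")), ("item-2", Decimal("12.5"))]

for name, value in rows:
    print(f"{name},{value}")    # item-1,1E-7 -- may trip up a downstream parser
    print(f"{name},{value:f}")  # item-1,0.0000001 -- fixed-point output
```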
https://github.com/oracle/python-cx_Oracle/blob/master/doc/src/user_guide/sql_execution.rst#fetched-number-precision
Add a code snippet and explanation similar to the above to the corresponding documentation section.
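A possible shape for that snippet, building on the output type handler shown in the linked section; the handler name, connection details, and the wording of the comments are illustrative, not the final docs text:

```python
import decimal
import cx_Oracle

# Fetch Oracle NUMBER columns as decimal.Decimal to retain full precision
def number_to_decimal(cursor, name, default_type, size, precision, scale):
    if default_type == cx_Oracle.NUMBER:
        return cursor.var(decimal.Decimal, arraysize=cursor.arraysize)

connection = cx_Oracle.connect("user", "password", "dsn")  # illustrative credentials
cursor = connection.cursor()
cursor.outputtypehandler = number_to_decimal

cursor.execute("select 0.0000001 from dual")
value, = cursor.fetchone()
print(value)               # expected: 1E-7 -- default str() uses scientific notation
print(format(value, "f"))  # expected: 0.0000001 -- fixed-point, if plain output is needed
```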