Description
To allow such small tolerances, the code has to be reworked.
The code relies on differences between dual gaps, which are close to 1, so a tolerance of 10^-15 requires about 15 significant digits of precision, which is the maximum the double type provides. Therefore, when chasing such high-precision numbers, rounding errors start to dominate and weird behaviour is expected (for instance, obtaining dual gaps below 1, which in theory should never happen after the first iteration).
Expect weird behaviour also for input tolerances "close to" 10^-15, since the computations are quite convoluted and rounding errors may accumulate in unexpected places.
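To make the limit concrete, here is a minimal, self-contained illustration (the values are made up for the example): a double cannot resolve differences around 1 finer than the machine epsilon, roughly 2.2 * 10^-16.

```python
import numpy as np

# Machine epsilon of the double type: the smallest eps with 1 + eps != 1
eps = np.finfo(np.float64).eps
print(eps)                 # ~2.220446049250313e-16

# A dual gap stored as a double near 1 cannot resolve anything finer:
print(1.0 + 1e-16 == 1.0)  # True: the 1e-16 excess is rounded away
print(1.0 + 1e-15 == 1.0)  # False: 1e-15 is still representable next to 1
```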
Python has no global setting to increase the precision of its floating-point numbers. The alternatives to fix this issue are:
- Change all numbers to the `decimal` type, which allows much higher precision. The main drawback of this approach is that if every number is made a `decimal` at the beginning, all `numpy` matrix multiplications become suboptimal: `numpy` does not recognize `decimal`, so all of its built-in optimizations are deactivated (a sketch of the resulting object-dtype fallback follows this list).
- Convert the outputs to `decimal` after the matrix multiplications. This alternative might work without much suboptimization (or memory waste), but it would certainly contaminate the code with `decimal` casts in many places (see the second sketch below).
- Maybe the problem can just be avoided by subtracting 1 from the obtained dual gaps, so that the floating-point numbers regain digits of precision near zero. But the rounding errors likely happen earlier than at this output point (see the last sketch below).
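A minimal sketch of the first alternative, with made-up matrices, showing what happens when `numpy` holds `Decimal` objects: the arrays fall back to `object` dtype and every operation runs element by element in Python, bypassing the optimized floating-point kernels.

```python
import numpy as np
from decimal import Decimal

# Decimal values force numpy into object-dtype arrays
A = np.array([[Decimal("1.0"), Decimal("2.0")],
              [Decimal("3.0"), Decimal("4.0")]], dtype=object)
x = np.array([Decimal("0.5"), Decimal("0.25")], dtype=object)

# This works, but numpy dispatches to Python-level Decimal arithmetic
# instead of its fast floating-point routines.
y = A.dot(x)
print(y)  # object array of Decimals, computed element by element
```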
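A minimal sketch of the second alternative (names and shapes are illustrative, not taken from the actual code): the products stay in fast double precision, and only the scalars that enter the dual-gap computation are cast to `Decimal`. Note the caveat in the comments: the cast cannot restore precision that was already lost inside the product.

```python
import numpy as np
from decimal import Decimal, getcontext

getcontext().prec = 30  # 30 significant digits, well past the double's ~15-16

# Illustrative inputs; the real code would use its own matrices
A = np.array([[1.0, 2.0], [3.0, 4.0]])
x = np.array([0.5, 0.25])

# Fast path: the multiplication runs in optimized double precision
y = A @ x

# Casting point: only the scalars feeding the dual gap become Decimal.
# This avoids further rounding downstream, but it cannot recover digits
# already rounded away inside the double-precision product.
primal = Decimal(float(y[0]))
dual = Decimal(float(y[1]))
print(primal - dual)  # subsequent arithmetic carries 30 digits
```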
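Finally, a minimal demonstration of why the subtraction trick probably comes too late: once a tiny excess has been added to 1 in double precision, it is rounded away, and subtracting 1 afterwards cannot bring it back.

```python
tiny = 1e-18        # far below the machine epsilon (~2.2e-16)
gap = 1.0 + tiny    # the rounding error happens here, not at the output
print(gap - 1.0)    # 0.0 -- the excess was absorbed into the 1
print(tiny)         # 1e-18 -- preserved only if tracked separately from the start
```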