
Compiling TOUGH3 source code using quadruple precision

My understanding is that when using double precision, it is generally best practice to avoid models where permeabilities vary by more than ~8 orders of magnitude (the default `dfac`?). This threshold is typically linked to the square root of machine precision (the square root of machine epsilon, where ε ≈ 10⁻¹⁶).
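
For reference, here is a minimal Fortran sketch (the program name and output formatting are my own) that prints machine epsilon and its square root for IEEE double precision, which is where the ~8-orders-of-magnitude rule of thumb comes from:

```fortran
program eps_demo
  use iso_fortran_env, only: real64
  implicit none
  real(real64) :: eps

  eps = epsilon(1.0_real64)   ! machine epsilon for IEEE double, ~2.22e-16
  print *, 'epsilon       =', eps
  print *, 'sqrt(epsilon) =', sqrt(eps)   ! ~1.49e-8, i.e. ~8 orders of magnitude
end program eps_demo
```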

However, if a specific application requires overcoming this limitation, it might seem logical to simply compile the source code in quadruple precision. Has the development team attempted this, and can you offer any guidance?

1 reply

    • kenny
    • 17 hrs ago

    To my knowledge, we have not attempted to compile TOUGH using quadruple precision. In TOUGH, the primary limitation is not floating-point roundoff alone, but rather Jacobian conditioning and the robustness of the linear solver. When permeabilities differ by more than ~8 orders of magnitude, flux terms and the resulting Jacobian entries become poorly scaled. The use of different DFAC values is intended to prevent the Jacobian from becoming numerically singular, not merely to avoid floating-point underflow.
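
    For intuition on why the increment factor matters, here is a minimal sketch (illustrative only, not TOUGH code) of the classic trade-off behind a sqrt(eps)-sized finite-difference increment, which is the role DFAC plays when the Jacobian is built numerically:

    ```fortran
    program fd_step_demo
      use iso_fortran_env, only: real64
      implicit none
      real(real64), parameter :: x = 1.0_real64
      real(real64) :: h, fd, err
      integer :: i

      ! Forward-difference error = truncation O(h) + roundoff O(eps/h);
      ! the total is minimized near h ~ sqrt(eps) ~ 1e-8 in double precision.
      do i = 2, 14, 2
        h = 10.0_real64**(-i)
        fd = (exp(x + h) - exp(x)) / h   ! numerical derivative of exp at x
        err = abs(fd - exp(x))           ! exact derivative is exp(x)
        print '(a,es9.2,a,es10.3)', ' h =', h, '   |error| =', err
      end do
    end program fd_step_demo
    ```

    The error bottoms out near h ≈ 10⁻⁸: smaller increments amplify roundoff, larger ones amplify truncation. Quadruple precision would shift that optimum, but it does not repair a Jacobian whose rows are scaled apart by extreme permeability contrasts.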

    For these reasons, quadruple precision does not fundamentally resolve the permeability-contrast problem in TOUGH. Even if TOUGH3 were compiled in quadruple precision, third-party linear solvers would remain the main bottleneck, and the TOUGH source code itself is not quad-clean, as REAL*8 is defined and assumed throughout.
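
    To make the REAL*8 point concrete, here is a minimal sketch, assuming a compiler that provides a 128-bit real kind (real128 in gfortran). Kind-promotion flags exist (e.g. gfortran's -freal-8-real-16), but explicit REAL*8 declarations signal that the code was written and tested for 64-bit reals, and any linked solver libraries would still expect them:

    ```fortran
    program kinds_demo
      use iso_fortran_env, only: real128
      implicit none
      real*8        :: a   ! explicit 64-bit storage, as declared throughout TOUGH
      real(real128) :: q   ! 128-bit (quad), where the compiler supports it

      ! An increment below half of double-precision epsilon is rounded away
      ! in REAL*8 arithmetic but easily representable in quad precision.
      a = 1.0d0 + epsilon(1.0d0)/4.0d0
      q = 1.0_real128 + real(epsilon(1.0d0), real128)/4.0_real128
      print *, a == 1.0d0         ! T: the increment vanished
      print *, q == 1.0_real128   ! F: quad precision retains it
    end program kinds_demo
    ```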
