I’m not sure whether my question is properly formulated, so I will try with the aid of an example.

I have read in the manual that CalculiX uses double precision. Apart from that, I understand that any arithmetic operation, factorization, etc. carries some associated round-off error.
I would like to get an idea of the minimum/maximum allowable value for a number involved in a CalculiX analysis, and of when we should consider that we have reached a limit value that could be significantly affected by round-off errors.
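For reference, the range limits of IEEE 754 double precision (the format the manual says CalculiX uses) can be inspected from any language; here is a minimal Python sketch, purely to illustrate the magnitudes involved (this is not CalculiX code):

```python
import sys

fi = sys.float_info  # IEEE 754 double-precision limits of this machine
print(fi.max)        # largest finite double, about 1.8e308
print(fi.min)        # smallest normal positive double, about 2.2e-308

# Past the maximum, arithmetic silently overflows to infinity:
print(1e308 * 10)    # inf
```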

As an example, I have defined a simple cube with one face fixed, and I’m increasing the pressure on the other side. I have found that the von Mises results increase proportionally up to the point where I apply a pressure of 1E39 Pa, at which I suddenly obtain a completely wrong result of 5.79E23 Pa.

The Norton creep law depends directly on the von Mises stress. If we expect a value of q around 100E6 Pa and n = 5, then q^5 = 1E40, which apparently is an operation outside the CalculiX “numerical” safety range.
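To illustrate the magnitude: q^n = 1e40 fits comfortably in double precision but is beyond the single-precision range. A quick Python check, using the standard `struct` module to emulate a 32-bit float (an emulation, not CalculiX internals):

```python
import struct

def to_float32(x):
    # Round-trip a Python double through IEEE 754 single precision;
    # struct.pack raises OverflowError above the float32 max (~3.4e38).
    return struct.unpack('<f', struct.pack('<f', x))[0]

q = 100e6          # 100 MPa expressed in Pa
n = 5
qn = q ** n        # 1e40 -- no problem in double precision (max ~1.8e308)
print(qn)

try:
    to_float32(qn) # 1e40 does not fit in single precision
except OverflowError as e:
    print("overflow in single precision:", e)
```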

I’m not sure whether this will help, but I write code that uses double precision, and I have found that the stated significant-digit limit of about 1e-15 does not hold up in reality. It is impossible to know in advance how much error will end up in the final results: it depends on the inputs, the number of calculations, and the type of calculations. So anyone’s code, including CCX, is going to have the same issues; it is just how computers work right now.

It’s not only round-off error, either; several kinds of error occur, fundamentally because computers cannot represent certain numbers exactly, and the effect grows depending on what the calculation is doing. In my code I get about 1e-12 as a practical limit rather than 1e-15. When I switch to quad precision (stated limit about 1e-33) I can compute down to about 1e-28; however, quad precision is very, very slow because current CPUs don’t support it in hardware.

What I gather from this is that even though a code is written in double precision, you should only assume it is accurate to single precision (stated limit about 1e-7). I’m not sure there is a way to know how much error a given code or calculation will have, but it should be good to at least single precision.
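The point about error growing with the number of operations can be seen with a toy summation in Python (plain stdlib, nothing to do with CCX): the same ten million additions, done naively and with compensated summation, end up with very different error levels.

```python
import math

# 0.1 has no exact binary representation, so every addition adds a tiny error.
n = 10_000_000
naive = 0.0
for _ in range(n):
    naive += 0.1

# Naive accumulation: the error has grown many orders beyond one ulp.
print(abs(naive - 1_000_000.0))

# math.fsum tracks the lost low-order bits (compensated summation).
print(abs(math.fsum(0.1 for _ in range(n)) - 1_000_000.0))
```

The compensated sum is not free either: it trades speed for accuracy, which mirrors the quad-precision trade-off described above.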

Some codes are now written in single precision to gain speed; however, this is a terrible idea, as it is going to be far less accurate.

1E39 is just over the maximum value for single precision (about 3.4e38). The output is written in single precision, so I guess it’s not possible to have an output that high.

Sorry about my ignorance of computer science, but do you mean we should avoid passing to the solver any value that would require operations with an expected result bigger than 3.4e38?
Including intermediate operations like the one described for the Norton law?
Does this extend to 3e-38, or is there a different lower limit?
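Regarding the lower end: single precision underflows silently rather than raising an error, and the smallest representable (subnormal) value is around 1.4e-45 rather than exactly mirroring the upper limit. A small Python sketch with the `struct` module (again an emulation, not CalculiX internals):

```python
import struct

def to_float32(x):
    # Round-trip a Python double through IEEE 754 single precision.
    return struct.unpack('<f', struct.pack('<f', x))[0]

print(to_float32(1e-38))  # still nonzero: subnormals reach down to ~1.4e-45
print(to_float32(1e-46))  # 0.0 -- quietly flushed to zero, no exception raised
```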

I should clarify, based on Victor’s reply: I was referring to the significant-digits limit; Victor is referring to something else. In any event, to me a number like 1e±38 is basically infinity or zero. You might want to look into why you have numbers that big or that small. Are they even real, or just numerical garbage? Most people don’t encounter numbers of those magnitudes in manufacturing.

The frd file for the simple cube now shows 3.49986E+32 instead of the previous INF (infinity, I guess). Of course, those are MPa now.

So I suppose we should be very careful when selecting the unit system for certain problems, creep problems for example, where q^n can reach huge values. Ideally the developers would implement a normalized q/q0 formulation.
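The effect of the unit system on intermediate magnitudes is easy to see: the same 100 MPa stress raised to n = 5 produces wildly different exponents depending on the units chosen (illustrative numbers only):

```python
n = 5
q_pa  = 100e6   # 100 MPa expressed in Pa
q_mpa = 100.0   # the same stress expressed in MPa

print(q_pa ** n)   # 1e40 -- already beyond the single-precision range (~3.4e38)
print(q_mpa ** n)  # 1e10 -- harmless in any precision
```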

It would be nice if ZFDong could see this post too and try his file with a different unit system.

If, as @vicmw said above, the issue is related to the printout, then why aren’t the unaffected numbers the same figures, just in Pa instead of MPa? The intermediate results seem to differ between the two analyses, not only the final printout.

Thanks for responding. Here are the two versions of the inp files.
The differences are the node coordinates (m to mm) and the material properties and *DSLOAD (Pa to MPa).

I have used a pressure just above the limit mentioned in Victor’s comment: -3.5E+38

Using ccx v2.19 with PaStiX (the PrePoMax compilation) I get different values, so I guess fixing the INF issue does not guarantee that the solution is right.

Ok, I see the problem: the other stress values are “zero”, even if their value is xxE+13, because they are compared against a very big value of E+32… which means the error propagates to the exponent too. The value 3.4998 is the correct one, as this problem is linear (the exponent depends on the pressure input). Quite interesting…
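This absorption effect is easy to reproduce: next to a value of order 1e32, anything of order 1e13 is far below one ulp and simply disappears (a Python illustration with stand-in numbers, not actual solver output):

```python
import sys

big   = 3.5e32
small = 1.0e13

# One ulp of `big` is roughly eps * big, i.e. about 7.8e16 here,
# so adding 1e13 cannot change `big` at all.
print(sys.float_info.epsilon * big)
print(big + small == big)   # True
```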

I see the same issue with displacements, so it has to happen at the solution stage or before… I guess at the solution stage, because otherwise one would expect problems when solving the linear equations.

Right. Even though the unit system is free to choose, many people prefer N/mm, lbf/inch, or kgf/cm units for common cases. Try to avoid large floating-point input values, since they may lead to numerical round-off while the solver runs, because the order of numerical operations differs between the advanced solvers (TAUCS, Pardiso, PaStiX).

Do you mean CalculiX is not double precision with Pardiso and PaStiX as solvers?
The differences between equivalent files with different unit systems detected by JuanP74 also arise with much smaller pressure values.

I mean: even though it uses double precision, round-off error may arise because it accumulates while the solver runs. Many developers point out that large floating-point input values are the initial source of the problem.
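The point that a different operation order alone changes the result, which is exactly what switching between TAUCS, Pardiso, and PaStiX does, can be shown with three numbers in Python:

```python
a = [1e17, 1.0, -1e17]   # one summation order
b = [1e17, -1e17, 1.0]   # the same numbers, another order

print(sum(a))  # 0.0 -- the 1.0 is absorbed by 1e17 before the cancellation
print(sum(b))  # 1.0 -- the cancellation happens first, so the 1.0 survives
```

Neither answer is a bug; both are correctly rounded at every step. With millions of operations inside a factorization, such order-dependent differences are unavoidable.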

I think the issue is that you should never use numbers bigger than 1/macheps (better one or two orders of magnitude lower). If your calculations involve differences of that magnitude, I guess the results will be wrong.

For double precision, macheps ≈ 1e-16. Although the system can represent much bigger numbers, every operation carries a relative error on the order of macheps.
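As a quick check of what macheps means in practice (Python stdlib, operating on the same IEEE 754 doubles CalculiX uses):

```python
import sys

eps = sys.float_info.epsilon     # about 2.22e-16 for doubles
print(eps)
print(1.0 + eps > 1.0)           # True: eps is the gap to the next float above 1.0
print(1.0 + eps / 2 == 1.0)      # True: half the gap rounds back down to 1.0
```

So a relative perturbation below macheps is invisible next to 1.0, and the same proportional blindness applies at every magnitude, which is why mixing values whose ratio exceeds 1/macheps loses the smaller one entirely.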