Performance problems due to high iteration counts

Hello everyone!

I’m working on an analysis of heat transfer by thermal radiation, and CalculiX has been a huge help with this. Currently I am trying to reduce the simulation runtime. My current solver is PaStiX, which seems to work great, although it is not using the GPU because the problem apparently isn’t big enough for that. However, the solver only takes up 25% of the total calculation time; the rest is taken up by CalculiX itself.

I currently suspect that the main cause of the high calculation times is the dynamic timestepping feature. It is enabled to make sure the simulation converges; I believe it was necessary for the program to run at all, as the radiation-based nature of the problem makes it highly non-linear. So far I have seen a ~10% improvement by decreasing the maximum increment size, but that seems to have diminishing returns: a maximum increment size that is too small increases the total number of increments needed, and thus the runtime.
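For context, the increment control I’m tweaking sits on the *HEAT TRANSFER data line of the step (the values below are placeholders, not my actual settings):

```
*STEP, INC=10000
*HEAT TRANSFER
** initial increment, step time, min increment, max increment
0.1, 3600., 1.e-5, 10.
*END STEP
```

Lowering that last number (the maximum increment) is what gave me the ~10% improvement mentioned above.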

I would appreciate any and all advice regarding this issue. Have you encountered methods of optimizing CalculiX’s iteration system?
I didn’t get a lot wiser from the documentation. I’m hesitant to change parts of the model because that would compromise its ability to reflect reality, but for now the number of nodes is already quite low.

Thanks in advance,

Koala Kerel

Can you try with Pardiso ?

How many nodes do you have ?

Is it pure heat transfer analysis ? Radiation to ambient or surface-to-surface ? Any other advanced features ?

I haven’t tried Pardiso yet, as I haven’t compiled that particular solver into CalculiX. I’ll put that on the to-do list. Do you think it would work better?

Currently my model has 3100 nodes.

It’s a pure heat transfer analysis so far, with surface-to-surface radiation. The model is a box with a simple heater object, a sensor and a product inside. The inside of the box should be treated as a vacuum. One advanced feature I’ve built into it is a PID controller that adjusts the temperature of the heater object based on the temperature of the sensor object, in order to guide the temperature along a designated path.
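To illustrate the control scheme: the logic is roughly a textbook PID loop like the sketch below (Python just for illustration; the gains and names here are made up, and the real version lives in my Fortran user subroutine):

```python
class PID:
    """Minimal PID controller: setpoint tracking for the heater."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint      # target sensor temperature
        self.integral = 0.0
        self.prev_error = None

    def update(self, measured, dt):
        """Return a heater output correction from the sensor reading."""
        error = self.setpoint - measured
        self.integral += error * dt
        if self.prev_error is None:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

The controller is evaluated once per increment, which is another reason the number of increments matters to me.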

Pardiso is definitely the most robust/reliable solver and usually also the fastest one (although sometimes PaStiX seems faster, e.g. with contact problems).

This isn’t a lot. Likely even too few nodes to benefit from parallelization.

Then maybe the view factor calculation is computationally expensive here.

So you are using a subroutine ?

Did you try SOLVER=ITERATIVE CHOLESKY?

In some edge cases during topology optimization I got far better convergence on a simple steady state heat transfer analysis.

Cheers,

Stoli

@Calc_em I’ll see if I can examine the time that goes into the view factors. And yes I am using a subroutine in order to have a control system in the model. I’ve built it into the dflux subroutine.

@stoli I have tried iterative Cholesky. So far PaStiX was about 15% faster than that one.

Something about that should be printed to the log, but you could also test it with the *VIEWFACTOR keyword that allows you to reuse the view factors from a previous run.

Perhaps that’s the bottleneck then. Did you try without it ?

For this problem size it makes sense that the linear solver takes a relatively small amount of the total runtime. This will probably not change a lot with a different solver. PaStiX is already a very good choice, but you will only benefit from the speedups (and reduced memory usage) for a larger problem size (very roughly starting with 1e5 degrees of freedom, it also depends on the problem type).

As already mentioned, the view factor calculation could be a bottleneck. Make sure to run CalculiX multi-threaded (if your hardware supports it), since the view factor calculation is well parallelised. Maybe more relevant: also check if any of the options in *VIEWFACTOR apply to your use case (READ or NOCHANGE).
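For example, something along these lines should work (syntax from memory, so double-check the *VIEWFACTOR section of the manual):

```
** first run: compute the view factors once and store them to file
*VIEWFACTOR, WRITE

** later runs with unchanged geometry: reuse the stored view factors
*VIEWFACTOR, READ
```

Since your geometry doesn’t change between increments, skipping the recalculation could save a significant share of the non-solver time.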