Optimizing Performance for Simulation with Large Input Data

Hello,

I am working with an input that has 16,955,568 nodes, 16,758,266 elements, and 266 steps. My machine has 40 cores, and I’ve set OMP_NUM_THREADS=38. However, the simulation is taking a very long time: it has been running for 4 hours and has only completed step 2. How can I solve this issue and improve the simulation’s performance and speed?

Thanks,
Bhavita

Decrease the size of the model for an initial run, and then refine the mesh only in the parts or zones that really need more elements. Have you removed small radii and minor features from your CAD model, and used symmetry and shell/beam elements where they could be used?


It’s a massive model with almost 17 million nodes and hundreds of steps. Better save restart data, because you don’t want to run this again after it fails at step 158 for some reason.

Are you sure the mesh has to be so large? Is it the result of a convergence study, experience with the same kind of models, or just a guess?

Yes, we are experimenting with a particular mesh size, which is why the model has so many nodes.

I’ve heard of poor performance with that many threads; reportedly the maximum speed is reached at around 8 threads. There may be a slightly higher peak somewhere between 8 and 38, but it’s not clear where.
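
If you want to see where the scaling flattens out on your own machine, a quick timing sweep over thread counts on a coarsened copy of the model will show it. Here is a minimal sketch in Python; the solver command and input file name are placeholders, so substitute whatever you actually run:

```python
# Rough timing sweep to find where adding threads stops helping.
# "./solver" and "job_coarse.inp" are placeholders -- use your real launch
# command and, ideally, a reduced model so each run finishes in minutes.
import os
import subprocess
import time

SOLVER_CMD = ["./solver", "job_coarse.inp"]  # placeholder command line

for threads in (4, 8, 16, 24, 32, 38):
    env = dict(os.environ, OMP_NUM_THREADS=str(threads))
    start = time.perf_counter()
    subprocess.run(SOLVER_CMD, env=env, check=True)
    elapsed = time.perf_counter() - start
    print(f"{threads:>2} threads: {elapsed:.1f} s")
```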

How many iterations did it do for each of those two steps? That can tell you if there are gains to be made from making the solver run faster (if 1 or 2 iterations) or making the model easier to converge (if it’s 10+ iterations).
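
If you don’t want to count iterations by hand, they can be tallied from the solver’s printed output. A minimal sketch, assuming a log file name and "step"/"iteration" line wording that are both hypothetical here, so adjust the patterns to whatever your solver actually prints:

```python
# Count equilibrium iterations per step from a solver log.
# The log file name and both patterns are hypothetical; adapt the regular
# expressions to the exact wording your solver writes.
import re
from collections import Counter

LOG_FILE = "solve.log"  # placeholder output/log file name

step_pattern = re.compile(r"\bstep\s+(\d+)", re.IGNORECASE)
iter_pattern = re.compile(r"\biteration\b", re.IGNORECASE)

iterations_per_step = Counter()
current_step = None

with open(LOG_FILE) as log:
    for line in log:
        step_match = step_pattern.search(line)
        if step_match:
            current_step = int(step_match.group(1))
        if current_step is not None and iter_pattern.search(line):
            iterations_per_step[current_step] += 1

for step, count in sorted(iterations_per_step.items()):
    print(f"step {step}: {count} iterations")
```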