CCX memory usage during "Determining matrix structure..."

I have a multi-body model made up of 6-node shell and 3-node beam elements. Two of the bodies have rigid boundary conditions fixing them to the ground; the third body is tied to the other two with equations, and there are also equations tying certain nodes between those two bodies.

During the “determining matrix structure…” step, memory usage is ~4 times higher than during the actual matrix solve with Spooles. I have renumbered all the nodes and elements, so there are no gaps in the numbering. It peaks at close to 26 GB, but only uses ~7 GB during the actual matrix solve.

Is there anything else I can do to reduce this peak memory usage?

Thank you,

-Dave

Spooles can be very slow in some cases (especially when there are many elements); I would switch to PaStiX or Pardiso.
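For reference, the solver is selected on the step card of the input deck, provided your CalculiX build was compiled with the corresponding library. A minimal sketch (the comment line stands in for your actual loads and output requests):

*STEP
*STATIC, SOLVER=PARDISO
** ... boundary conditions, loads, output requests ...
*END STEP

Use SOLVER=PASTIX for PaStiX.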

It’s not so much the speed as the memory usage, and the peak occurs during the “determining matrix structure” step, which runs before the Spooles solve. I presume this stage would be executed for Pardiso and PaStiX as well?

Hello,

In case you have some BCs applied to a surface, I suggest you change them and apply them directly to the nodes. I have noticed that when boundary conditions are imposed on a surface instead of on the set of nodes belonging to it, the matrix assembly sometimes struggles in CalculiX.

In particular, I have experienced it when doing nonlinear axisymmetric analyses: applying the BC directly to the nodes makes the initial assembly much faster than imposing a frictionless support on the surfaces. I’m using the CalculiX 2.18 Windows version (mainly PaStiX) with Mecway.
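As an illustration, here is a minimal sketch of the node-based form; FIXNODES is a hypothetical node set containing the nodes on the surface, and the 1, 3 pair fixes translational DOFs 1 through 3:

*NSET, NSET=FIXNODES
10, 11, 12, 13
*BOUNDARY
FIXNODES, 1, 3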

That may be related to the algorithm that extracts the nodes belonging to the surface, or perhaps to computing the normals; I’m not sure.

I agree; there is no reason anyone should be using Spooles apart from quick testing. It’s way too slow for most models.

To look at detailed memory allocation, try exporting this environment variable and re-running:

export CCX_LOG_ALLOC=1

I am applying pressure DLOADs to the shell elements, so perhaps that is what is causing the large spike in memory usage? I can try switching to nodal forces instead.
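For reference, a minimal sketch of both load types, with set names and magnitudes as placeholders (P is used here as the shell pressure label; check the *DLOAD documentation for the correct label for your element type):

*DLOAD
SHELLFACES, P, 0.1

*CLOAD
LOADNODES, 3, -25.0

Here SHELLFACES is a hypothetical element set, and the *CLOAD line applies a force of -25.0 in DOF 3 to every node in the hypothetical set LOADNODES.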

I’ve looked at the detailed memory usage via the export flag, and the peak is indeed during the matrix structure step. I can see exactly which routines are allocating that memory, though it’s unclear at this point what is driving it, as the code is less than obvious to follow. It’s in the insert.c code, where it looks to be resizing an array and blowing up.
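For what it’s worth, here is a purely illustrative C sketch (not CalculiX’s actual insert.c logic) of why array resizing can produce a transient peak: while realloc copies the data, the old and new buffers exist at the same time, so the high-water mark can sit well above the final footprint.

#include <stdio.h>
#include <stdlib.h>

int main(void) {
    size_t cap = 1024;  /* current capacity of the index array */
    size_t n = 0;       /* number of entries stored so far     */
    int *idx = malloc(cap * sizeof *idx);
    if (!idx)
        return 1;

    /* Record 50 million "nonzero positions", doubling the array as needed. */
    for (size_t i = 0; i < 50000000u; i++) {
        if (n == cap) {
            cap *= 2;                                    /* geometric growth */
            int *tmp = realloc(idx, cap * sizeof *idx);  /* old and new
                                                            buffers coexist
                                                            during the copy */
            if (!tmp) {
                free(idx);
                return 1;
            }
            idx = tmp;
        }
        idx[n++] = (int)i;
    }
    printf("final capacity: %zu entries\n", cap);
    free(idx);
    return 0;
}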

Oh, and while Spooles may be slower, at this point it’s not speed that’s the issue; it’s memory. I will happily compile against Pardiso if that takes the code down some other path that uses less memory, but that seems unlikely, since the issue pops up before the Spooles solve even starts. Thank you!

If you can share the model, we can try to debug it further.

Unfortunately I can’t share the model, but thank you for the offer!