Cannot provide enough memory for ccx

CCX runs fine with my input file until I switch to a finer mesh. The run then stops with an error like “*ERROR in arpackbu: info=-9…free(): corrupted unsorted chunks”. I suspect the program is running out of memory: when I try the job on feacluster’s website, it runs with 120 GB of memory and ends after 10 minutes (I believe that’s the time limit for a cluster with that much memory), but it fails after 5-6 minutes on clusters with less memory.

I have a Windows 10 machine with 64 GB of RAM. I use WSL2, so I run ccx_2.19 (with PARDISO) or ccx_2.18_MT under Ubuntu. My .wslconfig file contains memory=400GB and swap=250GB. When I check memory with free -mh in Ubuntu, I get 62Gi total memory and 250Gi total swap.
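For completeness, here is the relevant part of that file (the [wsl2] section header is required; WSL2 apparently will not assign the VM more memory than the host physically has, which would explain why free reports 62Gi rather than 400GB):

```ini
# %UserProfile%\.wslconfig on the Windows host
[wsl2]
memory=400GB   # requested limit; apparently capped at physical RAM (64 GB here)
swap=250GB     # size of the swap file available to the Linux VM
```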

The input file is too big to attach here (about 80 MB). If necessary, I can provide it on my website.

I would appreciate your advice.

@akhmeteli How many DOFs are we talking about?

I guess the following excerpt from the output answers your question. Looks like it is about 6.4 million DOF. It’s a 3D model of a pretty thin spherical sandwich shell.

```
The numbers below are estimated upper bounds

number of:

nodes: 2070761
elements: 1875000
one-dimensional elements: 0
two-dimensional elements: 0
integration points per element: 8
degrees of freedom per node: 3
layers per element: 1

distributed facial loads: 187500
distributed volumetric loads: 0
concentrated loads: 0
single point constraints: 16566
multiple point constraints: 1
terms in all multiple point constraints: 1
tie constraints: 0
dependent nodes tied by cyclic constraints: 0
dependent nodes in pre-tension constraints: 0

sets: 187504
terms in all sets: 4312509

materials: 187501
constants per material and temperature: 9
temperature points per material: 1
plastic data points per material: 0

orientations: 187500
amplitudes: 204066
data points in all amplitudes: 204066
print requests: 0
transformations: 0
property cards: 0
```

Sorry, 6.2 million DOF, not 6.4; I miscalculated. (2,070,761 nodes × 3 DOF per node is about 6.21 million.)

On my laptop, keeping the model in memory takes about 700,000 nodes per 16 GB of memory, so for 64 GB I would estimate about 2.8 million nodes, or about 8.4 million DOF. It depends on contact and other things, though; this estimate is for a simple model with no contact.

Oops, I should say that’s for the PARDISO solver.

Thank you for this information. That doesn’t remove my suspicion that there is not enough memory, though, since the figures are not radically different (2.1 versus 2.8 million nodes). What I am trying to understand is why the program cannot use the swap file.

Hi, I’ve seen people state before that they can’t get out-of-core to work. There is supposedly a patch floating around to do it. In my experience it has always seemed to start working; however, my SSD is so slow that I always kill the run. Instead of a 5-minute run it will keep going for hours, so I don’t know whether it would ever finish. I have seen both PaStiX and PARDISO start using the SSD once memory is full, though. Perhaps someone else knows more.


My experience is not quite the same, but I run a lot of problems with around 2,000,000 nodes on a 64 GB machine. I am using the Windows version from Mecway with CCX 2.17 or 2.18, with PARDISO or MKL; it also works with PaStiX if compiled with 8-byte integers (feacluster did that for me). To provide enough memory I just have a very large pagefile set up on a very fast SSD. I can run with committed memory up to about 166% of physical RAM before execution time gets unreasonable and borders on thrashing. I did run a 4,000,000-node linear problem directly in Windows with Mecway’s SPOOLES solver, after Victor told me about a cfg file addition.


There may be something similar for CCX, as Mecway just passes an .inp file to CCX after it creates the file.


Thank you. I might have resolved the memory problem by changing the PARDISO iparm parameters in CCX’s pardiso.c file to allow PARDISO out-of-core computation. As a result, the program has been running for 8 hours already. I am not sure it will finish in a reasonable time, but that is a problem for another day :-)
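In case it helps anyone else, here is a minimal sketch of the kind of change involved; the exact spot in pardiso.c differs between CCX versions, so treat it as illustrative rather than my exact patch. MKL PARDISO selects out-of-core mode through iparm[59] (0-based C indexing; Intel’s documentation calls it iparm(60) in Fortran-style numbering):

```c
/* Illustrative sketch: in CalculiX's pardiso.c the iparm array is
 * filled in before the pardiso() factorization calls. iparm[59]
 * switches MKL PARDISO between in-core and out-of-core mode:
 *   0 = in-core (default)
 *   1 = in-core if the matrix fits in memory, otherwise out-of-core
 *   2 = always out-of-core */
iparm[59] = 2;
```

The location and size of the scratch files can then be controlled with the MKL environment variables MKL_PARDISO_OOC_PATH and MKL_PARDISO_OOC_MAX_CORE_SIZE (in megabytes) before starting ccx.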
