When I was running CalculiX on a Linux system, my program was forced to end with "Killed" in the process output. Was this caused by CalculiX itself or by insufficient memory? (It just says "Killed" at the end, with no other error message.)
That is how it responds to a model that is too big. I don't think you can really run out of memory, though. It also crashes like that when using certain combinations of features.
When running the simulation, keep an eye on the amount of memory used. Most desktop environments and window managers can show that.
Also check the return value of ccx, e.g. with echo $?
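For example, something like this from a terminal (assuming the ccx binary is on your PATH and your job is called model.inp — substitute your own job name):
# run the job (job name without the .inp extension)
ccx model
# print the exit status of the last command;
# 137 means the process was killed by SIGKILL (128 + 9), which is what the kernel's OOM killer sends
echo $?
# re-run under GNU time to see the peak memory use ("Maximum resident set size")
/usr/bin/time -v ccx model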
What do you mean by a model that is too big?
The physical size of my model is 2 m x 3 m, and I only used NLGEOM and *RESTART, READ.
I'm applying loads in all three directions to each node; is that what's causing the memory to blow up?
Since I am using CalculiX to solve a fluid-structure coupling problem, both the fluid part and CalculiX show %MEM of 90% or more when I run my program. Does that indicate the crash is caused by excessive memory consumption? If so, is there any solution other than moving to a computer with more memory?
And how do I check the return value?
Sorry, by too big I mean too many DOFs and too much connectivity between them. If it's over a few hundred thousand nodes, it could be trouble.
But you can't really run out of memory, because the OS will use the disk and it will just go slowly. So it's probably more a limitation of the CCX build or solver option you're using. There's an ILP64 version somewhere that's good for big models. Also maybe PaStiX?
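If your executable was built with one of the alternative solvers, I believe you can request it on the step card; a rough sketch (SOLVER=PASTIX is only an assumption here and works only if your ccx was compiled with PaStiX):
*STEP, NLGEOM, INC=1000
*DYNAMIC, SOLVER=PASTIX
0.001, 0.01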
I always thought the cause was insufficient memory, but the job was also killed when I just ran the calculation on a supercomputer, so I still don't know what to do.
This is my input. First .inp file:
*INCLUDE, INPUT=Node.msh
*INCLUDE, INPUT=Element.nam
*INCLUDE, INPUT=Elset.nam
*INCLUDE, INPUT=fixed1.nam
*Material, Name=Material
*Elastic
46500000, 0.35
*Density
700
*Shell section, Elset=Elall, Material=Material, Offset=0
0.001
*Restart, Write
*Step, Nlgeom, Inc=1000
*DYNAMIC
0.001,0.01
*Boundary
fixed1,1,6,0
*Cload
Nall,1,0.0
Nall,2,0.0
Nall,3,0.0
*Node file
U
*End step
Second .inp file:
*Restart, Read
*Step, Nlgeom, Inc=1000
*DYNAMIC,DIRECT
0.08,4
*Boundary
1,1,6,0
3,1,6,0
6,1,6,0
7,1,6,0
9,1,6,0
12,1,6,0
*Cload, op=New
*Dload, op=New
*Cload
*INCLUDE, INPUT=force.txt
*Node file
U
*End step
There are about 2,800 nodes in total, and 5,000+ triangular elements.
The force.txt file contains the forces for each node in each direction.
Can you help me see if I have any obvious mistakes here? Or is there anything that could be improved?
I can’t see anything wrong with it. You can isolate the cause of the problem by removing things until it works. That’s my common troubleshooting technique for mysterious failures.
That doesn't seem like nearly enough nodes for memory to be an issue at all. No idea why it would be 90% used.
I get this output while running; do you know how to fix it?
You are using an executable made on Tue Jul 4 16:39:43 CST 2023
*INFO: restart file yezizz.rin
has been opened for reading.
it was created with CalculiX Version 2.20
The numbers below are estimated upper bounds
number of:
nodes: 365457
elements: 5528
one-dimensional elements: 0
two-dimensional elements: 5528
integration points per element: 9
degrees of freedom per node: 3
layers per element: 1
distributed facial loads: 0
distributed volumetric loads: 0
concentrated loads: 68760
single point constraints: 401952
multiple point constraints: 672202
terms in all multiple point constraints: 717490345
tie constraints: 0
dependent nodes tied by cyclic constraints: 0
dependent nodes in pre-tension constraints: 0
sets: 4
terms in all sets: 33452
materials: 1
constants per material and temperature: 2
temperature points per material: 1
plastic data points per material: 0
orientations: 5528
amplitudes: 2
data points in all amplitudes: 2
print requests: 0
transformations: 0
property cards: 0
*ERROR in u_calloc: error allocating memory
variable=nodempc, file=ccx_2.20.c, line=272, num=-2142496261, size=4
A lot of those messages can be hard to interpret. I'm not sure what "error allocating memory" really means, nor what the "upper bounds" of MPCs/nodes/etc. tell you. I tend to ignore the content of those messages and just delete things until I find out what's causing the problem.
I’d say the problem is related to this…check your MPCs
*ERROR in u_calloc: error allocating memory
variable=nodempc, file=ccx_2.20.c, line=272, num=-2142496261, size=4
This seems like a very high value:
717,490,345
I agree with you @JuanP74… It must be related to the expansion of the 2D elements and the associated MPCs.
Can you expand these elements to solids and apply proper boundary conditions?
This is the result of running *RESTART, READ many times, as shown in the two .inp files I posted earlier. How can I check the MPCs you mentioned? I don't know much about them.
I'm actually using a shell section in 3D, and the element type is S3. I can't switch to a solid section at the moment, so is there any other solution?
With the info available it's difficult to figure out. They're created automatically for CCX shells (search for "knots" in the manual), but considering the number of elements I can't imagine why there are so many in your model. You should check your model first with a different solution sequence (e.g. static) and see how it goes; a sketch of such a check step is below.
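A minimal sketch of a static check, reusing the includes, material, shell section and fixed1 set from your first file (the increment values and the small trial load on Nall are just placeholders, not recommendations):
*STEP, NLGEOM
*STATIC
0.1, 1.0
*BOUNDARY
fixed1,1,6,0
*CLOAD
** small trial load so there is something to solve for
Nall,3,-1.0
*NODE FILE
U
*END STEP
If that already fails with the same nodempc allocation error, the problem is in the model / shell expansion itself rather than in the dynamic restart chain.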
You can look at the two .inp files I posted earlier: the first one is run only once, and then the second one is run every time the force.txt file is updated. It took a few dozen runs of the second file before this error occurred. Is it because running *RESTART, READ so many times caused the number of elements and constraints to accumulate?
Check the output evolution from the first restart analysis to see if it makes sense.
Can you try expanding the mesh to solids yourself using gmsh or something similar? Just to avoid CCX's internal shell expansion.