Simulation crashed with big model

Hello everyone, I am working on a big model (2,000,000 nodes) and PARDISO crashed on my local machine. Is it a hardware problem, or a problem with the software itself?

Other CalculiX solvers quite often fail with large models, but PARDISO is usually the most reliable. Are you using ccx 2.21? How many CPUs? Did you try PaStiX? Can't the size of the model be reduced?


CalculiX Version 2.21, Copyright(C) 1998-2023 Guido Dhondt
CalculiX comes with ABSOLUTELY NO WARRANTY. This is free
software, and you are welcome to redistribute it under
certain conditions, see gpl.htm


You are using an executable made on Sat Jul 29 17:24:57     2023

  The numbers below are estimated upper bounds

  number of:

   nodes:      2486120
   elements:       560654

 number of equations
 number of nonzero lower triangular matrix elements

Maybe you don't have enough memory; the manual says 32 GB are required per million equations.
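That rule of thumb can be turned into a quick estimate for the model in the log above. As a sketch only: the 3-DOF-per-node assumption holds for pure solid (displacement) models, and the 32 GB-per-million-equations figure is the one quoted from the manual, not a measured value.

```python
# Rough in-core memory estimate for a sparse direct solve (PARDISO),
# using the rule of thumb quoted above: ~32 GB per million equations.
def estimated_ram_gb(n_nodes, dofs_per_node=3, gb_per_million_eqs=32):
    equations = n_nodes * dofs_per_node  # one equation per free DOF (assumed)
    return equations / 1e6 * gb_per_million_eqs

model_nodes = 2_486_120  # upper bound reported by ccx in the log above
print(f"~{estimated_ram_gb(model_nodes):.0f} GB needed in-core")  # prints "~239 GB needed in-core"
```

If anything like 239 GB really is needed, a crash on a typical 32-64 GB workstation is the expected outcome rather than a bug.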


That can be stretched by 30-60% with a large pagefile on a fast SSD.


Do you need a special configuration, or is it just managed by the OS?

When the model is too large to solve, the analysis can be started with a coarser mesh; then use a submodel for every location of interest.
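For reference, CalculiX supports this workflow directly via the `*SUBMODEL` keyword, which drives the cut boundary of a refined local model with displacements interpolated from the coarse global results. The sketch below is illustrative only; the file name `global.frd` and node set `Ncut` are hypothetical placeholders.

```
** Submodel deck (sketch, hypothetical names): the cut-boundary nodes
** in set Ncut are driven by displacements interpolated from the
** coarse global run whose results are stored in global.frd.
*SUBMODEL,TYPE=NODE,INPUT=global.frd
Ncut
*STEP
*STATIC
*BOUNDARY,SUBMODEL,STEP=1
Ncut,1,3
*END STEP
```

The global run only needs to be accurate away from the detail regions, so it can stay coarse enough to fit in memory.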

I just let the OS manage it. Not the same sort of problem you are doing, but I have gone up to 4,000,000 nodes. For PaStiX I had to get a version compiled with i8 (64-bit integers) for much beyond 500,000 nodes, and for lots of nodes, beyond 1,000,000 or so, it is slower; apparently array indexing overflows a 32-bit (i4) integer at about that point. I have also run one problem "out of core", which requires some setup. I routinely run 2,000,000 nodes, but I have 64 GB of RAM.

Thinking about what to do for my next PC: it needs more memory, but also more memory channels, as some of these big problems are just too slow and each channel can't effectively feed more than 3 cores. (I run an AMD 3700X with 64 GB.)

There was a thread about a similar issue, what it takes to do a big problem, from about half a year ago. You might ask the person who started that thread how it came out. I don't know what size computer he used, but he managed to run a problem of 7,800,000 nodes in about 2 hours and was going to get a machine with 1 TB of memory to see how that worked.