CalculiX with Intel's MPI cluster sparse solver

I think it looks correct. I get the same output files, with some differences. I think most are due to differences in the eigensolver and natural frequencies. See the discussion on that topic here:

What kind of Google Cloud machines are you going to run on? To see a speedup, you will need to use two dedicated machines with all the available CPUs on each.
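
For reference, here is a minimal sketch of what such a two-host launch could look like with Intel MPI. The executable name `ccx_2.18_mpi`, the hostfile contents, the job name, and the 8-core count are placeholders for illustration, not the exact commands from the install script:

```bash
# Load the Intel oneAPI environment (compiler runtime, MPI, MKL)
source /opt/intel/oneapi/setvars.sh

# One MPI rank per host; each rank uses all 8 of its host's cores via OpenMP
export OMP_NUM_THREADS=8

# 'hostfile' lists one machine per line, e.g.:
#   node1
#   node2
mpirun -n 2 -ppn 1 -f hostfile ./ccx_2.18_mpi jobname
```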

Thank you very much for the explanation.

As for the Google Cloud machines, I have yet to decide exactly what I need. So far I just wanted to try your install procedure, so I set up an instance group where the number of instances could scale from 1 to 10. The instances were E2 machines, and the boot disk was 50 GB (the oneAPI Base Toolkit requires at least 23 GB). At the moment execution time is not the priority, but I need a lot of memory. My desktop at home has only 64 GB of memory, and that is not enough for my problem.

Any plans to port it to v2.20?

Only works with v2.18. Will update it for newer versions if there is interest.

No plans yet unless someone has a compelling reason :slight_smile: Were you able to run this on a cluster and see any speedup?

So I have finally run a buckling job on a Google Cloud cluster: a single custom N2 instance with 8 vCPUs, 120 GB of RAM, and a 96 GB boot disk. At least 95 GB of RAM was used, and the job took about 100 minutes.

Thank you very much for the script and your help!

That is good. But if you are running on just one instance, there is probably no need to run the MPI version. The regular executable with the Pardiso solver should work just as well.
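
For comparison, a single-host run with the regular Pardiso build needs no mpirun at all; CalculiX picks up the thread count from `OMP_NUM_THREADS`. A minimal sketch, assuming the executable is named `ccx_2.18` and the instance has 8 vCPUs:

```bash
# Thread across all 8 vCPUs of the single instance via OpenMP
export OMP_NUM_THREADS=8

# Reads jobname.inp and writes results to jobname.frd, jobname.dat, etc.
./ccx_2.18 jobname
```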

Does it not matter that there are multiple processors?

Both the MPI and non-MPI versions can use all the processors on a host. The only difference is that the MPI version can also use multiple hosts. So if you install option 2 below, you are limited to the processors of a single host.

(1) Spooles ( Not recommended. 2-3X slower than Pardiso and cannot solve models with more than a million degrees of freedom )
(2) Pardiso ( Must have the Intel compiler. If not, it is available for free from Intel.com. Does not require administrative privileges )
(3) Pardiso MPI ( Same requirements as above, but also needs the HPC Toolkit. Only works with v2.18. Will update it for newer versions if there is interest. )

I might note that I have run problems requiring more memory than I have by increasing my page file size and putting the page file on an SSD. I have a very fast SSD, but watching the I/O bandwidth suggests that is probably not necessary: most FEM problems only access modest portions of memory at one time, or at nearly the same time. On my 64 GB machine, working virtual memory set sizes up to about 160% of physical memory (roughly 100 GB) have been usable. Beyond that, heavy paging slows things down a lot. Pastix does not use memory as efficiently as Pardiso, so problems do not behave as well once they grow larger than physical memory, and they reach that limit sooner. I have not been using v2.18 yet.
