Hello,
I would like to try running CalculiX in a distributed computing environment (i.e. a cluster). It seems there are some source files (v2.17) with definitions to include the MPI version of SPOOLES and to initialise the MPI protocol. These defines can be enabled during compilation by setting the flag -DCALCULIX_MPI in the makefile.
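For context, the MPI-related code in those files is guarded by this define. Below is a rough sketch of the kind of block the flag switches on (my own summary, not an exact copy of the CalculiX sources):

#ifdef CALCULIX_MPI
#include <mpi.h>
int myid = 0, nproc = 1;          /* MPI rank and number of processes */
#endif

int main(int argc, char *argv[])
{
#ifdef CALCULIX_MPI
    /* start the MPI runtime and find out which rank this process is */
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &nproc);
#endif

    /* ... normal CalculiX start-up, input reading and solution ... */

#ifdef CALCULIX_MPI
    MPI_Finalize();
#endif
    return 0;
}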
As I could not find any official instructions, I would like to know if someone knows the intention behind all this, and/or whether it is possible to run CalculiX with MPI.
Further, I managed to compile the code with this flag set (see attached makefile), but had trouble running the MPI version.
As a first test, I tried to run the MPI version of CalculiX on a single node by issuing the command:
$ mpirun -np 2 ccx_2.17_MT_MPI spirello_model
The output of this command is pasted below. It shows that the program did not run because it failed when reopening the *.dat file (presumably the second concurrent process tried to delete/reopen it). This *.dat file seems to always be generated, even when there is no *PRINT input card.
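If that is indeed the cause, I imagine some guard is needed so that only one rank touches the output files. A minimal sketch of what I have in mind follows (just my guess; the function and variable names are made up by me and are not how openfile.c is actually organised):

#include <stdio.h>
#include <stdlib.h>

#ifdef CALCULIX_MPI
int myid = 0;    /* MPI rank; in the real code this would be set via MPI_Comm_rank() */
#endif

/* hypothetical helper: delete and recreate the jobname.dat file */
void open_dat_file(const char *jobname)
{
    char name[300];
    snprintf(name, sizeof(name), "%s.dat", jobname);

#ifdef CALCULIX_MPI
    if (myid != 0) return;          /* non-root ranks leave the file alone */
#endif

    remove(name);                   /* remove a stale .dat file, if any */
    FILE *f = fopen(name, "w");     /* recreate it for this run */
    if (f == NULL) {
        printf("*ERROR in openfile: could not open file %s\n", name);
        exit(1);
    }
    fclose(f);                      /* in reality the handle would be kept for *PRINT output */
}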
Any feedback/help would be greatly appreciated!
Thank you in advance,
Regards,
Jorge
PS: I could not attach the Makefile. This is the first time I am using the forum, so I might be doing something wrong, but it seems really odd that the interface does not allow attaching text files… I have therefore included a JPEG snapshot of it.
============================================================
$ mpirun -np 2 ccx_2.17_MT_MPI spirello_model
*ERROR in openfile: could not delete file spirello_model.dat
CalculiX Version 2.17, Copyright(C) 1998-2020 Guido Dhondt
CalculiX comes with ABSOLUTELY NO WARRANTY. This is free
software, and you are welcome to redistribute it under
certain conditions, see gpl.htm
You are using an executable made on Mi 12. Mai 23:27:22 CEST 2021
Primary job terminated normally, but 1 process returned
a non-zero exit code. Per user-direction, the job has been aborted.
The numbers below are estimated upper bounds
number of:
nodes: 74813
elements: 13716
one-dimensional elements: 0
two-dimensional elements: 0
integration points per element: 8
degrees of freedom per node: 3
layers per element: 1
distributed facial loads: 0
distributed volumetric loads: 0
concentrated loads: 0
single point constraints: 5667
multiple point constraints: 2
terms in all multiple point constraints: 150
tie constraints: 17
dependent nodes tied by cyclic constraints: 0
dependent nodes in pre-tension constraints: 0
sets: 81
terms in all sets: 57557
materials: 4
constants per material and temperature: 9
temperature points per material: 1
plastic data points per material: 0
orientations: 9124
amplitudes: 5
data points in all amplitudes: 5
print requests: 0
transformations: 0
property cards: 0
*WARNING in usermpc: node 38426
is very close to the
rotation axis through the
center of gravity of
the nodal cloud in a
mean rotation MPC.
This node is not taken
into account in the MPC
STEP 1
Static analysis was selected
*INFO in gentiedmpc:
failed nodes (if any) are stored in file
WarnNodeMissTiedContact.nam
This file can be loaded into
an active cgx-session by typing
read WarnNodeMissTiedContact.nam inp
mpirun detected that one or more processes exited with non-zero status, thus causing
the job to be terminated. The first process to do so was:
Process name: [[52683,1],1]
Exit code: 201