CalculiX with Intel's MPI cluster sparse solver

I have completed the integration of Intel’s cluster sparse solver into CalculiX. To install it, run the install script from here and select option 3:

wget https://feacluster.com/install/install
perl install

The prerequisite is Intel’s oneAPI compiler with the Base and HPC toolkits. This is now free for everyone and does not require root privileges to install.

This graph shows the speed-up you can expect:

All verification models converged except one ( oneel201fi3.inp ). I believe the failure is due to round-off error, as the model seems to generate very small numbers during the factorization.

Example commands to run on two hosts, where each host has 15 cpus:

export OMP_NUM_THREADS=15
mpirun -np 2 -ppn 1 -hosts=host1,host2 ./ccx_2.18_MPI input_deck

There are some things to be aware of when using it. For best results, keep the number of MPI processes ( -np ) a power of 2, i.e. 2, 4, or 8. Using -np 3 can be slower than -np 2 or -np 4.

Also, you can experiment with running more than one MPI process per host and reducing OMP_NUM_THREADS per process accordingly.

For example, on a 40-CPU machine you can try:

export OMP_NUM_THREADS=10
mpirun -np 4 ./$executable input_deck
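As a rough sketch of the same idea on the two-host setup from the first example (host1, host2, and the 15-CPU count are just the placeholders used above), you could split each host into two MPI ranks with roughly half the threads each:

export OMP_NUM_THREADS=7
mpirun -np 4 -ppn 2 -hosts=host1,host2 ./ccx_2.18_MPI input_deck

Whether this beats one 15-thread rank per host depends on the model, so it is worth timing both configurations.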

Some models may crash due to a known bug with Intel’s cluster sparse solver. If that happens, try setting this environment variable:

export PARDISO_MPI_MATCHING=1
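If you are running across multiple hosts, a minimal sketch of combining this with the two-host launch from above (same placeholder hostnames; -genv is Intel MPI's option for setting an environment variable on all ranks) would be:

export PARDISO_MPI_MATCHING=1
mpirun -genv PARDISO_MPI_MATCHING 1 -np 2 -ppn 1 -hosts=host1,host2 ./ccx_2.18_MPI input_deck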
3 Likes

Thank you for sharing this!

Wondering if anyone has had a chance to test this out? Since I wrote it, only one person has asked about it. My guess is folks are not running CalculiX on a cluster; it seems the majority run it on Windows on local machines.

I haven’t tested it yet; I’m running on a local machine, not a cluster.

1 Like

Hey,

I am currently trying this out on a Debian 11 cloud server @ Hetzner.
I started with the ccx_2.20 currently provided from dhondt.de and noticed it only comes with SPOOLES, which cannot do multithreading…

From what I have tested so far, your executable is running smoothly, but I have to test more…

2 Likes

I tried to use your install script with Pardiso MPI (option 3 in your script) under Ubuntu (under WSL). I am trying this on a local machine so far, but I intend to use it on a Google Cloud cluster. The executable was not created, and I was not able to find the log.txt file. The end of the output is provided below. I installed Intel oneAPI Base and HPC.

I would very much appreciate your advice.

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from ccx_2.18.c(29):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(171): error: identifier “size_t” is undefined
size_t rows, size_t cols,
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from ccx_2.18.c(29):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(173): error: identifier “size_t” is undefined
float * AB, size_t lda, size_t ldb,
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from ccx_2.18.c(29):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(173): error: identifier “size_t” is undefined
float * AB, size_t lda, size_t ldb,
^

make: *** [Makefile:8: ccx_2.18.o] Error 4
make: *** Waiting for unfinished jobs…
cp: cannot stat ‘./CalculiX/ccx_2.18/src//ccx_2.18_MPI’: No such file or directory
chmod: cannot access ‘ccx_2.18_MPI’: No such file or directory

** Something went wrong. Executable not created. Email info with the contents of the log.txt file **

1 Like

Not sure offhand. Perhaps some compiler mismatch. I tested on 2021.4.0, and it seems you are using 2022.2.1.

Are you able to compile this simpler version of the cluster sparse solver program? (See the first two comments below for how to compile and run it.)

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include "mpi.h"
#include "mkl.h"
#include "mkl_cluster_sparse_solver.h"

// mpiicc -g -DMKL_ILP64 -L${MKLROOT}/lib/intel64 -lmkl_intel_ilp64 -lmkl_intel_thread -lmkl_core -lmkl_blacs_intelmpi_ilp64 -liomp5 -lpthread -lm -ldl test_cpardiso_2.c
// mpirun -check-mpi -np 2 ./a.out
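// Overview: this test initializes MPI, sets up an 8x8 real symmetric
// indefinite matrix (mtype = -2) in single precision (iparm[27] = 1), and
// creates two independent solver handles, pt and pt_2. Both handles run
// reordering + numerical factorization (phase 12); the first handle then
// solves (phase 33) and releases its memory (phase -1), after which the
// second handle solves the same system. The matrix, RHS and solution are
// stored on rank 0 only (iparm[39] = 0), and only rank 0 prints the solution.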

void  dummy_cluster_sparse_solver();

int main (void)
{

/* -------------------------------------------------------------------- */
/* .. Init MPI.                                                         */
/* -------------------------------------------------------------------- */

    /* Auxiliary variables. */
    int     mpi_stat = 0;
    int     argc = 0;
    int     comm, rank;
    char**  argv;

    mpi_stat = MPI_Init( &argc, &argv );
    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

//    if ( rank > 0 ) { dummy_cluster_sparse_solver();  }

    /* Matrix data. */

    MKL_INT n = 8;
    MKL_INT ia[9] = { 0, 4, 7, 9, 11, 14, 16, 17, 18 };
    MKL_INT ja[18] = { 0,   2,       5, 6,      /* index of non-zeros in 0 row*/
                         1, 2,    4,            /* index of non-zeros in 1 row*/
                            2,             7,   /* index of non-zeros in 2 row*/
                               3,       6,      /* index of non-zeros in 3 row*/
                                  4, 5, 6,      /* index of non-zeros in 4 row*/
                                     5,    7,   /* index of non-zeros in 5 row*/
                                        6,      /* index of non-zeros in 6 row*/
                                           7    /* index of non-zeros in 7 row*/
    };
   float a[18] = { 7.0, /*0*/ 1.0, /*0*/ /*0*/  2.0,  7.0, /*0*/
                         -4.0, 8.0, /*0*/ 2.0,  /*0*/ /*0*/ /*0*/
                               1.0, /*0*/ /*0*/ /*0*/ /*0*/ 5.0,
                                    7.0,  /*0*/ /*0*/ 9.0,  /*0*/
                                          5.0,  1.0,  5.0,  /*0*/
                                                -1.0, /*0*/ 5.0,
                                                      11.0, /*0*/
                                                            5.0
    };

    MKL_INT mtype = -2;  /* set matrix type to "real symmetric indefinite matrix" */
    MKL_INT nrhs  =  1;  /* number of right hand sides. */
    float b[8], x[8], bs[8], res, res0; /* RHS and solution vectors. */

    /* Internal solver memory pointer pt
     *       32-bit:      int pt[64] or void *pt[64];
     *       64-bit: long int pt[64] or void *pt[64]; */
    void *pt[64] = { 0 };
    void *pt_2[64] = { 0 };

    /* Cluster Sparse Solver control parameters. */
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct, mnum, phase, msglvl, error1, error2;

    /* Auxiliary variables. */
    float   ddum; /* float dummy   */
    MKL_INT idum; /* Integer dummy. */
    MKL_INT i, j;

/* -------------------------------------------------------------------- */
/* .. Setup Cluster Sparse Solver control parameters.                                 */
/* -------------------------------------------------------------------- */
    iparm[ 0] =  1; /* Solver default parameters overridden with those provided in iparm */
    iparm[ 1] =  2; /* Use METIS for fill-in reordering */
    iparm[ 5] =  0; /* Write solution into x */
    iparm[ 7] =  2; /* Max number of iterative refinement steps */
    iparm[ 9] = 13; /* Perturb the pivot elements with 1E-13 */
    iparm[10] =  0; /* Don't use nonsymmetric permutation and scaling MPS */
    iparm[12] =  1; /* Switch on Maximum Weighted Matching algorithm (default for non-symmetric) */
    iparm[17] = -1; /* Output: Number of nonzeros in the factor LU */
    iparm[18] = -1; /* Output: Mflops for LU factorization */
    iparm[27] =  1; /* Single precision mode of Cluster Sparse Solver */
    iparm[26] =  1; /* Check the sparse matrix representation for errors */
    iparm[34] =  1; /* Cluster Sparse Solver use C-style indexing for ia and ja arrays */
    iparm[39] =  0; /* Input: matrix/rhs/solution stored on master */
    maxfct = 1; /* Maximum number of numerical factorizations. */
    mnum   = 1; /* Which factorization to use. */
    msglvl = 1; /* Print statistical information in file */
    error1  = 0; /* Initialize error flag */
    error2  = 0; /* Initialize error flag */

/* -------------------------------------------------------------------- */
/* .. Reordering and Symbolic Factorization. This step also allocates   */
/* all memory that is necessary for the factorization.                  */
/* -------------------------------------------------------------------- */
    phase = 12;

//    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error1 );

//    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt_2, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error2 );

    printf ("\nReordering completed ... ");


    if ( rank == 0) {
    printf ( "pt1 rank 0\n" );
    for ( int i=0; i<64; i++ ) { printf ("%p\n" , pt[i] ); }
    }
    else {
    printf ( "pt1 rank 1\n" );
    for ( int i=0; i<64; i++ ) { printf ("%p\n" , pt[i] ); }
    }

/* -------------------------------------------------------------------- */
/* .. Back substitution and iterative refinement.                       */
/* -------------------------------------------------------------------- */

   /* Set right hand side to one. */
    for ( i = 0; i < n; i++ )
    {
        b[i] = 1.0;
        x[i] = 0.0;
    }
    printf ("\nSolving system...");

    phase = 33;
//    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);

    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, b, x, &comm, &error1 );

    printf ("\nThe solution of the system is: ");
        for ( j = 0; j < n ; j++ )
        {
            if ( rank == 0 ) { printf ( "\n x1[%lli] = % f", (long long int)j, x[j] ); }
        }
/* -------------------------------------------------------------------- */
/* .. Termination and release of memory. */
/* -------------------------------------------------------------------- */

    phase = -1; /* Release internal memory. */
//    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);

    cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase,
                &n, &ddum, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum, &ddum, &comm, &error1 );

/* -------------------------------------------------------------------- */
/* .. Repeat phase 33 for second matrix */
/* -------------------------------------------------------------------- */

    phase=33;
   /* Set right hand side to one. */
    for ( i = 0; i < n; i++ )
    {
        b[i] = 1.0;
        x[i] = 0.0;
    }

//    MPI_Send(&phase, 1, MPI_LONG_LONG, 1, 0 , MPI_COMM_WORLD);
    cluster_sparse_solver ( pt_2, &maxfct, &mnum, &mtype, &phase,
                &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, b, x, &comm, &error2 );

    printf ("\nThe solution of the system is: ");
        for ( j = 0; j < n ; j++ )
        {
            if ( rank == 0 ) { printf ( "\n x2[%lli] = % f", (long long int)j, x[j] ); }
        }

    phase = 1;

/* -------------------------------------------------------------------- */

final:
        if ( error1 != 0 || error2 != 0 )
        {
            printf("\n TEST FAILED\n");
        } else {
            printf("\n TEST PASSED\n");
        }
    mpi_stat = MPI_Finalize();
    return error1;
}

////////////////////////////////////

void dummy_cluster_sparse_solver() {

    int     mpi_stat = 0;
    int     argc = 0;
    int     comm, rank;
    char**  argv;

    mpi_stat = MPI_Comm_rank( MPI_COMM_WORLD, &rank );
    comm =  MPI_Comm_c2f( MPI_COMM_WORLD );

   /* Matrix data. */
    MKL_INT n;
    MKL_INT *ia;
    MKL_INT *ja;
    MKL_INT mtype;
    MKL_INT nrhs;

    double *a, *b, *x;
    void *pt[64] = { 0 };

    /* Cluster Sparse Solver control parameters. */
    MKL_INT iparm[64] = { 0 };
    MKL_INT maxfct, mnum, msglvl, error;
    double ddum; /* float dummy   */
    MKL_INT idum; /* Integer dummy. */
    MKL_INT phase;

MPI_Recv(&phase, 1, MPI_LONG_LONG, 0 , 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE );

while(( phase != 1 )){

printf ( "\nEntering phase %i while loop\n", phase );

cluster_sparse_solver ( pt, &maxfct, &mnum, &mtype, &phase, &n, a, ia, ja, &idum, &nrhs, iparm, &msglvl, &ddum,&ddum, &comm, &error );

MPI_Recv(&phase, 1, MPI_LONG_LONG, 0 , 0,  MPI_COMM_WORLD, MPI_STATUS_IGNORE );

} // end while

mpi_stat = MPI_Finalize();
exit(0);

} // end function

2 Likes

Thank you very much indeed!

I tried to do that, but compilation aborted. Below are the beginning and the end of the message.

icc: remark #10441: The Intel(R) C++ Compiler Classic (ICC) is deprecated and will be removed from product release in the second half of 2023. The Intel(R) oneAPI DPC++/C++ Compiler (ICX) is the recommended compiler moving forward. Please transition to use this compiler. Use ‘-diag-disable=10441’ to disable this message.
icc: warning #10315: specifying -lm before files may supersede the Intel(R) math library and affect performance
In file included from /usr/include/stdio.h(43),
from test_cpardiso_2.c(1):
/usr/include/x86_64-linux-gnu/bits/types/struct_FILE.h(95): error: identifier “size_t” is undefined
size_t __pad5;
^

In file included from /usr/include/stdio.h(43),
from test_cpardiso_2.c(1):
/usr/include/x86_64-linux-gnu/bits/types/struct_FILE.h(98): error: identifier “size_t” is undefined
char _unused2[15 * sizeof (int) - 4 * sizeof (void *) - sizeof (size_t)];
^

In file included from test_cpardiso_2.c(1):
/usr/include/stdio.h(52): error: identifier “__gnuc_va_list” is undefined
typedef __gnuc_va_list va_list;
^

In file included from test_cpardiso_2.c(1):
/usr/include/stdio.h(292): error: “size_t” is not a type name
extern FILE *fmemopen (void *__s, size_t __len, const char *__modes)
^

In file included from test_cpardiso_2.c(1):
/usr/include/stdio.h(298): error: “size_t” is not a type name
extern FILE *open_memstream (char **__bufloc, size_t *__sizeloc) __THROW __wur;
^

In file included from test_cpardiso_2.c(1):
/usr/include/stdio.h(309): error: “size_t” is not a type name
int __modes, size_t __n) __THROW;
^

In file included from test_cpardiso_2.c(1):
/usr/include/stdio.h(315): error: “size_t” is not a type name
size_t __size) __THROW;
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from test_cpardiso_2.c(5):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(54): error: “size_t” is not a type name
double * AB, size_t lda, size_t ldb);
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from test_cpardiso_2.c(5):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(58): error: “size_t” is not a type name
size_t rows, size_t cols,
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from test_cpardiso_2.c(5):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(58): error: “size_t” is not a type name
size_t rows, size_t cols,
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from test_cpardiso_2.c(5):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(60): error: “size_t” is not a type name
MKL_Complex8 * AB, size_t lda, size_t ldb);
^

In file included from /opt/intel/oneapi/mkl/2022.2.1/include/mkl.h(30),
from test_cpardiso_2.c(5):
/opt/intel/oneapi/mkl/2022.2.1/include/mkl_trans.h(60): error: “size_t” is not a type name
MKL_Complex8 * AB, size_t lda, size_t ldb);
^

compilation aborted for test_cpardiso_2.c (code 4)

Are you sure you are sourcing the Intel environment script, i.e. something like:

source /opt/intel/oneapi/setvars.sh

If so, what is the output of:

[feacluster@instance-3 ~]$ mpiifort --version
ifort (IFORT) 2021.4.0 20210910
Copyright (C) 1985-2021 Intel Corporation.  All rights reserved.
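As an extra sanity check after sourcing setvars.sh (assuming the standard oneAPI layout, where the script exports MKLROOT and I_MPI_ROOT), you can confirm the shell is really picking up the oneAPI toolchain:

echo $MKLROOT
echo $I_MPI_ROOT
which icc mpiicc

If any of these come back empty or point somewhere unexpected, the errors above are more likely an environment problem than a code problem.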
2 Likes

I just started testing this, and I got this message after installing it:

Can't open ccx_2.18step.c: No such file or directory at ./date.pl line 18.
icc: remark #10441: The Intel(R) C++ Compiler Classic (ICC) is deprecated and will be removed from product release in the second half of 2023. The Intel(R) oneAPI DPC++/C++ Compiler (ICX) is the recommended compiler moving forward. Please transition to use this compiler. Use '-diag-disable=10441' to disable this message.

So far I am running the test suite and will report shortly.
For reference, OS=Ubuntu 22.04 LTS (WSL2) and compiler = Intel’s oneAPI HPC Toolkit (version 2022.3.1).

Thank you!

I have the following version:

a@DESKTOP-7B3I8SK:/mnt/e/q9$ mpiifort --version
ifort (IFORT) 2021.7.1 20221019
Copyright (C) 1985-2022 Intel Corporation. All rights reserved.

And I use the following command:

. /opt/intel/oneapi/setvars.sh

Not sure. It seems like a compiler conflict, perhaps with your system compiler. Try compiling an even simpler hello-world kind of program.

/*The Parallel Hello World Program*/
#include <stdio.h>
#include <mpi.h>

int main(int argc, char **argv)
{
   int node;
   
   MPI_Init(&argc,&argv);
   MPI_Comm_rank(MPI_COMM_WORLD, &node);
     
   printf("Hello World from Node %d\n",node);
            
   MPI_Finalize();
}
[feacluster@instance-3 ~]$ mpiicc hello_world.c
[feacluster@instance-3 ~]$ ./a.out
Hello World from Node 0
1 Like

If it created the executable then you can ignore those warnings.

1 Like

Looks like it is working just fine! Thank you so much @feacluster !!!

Same problem. The beginning and end of the message are below. I will try this on a Google Cloud machine; maybe there will not be such a compiler conflict there.

Thank you very much indeed!

icc: remark #10441: The Intel(R) C++ Compiler Classic (ICC) is deprecated and will be removed from product release in the second half of 2023. The Intel(R) oneAPI DPC++/C++ Compiler (ICX) is the recommended compiler moving forward. Please transition to use this compiler. Use ‘-diag-disable=10441’ to disable this message.
In file included from /usr/include/stdio.h(43),
from hello.c(2):
/usr/include/x86_64-linux-gnu/bits/types/struct_FILE.h(95): error: identifier “size_t” is undefined
size_t __pad5;
^

In file included from /usr/include/stdio.h(43),
from hello.c(2):
/usr/include/x86_64-linux-gnu/bits/types/struct_FILE.h(98): error: identifier “size_t” is undefined
char _unused2[15 * sizeof (int) - 4 * sizeof (void *) - sizeof (size_t)];
^

In file included from hello.c(2):
/usr/include/stdio.h(52): error: identifier “__gnuc_va_list” is undefined
typedef __gnuc_va_list va_list;
^

In file included from hello.c(2):
/usr/include/stdio.h(292): error: “size_t” is not a type name
extern FILE *fmemopen (void *__s, size_t __len, const char *__modes)
^

In file included from hello.c(2):
/usr/include/stdio.h(298): error: “size_t” is not a type name
extern FILE *open_memstream (char **__bufloc, size_t *__sizeloc) __THROW __wur;
^

In file included from hello.c(2):
/usr/include/stdio.h(309): error: “size_t” is not a type name
int __modes, size_t __n) __THROW;

In file included from hello.c(2):
/usr/include/stdio.h(675): error: “size_t” is not a type name
extern size_t fwrite_unlocked (const void *__restrict __ptr, size_t __size,
^

In file included from hello.c(2):
/usr/include/stdio.h(675): error: “size_t” is not a type name
extern size_t fwrite_unlocked (const void *__restrict __ptr, size_t __size,
^

In file included from hello.c(2):
/usr/include/stdio.h(676): error: “size_t” is not a type name
size_t __n, FILE *__restrict __stream);
^

In file included from /usr/include/stdio.h(864),
from hello.c(2):
/usr/include/x86_64-linux-gnu/bits/stdio.h(39): error: identifier “__gnuc_va_list” is undefined
vprintf (const char *__restrict __fmt, __gnuc_va_list __arg)
^

compilation aborted for hello.c (code 2)

Which gcc version do you have? If old, maybe try upgrading it:

[feacluster@micro ~]$ gcc --version
gcc (GCC) 8.5.0 20210514 (Red Hat 8.5.0-4)

I am not sure if you are asking me or jbr, but just in case, the answer is below. It does not look like a very old version, but I’ll check.

a@DESKTOP-7B3I8SK:/mnt/e/q9$ gcc --version
gcc (Ubuntu 9.4.0-1ubuntu1~20.04.1) 9.4.0
Copyright (C) 2019 Free Software Foundation, Inc.
This is free software; see the source for copying conditions. There is NO
warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

How about something simpler like this:

[feacluster@instance-3 ~]$ cat hello_world2.c
#include <iostream>

int main()
{
    std::cout << "Hello, World!\n";
}
[feacluster@instance-3 ~]$ icpc hello_world2.c
1 Like

Try also:

sudo apt install build-essential
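Once that is installed, re-running the earlier hello-world compile is a quick way to check whether the missing size_t errors are gone (hello_world.c is the same file as in the example above):

mpiicc hello_world.c && ./a.out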

1 Like

I have managed to install the software on Google Cloud. I will try to install it on the local machine later, but even if I fail, it does not matter.

Thank you so much!

There were some deviations in the file error.14070:

deviation in file beamfsh1.dat
line: 103 reference value: -5.943977e-09 value: -1.334100e-09
absolute error: 4.609877e-09
largest value within same block: 1.276463e-07
relative error w.r.t. largest value within same block: 3.611446 %

beamhtfc2.dat and beamhtfc2.dat.ref do not have the same size !!!

deviation in file beamptied5.dat
line: 13 reference value: 2.004731e+06 value: 1.890823e+06
absolute error: 1.139080e+05
largest value within same block: 2.968844e+06
relative error w.r.t. largest value within same block: 3.836780 %

deviation in file beamptied6.dat
line: 14 reference value: 2.090934e+06 value: 2.005467e+06
absolute error: 8.546700e+04
largest value within same block: 3.071561e+06
relative error w.r.t. largest value within same block: 2.782527 %

beamread.dat and beamread.dat.ref do not have the same size !!!

beamread2.dat and beamread2.dat.ref do not have the same size !!!

beamread3.frd does not exist
beamread4.dat and beamread4.dat.ref do not have the same size !!!

deviation in file induction2.frd
line: 20203 reference value: 2.198320e+01 value: 1.998630e+01
absolute error: 1.996900e+00
largest value within same block: 2.257380e+01
relative error w.r.t. largest value within same block: 8.846096 %

deviation in file membrane2.frd
line: 135 reference value: 2.617620e-08 value: 2.024470e-09
absolute error: 2.415173e-08
largest value within same block: 2.833140e-07
relative error w.r.t. largest value within same block: 8.524722 %

deviation in file ringfcontact4.dat
line: 26 reference value: 2.033520e+07 value: 2.063512e+07
absolute error: 2.999200e+05
largest value within same block: 2.033520e+07
relative error w.r.t. largest value within same block: 1.474881 %

deviation in file segment.frd
line: 5467 reference value: 8.550770e+10 value: -8.550340e+10
absolute error: 1.710111e+11
largest value within same block: 8.551870e+10
relative error w.r.t. largest value within same block: 199.969246 %

deviation in file sens_freq_disp_cyc.frd
line: 3536 reference value: -3.302250e+03 value: -3.253000e+03
absolute error: 4.925000e+01
largest value within same block: 1.069350e+04
relative error w.r.t. largest value within same block: 0.460560 %

deviation in file sens_modalstress.frd
line: 3266 reference value: -1.000000e+00 value: -9.648930e-01
absolute error: 3.510700e-02
largest value within same block: 1.000000e+00
relative error w.r.t. largest value within same block: 3.510700 %

deviation in file simplebeampipe3.dat
line: 131 reference value: 8.716587e+04 value: -2.006259e+04
absolute error: 1.072285e+05
largest value within same block: 8.716587e+04
relative error w.r.t. largest value within same block: 123.016566 %