Problem with Large Model Solution

I used a different tool from PrePoMax (bConverged) and set SOLVER=PARDISO in the input file.
Here is the error message it gave:

*ERROR in linstatic: the PARDISO library is not linked

So I think PrePoMax sees that PARDISO is not linked and then changes the solver… Any indication of how to link this solver?
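For reference, the solver is selected on the step card of the CalculiX input deck. A minimal sketch of the relevant lines (standard CalculiX keywords; the load and boundary cards are only placeholders):

```
*STEP
*STATIC, SOLVER=PARDISO
** ...boundary conditions and loads for the step go here...
*END STEP
```

If the build is not linked against PARDISO, ccx prints the "not linked" error shown above instead of running the step.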


We have tested the input file on your cluster, and it works without problems, so I think it is a solver problem.
I suspect that PaStiX has a problem with the model size, while PARDISO is not linked correctly.

If anyone has an indication of how to solve this, I would really appreciate it.

All the best


I had a problem with PaStiX runs quitting with no messages, and had heard that PaStiX had an issue with using 32-bit integers for its array indices. I asked Rafal Brzegowy about it, as I was using his CCX_2.18_combined build, which had that "only < 500,000 nodes" issue with PaStiX, and he made a Windows beta compile of his CCX_2.18_static using the i8 compile directive, which now lets me run larger problems with PaStiX. For me it is only faster than MKL PARDISO up to about 1,000,000 nodes.

Note that PaStiX seems to have greater memory requirements than PARDISO and therefore exceeds physical memory sooner, slowing things down a lot. [CalculiX and PaStiX solver Windows version] has the discussion, starting around Sept 29. A 16 GB system doesn't look like it would benefit much from the larger integers, but as memory size increases, PaStiX should be able to handle larger and larger problems faster than MKL PARDISO. On my 64 GB system, PaStiX runs OK at 1,871,000 nodes, but is decidedly slower than PARDISO.

I saw your response on the other post. Your setup is very similar to mine, with 64 GB DDR4 and a PCIe 4.0 SSD. I've been trying to run nonlinear static analyses on a model consisting of 1M+ nodes using tet10 elements for the past 2 weeks. I have tried iterative Cholesky, iterative scaling, Spooles, and PARDISO using ccx 2.17 with the MKL libs. None of them worked. Now I'm trying to run it on CCX_2.18_static i8 with PaStiX, so fingers crossed we can get some results.

I have not been using tets, all hex. That puts the nodes in consistent locations for the problem I am working on, and allows easy, clean changes back and forth between hex 8 and hex 20, or refining the mesh by an easy factor of 2. My problem is a simple extrusion, which I manually 2D quad mesh at the lowest consistent resolution that works, then extrude, double and redouble as needed, and/or shift to hex 20. Mostly a learning exercise for me. This is the concrete section. The nodes are located such that I can add rebar using shared nodes, should I need to, without increasing the number of nodes at all. Also no contacts… generally a simple model. The biggest difficulty is lack of convergence when something is amiss, often a units issue or a slipped decimal point. Ultimately I may use the FEM model to verify a hand model (a spreadsheet, actually), as I have been charged with evaluating the impact load capacity of old bridge rails.

BTW, rumor is that if it does not work with Spooles or MKL PARDISO, it won't work with PaStiX, which tends to be a bit finicky. My problems have always been me, except for the i4/i8 thing. I suspect the static_i8 was beta, as Rafal is considering adding it to his _combined compile, also beta, I think. Track memory use and try using fewer CPUs; things truly got worse and worse with more than 6 in my case. Sometimes when I have a difficulty, I run with increasing load steps so I can see where things go south. This may require a lot of iterations, i.e., time. My problem has a potential "pop-through" instability starting at a deformation of about 3 inches at the top of the rail.

I see, yeah, my model is highly randomized as it's trying to simulate porous microstructures (sub-cubes) for research purposes. The PaStiX solver just finished, after 24 hrs. I ended up getting a "*Too many cutbacks" error, after many consecutive "no convergence" steps. I have the substep size set to the lowest, and I believe the solver automatically readjusts the step sizing, as smaller steps tend to help with convergence in difficult models.

Doubling the size of the pores resulted in a successful simulation in the past, which also had roughly half the number of nodes. Unfortunately, I have to make it work for this one, so I guess my next bet is to refine the mesh further and see if it converges (from 1M to about 1.8M nodes, mainly reducing the rate of change of element size and using a finer mesh near curved surfaces).

Thanks for sharing your findings on the large-model performance of PaStiX. I will try using PARDISO first and see how we go.

If money is no object (ha ha), doubling the memory size, or more, may help you reach solutions or errors sooner. At 1.87 million nodes on my 64 GB system, PARDISO is faster and PaStiX is getting unreasonably slow. If I had 128 GB, things would speed up a lot, but it would probably still be faster with PARDISO. If committed memory is less than about 133% of physical memory, PaStiX i8 is probably faster than PARDISO.

Michael L. McMullen PE



I have not found that the automatic convergence control is very good; it cuts back too far. If a load of some sort is involved, run your problem with half as much load, check which nodes were most nonlinear, and look at the stresses, temperatures, or whatever you are loading with. The way it tries to get every node within .000001 mm of convergence, it could be just one spot, element, or node, and it would not be converging due to a negative stiffness (buckling). I have heard the cure for this is to apply incrementally stepped deflections rather than loads, though deflections rather than loads can be a problem if it is the deflection of more than one node. If you can save a partial solution of a stepped problem and search it for large strains, that might show your problem area.

Right now I am tiptoeing around a lack of convergence in a problem with 1,871,515 nodes, and I want to get as close as practical, as the area that might not be converging might not be anywhere near the location I am interested in at the moment (tiptoeing by changing the material stress/strain curve). Note that materials with stress falling as strain increases can be problematic, so stress/strain curves often do not include the falling branch, but stay at a constant stress rather than falling, or increase stress with a very flat slope. That way you can see the large strains in the model around the problem area rather than having the solution fail due to instability or progressive failure.
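For running with incrementally stepped loading as described above, the increment sizes can be set on the data line under *STATIC. A sketch in standard CalculiX syntax (the numbers are only illustrative):

```
*STEP, INC=1000
*STATIC
0.05, 1.0, 1.e-5, 0.1
** initial increment, total step time, minimum increment, maximum increment
```

A small initial increment with a generous INC limit lets you see at what load fraction the iterations stop converging, at the cost of more solves.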

Michael L. McMullen PE



I have the same problem with PaStiX: it quits without a message.
The model that I could run has 1.4M nodes, but I don't think it is a memory problem, because I have 256 GB of RAM on my workstation.
Where can I find the 2.18 beta static version?

All the best

rafal.brzegowy posted a Dropbox link to his new i8 compile, ccx_2.18_static_i8.exe.

The PaStiX compile without i8 seems to fail on problems over 500,000 nodes even with enough memory, probably due to the roughly 4 GB limit on what the program's 4-byte integer array indexing can address. The PARDISO compiles I have used don't seem to have this problem. 256 GB should be enough for about 2,000,000 nodes in PaStiX if you have a page file to hold the currently unused parts of the program and other data. This is based on my experience with relatively simple static structural problems. If you read the papers on PaStiX, though, you will note that the amount of speedup varies with the program and the data.
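As a back-of-the-envelope check of that 4-byte limit: a signed 32-bit index tops out at 2^31 − 1 entries, and after fill-in the factored matrix can carry thousands of nonzeros per row. The sketch below is mine, and the nonzeros-per-row figure is an assumed ballpark, not a measured value:

```python
# Signed 32-bit indices can address at most 2**31 - 1 entries.
INT32_MAX = 2**31 - 1  # 2,147,483,647

def factor_index_count(nodes: int, dof_per_node: int = 3,
                       nnz_per_row: int = 2000) -> int:
    """Rough count of entries the solver must index in the factored
    matrix. nnz_per_row after fill-in is an assumed ballpark figure."""
    return nodes * dof_per_node * nnz_per_row

for n in (300_000, 500_000, 1_000_000):
    cnt = factor_index_count(n)
    verdict = "overflows int32" if cnt > INT32_MAX else "fits in int32"
    print(f"{n:>9,} nodes -> {cnt:>13,} entries ({verdict})")
```

With these assumed numbers, the entry count crosses the int32 limit somewhere between 300,000 and 500,000 nodes, which is at least consistent with the ~500,000-node failures reported above; an i8 build indexes with 64-bit integers and removes that ceiling.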

Michael L. McMullen PE


Dear Michael,

I tried this solver, but I still get the fatal error message.
In fact, the analysis terminates without any error indication.

I suppose there is a more sophisticated problem…

All the best

PaStiX is known for this; PARDISO, not so much. Also, this type of error is usually said to be a memory problem.

Task Manager can help tell you if the program simply can't get enough memory. Under "Committed" it shows the memory used / virtual memory available. This computer uses about 111 GB when running this problem and some other minor stuff, though there is only 63.9 GB of physical memory available. As the usage gets bigger, computation slows more and more, since your HD or SSD is taking the place of memory, and they are 10 to 100 times slower. Disk 2 is where my page file is located, and it is a relatively fast NVMe SSD.
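Combining this with the earlier 133% rule of thumb gives a quick check (the figures are the ones quoted above; the threshold is a rough forum rule of thumb, not a hard limit):

```python
def committed_ratio(committed_gb: float, physical_gb: float) -> float:
    """Committed memory as a fraction of physical memory."""
    return committed_gb / physical_gb

# Figures quoted above: ~111 GB committed vs 63.9 GB physical.
ratio = committed_ratio(111.0, 63.9)
print(f"committed/physical = {ratio:.2f}")
if ratio < 1.33:
    print("PaStiX i8 is probably still competitive with PARDISO")
else:
    print("heavy paging likely; PARDISO is probably the better bet")
```

At a ratio of about 1.74 the run is paging heavily, which matches the slowdown described above.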

Michael L. McMullen PE


The analysis failed after 30 seconds, and no resources are used (processor, RAM, etc.).
I tried using different solvers (exodus, static, dynamic, i8, etc.) and added the DLL files as reported here: Propeller Hub - Forum, but I always get the same results…

All the best

It might be a problem with the model itself, then, or with your computer hardware. Run a long memory check, and then look through your model for some spot that does not have sufficient stability restraint. It could be a small area, or even a loose element too small to see. Try some of the example problems and see if they get further.

Michael L. McMullen PE



hi stefano,

i think that’s an old mecway thread. the new info for ccx_dynamic.exe v2.18 is the following dll files:

intel mkl files used by ccx_dynamic.exe pardiso (v2.18):

libiomp5md.dll (doesn’t come with the mkl, have to install one of the compilers)
mkl_avx512.1.dll (provides a big speedup to version 2.18)

copy the above files to the same folder as ccx_dynamic.exe

depending on your cpu you may need one of the following two files, instead of mkl_avx512.1.dll:

if you have to use mkl_sequential.1.dll it will run slower. I’m seeing a speed up from this computer using mkl_avx512.1.dll. so it may run a little slower if you have to use mkl_avx2.1.dll. in any event, you can find out which dll file you need by moving them in and out of the folder with ccx_dynamic.exe.

pardiso will exit with no indication of what dll is missing. i’ve had that happen too.


Dear Antony,

Thanks for your indications. I've tried copying these files into the ccx_dynamic folder, but it gives the same results.
So I tried running a smaller model and the analysis works, but I saw that if I select the PARDISO solver in PrePoMax, PaStiX is still used.

So I tried with bConverged, adding the SOLVER=PARDISO option next to "*STATIC", but in this case too PaStiX was used.

Any suggestions regarding this?

All the best,

hi stefano,

Make sure you have the exe from here:

Not all of the exe files are the same. Some aren't linked to PARDISO; others don't have PaStiX. The file from the link above can run Spooles, PARDISO, and PaStiX.

It doesn't hurt to make sure you have the latest Intel oneAPI HPC and Base Toolkits installed.

I made a simple test case that uses the PARDISO solver; however, I can't find a way to upload it to this forum, so here is a link to the file: Dropbox - Test.inp
I'll keep it there for about a month but will then delete it to save space.

Other than that, I don't have any ideas.


I ran your model with bConverged and I get the following error:

*ERROR in nonlingeo: the PARDISO library is not linked.

Of course, if I remove the SOLVER=PARDISO command, the PaStiX solver is used and the analysis finishes correctly, but my problem with the PaStiX solver is with large models…

In the following picture, the files that you previously mentioned are listed and located in the ccx_dynamic folder…

I don't have any other ideas…


The "PARDISO not linked" error is what I'm talking about: your ccx_dynamic.exe file is not the right one. Some builds were made without being linked to PARDISO. Unless they changed the one on the CalculiX website recently, it used to work with Spooles, PARDISO, and PaStiX. There are some versions on the forum that are not linked to PARDISO.