Strange behavior of the Windows ccx 2.19 static build from dhondt.de

I have a dataset that uses a Mooney-Rivlin material and sliding contact. The model is a symmetric "layer cake" cut where the nodes are defined in cylindrical coordinates. I run with the PaStiX solver with NLGEOM, and now to the strange part.
From the command prompt I can, for example, run the untouched dataset three times in a row from start to finish without any error and with exactly the same iteration steps.
For the above example I type the following command (ccx_static.exe has been renamed to ccx2.19.exe):

ccx2.19 jobname
All 3 times I get exactly the same iteration steps and the solver ends with “Job finished”.

Now, without changing anything except putting -i in front of the jobname:

ccx2.19 -i jobname
the solver stops at approximately 85% with the output “Solution contains NaN!”.

I have 20 GB of installed memory, so it is not a lack-of-memory issue. One could of course say “then just leave out the -i option”, but it is random whether the run with or without the -i option is the one that fails.

I have also tested version 2.18 from dhondt.de with the same result.

Has anyone had the same experience, or any suggestions? It smells a little like an uninitialized variable or overwritten pointer memory, but it is hard to understand how a dummy argument could cause such an incident.

As long as the graphics output looks trustworthy, without distorted elements or stress jumps across the element boundaries, I can live with the result, since I am not looking for an exact solution but only an indication. Still, it is very annoying that an untouched dataset will not run consistently, either as a stable “Job finished” or as a stable “job failed”.

Any suggestions are welcome.
Thanks in advance

That’s indeed a very unusual bug. I wonder which model feature triggers it. Have you tried removing each feature (contact, hyperelasticity, cylindrical coordinates and so on) until the problem disappears?

Calc_em, you have a point. It makes sense to try to isolate the phenomenon, and I will give it a try.

I also tried moving the dataset to an old computer that I normally only use for streaming. The first two consecutive calculations with the dataset failed at about 90%. At first I had not set any environment variable, so the runs were done on a single core. Since it is a 4-core machine I then set the environment variable “OMP_NUM_THREADS=4” and, surprisingly, obtained two consecutive runs where the calculation ended with “Job finished” at 100%. Then I changed from 4 to 3 cores and the dataset failed at about 70%.
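
For completeness, this is roughly how the thread count is set in the Windows command shell before starting a run (just a sketch, assuming the renamed ccx2.19.exe from my first post and an input deck called jobname.inp):

set OMP_NUM_THREADS=4
ccx2.19 jobname

The variable only applies to the shell it is set in and to the programs started from it, so it has to be set again in every new command window.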

All calculations have been done from a Windows command shell without any change to the dataset at all.

In order to reduce the number of degrees of freedom I am using a 10-degree pie slice, so it does not make sense to remove the cylindrical coordinates. However, I can try to reverse the boundary conditions by fixing what is moving and moving what is fixed, try some different contact formulations, and, if I get into the mood, set up a whole new model meshed with brick elements only, which ought to be more stable for large-strain problems.

I will return if I manage to isolate the phenomenon.

Hi,

I have experienced similar errors, mostly with nonlinear analyses, and I can’t find a reason. Files that converged on one run did not converge on a different run.

I am a ccx v2.19 on Win10 user, but I have also seen it on different versions.

I have seen that ccx generates files with common names for the different models; they go to the same working directory and are probably shared by different runs. I am referring to the *.nam files, for example.

Is it possible that some of these common-name files could interfere between different files/runs?

I have seen issues in the past in other programs when trying to write to files that, for some reason, were not properly closed. Does ccx clean up and erase all these common files before executing a new run? Is it possible that one of them remains open and makes ccx fail? Sometimes it did not even start the iterations.
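
One way to rule this out could be to delete the old output files before every run, for example from the command shell (only a sketch; I am assuming the usual ccx output extensions here, and the list may not be complete for every model):

del jobname.frd jobname.dat jobname.sta jobname.cvg jobname.12d 2>nul
ccx2.19 jobname

If the random failures still appear when starting from a clean directory, then leftover files are probably not the cause.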

Regards


Disla, I personally believe the phenomenon is caused by an uninitialized variable in the Windows version of ccx 2.18 or 2.19, since it is a purely random phenomenon depending on the state of the memory into which the program is loaded.

My argument for this claim is that I opened two separate command shells side by side, both located in the same folder. Then I executed ccx on my dataset in one shell with a failed result. Afterwards I executed exactly the same ccx command and dataset in the other shell with a successful result. Going back to the first shell and repeating the former command there may then again give a successful result.
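
To put a number on how random it is, the same unchanged deck can also be run in a loop from the command shell with each run logged to its own file, for example (a rough sketch, assuming the renamed ccx2.19.exe, a deck called jobname.inp, and that the messages appear in the console output):

for /l %i in (1,1,10) do ccx2.19 jobname > run%i.log
findstr "finished NaN" run*.log

The findstr line then shows which of the ten runs ended with “Job finished” and which reported the NaN message.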

I found no difference in behavior whether I chose PaStiX or Spooles as the solver, but I continued with Spooles since I could then keep the dataset completely unchanged from one test to another.

I have also tested my dataset on Linux with the original ccx 2.19 from dhondt.de, and that seems to be stable, without random behavior. I admit that I could provoke a failed result by changing the number of cores for the calculation, but it was always e.g. cores=3 that failed, no matter how much I shifted between core counts.

With an 8-core Cygwin installation it was the 6- and 8-core options that failed; the other core counts always succeeded.

Finally, I also need to raise some critical points about my own dataset. I am using hyperelastic materials. I have a lot of large-sliding contact elements. I have large strains in combination with tetrahedral elements, and when I push the dataset to the edge of its limits it will nearly always be possible to get either failure or success depending on the number of cores, since that has an influence on how the solver deals with the stiffness matrix.

Personally, I often forget that nowadays, with modern, clever mesh-generation algorithms, tetrahedral elements are very competitive with brick elements. For large-strain problems, however, well-proportioned brick elements will normally always have an advantage over tetrahedral elements due to better angles between the element edges.
