*TOO MANY CUTBACKS in *STATIC analysis with NLGEOM = ON

Has anyone got much experience getting large non-linear models to converge? I keep running into the *TOO MANY CUTBACKS error, and if I refine the mesh around the problem node, the LARGEST RESIDUAL FORCE only increases, almost like a singularity.

Firstly, I’m using:
CCX 2.18_dynamic (PARDISO)
>2.4 million nodes of TET10 elements

And this is what I’m sending to CCX:

*MATERIAL,NAME=Ti
*ELASTIC,TYPE=ISOTROPIC
114000000000.0, 0.34
*SOLID SECTION,ELSET=entity1,MATERIAL=Ti
*MPC
PLANE,freeX
*MPC
PLANE,freeY
*STEP,NLGEOM=YES,INC=1000
*STATIC,SOLVER=PARDISO
.01,1.
*BOUNDARY
fixed_x_nodes,1,0
fixed_y_nodes,2,0
fixed_z_nodes,3,0
force_z_nodes,3,-0.5000
*NODE FILE,GLOBAL=YES
U,RF
*EL FILE
S
*END STEP

The node with the largest residual is marked on the image below.

I have tried the following, which all ended up with the same result:

  • Increased the mesh density significantly
  • Reduced the strain significantly (to 10% of current)
  • Added DIRECT after *STATIC
  • Tried a range of INC values (100 to 10000)
  • Tried 1.,1. / .1,1. / .01,1.
  • Tried every solver; most of them reach the same result, except SPOOLES (likely because the model is too large)

What are the results without NLGEOM?
How does the deformation look? How do the results look?

I can’t seem to get it to run without NLGEOM on; whenever I set NLGEOM = OFF, the solver turns it back on, likely because of the *MPC plane constraints?
The FRD file is incomplete (no 99999 at the end), and every node has 0 displacement in the Z direction except for the first plane, where the displacement is imposed.
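For what it’s worth, that terminator is easy to check from a script. A minimal sketch, assuming a complete file simply ends with the 9999 record mentioned above (the path is a placeholder):

```python
# Sketch: check whether a CalculiX .frd results file was written to completion.
# Assumption: a complete file ends with a "9999" terminator record.
def frd_complete(path):
    """Return True if the last non-blank line looks like the terminator."""
    with open(path) as f:
        lines = [ln.strip() for ln in f if ln.strip()]
    return bool(lines) and lines[-1].endswith("9999")
```

If this returns False, the solver probably died mid-write rather than finishing with a bad solution.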

Result from the .sta file: (screenshot attached)

Convergence result from the .cvg file:

First few lines: (screenshot attached)

Last few lines: (screenshot attached)

Hopefully that helps

You could try these:

*STEP, NLGEOM

*STATIC, DIRECT
1,1

Sometimes it helps to start with a simpler analysis and add more details step by step.
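If the single full increment fails, the optional third and fourth fields on the *STATIC data line (minimum and maximum time increment, following the initial increment and the step period) give the automatic scheme more room to cut back. A sketch only; the values below are guesses, not recommendations:

```
*STEP,NLGEOM=YES,INC=10000
*STATIC,SOLVER=PARDISO
1.e-3,1.,1.e-8,1.e-2
```

Whether this helps depends on why CCX is cutting back; it won’t fix a genuine singularity.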

Can you share the details of the setup of this simulation: what is the goal of the analysis, what does the whole model look like, how is it constrained and loaded, and so on?


I agree with dichtstoff that you should turn off NLGEOM and remove the *MPCs as well if they’re keeping it on. Seeing the linear static solution will eliminate or reveal a whole lot of potential problems.

Some possibilities:

  • Is there a layer of shell elements over the face? Perhaps accidentally left behind by some meshing operation.
  • Is there a zero-stiffness material? Search the whole .inp file(s) for all *MATERIAL definitions in case an unexpected other one is overriding the one you intend to use.
  • Or perhaps the fixed_x_nodes/etc. are actually the entire mesh?

You shouldn’t have to remove constraints from common edges but it also wouldn’t cause this problem, so that’s a separate issue.


I couldn’t find any zero-stiffness materials; I only defined and used one material. I also checked the node selection sets and they all came out correct. Using Mecway, I did a quick node selection by equation, selecting all nodes that meet the force_z_nodes criteria, and this node count matched the one I had in the set. I used Python to do this. I do have a .geo file of this model if that helps.
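That kind of cross-check can also be scripted directly against the .inp file. A rough sketch (the set name comes from the deck above; GENERATE-style *NSET cards are not handled):

```python
# Sketch: count the node IDs listed under a given *NSET card in a
# CalculiX .inp file, to cross-check against the mesher's selection count.
# Assumption: the set is written as an explicit comma-separated ID list.
def nset_count(inp_lines, set_name):
    ids, active = set(), False
    for raw in inp_lines:
        line = raw.strip()
        if line.startswith("*"):
            up = line.upper().replace(" ", "")
            # Only collect data lines that follow the matching *NSET header
            active = up.startswith("*NSET,NSET=" + set_name.upper())
            continue
        if active:
            ids.update(int(tok) for tok in line.split(",") if tok.strip())
    return len(ids)
```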

*MATERIAL,NAME=Ti
*ELASTIC,TYPE=ISOTROPIC
114000000000.0, 0.34
*SOLID SECTION,ELSET=entity1,MATERIAL=Ti
*STEP,NLGEOM=NO
*STATIC,SOLVER=PARDISO
*BOUNDARY
fixed_x_nodes,1,0
fixed_y_nodes,2,0
fixed_z_nodes,3,0
force_z_nodes,3,-0.5000
*NODE FILE,GLOBAL=YES
U,RF
*EL FILE
S
*END STEP

I think there are some commas missing from *BOUNDARY

force_z_nodes,3,-0.5000

should be

force_z_nodes,3,,-0.5000

shouldn’t it?

I can’t get it to solve at all with only the single comma. Are you sure the file you posted is the same one seen by the solver?

EDIT: Oh, looks like Discourse is corrupting the text so your file was probably fine with 2 commas. Better use Preformatted text for CCX cards to prevent them being corrupted.

force_z_nodes,3,,-0.5000

force_z_nodes,3,-0.5000

You said linear had the same fault. Was that *TOO MANY CUTBACKS? I didn’t think linear would ever produce that error. Are you sure NLGEOM isn’t being quietly turned on by something else? Does the output say anything about nonlinear behaviour, such as “Iteration 1” or “Newton-Raphson iterative procedure”?
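A crude way to check that from a script is to scan whatever CCX printed for the phrases above. A sketch; the marker strings are assumptions about the log wording, not a spec:

```python
# Sketch: look for signs of a Newton-Raphson (nonlinear) solution in a
# CCX log. The marker phrases are assumed, not guaranteed by CCX.
def looks_nonlinear(log_text):
    markers = ("newton-raphson", "iteration 1")
    text = log_text.lower()
    return any(m in text for m in markers)
```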


Hi Vic,

My apologies, what I meant is that the result came out the same; this obviously did not hit the CUTBACK issue. I guess since it’s not coming out correctly even in the linear model, I should start by making that work first. Any idea why it came out the way it did?

OK. Yeah, certainly get linear working first. I can’t think of any other reason displacement would be zero everywhere else, though. Could you show a zoomed-in deformed view of the displaced face and its transition to the undeformed area? I don’t know what to expect, but there may be some clues in there.

It kind of looks like it hasn’t solved at all and the output is just the initial state. I wonder if PARDISO isn’t working or installed? Maybe don’t specify a solver, or choose SPOOLES, which should always be available as a fallback, I think.

You can also keep isolating the problem by removing things until it becomes correct. That includes deleting half the mesh repeatedly. That’s what I do for totally mysterious problems when I don’t have a clue.

What about a mesh or element issue?
Are the mesh and elements fine?
Sometimes the elements themselves can be the problem.

Seems like a memory shortage issue. Maybe the matrix solver somehow half-failed and left that wrong solution without completely aborting. Usually it just crashes, though. A million nodes is definitely too much for SPOOLES, and a lot for the stock CCX PARDISO, which would likely need a lot of disk space and/or RAM if it can handle it at all.

You can compile CCX to use PARDISO’s out-of-core (OOC) mode, which should enable a model of that size with 16 GB RAM and 100 GB of free disk space. See option 4 here: Improving performance of CCX solver - Forum
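For reference, MKL’s out-of-core PARDISO takes its limits from a pardiso_ooc.cfg file (or environment variables of the same names). A sketch with placeholder values only: MAX_CORE_SIZE is in megabytes, and the scratch path is an assumption for illustration:

```
MKL_PARDISO_OOC_MAX_CORE_SIZE = 16000
MKL_PARDISO_OOC_PATH = /scratch/ccx_ooc
```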


I already use option 4 (OOC), using MKL PARDISO and the steps and files provided inside the Mecway directory. I also have a VM with about 64 GB of RAM and 350 GB of free SSD storage.

The original model I have been trying to run resulted in an FRD file of 1 GB (a touch over 3 million nodes), which might be related to the issue. When I reduced the problem to 1.4 million nodes, it produced a linear result using PARDISO with OOC MKL.

This makes me wonder if there is a node limit when running CCX with direct solvers, even with OOC capabilities? If so, it would be great to have an error or warning for it.

The next step is to run the same problem with NLGEOM = YES at 1,1 (full step) and see if this produces a readable result.
Fingers crossed!

There could be a limit around there because of 32-bit integers (LP64). I think you can compile it with full 64-bit everything (ILP64), but I’ve never tried.

There was certainly a problem with PaStiX due to 32-bit integers that was cured with a version compiled with i8, i.e. 8-byte integers. This kind of problem tends to access memory outside the intended locations when an address or array index exceeds the integer format used.
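Back-of-the-envelope arithmetic for why a model of this size can overflow LP64 indices: it is not the node count itself but the nonzero count of the factorized matrix. The per-row fill and fill-in growth below are assumed ballpark figures, not measured values:

```python
# Sketch: estimate when direct-solver index counts pass the 32-bit limit.
INT32_MAX = 2**31 - 1
nodes = 3_000_000                # roughly the original model size
dofs = 3 * nodes                 # 3 displacement DOFs per node
nnz_per_row = 80                 # assumed average fill of the assembled matrix
fill_in_growth = 30              # assumed growth during factorization
nnz_assembled = dofs * nnz_per_row
nnz_factor = nnz_assembled * fill_in_growth
print(nnz_assembled > INT32_MAX)   # the assembled matrix can still be indexed
print(nnz_factor > INT32_MAX)      # but the factor's indices overflow 32 bits
```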

You may be able to get 64-bit integers with MKL PARDISO by setting the following environment variable:

MKL_INTERFACE_LAYER=ILP64

I don’t know if CCX has to be compiled to take advantage of it or not. Something to try though.

Update: I tested this with the v2.18 exe and it breaks it. So I guess you would have to compile it yourself and probably use the environment variable too.

No idea about Windows, but on Linux I have solved models with ~4 million equations (~2 million nodes). Total run time for a linear static run was ~2 minutes.
