I’m developing code in Golang to:
- Take in an STL file.
- Use the signed distance field (SDF) concept along with a modified version of the marching cubes algorithm to generate 10-node tetrahedral elements. Other element types are also possible.
- Write out CalculiX inp files containing nodes and elements.
The code is under active development and debugging, and there is one more problem I need to address.
Detecting almost-flat elements
I have a function that detects almost-flat elements. Its mathematical basis lets it reliably detect an element with an almost-flat shape. Flat elements are detected and thrown away; they aren’t included in the output elements.
Detecting zero-volume elements
There are cases where the element shape is fine, i.e. it isn’t flat, yet its volume is extremely small. For example, my tests indicate that an element can have an extremely small volume of 0.0006922827322921379 while still having a fine shape.
My tests indicate that an element with an extremely small volume would be rejected by CCX with an error:
*ERROR in e_c3d: nonpositive jacobian
determinant in element 5781
Detecting bad elements: consistent with CCX
I have come to this conclusion: I need to implement a new function for detecting bad elements. It should exactly match the CCX logic for detecting a nonpositive Jacobian determinant in elements.
There are some explanations in the CalculiX solver documentation, but I cannot figure out how CCX performs the nonpositive Jacobian determinant detection. I don’t know its mathematical basis, nor where to look in the FORTRAN source code.
I would appreciate any hint about re-implementing the exact same nonpositive Jacobian determinant logic of CCX in Golang.
The Jacobian determinant is calculated in files like shape10tet.f:
! computation of the jacobian determinant
Then the check is performed in files like e_c3d_rhs_th.f:
! check the jacobian determinant
write(*,*) '*ERROR in e_c3d_rhs_th: nonpositive jacobian'
write(*,*) ' determinant in element',nelem
Why do you not use a standard volume mesher like Netgen or Gmsh?
The objective is to convert STL models to FE models with higher quality than any other available tool, with an open-source code base of course.
Part of the problem is that STL files are only an approximation of the underlying geometry. The quality of that approximation will vary depending on the STL export settings of whatever program created the file.
Second, STL (and other CAD) models often contain lots of small details that are irrelevant for the analysis, but force the use of lots of small elements.
Third, automatic meshers tend to generate tetrahedral elements. If you run analyses on the same shape with tet and hex elements, you can see more artefacts from those tet elements. That’s why the CalculiX manual advises to use second order hex elements.
And while, for example, gmsh can recombine tet meshes into hex meshes, that will often result in some elements with a nonpositive Jacobian determinant.
Using cgx to generate the geometry and mesh (omitting small and inconsequential details) in my experience tends to produce a higher quality mesh, shorter calculation times and better results.
I don’t have much experience with cgx. I’m curious to know if this concept sounds feasible: calling cgx programmatically to convert STL triangles to finite elements.
Maybe an algorithm can be developed in a programming language. This algorithm may:
- Take an input STL file.
- Process the STL triangles, somehow.
- Call cgx to convert triangles to finite elements.
- The call may be done on a triangle-by-triangle basis or any other way that makes sense.
This is just an ideation. Does it sound feasible?
I have some things to say about our current algorithm under development.
There are some major concerns inferred from your post:
1. Reducing details
… STL (and other CAD) models often contain lots of small details that are irrelevant for the analysis, but force the use of lots of small elements.
… to generate the geometry and mesh (omitting small and inconsequential details) in my experience tends to produce a higher quality mesh …
Our algorithm is voxel-based and employs a signed distance field (SDF). It can ignore details by reducing the voxel resolution, and the user can adjust the resolution to achieve the desired detail level.
2. Hexahedron over tetrahedron
… If you run analyses on the same shape with tet and hex elements, you can see more artefacts from those tet elements. That’s why the CalculiX manual advises to use second order hex elements.
Our algorithm fills the internal voxels of the 3D model with hexahedral elements. It only uses tetrahedral elements for the surface voxels intersecting the surface mesh.
Our added value
It looks like our algorithm directly addresses these two major concerns.