*ERROR in u_calloc: error allocating memory variable=sideload

Hello,

I am running into a memory allocation issue for the sideload array in ccx_2.1x.c. I have a thermal model whose mesh contains about 160k elements, with many surfaces defined for radiation and convection. Because of the high number of surfaces and the resulting *FILM and *RADIATE definitions, the nload variable grows so much that the NNEW allocation macro (line 290 in ccx_2.1x.c) passes a number (variable num) to u_calloc that can no longer be represented with the default size_t data declaration: num comes in as a negative integer and raises the memory allocation error.
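
To illustrate what I mean, here is a minimal sketch of the pattern as I understand it (my own simplification with a made-up load count, not the actual CalculiX source):

#include <stdio.h>
#include <stdlib.h>

/* Illustrative only -- not CalculiX code. The element count is computed in
   32-bit signed arithmetic and then handed to a calloc-style allocator.   */
int main(void)
{
    int nload = 170000000;     /* hypothetical number of facial load definitions   */
    int num   = 20 * nload;    /* exceeds INT_MAX (2,147,483,647); in practice the
                                  product wraps around to a negative value          */

    /* Converting the negative int to size_t yields a huge unsigned value,
       so calloc() cannot satisfy the request and returns NULL.            */
    void *sideload = calloc((size_t)num, 1);
    if (sideload == NULL)
        printf("*ERROR: could not allocate %zu bytes\n", (size_t)num);
    free(sideload);
    return 0;
}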

I was wondering if anyone has an idea how to fix this. Is there a way to define elemental surfaces more efficiently so that nload stays low, or to rewrite some code in the memory allocation functions so that larger integer numbers can be passed?

I am using:

  • ccx_2.16
  • compiled via msys64 (Windows64)

Any help is appreciated!

  • Christian

If you want to send me your deck, I can try running it through the GNU debugger…

Hi there,

Yes please, your help would be appreciated. I have attached the example model; even though it uses the dflux subroutine, you should be able to reproduce the error anyway, since it occurs during allocation.

Up until now I thought the OP=NEW parameter deletes pre-existing definitions for *DFLUX, *FILM etc., which also seemed to be reflected in the source code. However, looking at allocation.f I can see that CalculiX keeps allocating more memory regardless of the OP parameter. That of course makes my example not the most efficient one.

Here is the link to the model:
https://drive.google.com/drive/folders/1wN0ltO02v0EtYpojYGo49LNkaA9bvt5U?usp=sharing

Please let me know your thoughts.

  • Christian

I think the memory allocation is hitting a 32-bit integer size limit. The largest size sideload can be is 2,147,483,647 bytes, but the value your model wants to allocate for sideload is 20 * 167444624, or 3,348,892,480 bytes.

In the debugger output below it prints a ridiculous number for “num”, I think because that limit has been exceeded…

We’ll need to research more to see if this is indeed the issue or something else…

Breakpoint 2, u_calloc (num=167444624, size=size@entry=4, file=file@entry=0x93a425 "ccx_2.18.c", line=line@entry=294,
    ptr_name=ptr_name@entry=0x93a54c "idefload") at u_calloc.c:36
36        if(num==0){
(gdb) cont
Continuing.
ALLOCATION of variable idefload, file ccx_2.18.c, line=294, num=167444624, size=4, address= 23452792877072

Breakpoint 2, u_calloc (num=18446744072763476800, size=size@entry=1, file=file@entry=0x93a425 "ccx_2.18.c", line=line@entry=295,
    ptr_name=ptr_name@entry=0x93a555 "sideload") at u_calloc.c:36
36        if(num==0){
(gdb) n
41        a=calloc(num,size);
(gdb) p num
$3 = 18446744072763476800
(gdb) p size
$4 = 1
(gdb) n
42        if(a==NULL){
(gdb) p a
$5 = (void *) 0x0
(gdb) n
43          printf("*ERROR in u_calloc: error allocating memory\n");
(gdb) n
*ERROR in u_calloc: error allocating memory
44          printf("variable=%s, file=%s, line=%d, num=%ld, size=%ld\n",ptr_name,file,line,num,size);
(gdb) p num
$6 = 18446744072763476800
(gdb) n
variable=sideload, file=ccx_2.18.c, line=295, num=-946074816, size=1
48          exit(16);
(gdb)
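
For what it's worth, the two numbers printed above fit a plain 32-bit wrap (my arithmetic):

  20 * 167,444,624               =  3,348,892,480               (bytes requested for sideload)
  3,348,892,480 - 4,294,967,296  = -946,074,816                 (the same value wrapped into a 32-bit signed int)
  2^64 - 946,074,816             =  18,446,744,072,763,476,800  (that bit pattern read back as an unsigned 64-bit size_t)

So gdb shows the huge unsigned interpretation, while the %ld in the error message prints the same bits as a signed value, hence num=-946074816.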

I can add that with the i8 2.16 (PARDISO) version, the above-mentioned error does not occur.

Indeed, I get exactly the same numbers, and I can confirm that num ends up outside what the default size_t data type can represent, which causes the error. I am just not sure how to get around this. It seems Rafal could confirm that this problem does not occur with the i8 version of CalculiX. But I am wondering whether there is a way to define facial loads more efficiently, since the OP=NEW operator keeps increasing nload, or whether changes could be applied to allocation.f so that nload does not grow this much.

Again, any ideas are welcome.

Thanks for your help!

I can also confirm that the model runs fine with PARDISO on Linux. I think the simplest solution is to upgrade your installation to 2.18.

My understanding is that the software reads the entire input deck once and stores all facial loads in that sideload variable. Even though you use OP=NEW, the software counts each step's loads separately, so I'm not sure there is any workaround… Anyway, the model only used a few GB of memory, so it's not something I would worry about 🙂
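
As a rough sketch of that behavior (my own simplification, not the logic from allocation.f; the step and load counts are made up), the counting pass effectively does something like:

#include <stdio.h>

/* Illustrative only -- not CalculiX source. The counting pass sizes the load
   arrays for the sum of all facial load definitions in the deck, regardless
   of OP=NEW, so nload keeps growing with every step.                        */
int main(void)
{
    int nsteps          = 4;         /* hypothetical number of *STEP blocks        */
    int nfaces_per_step = 41861156;  /* hypothetical *FILM/*RADIATE faces per step */
    int nload           = 0;

    for (int step = 1; step <= nsteps; step++)
        nload += nfaces_per_step;    /* OP=NEW does not reset this count */

    /* sideload stores 20 characters per load, so the request is 20*nload bytes,
       which no longer fits in a 32-bit int for counts like the one above.      */
    printf("nload = %d, sideload bytes = %lld\n", nload, 20LL * nload);
    return 0;
}

With i8 (8-byte integers) the same product stays well within range, which would explain why the i8/PARDISO build does not hit the error.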
