Even if you do it in a loop and only commit in batches, it will still be a single database transaction. When working with that many objects, that can cause issues. Is it possible to retrieve the objects from the API in smaller batches and do the actual processing in the Mendix process queue? That would guarantee that each batch is committed in its own transaction. (There are also Java actions, I think in Community Commons, to start and end transactions.)
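The batching idea above can be sketched in plain Java. This is not Mendix code: `commitBatch` is a hypothetical stand-in for whatever commits one slice in its own transaction (in Mendix, a microflow wrapped between the Community Commons StartTransaction and EndTransaction actions).

```java
import java.util.ArrayList;
import java.util.List;

public class BatchCommitSketch {
    // Hypothetical stand-in for committing one batch in its own
    // database transaction. In a real Mendix app this would be a
    // (sub)microflow bracketed by StartTransaction/EndTransaction.
    static void commitBatch(List<Integer> batch) {
        System.out.println("Committed batch of " + batch.size());
    }

    public static void main(String[] args) {
        int batchSize = 1000;
        List<Integer> all = new ArrayList<>();
        for (int i = 0; i < 5000; i++) all.add(i);

        // Process the full set in fixed-size slices so each slice is
        // committed independently, instead of holding all 50k+ objects
        // in one huge transaction until the very end.
        for (int start = 0; start < all.size(); start += batchSize) {
            int end = Math.min(start + batchSize, all.size());
            commitBatch(all.subList(start, end));
        }
    }
}
```

The key point is that each slice is finished and durable before the next one starts, so memory use and transaction size stay bounded by `batchSize` rather than by the total object count.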
Thank you both very much! I will try StartTransaction and EndTransaction.
I could not find commitInSeperateDatabaseTransaction. Or is this not a Community Commons action?
Maximum call stack exceeded usually means that this microflow is recursive (i.e. it has a Call Microflow action which calls itself). Then, if it does not terminate properly, you get this error. Either this microflow should not be recursive, or you need to ensure that it doesn't call itself too many times.
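The failure mode described above can be illustrated in plain Java (not Mendix): a self-call that only terminates when a base case is reached, plus an explicit depth cap as a safety net. Without the cap, a microflow that keeps calling itself behaves like unbounded recursion and eventually exhausts the call stack.

```java
public class RecursionGuard {
    // A recursive "microflow" sketch. The base case (remaining <= 0)
    // ensures proper termination; the depth cap catches runaway
    // recursion before the call stack is exhausted.
    static int process(int remaining, int depth, int maxDepth) {
        if (remaining <= 0) return depth;   // base case: stop recursing
        if (depth >= maxDepth) {
            throw new IllegalStateException("recursion limit hit");
        }
        return process(remaining - 1, depth + 1, maxDepth);
    }

    public static void main(String[] args) {
        // Terminates after 10 self-calls, well under the cap of 100.
        System.out.println(process(10, 0, 100));
    }
}
```

In Mendix terms: either remove the Call Microflow action that targets the microflow itself, or make sure a counter or shrinking list guarantees the self-call eventually stops.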
For other people experiencing this problem in the future: we ran into the same issue in our project. In our case a lot of objects (also 50k plus) were synced to a device, and the microflow was called from a nanoflow.
When the microflow finished and the changes were executed, the ‘Maximum stack size exceeded’ exception was thrown. In our case it helped to make multiple microflow calls from the nanoflow, so that the number of objects synced in each individual microflow call was smaller.
So I suspect using the End Transaction and Start Transaction activities would work for the original poster.