Getting out of memory errors after upgrading to Mendix version 7.18

3
Hi there,

I upgraded to Mendix 7.18 on Monday and I am now getting out of memory errors in the long-running microflows that run in the background of my application. I have tried different machines, reloaded the PostgreSQL database and increased the maximum locks per transaction setting, all to no avail. I made the mistake of pushing this to production because we didn't pick up the errors: we normally run these microflows several times a day to keep the record list size small. I had to go back in history and rerun the information for a complete day, and that is when the errors happened. Nothing has changed in this portion of the system except the Mendix upgrade. I can optimise the microflows so they commit less, or break the jobs up, but that will take time because all the functionality will have to be tested end to end again, and I'm not sure it will solve the problem.

Are you guys aware of this problem, or is there a setting I need to change?

Stacktrace:

com.mendix.modules.microflowengine.MicroflowException: Failed to commit
    at SalesAndCashup.OF_Sales_GetAutomatedUpdates_ForScheduledTask.nested.f94d4b81-62d6-431c-a098-2626adba402c [879 of 959] (Change : 'Change 'Transaction' (Net, VAT, Gross, TransactionQuantity)')
    at SalesAndCashup.OF_Sales_GetAutomatedUpdates_ForScheduledTask (NestedLoopedMicroflow : '')
    at RestServices_Custom.Update_Tills (SubMicroflow : 'OF_Sales_GetAutomatedUpdates_ForScheduledTask')
    at RestServices_Custom.IVK_Cashup_Queue.nested.9da78cdf-595b-4e56-9192-d252d541467b.nested.87617301-85b1-4b83-805d-56bec64df607 [0 of 1] (SubMicroflow : 'Update_Tills')
    at RestServices_Custom.IVK_Cashup_Queue.nested.9da78cdf-595b-4e56-9192-d252d541467b [0 of 1] (NestedLoopedMicroflow : '')
    at RestServices_Custom.IVK_Cashup_Queue (NestedLoopedMicroflow : '')

Advanced stacktrace:
    at com.mendix.modules.microflowengine.MicroflowUtil.processException(MicroflowUtil.java:146)
Caused by: com.mendix.core.CoreRuntimeException: Failed to commit
    at com.mendix.basis.component.CommitHandler.commit(CommitHandler.scala:155)
Caused by: com.mendix.core.CoreRuntimeException: com.mendix.core.CoreRuntimeException: com.mendix.systemwideinterfaces.MendixRuntimeException: com.mendix.basis.connectionbus.ConnectionBusException: Exception occurred while updating data. (SQL State: 53200, Error Code: 0)
    at com.mendix.basis.actionmanagement.ActionManagerBase.executeInTransactionSync(ActionManagerBase.java:125)
Caused by: com.mendix.core.CoreRuntimeException: com.mendix.systemwideinterfaces.MendixRuntimeException: com.mendix.basis.connectionbus.ConnectionBusException: Exception occurred while updating data. (SQL State: 53200, Error Code: 0)
    at com.mendix.basis.actionmanagement.ActionManagerBase.executeSync(ActionManagerBase.java:159)
Caused by: com.mendix.systemwideinterfaces.MendixRuntimeException: com.mendix.basis.connectionbus.ConnectionBusException: Exception occurred while updating data. (SQL State: 53200, Error Code: 0)
    at com.mendix.util.classloading.Runner.doRunUsingClassLoaderOf(Runner.java:36)
Caused by: com.mendix.basis.connectionbus.ConnectionBusException: Exception occurred while updating data. (SQL State: 53200, Error Code: 0)
    at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.getCorrectException(JdbcDataStore.java:736)
Caused by: org.postgresql.util.PSQLException: ERROR: out of shared memory
  Hint: You might need to increase max_locks_per_transaction.

Regards,
Patrick
asked
3 answers
1

I've encountered this problem with long-running microflows and high volumes of objects as well, for example when I'm processing a large number of objects in batches of 5,000 or so. So far I've found two solutions; I prefer the second one:

  • Scheduled events: I restart the long-running microflow after reaching a batch limit (as mentioned by Ronald). For example, I process 10 batches of 5,000 objects and then let the microflow end. A scheduled event calls the microflow again, which continues processing from where it left off, using a 'settings' object that keeps track of which object was processed last. This method is harder to implement and often also a bit slow, because you have to wait until the scheduled event fires again.
  • Java actions: I prefer using the CommunityCommons module to execute microflows in separate transactions (e.g. with the 'execute a microflow as another user' action). The approach is similar: I call a microflow that processes 10 batches of 5,000 objects via the Java action "execute microflow as user", and keep looping over that Java action until all objects have been processed. This is actually the easiest method to implement and has always worked for me (see the sketch after this list).
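If you would rather build the second approach yourself instead of going through CommunityCommons, a minimal sketch of a custom Java action could look like the code below. It assumes the Mendix 7 Core Java API (Core.createSystemContext and Core.execute); the microflow name SalesAndCashup.ProcessNextBatch and its Boolean "more work left" return value are invented for the example, and the exact transaction behaviour depends on your runtime version, so treat this as a sketch rather than a drop-in action.

import com.mendix.core.Core;
import com.mendix.systemwideinterfaces.core.IContext;

// Keeps calling a batch microflow until it reports that nothing is left to do.
// The (hypothetical) microflow processes one chunk of objects, commits its own
// changes and returns true while more objects remain.
public class BatchRunner {

    public static void runAllBatches() throws Exception {
        boolean moreWork = true;
        while (moreWork) {
            // A fresh system context per call, so each batch is intended to be
            // committed (and its locks released) before the next batch starts.
            IContext batchContext = Core.createSystemContext();
            Object result = Core.execute(batchContext, "SalesAndCashup.ProcessNextBatch");
            moreWork = Boolean.TRUE.equals(result);
        }
    }
}

The point of either variant is the same: no single database transaction ever has to hold the locks for a whole day's worth of data, which is what the out-of-shared-memory hint is pointing at.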
answered
2

This error was due to a change we made in the Mendix runtime, which had a negative side effect on PostgreSQL. It is fixed in release 7.19; you can find it under ticket references 68224 and 68847 in the "Fixes" section of the release notes.

Full text copied from there for quick reference; a JDBC illustration of the save point pattern follows the quote:

  • We reverted the change for improving microflow execution and commit actions by not removing save points, because that caused some databases to run out of limits (for example, out-of-memory on Postgres with the hint to increase max_locks_per_transaction). (Tickets 68224, 68847)
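Just to make the wording of that release note concrete: a save point is the database-level construct the runtime uses inside a transaction, and removing (releasing) it again once a step has succeeded is what 7.19 restored. Below is a rough JDBC-level illustration of creating and releasing a save point; the connection details and the transactiondata table are placeholders, and this is not the Mendix runtime's internal commit logic.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Savepoint;
import java.sql.Statement;

// Sets a save point before a statement, releases it on success and rolls back
// to it on failure, so only that step is undone while earlier work is kept.
public class SavepointExample {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/yourdb";
        try (Connection conn = DriverManager.getConnection(url, "youruser", "yourpassword")) {
            conn.setAutoCommit(false);
            try (Statement st = conn.createStatement()) {
                Savepoint sp = conn.setSavepoint("before_update");
                try {
                    st.executeUpdate("UPDATE transactiondata SET gross = net + vat");
                    conn.releaseSavepoint(sp); // drop the save point once the step succeeded
                } catch (Exception e) {
                    conn.rollback(sp);         // undo only this step
                }
            }
            conn.commit();
        }
    }
}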
answered
1

Patrick,

I found this post quite useful: http://www.databasesoup.com/2012/06/postgresqlconf-maxlockspertransaction.html

I assume the long-running microflow uses batches? If so, you could try lowering the batch size. Otherwise I would raise a support ticket, because Mendix should figure out why this is happening in Postgres.
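Before changing anything in postgresql.conf, it can help to check what the server is currently running with. A minimal JDBC sketch (the URL and credentials are placeholders for your own database):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

// Prints the PostgreSQL settings that bound the shared lock table. Per the
// PostgreSQL documentation the lock table can track roughly
// max_locks_per_transaction * (max_connections + max_prepared_transactions)
// objects, and raising max_locks_per_transaction requires a server restart.
public class CheckLockSettings {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/yourdb";
        try (Connection conn = DriverManager.getConnection(url, "youruser", "yourpassword");
             Statement st = conn.createStatement()) {
            for (String setting : new String[] {
                    "max_locks_per_transaction", "max_connections", "max_prepared_transactions" }) {
                try (ResultSet rs = st.executeQuery("SHOW " + setting)) {
                    rs.next();
                    System.out.println(setting + " = " + rs.getString(1));
                }
            }
        }
    }
}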

Regards,

Ronald


answered