Database delete rolls back

0
We have an on-premises deployment where we are trying to delete a large dataset of ~350k records together with their related data. The delete process is rather slow (because of the related data): it removes roughly 10k records per hour, so the full delete would take about 35 hours. At 5:30 in the morning we see a connection pool reset error in the logs, and the database appears to roll back the delete after this.

Connection pool error message:

    2020-01-28 05:30:51.547 ERROR - ConnectionBus_Retrieve: JdbcDataStore::execRetrieveQuery failed with SQLState: 08S01, error code: 0, message: Connection reset
    2020-01-28 05:30:51.547 ERROR - ConnectionBus_Retrieve: com.microsoft.sqlserver.jdbc.SQLServerException: Connection reset
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.terminate(SQLServerConnection.java:2826)
        at com.microsoft.sqlserver.jdbc.TDSChannel.read(IOBuffer.java:1979)
        at com.microsoft.sqlserver.jdbc.TDSReader.readPacket(IOBuffer.java:6369)
        at com.microsoft.sqlserver.jdbc.TDSCommand.startResponse(IOBuffer.java:7627)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.doExecuteStatement(SQLServerStatement.java:857)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement$StmtExecCmd.doExecute(SQLServerStatement.java:759)
        at com.microsoft.sqlserver.jdbc.TDSCommand.execute(IOBuffer.java:7240)
        at com.microsoft.sqlserver.jdbc.SQLServerConnection.executeCommand(SQLServerConnection.java:2869)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeCommand(SQLServerStatement.java:243)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeStatement(SQLServerStatement.java:218)
        at com.microsoft.sqlserver.jdbc.SQLServerStatement.executeQuery(SQLServerStatement.java:688)
        at org.apache.commons.dbcp2.DelegatingStatement.executeQuery(DelegatingStatement.java:207)
        at org.apache.commons.dbcp2.DelegatingStatement.executeQuery(DelegatingStatement.java:207)
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.execRetrieveQuery(JdbcDataStore.java:315)
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.execRetrieveQueries(JdbcDataStore.java:195)
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.retrieve(JdbcDataStore.java:155)
        at com.mendix.connectionbus.connections.jdbc.JdbcDataStore.retrieve(JdbcDataStore.java:150)
        at com.mendix.connectionbus.connections.DataStoreRetrieveAction.call(DataStoreRetrieveAction.java:29)
        at com.mendix.connectionbus.connections.DataStoreRetrieveAction.call(DataStoreRetrieveAction.java:11)
        at java.util.concurrent.FutureTask.run(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source)
        at java.lang.Thread.run(Unknown Source)

According to this Stack Overflow post, SQLState 08S01 can be caused by a badly configured firewall. Nothing in the connection pool settings immediately jumps out as a likely culprit (see the documentation here).

What have we tried so far (without success)?

  • Implemented our delete inside a loop where we delete 1000 records at a time (see the sketch at the end of this post)
  • Used the CommunityCommons deleteAll Java action
  • Used the CommunityCommons startTransaction and endTransaction inside our loop deleting 1000 records at a time

With the delete process likely taking 35 hours to complete, there is no way we will finish before the next connection pool reset. Has anyone else experienced this before? Do you have a way around the problem?
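
For clarity, the batching pattern we are effectively running is roughly the sketch below, reduced to plain JDBC with hypothetical table, column and connection names (the real logic is a Mendix microflow, and the related data is removed by delete behaviour, modelled here as a cascading foreign key). The point is the transaction boundary: as far as we can tell, all batches run inside one open transaction, which would explain why everything rolls back when the connection resets.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Simplified illustration only (plain JDBC, hypothetical names). The real delete
// is a Mendix microflow loop; related data is assumed to be removed per record
// by delete behaviour, modelled here as an ON DELETE CASCADE foreign key.
public class BatchedDeleteSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://dbhost;databaseName=AppDb"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "appUser", "secret")) {
            conn.setAutoCommit(false); // from here on, everything is ONE transaction

            int deleted;
            do {
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE TOP (1000) FROM ParentRecord WHERE MarkedForDeletion = 1")) {
                    deleted = ps.executeUpdate(); // related ChildRecord rows go with it
                }
                // No commit here: batching limits the work per statement,
                // but every batch still belongs to the same open transaction.
            } while (deleted > 0);

            conn.commit(); // only reached if the whole ~35-hour run survives;
                           // a connection reset before this point rolls everything back
        }
    }
}
```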
asked
2 answers
2

Some options are:

  1. Use the ProcessQueue module to delete in batches. This will not increase performance, but it lets you run the deletes in separate transactions.
  2. Do every delete in a separate transaction (there are Java actions for this in CommunityCommons); however, be careful when fiddling with transactions.
  3. Delete the related data in a separate step instead of letting delete behaviour handle it (see the sketch below).
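
To make options 2 and 3 concrete, below is a rough sketch of the idea at the database level, written in plain JDBC with hypothetical table, column and connection names, not Mendix code; inside Mendix you would get the same effect by running each batch in its own committed transaction (for example, one microflow call per batch). Deleting the related data first and committing after every batch means a connection reset only costs you the batch that was in flight.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;

// Rough sketch of options 2 and 3 in plain JDBC with hypothetical names.
public class PerBatchCommitSketch {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://dbhost;databaseName=AppDb"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "appUser", "secret")) {
            conn.setAutoCommit(false);

            // Step 1: remove the related (child) data first, in its own committed batches.
            int deleted;
            do {
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE TOP (1000) FROM ChildRecord WHERE ParentId IN "
                      + "(SELECT Id FROM ParentRecord WHERE MarkedForDeletion = 1)")) {
                    deleted = ps.executeUpdate();
                }
                conn.commit(); // each batch is its own transaction:
                               // a connection reset only loses the batch in flight
            } while (deleted > 0);

            // Step 2: remove the parent records themselves, also batch by batch.
            do {
                try (PreparedStatement ps = conn.prepareStatement(
                        "DELETE TOP (1000) FROM ParentRecord WHERE MarkedForDeletion = 1")) {
                    deleted = ps.executeUpdate();
                }
                conn.commit();
            } while (deleted > 0);
        }
    }
}
```

If the process is interrupted, you can simply restart it: the batches that were already committed stay deleted, so the run resumes where it stopped instead of starting over.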

regards, Fabian

  • just deleting in a loop will still run in a single large transaction
  • starting a new transaction within the main transaction only creates a subtransaction
answered
2

You don't necessarily need to set up and use ProcessQueue for this; you can simply use the 'executeMicroflowInBackground' or 'executeMicroflowInBatches' Java action from the CommunityCommons module. I think the ProcessQueue module is a bit heavy for this if you are not already using it for something else.

If you use executeMicroflowInBackground, create your own batching and deletion logic: emulate a loop with a counter variable and let it trigger the executeMicroflowInBackground Java action as many times as needed. The calls run asynchronously, so they queue up in the background.

Furthermore, consider retrieving and deleting your data manually from the bottom up, as opposed to deleting the top objects and letting delete behaviour handle it. Retrieving and deleting lists manually is much, much faster than deleting objects one by one via delete behaviour (see the sketch below).
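To make the "lists are much faster" point concrete, here is a rough sketch of the difference at the JDBC level, with hypothetical table, column and connection names (nothing Mendix-specific): deleting record by record pays one statement per object, while handing the database the whole list as one batch is far cheaper. In Mendix the equivalent is retrieving a list and deleting the whole list in a single delete action instead of deleting inside a loop.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.List;

// Illustration of list-based deletes vs one-by-one deletes, in plain JDBC
// with hypothetical names.
public class ListDeleteVsRowByRowSketch {

    // Slow: one statement execution per record, similar in spirit to
    // letting delete behaviour clean up objects one at a time.
    static void deleteOneByOne(Connection conn, List<Long> ids) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM ChildRecord WHERE Id = ?")) {
            for (long id : ids) {
                ps.setLong(1, id);
                ps.executeUpdate();
            }
        }
        conn.commit();
    }

    // Faster: send the whole batch of statements to the server together
    // instead of executing them one at a time.
    static void deleteAsBatch(Connection conn, List<Long> ids) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "DELETE FROM ChildRecord WHERE Id = ?")) {
            for (long id : ids) {
                ps.setLong(1, id);
                ps.addBatch();
            }
            ps.executeBatch();
        }
        conn.commit();
    }

    public static void main(String[] args) throws Exception {
        String url = "jdbc:sqlserver://dbhost;databaseName=AppDb"; // hypothetical
        try (Connection conn = DriverManager.getConnection(url, "appUser", "secret")) {
            conn.setAutoCommit(false);
            deleteAsBatch(conn, List.of(1L, 2L, 3L)); // example ids
        }
    }
}
```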

answered