Failed to remove session after a thrown exception?

Hi guys,

We ran into some strange behaviour in our MxApp:

Mar 13 09:40:47.187 - WARNING - Core: Failed to remove session 'ff033945-e753-4599-9f1a-9ef9153fcc77' for user '2395966' because actions are still running for this session. Client access has been disabled. Session will be attempted to be removed again in 300 seconds.
Mar 13 09:40:47.187 - WARNING - Core: Failed to remove session 'aad55d8e-6097-4c0f-94a5-9614a32e92b1' for user '2395966' because actions are still running for this session. Client access has been disabled. Session will be attempted to be removed again in 300 seconds.
Mar 13 09:40:47.187 - WARNING - Core: Failed to remove session '372557cb-6e00-4a81-a096-bb6b8527f265' for user '2395966' because actions are still running for this session. Client access has been disabled. Session will be attempted to be removed again in 300 seconds.

For almost 24 hours Mx kept trying to kill these three sessions. Looking at the "Running now" metrics, we were also not able to kill them manually. Looking at the microflow indicated as running there, a possible cause could be a "throwException" Java action; the rest of the microflow is native stuff such as retrieving the account and a couple of create actions. There are no web service calls or anything like that.

We were also not able to reproduce it.

So, does anyone have an indication of the cause? The following questions come to mind:
1 - How could Mx kill this microflow at all, given that it is a sub-microflow of a higher microflow?
2 - Is it looking for a session which isn't there?
3 - If so, how could it be possible to end sessions without informing Mx about it?
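For illustration, such a "throw exception" Java action is usually just a thin wrapper that rethrows its message parameter as an exception. Below is a simplified sketch; the layout roughly follows what Mendix Studio Pro generates, but the package, class and parameter names are made up for this example and are not our actual code:

```java
package myfirstmodule.actions;

import com.mendix.systemwideinterfaces.core.IContext;
import com.mendix.webui.CustomJavaAction;

// Simplified sketch of a typical "throw exception" Java action.
// The microflow calls this action; the exception propagates to the
// microflow's error handler (or aborts the microflow if unhandled).
public class ThrowException extends CustomJavaAction<Boolean>
{
    private final String message;

    public ThrowException(IContext context, String message)
    {
        super(context);
        this.message = message;
    }

    @Override
    public Boolean executeAction() throws Exception
    {
        // BEGIN USER CODE
        throw new Exception(message);
        // END USER CODE
    }

    @Override
    public String toString()
    {
        return "ThrowException";
    }
}
```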
asked
2 answers

I have seen this with a session passed to a background process or with long-running scheduled events running as a user.
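Purely as a conceptual sketch (the types and method names below are hypothetical placeholders, not the real Mendix API): as long as work started on behalf of a session is still running somewhere, the runtime counts it as "actions still running for this session" and keeps postponing removal, which is exactly the warning you are seeing.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Conceptual illustration only: "UserSession" and "longRunningWork" are
// hypothetical stand-ins, not real Mendix types. The point is that the
// session cannot be removed while work started on its behalf is running.
public class BackgroundWorkExample {

    static final ExecutorService pool = Executors.newSingleThreadExecutor();

    static void startInBackground(UserSession session) {
        pool.submit(() -> {
            // The session is considered "in use" for as long as this runs.
            // If this loops or blocks for hours, session removal keeps
            // failing and the warning is logged on every retry interval.
            longRunningWork(session);
        });
    }

    static void longRunningWork(UserSession session) {
        // ... work that holds on to the session's context ...
    }

    // Hypothetical placeholder type.
    static class UserSession { }
}
```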

answered

Hi Enzo,

The platform will not remove that session as long as that microflow is running, so you will keep seeing 'Failed to remove session' for as long as the microflow shows up under 'Running now'. It sounds like the real issue is that, for some reason, a process is running much longer than you think it should, and your analysis indicates that the 'throw exception' action or the sub-microflow is causing the problem.

My first guess would be: is there an infinite loop somewhere? It's a long shot, but perhaps in the error flow there is a commit on an object that has a before-commit microflow? Are you able to increase the logging levels for the relevant microflow, or set a breakpoint on the long-running flow, to get a sense of why it might be hung up?
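To make that infinite-loop guess concrete, here is a rough sketch in plain Java that mirrors the microflow structure I have in mind (all names are hypothetical and this is not real Mendix code): the commit in the error flow keeps failing in its before-commit event, the error handler keeps retrying, so the microflow never ends and the session can never be removed.

```java
// Hypothetical sketch of how an error flow combined with a before-commit
// microflow could loop forever. Plain Java is used only to mirror the
// shape of the microflows involved.
public class ErrorLoopSketch {

    static void mainMicroflow(Account account) {
        try {
            doWorkAndThrow(account);          // the "throw exception" step
        } catch (Exception e) {
            commitUntilItSucceeds(account);   // commit inside the error flow
        }
    }

    static void commitUntilItSucceeds(Account account) {
        boolean committed = false;
        while (!committed) {
            try {
                commit(account);
                committed = true;
            } catch (Exception e) {
                // Error handler set to "continue": swallow the error and retry.
            }
        }
    }

    static void commit(Account account) throws Exception {
        beforeCommit(account);                // before-commit event runs first
        // ... actual persistence would happen here ...
    }

    static void beforeCommit(Account account) throws Exception {
        // If this validation fails on every attempt, the loop above never exits.
        throw new Exception("before-commit validation failed");
    }

    static void doWorkAndThrow(Account account) throws Exception {
        throw new Exception("something went wrong");
    }

    // Hypothetical placeholder entity.
    static class Account { }
}
```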

answered