
Hi @Barney, 


I've logged the following ticket - https://community.warewolf.io/communities/1/topics/1510-can-we-please-confirm-that-the-condition-if-there-is-no-error-on-the-decision-tool-recognizes-and

With regard to ASYNC processing, we removed it after we experienced a drop-off in records during execution. I'm aware that you reported it and DEV2 have fixed it, but we (Journey Team) haven't enabled or tested it since the fix was made. Something for us to look into.

This is a nice breakdown, thanks Khonzi.

I've been thinking about this issue, and from my experience monitoring journeys, I have the following comments:

1. If an API fails, the current condition in the Decision Tool, "If there is no error", appears not to work for certain errors, such as a 504. What then happens is that the customer record gets stuck in the Journey, because the decision logic fails and the record never packs up. And because the record is stuck in the journey, it doesn't send an acknowledgement back to the relevant RabbitMQ queue, so the next record in the queue is not read and processed. We basically have a lock that, at this stage, only a restart seems to resolve (see the sketch below).
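
To make the lock concrete, here is a minimal Python sketch using the pika RabbitMQ client, assuming manual acknowledgements with a prefetch of 1. The queue name and `process_record` are hypothetical stand-ins, not our actual setup:

```python
# A minimal pika sketch (hypothetical queue and handler names) of why a
# stuck record blocks the queue: with prefetch_count=1 and manual acks,
# RabbitMQ will not deliver the next message until the current one is
# acknowledged.
import pika

def process_record(body):
    # Stand-in for running one customer record through the journey.
    ...

connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="journey.records", durable=True)

# Allow only one unacknowledged message in flight at a time.
channel.basic_qos(prefetch_count=1)

def on_record(ch, method, properties, body):
    try:
        process_record(body)
        ch.basic_ack(delivery_tag=method.delivery_tag)
    except Exception:
        # If the decision logic fails and we never ack (or nack), this
        # consumer holds the message forever and the next record is never
        # delivered: the "lock" described above.
        ch.basic_nack(delivery_tag=method.delivery_tag, requeue=False)

channel.basic_consume(queue="journey.records", on_message_callback=on_record)
channel.start_consuming()
```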


Now knowing what we know, what can we do differently?


2. We can fix the condition, "If there is no error", in the Decision Tool to recognize failures such as a 504.


or

From the Journey Team: we can alter the condition on the Decision Tool to check whether any records are returned from the API, instead of "If there is no error". If no records are returned, then pack up; else continue (sketched below).
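
A rough Python sketch of that decision change, using the requests library to stand in for the journey's API step; the URL and the "records" field are placeholders, not the journey's real API:

```python
# Hypothetical sketch: branch on "were any records returned?" rather than
# "was there no error?", so a 504 can no longer slip past the decision.
import requests

def fetch_records(url):
    try:
        response = requests.get(url, timeout=30)
        response.raise_for_status()  # a 504 raises HTTPError here
        return response.json().get("records", [])
    except (requests.RequestException, ValueError):
        # Gateway timeouts, connection errors, and bad payloads all land
        # here, so they are treated the same as "no records returned".
        return []

records = fetch_records("https://api.example.com/records")  # placeholder URL
if not records:
    print("No records returned: pack up and acknowledge the queue.")
else:
    print(f"Continuing the journey with {len(records)} records.")
```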


3. We can also implement ASYNC processing on the journeys; this will enable other records to be processed without having to wait for one record to finish before the next is picked up (sketched below).
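
As a rough illustration only (plain Python with a worker pool, not Warewolf's actual ASYNC implementation), this shows the behaviour we're after: one slow or failing record no longer holds up the others:

```python
# Hypothetical sketch of concurrent record processing: records are handled
# by a pool of workers, so a slow or failing one doesn't block the rest.
from concurrent.futures import ThreadPoolExecutor, as_completed

def process_record(record):
    # Stand-in for executing one customer record through the journey.
    return f"processed {record}"

records = [f"record-{i}" for i in range(10)]

with ThreadPoolExecutor(max_workers=4) as pool:
    futures = {pool.submit(process_record, r): r for r in records}
    for future in as_completed(futures):
        record = futures[future]
        try:
            print(future.result())
        except Exception as exc:
            # A failure in one record stays isolated; the others continue.
            print(f"{record} failed: {exc}")
```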

Morning Barney,

Yes, this can work. Would the suggested functionality cater for new nodes that might have been added to the workflow since the tests were written?

Hi Team, we've had quite a few issues with the suspend tool since it was released. It has become clear that we need to include the suspend tool in the Warewolf testing framework so that we can easily test it after each release. How far are we with "being able to fully test the suspend tool with the Warewolf testing framework"?