Today’s guest post is from Stephen Fenech, a consultant at Ricston, who talks about his most recent Mule engagement.

A couple of weeks ago I was at SwissCom, the ‘leading Swiss provider of innovative communications and IT solutions’. They are currently using Mule in one of their major projects, and we came across an interesting scenario. For this post I have watered down the scenario slightly, so as not to distract you from the main point while still keeping all the relevant details.

We had a synchronous request from an external client, which was forwarded in parallel to multiple services. The results from these services were aggregated and then sent back as the response to the client.

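In Mule 2.x configuration terms the flow looks roughly like the sketch below. It is illustrative only: the component class, endpoint paths and timeout are placeholders rather than the actual project values, and namespace declarations are omitted.

<model name="sample">
    <service name="MainService">
        <inbound>
            <!-- synchronous request from the external client -->
            <inbound-endpoint address="vm://main" synchronous="true"/>
        </inbound>
        <component class="com.example.MainComponent"/>
        <outbound>
            <!-- fan the request out to both services in parallel -->
            <multicasting-router>
                <vm:outbound-endpoint path="service1"/>
                <vm:outbound-endpoint path="service2"/>
                <reply-to address="vm://reply"/>
            </multicasting-router>
        </outbound>
        <async-reply timeout="10000">
            <!-- the request thread blocks here until the replies are aggregated -->
            <vm:inbound-endpoint path="reply"/>
            <collection-async-reply-router/>
        </async-reply>
    </service>
</model>

The multicasting router dispatches the component's output to both services over VM, and the async-reply block makes the original request thread wait on vm://reply until both results have arrived and been aggregated into the response.
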
This is quite a common pattern. However, since the requests used to obtain this information (the ones to Service1 and Service2) were quite heavy, the results were being cached. The MainService would check whether there was a cached result and, if so, return it directly. This was handled by having two outbound routers with filters: if the MainService produces a fresh request, the normal flow is used, but if a result is found in the cache it is sent straight to the asynchronous reply aggregator.

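Sketched in the same illustrative style (the payload types used by the filters are made up, and the inbound and async-reply sections stay as before), the outbound section becomes something like this:

<outbound>
    <!-- normal flow: a fresh request is multicast to both services -->
    <multicasting-router>
        <vm:outbound-endpoint path="service1"/>
        <vm:outbound-endpoint path="service2"/>
        <payload-type-filter expectedType="com.example.ServiceRequest"/>
        <reply-to address="vm://reply"/>
    </multicasting-router>
    <!-- cached result: skip the services and send it straight to the reply endpoint -->
    <filtering-router>
        <vm:outbound-endpoint path="reply"/>
        <payload-type-filter expectedType="com.example.CachedResult"/>
    </filtering-router>
</outbound>

The second router is where the extra hop comes from: the cached result is dispatched over VM to vm://reply just so that the asynchronous reply router can pick it up and hand it back to the blocked request thread.
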
This worked; however, it was slightly inefficient, since an extra thread was used to dispatch the message over VM, only for it to be aggregated by the original thread. We wanted to push Mule to the limit, so even minor improvements would let the server handle a larger load.

So we asked ourselves: what if we could short-circuit the flow so that a cached result is sent directly back to the client, without passing it over VM and then blocking and waiting for the very same result? And how can we do this short-circuiting in the most painless way possible?

The secret is in the getResponse method of the aggregator. By default we simply use the parent's implementation, which uses the response aggregator to block waiting for the responses, then calls the custom aggregation and returns the aggregated message. In our case, we want to return the response immediately in certain situations. The getResponse method takes a MuleMessage as a parameter. Typically this is the message sent out by the outbound router, and it is used to obtain the information to correlate on. However, if no outbound router accepts the message returned by the service component, that message is passed on to getResponse instead. So all we have to do is filter out the cached message on the outbound side so that nothing is dispatched; then, in getResponse, we either return the cached message straight away or call the normal parent method.

In order to make things a bit more generic, a filter is used to decide whether the message is already a response, making the router configurable. Another advantage of this is that, just by looking at the configuration, you can tell there is something different about this aggregation router, which makes the config more explicit.

// Imports assume the Mule 2.x package layout.
import org.mule.api.MuleMessage;
import org.mule.api.routing.RoutingException;
import org.mule.api.routing.filter.Filter;
import org.mule.routing.response.ResponseCorrelationAggregator;

public class CustomAggregator extends ResponseCorrelationAggregator
{
    // This filter is used to check if the result should be sent
    // back immediately rather than wait for the aggregation.
    private Filter shortCircuitingFilter;

    @Override
    public MuleMessage getResponse(MuleMessage message) throws RoutingException
    {
        if (shortCircuitingFilter != null && shortCircuitingFilter.accept(message))
        {
            // The message is already a (cached) response: hand it back directly.
            logger.debug("Short-circuiting flow.");
            return message;
        }
        else
        {
            // Normal case: block and wait for the replies, then aggregate them.
            return super.getResponse(message);
        }
    }

    // Setter so the filter can be injected from the configuration.
    public void setShortCircuitingFilter(Filter shortCircuitingFilter)
    {
        this.shortCircuitingFilter = shortCircuitingFilter;
    }
}

The configuration is as follows:

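Again as an illustrative sketch (class names and paths are placeholders, and the way the filter is wired up is an assumption: here the short-circuiting filter is injected as a Spring bean property, with a matching filter bean declared elsewhere in the configuration):

<service name="MainService">
    <inbound>
        <inbound-endpoint address="vm://main" synchronous="true"/>
    </inbound>
    <component class="com.example.MainComponent"/>
    <outbound>
        <!-- only fresh requests are dispatched; a cached result matches no outbound
             router and is passed straight to the async-reply router's getResponse -->
        <multicasting-router>
            <vm:outbound-endpoint path="service1"/>
            <vm:outbound-endpoint path="service2"/>
            <payload-type-filter expectedType="com.example.ServiceRequest"/>
            <reply-to address="vm://reply"/>
        </multicasting-router>
    </outbound>
    <async-reply timeout="10000">
        <vm:inbound-endpoint path="reply"/>
        <custom-async-reply-router class="com.example.CustomAggregator">
            <!-- a filter that accepts cached results -->
            <spring:property name="shortCircuitingFilter" ref="cachedResultFilter"/>
        </custom-async-reply-router>
    </async-reply>
</service>
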
With this simple 10-line tweak we managed to improve the flow, reduce the complexity of the scenario, and make the configuration more elegant.