Load Balancing with Mule

Load balancing across multiple server instances is one of the most effective techniques for optimizing resource utilization, maximizing throughput, and reducing latency. It keeps servers highly available even in environments that must handle millions of concurrent requests from users or clients quickly and reliably.
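The core idea can be sketched with a simple round-robin dispatcher. This is a minimal illustration, not Mule's actual implementation, and the server names are hypothetical:

```python
from itertools import cycle

# Hypothetical pool of Mule server instances behind the balancer
servers = ["mule-node-1:8081", "mule-node-2:8081", "mule-node-3:8081"]
_rotation = cycle(servers)

def route_request():
    """Return the next server in round-robin order, spreading
    load evenly across all instances."""
    return next(_rotation)

# Four consecutive requests: the fourth wraps back to the first node
print([route_request() for _ in range(4)])
```

Each request is handed to the next node in turn, so no single instance absorbs the whole load.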

API-led connectivity and CQRS: How Mule supports traditional integration tasks

There is a lot of interest in how Mule supports emerging patterns like CQRS (Command Query Responsibility Segregation), so I wanted to create a series of blog posts discussing an insightful approach. Over the course of the series so far, we described the initial problem at hand and how to solve it using CQRS and API-led Connectivity. Next, we designed and implemented the synchronous Query API application followed by the implementation of the asynchronous Command API application with a composable API architecture.

TLS improvements in Mule 3.8

As you might have read, Mule 3.8 includes a number of improvements regarding TLS. In this post, we will analyze the TLS environment prior to this release and explore all of the new enhancements in detail so that you can start taking advantage of them.

Connecting anything to anything: How the MuleSoft team got a Commodore 64 to tweet

June 29 2016

When MuleSoft engineering recently organized a two-day internal hackathon, our team of four:

Custom batch job instance IDs in Mule 3.8

Welcome to the final post in our three-part series about batch improvements in Mule 3.8!

The last new feature is a simple one that comes in quite handy when you need to read through logs. As you know, batch jobs are just programs processed in batch mode, and each time a job is triggered, a new job instance is created and tracked separately. Each of those instances is unique and therefore has a unique ID.
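As a rough illustration of the idea (this is not Mule's internal scheme; the helper name and ID format are hypothetical), each trigger of a job could mint a distinct instance ID like this:

```python
import uuid

def new_job_instance_id(job_name: str) -> str:
    # Every trigger produces a fresh, globally unique instance ID,
    # which makes it easy to tell runs apart when reading logs.
    return f"{job_name}-{uuid.uuid4()}"

first = new_job_instance_id("nightly-sync")
second = new_job_instance_id("nightly-sync")
print(first != second)  # two triggers, two distinct instances
```

A log line tagged with such an ID unambiguously identifies which run of the job produced it.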

Configurable batch block size in Mule 3.8

Welcome back! Continuing the series about new batch features in Mule 3.8, the second most requested improvement was the ability to configure the batch block size.

What’s the block size?

In a traditional online processing model, each request is usually mapped to a worker thread. Regardless of whether the processing is synchronous, asynchronous, one-way, or request-response, and even if requests are temporarily buffered before being processed (as in the Disruptor or SEDA models), servers usually end up in this 1:1 relationship between a request and a running thread.
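Batch processing breaks that 1:1 model: records are grouped into blocks, and each worker thread takes a whole block at a time. Here is a minimal sketch of the grouping (the function name and default size are illustrative, not Mule's API):

```python
def to_blocks(records, block_size=100):
    """Split a list of records into fixed-size blocks so a single
    worker thread processes block_size records per dispatch."""
    return [records[i:i + block_size]
            for i in range(0, len(records), block_size)]

# 250 records with a block size of 100 yield two full blocks and one partial
blocks = to_blocks(list(range(250)), block_size=100)
print([len(b) for b in blocks])  # → [100, 100, 50]
```

Making the block size configurable lets you trade memory footprint against dispatch overhead for a given workload.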

Announcing Mule 3.8: A unified runtime for integration and API management

As part of our upcoming Anypoint Platform June 2016 announcement, we are excited to release Mule 3.8. This release extends the flexibility of Mule, the runtime engine of Anypoint Platform, by unifying integration and API management capabilities into one lightweight distribution. It also significantly enhances core Mule functionality including DNS round robin load balancing, DataWeave Flat File, Fixed Width and COBOL Copybook support, Mule HA improvements, and more.

Unified API Gateway and Mule runtimes

Running Mule as Worker Role on Azure

May 5 2016

This week’s MuleSoft Champions guest blogger is Ruman Khan, a polyglot programmer who loves to code. 

If you are looking for options to run Mule in Microsoft Azure as a PaaS, one of the best approaches is to use Worker Roles. Worker Roles are VMs with IIS disabled (it can be enabled if needed) and are generally used for complex background processing tasks. We can leverage this to run the standalone Mule service inside an Azure Worker Role.

Using Anypoint Platform at Accenture

April 26 2016

We are proud to have a large and active developer community using MuleSoft. Some of those developers become Champions: active members of our community who grow their skills, help others, and network with the top MuleSoft developers and advocates around the world. Gennaro Spagnoli, a MuleSoft Champion who works at Accenture, wrote about his experience with Anypoint Platform and his approach to a particular integration project.

How to Create and Use OData APIs for Any Connectivity Need

March 16 2016

In my blog post last week, I shared how, in just five minutes, you can expose a MySQL, DB2, SQL Server, Oracle, or SAP datasource as an OData API in Salesforce using Anypoint Data Gateway for Lightning Connect.

Data Gateway - Out of the box

But let’s say what Data Gateway offers out of the box is not a perfect fit for what you want to do. Maybe you want to create an OData API for a different datasource, expose a legacy API as an OData API, or perform data orchestration before exposing data to Salesforce. So what do you do?