API Layer and Grid Processing Architecture – ESB or not to ESB Revisited Part 3

Reading Time: 9 minutes

In this penultimate part of my ESB or not to ESB series, I’m going to cover two more architectures, API Layer and Processing Grid, providing the benefits and considerations for each.

API Layer

An API Layer (or Service Layer) provides a set of APIs to access data and/or functionality. Typically implemented using REST or SOAP services over HTTP, it decouples backend systems and applications from the clients that access them.


API Layer Characteristics

Used to provide programmatic access to data or functionality, an API Layer provides a REST or SOAP interface to one or more sources of data, such as databases, file systems, and legacy applications. Common scenarios include:

  • Need to make data in databases or file systems available to a wider audience
  • Reference / lookup data – usually a database, flat file, or Excel spreadsheet
  • Modernize legacy applications by providing REST access to their data and functionality
  • Provide a REST API in addition to an existing SOAP API to cater for a broader range of clients
  • Publish a data source from two or more applications, providing a union of the data sets, e.g. combining your company employee database with LinkedIn and Facebook data to provide richer information

The API Layer architecture pattern needs to take care of access to the APIs, including authentication and authorization for specific APIs, access tracking, and monitoring. Monitoring is particularly important for shared APIs (internal or external), since other users’ applications will depend on the API and will need an expectation around its SLA.
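
As a minimal sketch of those access concerns, assuming a standard Java servlet stack; the header name, the key lookup, and the in-memory counters are all hypothetical:

import java.io.IOException;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicLong;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Hypothetical gatekeeper for an API Layer: authenticates the caller by API key
// and records per-key call counts so SLA reports can be produced later.
public class ApiAccessFilter implements Filter {

    // Naive in-memory usage counters; a real deployment would persist these.
    private final ConcurrentHashMap<String, AtomicLong> callCounts =
            new ConcurrentHashMap<String, AtomicLong>();

    public void init(FilterConfig config) {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        String apiKey = request.getHeader("X-Api-Key"); // hypothetical header name

        if (apiKey == null || !isAuthorized(apiKey, request.getRequestURI())) {
            ((HttpServletResponse) res).sendError(HttpServletResponse.SC_UNAUTHORIZED);
            return;
        }

        // Access tracking: count calls per key.
        callCounts.putIfAbsent(apiKey, new AtomicLong());
        callCounts.get(apiKey).incrementAndGet();

        chain.doFilter(req, res);
    }

    private boolean isAuthorized(String apiKey, String uri) {
        // Placeholder: look the key up in LDAP/AD or a key store and check
        // that it grants access to this specific API path.
        return true;
    }

    public void destroy() {}
}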

API Layer Benefits

  • Decouple your clients from the data – promotes good service orientation
  • Allows creation of REST APIs that can be consumed by mobile and other smart devices
  • Private data remains private, since the API publishes only a subset of the information; the API controls exactly what data is exposed
  • An API Layer can easily be integrated into existing access controls and LDAP/AD integration.

API Layer Considerations

It’s hard to define an API Layer since there is no established ‘right way’ to do it.  The developer needs to define (one possible shape is sketched below):

  1. A URL structure
  2. An authentication mechanism
  3. A versioning model (there are many ways of doing this, with no obvious winner)
  4. The Data Transfer Objects – the data that gets published over the wire
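
One possible shape for those four decisions, sketched with JAX-RS annotations; the resource, URL scheme, and DTO fields are hypothetical:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;

// Versioning lives in the URL structure (/api/v1/...), and only a small
// Data Transfer Object (not the internal entity) goes over the wire.
@Path("/api/v1/customers")
public class CustomerResource {

    @GET
    @Path("/{id}")
    @Produces("application/json")
    public CustomerDto get(@PathParam("id") String id) {
        // Look up the internal record and copy only the published fields.
        CustomerDto dto = new CustomerDto();
        dto.id = id;
        dto.name = "..."; // populated from the backend system
        return dto;
    }

    // The DTO defines exactly what the API publishes; private fields stay private.
    public static class CustomerDto {
        public String id;
        public String name;
    }
}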

It’s also hard to change an API since you often don’t control the clients that access it.  Changing signatures, the URL scheme, or the authentication or versioning model may break the API for existing clients.

Recommendation

An API Layer is used to decouple clients from backend systems, which is good for modernizing legacy applications, refreshing existing SOAP APIs, migrating backend systems, and unifying data sources.  An API Layer is also used to publish data such as reference and lookup data from flat files, databases, and even Excel spreadsheets. Typically, REST is used for the API Layer, though SOAP provides a WSDL, which defines a contract between client and API.

Processing Grid

The processing grid architecture is a powerful way to perform parallel processing tasks that can scale out. For clarification, scale-out refers to the ability to add commodity hardware (nodes) to the architecture to process more data or events. The processing power must grow linearly, or near-linearly, as more nodes are added. Grids are typically used for highly computational tasks that can be parallelized, such as processing large volumes of market data for trends or collecting device information (such as GPS coordinates from phones or cars) for real-time analysis. This architecture is less about integration and more about processing power; however, the data typically comes from multiple data sources and is funneled into the processing grid.

Processing Grid Characteristics

  • All nodes have the same configuration, and each node processes the same data input the same way as the other nodes.  This means it doesn’t matter which node in the cluster processes the data
  • There needs to be a load balancing mechanism in front of the grid to feed the nodes. Typically an HTTP load balancer or a JMS or AMQP message queue will be used
  • Grids are generally resilient to failure since nodes are interchangeable
  • Grids are usually stateless. Sometimes state is kept to ensure idempotency, i.e. so that duplicate messages don’t affect the outcome (see the sketch after this list)
  • To scale the architecture, add more nodes
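
Here is a minimal sketch of such a node, assuming a JMS queue feeds the grid; the ActiveMQ broker URL, the queue name, and the in-memory duplicate set are illustrative assumptions, not a prescription:

import java.util.Collections;
import java.util.HashSet;
import java.util.Set;
import javax.jms.*;
import org.apache.activemq.ActiveMQConnectionFactory;

// Every node in the grid runs this same code and pulls work from a shared
// queue, so it doesn't matter which node processes a given message.
public class GridNode {

    // Minimal idempotency guard: remember processed message IDs so a redelivered
    // duplicate doesn't change the outcome. A real grid would use a shared store
    // with expiry rather than an unbounded in-memory set.
    private static final Set<String> processed =
            Collections.synchronizedSet(new HashSet<String>());

    public static void main(String[] args) throws JMSException {
        ConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed broker
        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageConsumer consumer =
                session.createConsumer(session.createQueue("work.queue")); // assumed queue
        connection.start();

        while (true) {
            Message message = consumer.receive(); // blocks until the queue feeds us
            if (message instanceof TextMessage && processed.add(message.getJMSMessageID())) {
                process(((TextMessage) message).getText());
            }
        }
    }

    private static void process(String payload) {
        // The actual parallelizable computation, e.g. analyzing a market data tick.
        System.out.println("processed: " + payload);
    }
}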

Processing Grid Benefits

  • Straightforward architecture to build, with a simple deployment
  • Works well for a small number of integration points (applications)
  • Can be scaled out by adding more nodes to the grid

Processing Grid Considerations

  • Best for high-transaction or data processing tasks

Recommendation

Good for high-transaction or data processing tasks that can be parallelized. ESB-like capabilities can help feed data into the grid from different sources. Typically this type of architecture is used for processing structured data, whereas Apache Hadoop would be used for unstructured data. Note, however, that a processing grid is a different architecture from a Hadoop grid, where the grid is highly distributed and the processing logic is brought to the data rather than the data being moved to a centralised grid.

To wrap this up, I will be talking about how Mule works with these different architectures in my Choosing the right integration/ESB platform post. If you think I should be including other architecture patterns in this series, please let me know.

Follow: @rossmason @mulesoft


Validating complex XML messages with Mule and AbsoluteRule

Reading Time: 4 minutes

It is pretty common for Mule messages to contain XML as a payload and for those messages to need validation or transformation. XML documents can be automatically validated using XSD, though those validations are structural; sometimes we need to hand-code validation in plain Java (especially in complex scenarios such as validating references, existence conditions, and value dependencies).
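
As a rough illustration of that two-step approach, structural XSD validation via the standard javax.xml.validation API followed by a hand-coded rule, consider this hypothetical sketch:

import java.io.File;
import javax.xml.XMLConstants;
import javax.xml.transform.stream.StreamSource;
import javax.xml.validation.Schema;
import javax.xml.validation.SchemaFactory;
import javax.xml.validation.Validator;

// Step 1: structural validation against an XSD.
// Step 2: a manual Java check for a rule XSD cannot express.
public class OrderValidator {

    public static void validate(File xml, File xsd) throws Exception {
        SchemaFactory factory =
                SchemaFactory.newInstance(XMLConstants.W3C_XML_SCHEMA_NS_URI);
        Schema schema = factory.newSchema(xsd);
        Validator validator = schema.newValidator();
        validator.validate(new StreamSource(xml)); // throws SAXException on structural errors

        // A cross-reference rule, e.g. "every <item> must reference a declared
        // <product>", would be coded here in plain Java once the structure is
        // known to be valid.
    }
}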

Continue reading

Introducing Mule Query Language

Reading Time: 8 minutes

Working with web APIs, local APIs, and different data formats and structures is too damn hard. You have to write painfully verbose code to:

  • Query Web APIs and work with the data
  • Enrich and join data from external services with local services
  • Compose RESTful services from existing services
  • Version services and data formats
  • Merge data from different sources into a common data format
  • Sort through sets of data
Continue reading

Get a sneak peek at Mule 3.2

Reading Time: < 1 minute

Mule 3.2 is right around the corner and it is shaping up as the best Mule release ever.

Some highlights include:

  • High availability clustering for mission critical environments
  • A business event analyzer to gain deep visibility into business events for root cause analysis and compliance
  • Drools integration for business rules and complex event processing
Continue reading

Meet Until Successful, Store and Forward for Mule

Reading Time: 8 minutes

In computing, as in life, not every attempt is successful the first time. A message delivery to a remote application may be impossible for a while. A particular business action may be impossible due to the temporary unavailability of an enterprise resource. The good news is that these adverse conditions may not last: all that is needed is to retry the failed operation until the issue gets resolved.

This approach is well known in the industry. Just take a look at how email operates: delivery between SMTP servers is attempted repeatedly until it succeeds. Failure is assumed and dealt with. Following the same principles, we’re happy to introduce the Until Successful routing message processor.
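
As a generic illustration of the principle (not Mule’s actual implementation, which also stores pending messages so they survive between attempts), a retry loop looks roughly like this:

import java.util.concurrent.TimeUnit;

// Keep attempting a delivery, waiting between failures, until it succeeds
// or the attempt budget is exhausted. Names and limits are illustrative.
public class RetryUntilSuccessful {

    interface Delivery {
        void attempt() throws Exception;
    }

    static void deliver(Delivery delivery, int maxAttempts, long waitSeconds)
            throws Exception {
        for (int attempt = 1; ; attempt++) {
            try {
                delivery.attempt();
                return; // success, stop retrying
            } catch (Exception e) {
                if (attempt == maxAttempts) throw e; // budget exhausted, give up
                TimeUnit.SECONDS.sleep(waitSeconds); // the adverse condition may clear
            }
        }
    }
}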

Continue reading

Real-time Web and Streaming APIs

Reading Time: 7 minutes

There was a lot of buzz a few years ago around the real-time web, and since then it has been bubbling along. I have a financial/enterprise background, so real-time has a very different meaning to me; time is measured in microseconds. Web real-time seems to be measured as sub-1-second. My issue with the real-time web to date is that only parts of the web are real-time.  While data can be delivered to the browser using push technologies such as Comet and WebSockets, the vast majority of REST and SOAP APIs that provide access to application data still use the HTTP request-response model.

That’s starting to change with more public streaming APIs appearing. A streaming API (aka HTTP Push) works by the client opening a socket and providing some criteria for the data it wants to receive; the server then delivers new data over the open socket as it arrives. For those familiar with publish-subscribe models of delivering data, this will all sound familiar.
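
A minimal sketch of what such a client does, using plain HttpURLConnection against a hypothetical endpoint and ignoring authentication and reconnection handling:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

// Open one long-lived HTTP connection and consume events as the server pushes
// them, instead of polling with repeated request/response calls.
public class StreamConsumer {

    public static void main(String[] args) throws Exception {
        // Placeholder URL; real streaming APIs also expect filter criteria and credentials.
        URL url = new URL("http://example.com/stream?track=mule");
        HttpURLConnection connection = (HttpURLConnection) url.openConnection();
        connection.setReadTimeout(0); // zero means no timeout: keep the socket open

        BufferedReader reader = new BufferedReader(
                new InputStreamReader(connection.getInputStream(), "UTF-8"));
        String line;
        while ((line = reader.readLine()) != null) { // blocks until the server pushes data
            System.out.println("event: " + line);
        }
    }
}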

Continue reading

Twilio Cloud Connector for Mule

Reading Time: 7 minutes

I’m pleased to announce the release of a new Cloud Connector for Twilio. If you don’t know what Twilio is, you should definitely check it out!

What is Twilio?

Let me give you a brief introduction to Twilio. In short, Twilio provides a cloud API for voice and SMS communications that leverages your existing web development skills, resources, and infrastructure. Twilio offers an affordable, pay-as-you-go, no-contract plan for your business to make and receive calls and SMS messages, which you can use to improve your marketing campaigns.

What can you do with Twilio Cloud Connector?

Let’s say you want to send SMS messages to your customers to let them know about a new product you are releasing or special discounts you want to offer them. Most likely the customer data will be stored in a database; for this example, I will use a Mongo database (if you don’t know about Mongo, just think of it like any other database). How much code or time do you think you need to do this? Less than you think:

So we will actually be using two connectors: MongoDB and Twilio. You can combine any number of connectors to fit your needs; check our available connectors section often, because new ones are released frequently!

The example is pretty straightforward, let’s go step by step.

First, we declare our Twilio credentials and the MongoDB configuration settings. (See the MongoDB connector.) These settings are necessary to establish the connections with the Twilio server and your Mongo database server, respectively. You need to declare them only once, and they will be used for all the connections.

So far so good but I want to send SMS messages! Ok hold on, here it is:

A Mule flow is created to retrieve the customer records from the database and, for each one of them, call the Twilio connector to send an SMS message with a sample text in it. Don’t know what a Mule flow is? Basically, a Mule flow is a mechanism that enables orchestration of services using the sophisticated message flow capabilities of Mule ESB.

Now that we’ve loosely defined what a Mule flow is and what it’s used for in the example, let me explain each line inside it.

This is a simple one. Using Mongo Connector, we obtain all the data from a collection called ‘clients.’ We could have added search criteria as well to narrow down the results.

This tag sends each member of the clients collection to the next message processor as a separate message. Check this link for more information on routing messages.

Finally, for each customer, we call the Twilio connector to send an SMS message. In the “from” attribute you need to put an SMS-enabled Twilio phone number. In the “to” attribute we pick the phone number from the payload, which is filled with each customer’s information by the collection splitter. Likewise, to create the body of the text message we use the name of the customer, just to make it more personalized.
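
For readers who prefer plain code, here is roughly the same sequence sketched in Java; the database name, the field names, and the sendSms() helper are hypothetical stand-ins for what the Mongo and Twilio connectors do inside the flow:

import com.mongodb.DB;
import com.mongodb.DBCollection;
import com.mongodb.DBCursor;
import com.mongodb.DBObject;
import com.mongodb.Mongo;

public class SmsCampaign {

    public static void main(String[] args) throws Exception {
        Mongo mongo = new Mongo("localhost", 27017);
        DB db = mongo.getDB("mydb");                     // assumed database name
        DBCollection clients = db.getCollection("clients");

        DBCursor cursor = clients.find();                // obtain all the client records
        while (cursor.hasNext()) {                       // the splitter's per-record loop
            DBObject client = cursor.next();
            String name = (String) client.get("name");           // assumed field names
            String phone = (String) client.get("phoneNumber");
            sendSms("+15550001111", phone, "Hi " + name + ", check out our new product!");
        }
    }

    private static void sendSms(String from, String to, String body) {
        // The Twilio Cloud Connector performs the equivalent of this step for
        // you: an authenticated POST to Twilio's SMS resource.
        System.out.printf("SMS %s -> %s: %s%n", from, to, body);
    }
}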

Running this example is as simple as typing this URL into your favorite browser: http://localhost:9090/send-sms

Want to try out Twilio Cloud Connector?

So, what if you want to use the cloud connector in your Mule app? It’s really easy. First, add the following snippet to your Maven POM:


then add the connector as a dependency:

and finally you declare the namespace for the Twilio connector in your flow:

If you want to download the source code for this connector, check the GitHub repository.

Conclusion

This post shows only one of the many cool things you can do with the Twilio connector. Besides sending SMS messages, you can query metadata about your account, phone numbers, calls, text messages, and recordings, and do fancy things like initiating outbound calls. For more information, check the Twilio connector page.

If you have further questions, you can post a question in the forum.


Fake and Stub objects creation using Groovy

Reading Time: 10 minutes

Automated testing using xUnit-style frameworks can be achieved using several techniques. You can test a particular class, a group of classes, or your whole system at once. This selection will constitute your SUT (system under test). You must define which technique, or mix of techniques, to use depending on your system and what best suits it.

In case you are trying to isolate your SUT from other components, such as collaborators, you will need to create a “double” of the dependency that can be used instead of the real one. There are several approaches to defining a “double”: Dummy objects, Fake objects, Stub objects and Mock objects (there’s an interesting article about what a “double” is and what each type of “double” does). Each one enables your code to test different things, so once again, it’s up to you to decide which one to use.

I won’t discuss how to create mock objects; there are several well-documented APIs out there for that. There’s not much to say about Dummy objects, so let’s just focus on creating Fake and Stub objects. For the creation of these kinds of objects we are going to compare Groovy with Java.
I think Groovy rules over Java, and the reason is that Groovy’s syntax allows us to create objects, collections, and interface implementations with much less code than Java. The main Groovy feature that allows this is interface implementation with a closure or a map. Using this feature we can create “double” objects to use as collaborators for the SUT. Its usage reduces the burden of writing an entire class to provide a “double” implementation for the class under test.

Note: Another very useful feature for reducing code during testing is how Groovy creates and manipulates collections and objects.

Implementing interfaces using Groovy:

Suppose we have a NotificationSender interface and we want to create a Stub object using Groovy so we can use it during testing. We can create an implementation in Groovy using either a closure or a map.

Implementation using closure

As we are using a closure, we don’t have any way to know which method is being executed, so the closure’s parameters must be an array of Objects. Depending on the parameters we receive, we can choose which logic to execute. In general, closures are better suited to single-method interfaces.
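
A minimal sketch of the closure approach; the NotificationSender interface and its exception are inferred from the Java test shown later in this post:

class NotificationSenderException extends Exception {}

interface NotificationSender {
    boolean isEnabled()
    void sendNotification(String message) throws NotificationSenderException
    String[] destinations()
}

// The same closure backs every method of the interface, so all we see are the
// arguments. For this stub we only care about isEnabled() returning true.
def stubSender = { Object[] args -> true } as NotificationSender

assert stubSender.isEnabled()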

Implementation using a map

Maps are better than closures for implementing interfaces with several methods, and we can declare only the methods we want to implement in the map. If a method that is not in the map is called, a NullPointerException is thrown.
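
A minimal sketch using the interface from the previous example; only two of the three methods are declared, so calling destinations() on this stub would throw a NullPointerException:

def mapSender = [
    isEnabled       : { false },
    sendNotification: { String message -> throw new NotificationSenderException() }
] as NotificationSender

assert !mapSender.isEnabled()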

Using a map to implement an interface also allows you to create several implementations, reusing the behavior defined for each method. If you define two implementations of sendNotification(), two implementations of isEnabled() and another two implementations of the destinations() method, you can create several instances of NotificationSender by mixing those implementations.

Creating different interface implementations by reusing method definitions
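
A sketch of the idea, again assuming the same interface: each behavior is defined once as a closure, and different stubs are assembled by mixing the entries:

def enabled     = { true }
def disabled    = { false }
def neverFails  = { String message -> }
def alwaysFails = { String message -> throw new NotificationSenderException() }

def reliableSender   = [isEnabled: enabled,  sendNotification: neverFails]  as NotificationSender
def unreliableSender = [isEnabled: disabled, sendNotification: alwaysFails] as NotificationSender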

Creating input parameters for the methods under test is also much easier with Groovy, since you can set each object property within the object’s creation call.

Creating input method parameters

new Notification(type: Type.ERROR, message: "some message", cause: "exception")

Notification must have a no-argument constructor (otherwise this won’t work), and using key-value syntax we can set all of Notification’s properties in one line. It’s arguable that you can do the same in Java if you have a constructor that declares each property as a parameter, but we don’t always want such a constructor, or we simply can’t add one because we don’t own that code.

Now, let’s compare a test written in Groovy with one written in Java.

Groovy vs Java

In this case we have a Notification feature to test. We have a Notification class representing a notification message. We also have a NotificationManager whose responsibility is to manage notification state (the delivered notifications count, the undelivered notifications count, and the notification objects that were not delivered). It has a collaborator represented by the interface NotificationSender. Implementations of NotificationSender are in charge of delivering a notification message. As we don’t want to use a real implementation of NotificationSender, we are going to use a Stub object. In this case the SUT is composed of Notification and NotificationManager.
You can download the example project from here.

Groovy test
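
An abridged sketch of the Groovy side (stub setup only; the test methods mirror the Java ones below, and the someFailures stub is omitted for brevity):

import org.junit.Before
import org.junit.Test

class GroovyNotificationTest {

    NotificationManager notificationManager
    NotificationSender noFailuresNotificationSender
    NotificationSender allFailuresNotificationSender

    @Before
    void setUp() {
        noFailuresNotificationSender = [
            isEnabled       : { true },
            sendNotification: { String message -> },
            destinations    : { new String[0] }
        ] as NotificationSender

        allFailuresNotificationSender = [
            isEnabled       : { false },
            sendNotification: { String message -> throw new NotificationSenderException() },
            destinations    : { new String[0] }
        ] as NotificationSender
    }

    @Test
    void testSuccessfulNotifications() { /* ... */ }

    @Test
    void testAllFailedNotifications() { /* ... */ }
}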

Java test

public class JavaNotificationTest {

    private NotificationManager notificationManager;
    private NotificationSender noFailuresNotificationSender;
    private NotificationSender someFailuresNotificationSender;
    private NotificationSender allFailuresNotificationSender;

    @Before
    public void setUp() {
        noFailuresNotificationSender = new NotificationSender() {
            public boolean isEnabled() {
                return true;
            }

            public void sendNotification(String message) throws NotificationSenderException {
            }

            public String[] destinations() {
                return new String[0];
            }
        };
        someFailuresNotificationSender = new NotificationSender() {
            private int currentMessage;

            public boolean isEnabled() {
                return false;
            }

            public void sendNotification(String message) throws NotificationSenderException {
                if (++currentMessage % 2 == 0) throw new NotificationSenderException();
            }

            public String[] destinations() {
                return new String[0];
            }
        };
        allFailuresNotificationSender = new NotificationSender() {
            public boolean isEnabled() {
                return false;
            }

            public void sendNotification(String message) throws NotificationSenderException {
                throw new NotificationSenderException();
            }

            public String[] destinations() {
                return new String[0];
            }
        };
    }

    @Test
    public void testSuccessfulNotifications() throws NotificationException {/* ... */}

    @Test(expected = NotificationException.class)
    public void testNotificationExceptionIsThrown() throws NotificationException {/* ... */}

    @Test
    public void testSomeFailedNotifications() {/* ... */}

    @Test
    public void testAllFailedNotifications() {/* ... */}

    private List<Notification> buildNotificationList() {
        return Arrays.asList(buildNotification(Notification.Type.MESSAGE, "exception", "some message"),
                buildNotification(Notification.Type.ERROR, "warning", "some other message"),
                buildNotification(Notification.Type.EVENT, "system update", "another message"));
    }

    private Notification buildNotification(Notification.Type type, String exception, String message) {
        Notification notification = new Notification();
        notification.setType(type);
        notification.setCause(exception);
        notification.setMessage(message);
        return notification;
    }
}

Take a look at the burden of creating a Stub implementation of NotificationSender in Java: seven lines of Groovy against 40 lines of Java (the Java code can be shrunk by sacrificing code style, but it still takes at least about 25 lines).
Creating an object as an input parameter for the methods under test took five lines of Java against one line of Groovy.
Overall, the Java test needed 112 lines of code against 71 lines of Groovy. And this was a very simple test case. Consider what would happen if we needed to create a Stub or Fake object for a much bigger interface, what would happen if we only needed to implement a couple of its methods, and what would happen if we needed several Stubs or Fakes.

Unfortunately, life is not all Charleston and cocktails. Unit tests should be executed very often and must be part of the project lifecycle, so we want them to be as fast as possible (if not, developers will try to avoid them). Groovy performance is not so great: it takes much more time to execute a Groovy test case than a Java test case (in this case, more than 10 times longer). So you should consider this before moving to Groovy for testing.