API Best Practices: Response Handling (Part 5)

Reading Time: 13 minutes

This is part five of the API design best practices series.

Provide Helpful Responses

Building a solid foundation to ensure the scalability and longevity of your API is crucial, but just as crucial is ensuring that developers can understand your API and trust it to respond with appropriate status codes and error messages.

In this week’s API best practices, we’re going to cover how to ensure that developers understand exactly what happened with their API call by using the appropriate HTTP status codes (something that is oftentimes missed), as well as by returning descriptive error messages on failure.

Use HTTP Status Codes

One of the most commonly misused HTTP status codes is 200 – OK, the code that says the request was successful. Surprisingly, you’ll find that a lot of APIs return 200 when creating an object (which should return 201), or even when the request fails:

[Screenshot: an error response returned with a 200 OK status code]

In the above case, if the developer is relying solely on the status code to see whether the request was successful, the program will continue on, not realizing that the request failed and that it did something wrong. This is especially important if there are dependencies within the program on that record existing. Instead, the correct status code to use would have been 400 to indicate a “Bad Request.”

By using the correct status codes, developers can quickly see what is happening with the application and do a “quick check” for errors without having to rely on the response body.

You can find a full list of status codes in the HTTP/1.1 RFC, but just for a quick reference, here are some of the most commonly used Status Codes for RESTful APIs:

200 – OK
201 – Created
304 – Not Modified
400 – Bad Request
401 – Unauthorized
403 – Forbidden
404 – Resource Not Found
405 – Method Not Allowed
415 – Unsupported Media Type
500 – Internal Server Error

Of course, if you feel like being really creative, you can always take advantage of status code:

418 – I’m a Teapot

It’s important to note that Twitter’s famed 420 status code, “Enhance Your Calm,” is not really a standardized response; you should probably just stick to status code 429 (Too Many Requests) instead.

Use Descriptive Error Messages

Again, status codes help developers quickly identify the result of their call, allowing for quick success and failure checks.  But in the event of a failure, it’s also important to make sure the developer understands WHY the call failed.  This is especially crucial to the initial integration of your API (remember, the easier your API is to integrate, the more likely people are to use it), as well as general maintenance when bugs or other issues come up.

You’ll want your error body to be well-formed and descriptive. This means telling the developer what happened, why it happened, and most importantly, how to fix it. You should avoid using generic or non-descriptive error messages such as:

  • Your request could not be completed
  • An error occurred
  • Invalid request

Generic error messages are one of the biggest hindrances to API integration, as developers may struggle for hours trying to figure out why the call is failing, sometimes misinterpreting the intent of the error message entirely. And eventually, if they can’t figure it out, they may stop trying altogether.

For example, I struggled for about 30 minutes with one API trying to figure out why I was getting a “This call is not allowed” error response. After repeatedly reformatting my request and trying different approaches, I finally called support (in an extremely frustrated mood) only to find out it was referring to my access token, which just so happened to be one letter off due to my inability to copy and paste such things.

By the same token, an “Invalid Access Token” response would have saved me a ton of hassle, and saved me from feeling like a complete idiot while on the line with support. It would have also saved them valuable time working on real bugs instead of troubleshooting the most basic of steps (by the way, whenever I get an error, the key and token are the first things I check now).

Here are some more examples of descriptive error messages:

  • Your API Key is Invalid, Generate a Valid API Key at http://…
  • A User ID is required for this action. Read more at http://…
  • Your JSON was not properly formed. See example JSON here: http://…

But you can go even further. Remember, you’ll want to tell the developer what happened, why it happened, and how to fix it. One of the best ways to do that is by responding with a standardized error format that returns a code (for support reference), a description of what happened, and a link to the appropriate documentation so that they can learn more and fix it:
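
Here’s a minimal sketch of what such an error body might look like; the field names and values are illustrative, not a prescribed format:

{
  "code": 40101,
  "message": "Your API Key is invalid.",
  "description": "The API key supplied in the Authorization header was not recognized.",
  "more_info": "http://api.example.com/docs/errors/40101"
}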

On the support and development side, doing this also lets you track hits to those documentation pages to see which areas tend to be most troublesome for your users, allowing you to provide even better documentation and build a better API.

Building an SDK Doesn’t Fix Everything

Last but not least, keep in mind that SDKs or code wrappers/libraries can be extremely helpful. However, if you are building a full-blown SDK instead of a language wrapper that utilizes hypermedia to handle responses, remember that you are adding a whole new layer of complexity to your API ecosystem, one that you will have to maintain.

What SDKs/code wrappers offer is a quick, plug-and-play way for developers to incorporate your API, while also (hopefully) handling error checks and responses. The downside is that the more complex your SDK becomes, the more tightly coupled it usually is to your API, making any update to your API a manual and complex process. This means that any new features you roll out will see rather slow adoption, and you may find yourself providing support to your developers on why they can’t do something with your SDK.

When building your SDK you should try to keep it as decoupled from your API as possible, relying on dynamic responses and calls while also following the best coding practices for that language (be sure to watch Keith Casey’s SPOIL talk or read about it here).

Another option is to utilize an SDK-building service such as APIMatic.io or REST United, which automatically generates SDKs for your API based on your RAML, Swagger, or API Blueprint spec. This allows you to offer SDKs, have them update automatically when you add new features (although clients will still need to download the updated version), and offer them without adding any additional workload on your end.

But again, regardless of whether or not you provide an SDK/ Code Library, you will still want to have multiple code examples in your documentation to help developers who want to utilize your API to its fullest capacity, without relying on additional third party libraries to do so.

To recap: use HTTP status codes, use descriptive error messages, and keep in mind that an SDK may be helpful for a lot of your developers, but make sure you take into consideration all of the challenges that come with it. And remember that having an SDK doesn’t replace documentation; if anything, it creates the need for more.

Go to Part 6: API Management →

  • User Authentication, Provisioning, & Throttling
  • The Advantages of using a Proxy for API Management

JSON validation using a draft v4 schema? Oh yeah!

Reading Time: 9 minutes

Sometimes you’re expecting JSON, especially when publishing or consuming a REST API. But you need to make sure it’s a good JSON, not the kind of JSON that would kill you with a machete. Since the JavaScript Object Notation format (JSON for short) can be used to describe pretty much anything, validating that the document you received actually complies with what you expected is no simple task. Or at least it wasn’t until the JSON schema specification came out. Just like XSD schemas are XML documents used to describe what a valid XML document looks like, a JSON schema is a JSON document used to make sure that yours doesn’t come with a machete. You gotta love the recursion of it!

Mule already supports version 3 of the JSON schema spec through a filter component called <json:json-schema-validation-filter />. Version 4 is out now, and starting with Mule 3.6 we’re adding support for it. However, we decided not to add that new functionality to the existing filter. You see, although filtering when the schema is not met can be useful in some scenarios, we realized that most of the time you actually want to raise a meaningful error that explains exactly why the validation failed. Because the purpose of filters is to silently discard invalid messages, this can’t be achieved in a natural way using the existing filter (for more background on Mule Filters »).

All hail the Validator!

Long story short, in Mule 3.6 we’re deprecating <json:json-schema-validation-filter /> and introducing a new element to replace it (the old filter will stick around until Mule 4.0, though). In its simplest form, the new element looks like this:
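
The original snippet isn’t reproduced in this archive, but a minimal sketch, assuming the new element is the json:validate-schema message processor added in Mule 3.6 and that the schema lives on the classpath, would be:

<json:validate-schema schemaLocation="resource:/schemas/customer-schema.json"/>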

The validator will look for a JSON document in the current message payload and validate it against the given schema. If the validation is successful, control is passed to the next processor in the chain. Otherwise a JsonSchemaValidationException is thrown containing validation feedback. This exception not only contains a message with a detailed explanation of what went wrong, it also has a property called “invalidJSON” in which the invalid payload is available in its String representation.

GOTCHA: If you’re trapping that exception in Java, you can get the invalidJSON property by using the getInvalidJSON() method. If you’re doing it with MEL (most likely inside an exception strategy block), then you can simply do e.invalidJSON.

Unlike its deprecated predecessor, this validator can handle both v4 and v3 schemas so that you can use its new goodies without being required to migrate to the new schema specification.

The simple snippet above shows a layout that’s useful and simple enough most of the time, but there’s a lot more you can do with it:
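
A fuller sketch, with the attribute and child element names taken from the descriptions below (treat the exact names and values as assumptions rather than gospel):

<json:validate-schema schemaLocation="http://example.org/schemas/customer-schema.json"
                      dereferencing="CANONICAL">
    <json:schema-redirects>
        <json:schema-redirect from="http://example.org/schemas/customer-schema.json"
                              to="resource:/schemas/customer-schema.json"/>
    </json:schema-redirects>
</json:validate-schema>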

Let’s take a look at what this all means:

schemaLocation

This is the only required parameter in the validator. It points to the location in which the JSON schema is present. It allows both local and external resources. For example, all of the following are valid:

  • A URL that fetches the schema from the internet directly
  • A path that loads the schema from a local resource in the classpath (see the sketch below)
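
For instance (the URLs and paths are illustrative only):

schemaLocation="http://json-schema.org/draft-04/schema#"
schemaLocation="schemas/customer-schema.json"
schemaLocation="resource:/schemas/customer-schema.json"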

dereferencing

Draft v4 defines two dereferencing modes: canonical and inline. CANONICAL is the default option, but INLINE can also be specified. When validating a v3 schema this attribute is ignored.

schema-redirect

Because we support referencing the schema from the internet, and because schemas might reference each other, the validator also supports URI redirection for schemas so that lookups don’t necessarily go out to the internet (see the schema-redirects block in the snippet above).

Accepted payload types

This validator accepts the input JSON expressed in any of the following formats:

  • String
  • java.io.Reader
  • InputStream
  • byte[]
  • com.fasterxml.jackson.databind.JsonNode
  • org.mule.module.json.JsonData

If the payload is not of any of these types, the validator will try to transform the payload into a usable type, attempting the following in order:

  • org.mule.module.json.JsonData
  • com.fasterxml.jackson.databind.JsonNode
  • String

If the payload cannot be transformed into any of these types, a JsonSchemaValidationException will also be thrown. In this case the message won’t contain information about schema validation; instead it explains that the message payload couldn’t be transformed into a type usable for validation, and therefore the validation couldn’t be performed. Also, since a JSON document could not be extracted, the invalidJSON property will be null on that exception.

Notice that in cases in which validating the JSON requires consuming a streaming resource (InputStream, Reader, etc.), the message payload will be replaced with the fully consumed JSON.

Takeaways

  • The old schema validation filter is deprecated in favor of this new validator.
  • The validator supports v3 and v4 of the JSON schema specification.
  • The validator also adds cool features like URI redirection.
  • We recommend replacing the old filter with the new validator in your applications. However, do notice that this is not a filter: it will throw an exception if validation fails.
  • For more information on v4 of the JSON schema spec, please follow this link »

That’s all folks, hope you enjoyed it!

Achieving Digital Transformation Nirvana in Financial Services

Reading Time: 8 minutes

Financial services information technology (IT) has transformed from order taker to strategic business partner. As part of this transformation, IT organizations are finding they must address key challenges with legacy modernization, data management and digital transformation.

MuleSoft has launched a three-part white paper series discussing these challenges and how financial institutions are overcoming them. In the first installment in our Connected Financial Institution white paper series, we discussed how aging back office systems, operational effectiveness and open source adoption are driving legacy modernization initiatives across the financial services industry. The second installment in the series discusses key data management challenges facing firms including an ever-evolving regulatory compliance landscape, deepening customer relationships with a 360-degree view, and improving data driven decision making.

This third and final installment in our Connected Financial Institution white paper series discusses why financial institutions must prioritize their digital transformation strategies in response to challenges with innovation in APIs and apps, the battle over mobile services, and the increasing complexity of omni-channel delivery. We examine how institutions are responding to these business drivers, and propose best practices that can transform organizations and accelerate the pace of change. 

The Rise of APIs and Apps for Digital Transformation

APIs have traditionally been used by software developers to assemble program components within an application; the new use of APIs is to make business functions available as components on the Internet, serving service-oriented architectures and mobile technologies. By allowing the integration and interaction of software applications, APIs are making it easier for financial services firms to deliver both traditional applications and mobile apps across multiple delivery channels with a single interaction point. They also provide the flexibility to redirect customer data back to the applications, apps, and devices that a particular consumer prefers to interact with.

The Battle Over Mobile Services

Nearly every industry is connected to mobile services, and everything that’s connected to mobile services is experiencing exponential growth. As mobile continues to affect every aspect of consumers’ lives, financial services firms are having to rapidly expand their digital capabilities to meet the demand. According to Deloitte, mobile capabilities have quickly become table stakes. In fact, today almost all major banks, insurance companies, and investment firms have mobile apps. Whether it’s hardware, software, features, or apps, the mobile banking space has become increasingly competitive. Not only are financial institutions competing head-to-head against one another, but they are also competing against new entrants who are leapfrogging traditional financial services firms.

The Increasing Complexity of Omni-Channel Delivery

It used to be that when financial services firms worried about multi-channel delivery, their focus was on physical locations (branches, offices), call centers, ATMs, and Internet banking. Today, consumer demand for the latest technology is forcing them to look at extending their digital reach and expanding their delivery channels to include mobile, social media networks, and even the latest telematics and wearables (e.g. safe driving monitors, Google Glass, smart watches, wristbands) to engage consumers. However, success in this expanding omni-channel environment requires that financial services do more than simply connect to the Internet of Things (IoT). They must be able to truly engage with customers by creating an ever-present and ongoing dialog with them.

Upcoming Webinar

Integration and APIs have come together as firms need to get data out of legacy systems for consumption by digital devices and apps. MuleSoft’s Anypoint Platform addresses the challenge of quickly exposing legacy data to new digital delivery channels while minimizing changes to your back-end infrastructure. It provides a strategic integration approach that addresses legacy modernization, data management and digital transformation challenges on a single, unified platform. Instead of redesigning applications from the ground up to support digital transformation, the most agile companies are exposing them through APIs designed, tested and deployed using Anypoint Platform. MuleSoft is trusted by many financial services firms and insurance companies, including 4 out of the top 10 global banks, to help them become a “Connected Financial Institution.”

Don’t forget to register for our webinar, “Achieving Digital Transformation Nirvana in Financial Services“ to learn more about digital transformation integration approaches. Register below:

Still interested in more?

Download the white paper, “Achieving Digital Transformation Nirvana in Financial Services” to learn more about how financial institutions taking a holistic view toward digital transformation across APIs and apps, mobile services, and omni-channel delivery have an opportunity to optimize their innovation initiatives across the organization.

You’re into XML? Mule now supports XPath, XSLT and XQuery 3.0

Reading Time: 22 minutes

In spite of JSON’s reign as the king of API data formats, XML still remains the exchange data format of choice for a number of systems. Any service exposing functionality through SOAP, and many applications built years ago (or even nowadays), still depend on XML to share data – to such an extent that in April 2013 the W3C published new specs for version 3.0 of the XPath, XSLT and XQuery standards. We decided it was time to update the platform’s support for these standards and fix a couple of things while at it.

At the moment Mule 3.5.0 was released, we were in a situation in which:

  • We had only partial support for XQuery 1.0 and XSLT 2.0
  • Users had a very inconsistent experience when dealing with XPath:
      • When processing an XSLT template, we supported XPath 2.0
      • When using the xpath() MEL function or the xpath: expression evaluator, we only supported XPath 1.0
      • The xpath() function and expression evaluator are like a box of chocolates: you never know what you’re gonna get. The return type changes depending on how many results the query finds and whether the result is a simple type or a node.
  • The jxpath-filter and jxpath-extractor-transformer elements, which are supposed to only process POJOs, fall back to an actual XPath 1.0 expression through the use of dom4j if the message payload is an XML document

So, mea culpa. This was a mess, no shame in admitting it as long as we go and fix it. And that’s why for Mule 3.6 we aimed to:

  • Provide state-of-the-art, 100% compliant support for XPath 2.0, XSLT 2.0, and XQuery 1.0
  • Provide basic support for version 3.0 of the XML specs
  • Reuse the existing XSLT and XQuery elements and functions we have (xpath-filter, xslt-transformer, xquery-transformer, etc) so that they can be used regardless of the targeted version spec
  • Deprecate our current XPath support and provide a new, more usable and consistent solution that allows the use of either XPath 2.0 or 3.0
  • Deprecate all JXPath support in favor of simple MEL expressions.

Before we begin, a few words on the 3.0 spec

The 3.0 XML specs are still working their way through the W3C approval process and have not yet been finalized by the committee. However, they’re at “last call” status, which means they’re highly unlikely to receive any substantial changes.

About XPath 3.0

XPath 3.0 is backwards compatible with 2.0. However, it’s not fully compatible with version 1.0. Although a compatibility mode exists, it doesn’t cover all cases. This is one of the main reasons why, although we’ll provide a new API for XPath processing, we’ll still support the xpath() function (which currently works with XPath 1.0) until Mule 4.0.

What does basic support mean?

Above we used the term “basic support” when referring to the 3.0 specs. By basic support we mean all features which don’t rely on:

  • Schema awareness
  • Higher-order functions
  • Streaming

Improvements on XPath

As previously stated, we found that in striving to provide the best experience possible we couldn’t leverage Mule’s existing XPath support, the reasons being that we had an inconsistent and barely usable mixture of XPath 1.0 and 2.0, and that XPath 3.0 is not backwards compatible with 1.0.

So, in the spirit of cleaning up we decided to deprecate the following components:

  • xpath: expression evaluator
  • xpath2: expression evaluator
  • bean: expression evaluator
  • jxpath filter
  • jxpath extractor transformer
  • jaxen-filter

Implicit things to take notice of:

  • Because XPath 3.0 is completely backwards compatible with 2.0, this function will also serve those wanting to use 2.0 expressions.
  • This doesn’t guarantee support for XPath 1.0 expressions. The simpler ones will work, but the ones which are not compatible will not. Since XPath 1.0 dates all the way back to 1999, we consider it deprecated and won’t officially support it. Compatibility mode will be disabled.
  • Because we want this function to have predictable return types, we needed to create a new xpath3() function. We considered adding a compatibility flag to the current function, but our analysis indicated that the impact was way too great for that to make sense. Therefore, a new xpath3() function was created and the existing xpath() one is deprecated.

The new xpath3() function is of the following form:

xpath3(xpath_expression, input_data, return_type)

Let’s take a closer look:

expression (required String) 

The XPath expression to be evaluated. It cannot be null or blank.

input (optional Object, defaults to the message payload) 

The input data on which the expression is going to be evaluated. This argument is optional; it defaults to the message payload if not provided.

This function supports the following input types:

  • org.w3c.dom.Document
  • org.w3c.dom.Node
  • org.xml.sax.InputSource
  • OutputHandler
  • byte[]
  • InputStream
  • String
  • XMLStreamReader
  • DelayedResult

If the input is not of any of these types, we’ll attempt to use a registered transformer to transform the input into a DOM Document or Node. If no such transformer can be found, an IllegalArgumentException is thrown.

Additionally, this function will verify whether the input is a consumable type (streams, readers, etc.). Because evaluating the expression over a consumable input will cause that source to be exhausted, in the cases in which the input value was the actual message payload (whether it was given explicitly or by default), we will update the output message payload with the result obtained from consuming the input.

Output type (optional String, defaults to ‘STRING’)

When executing an XPath expression, a developer might have very different intents. Sometimes you want to retrieve actual data, sometimes you just want to verify whether a node exists. Also, the JAXP API (JSR-206) defines the standard way for a Java application to handle XML and, therefore, how to execute XPath expressions. This API accounts for the different intents a developer might have and allows choosing from a list of possible output types. We consider this a really useful feature in JAXP, and we also believe that many Java developers who are familiar with this API will appreciate that Mule accounts for it while hiding the rest of the API’s complexity.

That is why there’s a third parameter (an optional String) which allows specifying one of the following:

  • BOOLEAN: returns the effective boolean value of the expression, as a java.lang.Boolean. This is the same as wrapping the expression in a call of the XPath boolean() function.
  • STRING: returns the result of the expression converted to a string, as a java.lang.String. This is the same as wrapping the expression in a call of the XPath string() function.
  • NUMBER: returns the result of the expression converted to a double as a java.lang.Double. This is the same as wrapping the expression in a call of the XPath number() function.
  • NODE: returns the result as a node object.
  • NODESET: returns a DOM NodeList object. Components like foreach, splitter, etc. will also be updated to support iterating over that type.

Query Parameters

Another XPath feature that is now supported is the ability to pass parameters into the query. For example, consider the following query, which returns all the LINE elements that contain a given word:

//LINE[contains(., $word)]

The $ sign is used to mark the parameter. As for the binding, the function will automatically resolve that variable against the current message’s flow variables. So, if you want to return all the lines containing the word ‘handkerchief’, all you have to do is:

<set-variable variableName="word" value="handkerchief" />
<expression-transformer>
  xpath3('//LINE[contains(., $word)]', payload, 'NODESET')
</expression-transformer>

NamespaceManager

Unlike its deprecated predecessor, the xpath3() function is namespace-manager aware, which means that all namespaces configured through a namespace-manager component will be available during the XPath evaluation.

For example, suppose you want to do an XPath evaluation over a document that declares a lot of namespaces.

The XPath engine needs to be aware of those namespaces in order to navigate the DOM tree. You can easily configure that like this:
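
The original sample document and configuration aren’t reproduced here, but a namespace-manager sketch would look roughly like this (the prefixes and URIs are illustrative):

<mulexml:namespace-manager>
    <mulexml:namespace prefix="soap" uri="http://schemas.xmlsoap.org/soap/envelope/"/>
    <mulexml:namespace prefix="book" uri="http://example.org/books"/>
</mulexml:namespace-manager>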

Because we aim for consistency, this also affects the xquery-filter element, which means that some applications might have issues if they were using expressions with custom namespaces without specifying the namespace manager correctly. That can be fixed by either declaring the manager or using wildcard expressions (e.g.: use *:/title instead of book:/title).

Improvements on XQuery

We also managed to keep the same syntax already present in the xquery-transformer element; the XQuery version is selected through a declaration in the XQuery script itself. If a version is not specified, it defaults to 3.0, since per the spec all 1.0 queries are valid in 3.0 and must return the same result.

However, unlike its XSLT cousin, the xquery-transformer now has some new tricks up its sleeve. Let’s first take a quick peek at what new things you can do with it.

Support for multiple inputs

Before Mule 3.6, there was no way to use the xquery-transformer to evaluate an XQuery script which operates over multiple documents at the same time. This was partially because of limitations in the underlying engine, and partially because of limitations in the transformer, which only made it possible to give the script parameters that were simple types (strings, numbers, etc.).

Now we’ve added support for passing DOM documents and nodes (instances of org.w3c.dom.Document or org.w3c.dom.Node). For example, consider a simple query which takes two XML files (one with cities and one with books) and mixes the title of the book with the name of the city:
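
The post’s original query isn’t reproduced here; this is a hedged sketch of the idea, assuming the mulexml prefix for the XML module and illustrative flow variable names:

<mulexml:xquery-transformer>
    <mulexml:xquery-text><![CDATA[
        xquery version "3.0";
        declare variable $cities external;
        declare variable $books external;
        for $city in $cities//city, $book in $books//book
        return <entry city="{$city/@name}">{$book/title/text()}</entry>
    ]]></mulexml:xquery-text>
    <mulexml:context-property key="cities" value="#[flowVars['citiesDoc']]"/>
    <mulexml:context-property key="books" value="#[flowVars['booksDoc']]"/>
</mulexml:xquery-transformer>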

The $cities and $books variables hold documents or nodes that were passed as context properties. Also, because we now support XQuery 3.0, the same can be achieved by providing just the paths to the XML documents and letting the engine load them itself.

In that case the flowVars only contain the paths to the XML documents on disk, and the fn:doc function inside the query takes care of the parsing.

Try..Catch blocks

You can now use try..catch blocks in your queries. This simple example shows a script which will always fail and consistently return an error tag:
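
The original script isn’t reproduced here; a minimal sketch that behaves as described (any body that always raises an error will do):

xquery version "3.0";
try {
    1 div 0    (: integer division by zero always raises an error :)
} catch * {
    <error>The query failed</error>
}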

Switch statements

Plain old switch blocks for everyone! The example below will always return <Quack />:
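
Again, the original snippet isn’t reproduced; a minimal sketch along those lines:

xquery version "3.0";
let $animal := 'duck'
return
    switch ($animal)
        case 'duck' return <Quack/>
        case 'cow' return <Moo/>
        default return <Silence/>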

Group By

Just like in XSLT, grouping is now a thing: XQuery 3.0 adds a group by clause to FLWOR expressions, so results can be aggregated by a key directly in the query.

Return type improvements

This is not an improvement to XQuery itself, but something that was not good in our implementation, and we took the opportunity to fix it. By default, the XQuery transformer only returned the first result, unless an array was specified in the returnClass attribute, in which case it returned all the matches in an Object[] (even if the return type was set to X[]). This means that by default, the transformer did not return all results. If the user did specify a return class but no results were found, it returned NullPayload. If the query came back with only one result, it returned that one element, even if you asked for an array.

Although this is clearly a bug and a usability pain, fixing this could break some applications which are taking this bug as a feature. Thus:

  • By default, the xquery transformer will return a java List
  • That list will contain all the results, even if only one was found
  • If no results found, then the list will be empty
  • If the user did specify a return class, then we fall back to the old behaviour (array, one element, or null), giving users a quick fallback option.

Improvements in XSLT

The xslt-transformer element we currently have remains unaltered from a behaviour and syntax standpoint. However, under the hood it now supports XSLT 3.0. Which version of XSLT is used to evaluate the stylesheet depends on the XSLT version declared in the XSL template. Any templates declaring version 2.0 will keep their current behaviour, while those declaring 3.0 will benefit from the new features. One quick example of XSLT’s new-found power is that you can now use group-by expressions when iterating over a set of nodes. For example, consider an XML document listing cities of the world, each with its country and population.

Suppose we want to convert that to an HTML table which shows the countries with all their cities comma separated and the sum of their populations. You can do that like this:
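
The original input document and stylesheet aren’t reproduced in this archive; here’s a minimal sketch using xsl:for-each-group, assuming an input shaped like the comment shows:

<!-- Assumed input (illustrative): <cities><city name="Milan" country="Italy" population="1300000"/>...</cities> -->
<xsl:stylesheet version="3.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/cities">
    <table>
      <xsl:for-each-group select="city" group-by="@country">
        <tr>
          <td><xsl:value-of select="current-grouping-key()"/></td>
          <td><xsl:value-of select="string-join(current-group()/@name, ', ')"/></td>
          <td><xsl:value-of select="sum(current-group()/@population)"/></td>
        </tr>
      </xsl:for-each-group>
    </table>
  </xsl:template>
</xsl:stylesheet>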

The output would be an HTML table with one row per country, showing the country name, its cities comma-separated, and the sum of their populations.

<The End/>

Well, this is the end of this post. It sounds cliché, but I really hope you like these improvements. When I first started working with Mule I wasn’t a code contributor but just a user, and these issues with the XML support really used to bother me. So for me personally, it’s really great to finally have fixed them. I hope you enjoy them just as much, and remember that feedback is always welcome!

Thanks for reading!

Become an Integration Hero

Reading Time: 11 minutes

MuleSoft got its start years ago with a great open source project, Mule ESB.

Today we continue to be big believers in open development, and that’s why every single line of the Community Edition source code is publicly available in the Mule source repository.

We leverage many open source technologies everywhere within Anypoint Platform, from the core integration engine, Mule ESB, to hundreds of connectors, tooling, and plugins of all kinds.

Openness means nothing without community. At the same time, community is not made up of lines of code. Community is people. Vincent Hardy, Director of Engineering at Adobe, has eloquently described in his article “Why Do Developers Contribute to Open Source Projects?” what motivates people to contribute to open source projects and, more precisely, how collaboration benefits the contributors. It’s worth checking out!

Growing with open source

Let me briefly describe my own personal experience with open source by emphasizing some of the points in Vincent’s article. After a long-lasting relationship with open source communities (for instance, founding local Linux user groups and contributing to big projects like the KDE Software Compilation), I found myself working on a professional project using Mule ESB.

My relationship with Mule could have ended there: do the work with it and jump to the next thing. Yet I found the integration problem so exciting that I decided to go further. This drove me to the MuleSoft forums, and the forums eventually led me to reporting bugs and improvement suggestions in the issue tracker.

I’m a developer, and we developers find joy in fixing things, not just pointing out problems. As a result, I started sending MuleSoft patches that fixed those problems.

At that point, I had already grown within my company thanks to this attachment to the source. It didn’t stop there. My patches and forum posts gave me enough recognition to be invited to coauthor a book: Mule in Action, Second Edition. Writing this book helped me develop a close relationship with MuleSoft, and eventually I ended up making a living out of being part of the community I love.

Many of my colleagues have a similar background, and I’m sure every open source contributor has an exciting story to tell. I’m also sure most of them share Vincent’s view on the goodness of contributing to an open source project.

How to contribute to Mule

Let me invite you to become an open source contributor, and if you let me go even further, I’d like to invite you to be part of the Mule ESB open source community.

There are different ways to participate in this community, and each of them requires different abilities, not necessarily just development skills. Let’s review:

Request an improvement or report a problem in the issue tracker

There’s no perfect piece of software in the world. If you find a problem or you have a suggestion to improve Mule ESB, sign into our issue tracking system, make sure the issue is not already reported, and then report the new issue.

If you are reporting a problem, it’s especially important to include, if possible, a way to reproduce the error, either in the form of a minimal app that reproduces the problem or as a comprehensive list of steps.

If the problem cannot be reproduced easily, the odds that someone else in the community picks up that issue and submits a pull request to fix it are low.

Submit a Pull Request

If you are a developer who wants to contribute code, we’d be happy to receive your contribution and incorporate it into the mainstream repository.

Three very relevant but often forgotten steps of this guide are:

  • Sign the contributor’s agreement. This is a necessary step, common to all big open source projects. Thankfully it is quick and painless.

  • Include the issue identifier in the commit description. This is necessary because we need to keep a record of the issue related to every single commit.

  • Cover your changes with tests. No contribution is complete if there are no tests covering its functionality.

If you need ideas about which bug to fix or which new functionality to implement, just take a look at the issue tracker and find an unassigned issue. And remember: new functionality that breaks backwards compatibility should be merged against mule-4.x; otherwise you can use mule-3.x.

Improve the documentation

If you find any area of improvement in the documentation there are two options:

  • The quick one is to use the “Rate this page and leave feedback” area at the top or bottom of every documentation page. Make sure you add some written feedback to your rating so we understand exactly what can be improved.
  • Or send your improvement request to documentation@mulesoft.com.

Help in the forums and on Stack Overflow

There are a number of Mule-related tags on Stack Overflow (mule, mule-studio, mule-component, etc.); they’re relatively popular and get questions pretty often.

If you want to test your Mule skills, you can visit those tags or our forum and try to help other users in exchange for some Stack Overflow karma. Stack Overflow is a very relevant site whose reputation system is becoming a well-regarded measure of knowledge in hiring.

Help through social networks

Questions also come up quite often on social networks, especially Twitter. To find them, keep an eye on hashtags like #mulesoft and #muleesb, along with mentions of @muledev and @MuleSoft.

You might also be interested in following @mulesoft and @learnmulesoft for the latest news and learning materials.

Share your knowledge in a meet up or in a blog post

If you understand Mule ESB, or if you have learned how to leverage a certain connector or any other interesting topic, why not share your knowledge by hosting an event or writing a blog post?

Meetup is a very trendy site for organizing events. You might want to organize your own event or propose one to any of the existing groups.

If you are going to speak about Mule ESB at an event or write a blog post, make sure you mention it to @muledev or @MuleSoft; you might get a retweet or more help.

Get rewarded for becoming a champion

At MuleSoft, we’re thankful for our vibrant community and all the contributions we’ve had over the years. As a gesture of recognition, we have created the MuleSoft Champions Program!

With the MuleSoft Champions Program you get rewarded with cool prizes for expanding and sharing your knowledge:

  • USB powered mini-fridges
  • Your own flying drone
  • Conference tickets
  • MuleSoft swag

If you contribute to the MuleSoft community, make sure you register in the Champions program for rewards!

Asynchronous Logging in Mule 3.6

Reading Time: 16 minutes

“Logs are like car insurance. Nobody wants to pay for it, but when something goes wrong everyone wants the best available” – Pablo Kraan

The phrase above fully explains why logs are important and why we need to be able to log as much information as possible without impacting performance. Because logging usually implies I/O, it is a naturally slow operation.

The Problem

Before Mule 3.6, logging operations were done using synchronous I/O. This means that the thread processing your message has to wait for the log message to be fully handled before it can continue.

Continue reading

API Best Practices: Hypermedia (4.3)

Reading Time: 14 minutes

This is part four, sub-series 3, of the API design best practices series. Jump to sub-series 1 of the hypermedia sub-series.

A Road Trip

First off, let me apologize for the delay in this third part of the hypermedia sub-series. Christmas meant a warm trip back to Minnesota, a road trip through the Texas panhandle, and numerous snow storms in between — until finally I had the chance to cut through the mountainous desert of Southern California on my way back to beautiful San Francisco.


Now I understand some of you are probably wondering what any of that has to do with this post, other than that it’s about three weeks later than promised. One of the greatest challenges of the drive was battling my way through the snow and construction, and just hoping that the interstate would stay open (they literally close the interstates if it’s bad enough). But the one thing I could be sure of was that at every turn, between my steady GPS and the road signs, I knew where I was going, and I knew when my path was being detoured or when I couldn’t take a certain road… I knew this because everything was nice and uniform.

In a lot of ways, APIs are like roads — they are designed to help us transport data from one point to another. But unfortunately, unlike the DOT system that spans the country, the directions (hypermedia) aren’t always uniform, and depending on the API we use, we’ll probably have to utilize a different hypermedia spec — one that may or may not provide the same information as others.

Continue reading

Secure your APIs

Reading Time: 28 minutes

Securing an API in Anypoint Platform is easy. In a previous post we showed how Anypoint Platform for APIs allows you to fully protect your API. We concluded then that the combination of HTTPS and OAuth 2.0 is a rule-of-thumb best practice for web API security. In this post, we’ll take a deeper dive into the makeup of a security configuration in Anypoint Platform and explore in more detail the areas of Basic Authentication and OAuth2 Authorization in the context of Identity Management. We’ll also give you some pointers about when and how to use these two standards.

Security Manager

Central to authentication in Mule is the Security Manager. This is the bridge between standard Mule configuration and Spring Security beans. In the example we build in this blog, we will use Spring Security to authenticate credentials against an LDAP server. We suggest you read the Spring documentation on this topic if you want to delve further.

Continue reading

MuleSoft Performance and the Choke in the Wire

Reading Time: 9 minutes

Hello from MuleSoft’s performance team!

This post describes a real-world tuning example in which we worked with a customer to optimize their Mule ESB application.

A customer presented us with an application that was to be a proxy to several endpoints. As such, it needed to be very lightweight since the endpoints introduced their own latency. We required the application to provide high throughput and minimal latency.

This real-world example shows how we helped the customer tune their application from a number of angles. We had quite an adventure: the performance metrics were a crime, the usual suspects were innocent, and there were some unexpected twists. But our story has a happy ending. What started at 125 tps and high latency ended at 7600 tps and low latency.

For more info on the tips and tricks we describe here, please see our Performance Tuning Guide.

The original synopsis of this tuning case was recorded by Wai Ip. Additional contributors include Daniel Feist and Rupesh Ramachandran. Edited by Mohammed Abouzahr.

Continue reading

Rise and Fall of the Black Box Developer

Reading Time: 14 minutes

Let me start by stating that this is not a rant; it’s a look at my personal experience interviewing candidates for technical Java positions. I guess I first started to think about the following concepts when I read James Donelan’s post “Can Programmers Program?“. In that post he elaborates on some statistics from coding tests performed by developers applying for jobs. Those results indicated that many experienced developers were not able to solve simple problems. At first I thought those statistics were blowing things way out of proportion, but then I took a retrospective trip down memory lane and began to doubt.

I first started interviewing Java candidates back in 2006, eight years ago. I’m proud to say that the very first candidate I ever interviewed is to this day a very dear friend. In these 8 years I’ve seen outstanding, great, regular, mediocre and really bad sets of skills, and that’s only to be expected. However, I do think that candidates were very different in 2006 than they are today, the reason being that frameworks and libraries have come a long way in these 8 years.

The 2006 landscape

Back in the day, Hibernate and Spring were only at version 2 and starting to dazzle the standard Java EE stack. Struts 1.2 was the most popular MVC framework, with JSF and Spring MVC as distant contenders. Of course there were other frameworks around, but these 3 basically constituted the backbone of most Java apps in those days. What those frameworks did really well was move complexity from the code (in some cases boilerplate code) to the configuration (quite a verbose configuration, I might add). On the data side, relational databases were the only way of storing data, with Oracle, IBM and all the traditional heavyweight providers ruling, while MySQL first became a serious option with the release of version 5.0. In this context, knowing how to identify long-running queries and determine which indexes needed to be created was a basic skill. Finally, sites like Stack Overflow and other forums existed but were not as popular as they are today, and even Google was less keen-eyed when it came to searching for technical information. I remember that getting a copy of a Manning “In Action” e-book was something like finding the source of all and only truth. In summary, there were a lot of tools, but you needed to know what you were doing.

The Rise

Time went by and frameworks got smarter, requiring less and less configuration through the use of annotations, fluent APIs, DSLs and “convention over configuration” models. A Manning book is not as valuable an asset as it once was, since there are now infinite blog posts, forums and examples on the internet. Relational databases now compete with NoSQL engines and full-text engines (like Lucene and Elasticsearch) which are just fast, no questions asked. And thus the black box developer was born: developers all around the globe who are enabled to quickly build very powerful applications in relatively little time, having to concentrate on little but their own business logic. And that’s awesome.

The Fall

The problem is that when I interview today, I see more and more developers who claim to be senior, very experienced developers, but all they know is how to use those frameworks. Here are some examples:

  • A question I frequently ask is how you would sort a 100GB file containing all the phone numbers in the US, using nothing but a laptop with 1GB of memory, a 1TB hard drive, a text editor and a Java compiler. One candidate replied with something like: “This is very simple! I would just load the file into a DB and retrieve the results with a select query with an ORDER BY clause.” After reminding the candidate that his only tool was a Java compiler, I decided to go along with it and asked: “How would a DB engine manage to resolve the ORDER BY clause on a 100GB table with only 1GB of RAM?” The candidate’s face turned pale, and after some minutes staring at the table he said, “I don’t know.”

    As a pointer, I asked back: “How does a DB index work?”

    “Indexes make queries faster”.

    I replied: “The question is how do they work, not what do they do. How do they make queries faster?”

    After some silence, the interview finished.

  • In another interview I met a candidate whose resume claimed he was some kind of Hibernate expert. So, just as a conversation starter, I asked: “Could you name all the fetch strategies, how they impact performance and which one you would use in each case?” “What’s a fetch strategy?” he said. I recently had the honor of having lunch with Gavin King, creator of the Hibernate framework, and I told him this story. He was really surprised, and he commented that “setting the wrong fetch strategy could be tremendously harmful for an application.” As a side note, I’m completely aware that Hibernate is not well suited for certain types of applications, but at the same time I have met many developers and architects claiming that “we need to remove Hibernate because it makes the app too slow,” when those performance problems went away with just a few small tweaks.

I could go on and on with stories like this. Stories of candidates who were supposed to be really strong on OOP but couldn’t model something as simple as a composite filter, or who couldn’t draft a scalable architecture. I even met a candidate who claimed that web servers spawn one thread per request and had absolutely no idea what a thread pool was. But that’s not the point I want to make.

So what am I saying? Nothing but a plea…

So what’s my point? My point is that having teams of developers who are caught up in the abstractions that frameworks provide allows companies to quickly build minimum viable products. But because those devs are not aware of the inner machinery underneath, because their minds are bound by the limits of the black box, when the time comes to address performance issues, handle scalability problems, or make sure the object model is flexible enough to easily accommodate changes, these developers won’t have the necessary tools to be successful. So are frameworks evil? Were we better off in the pre-2006 era? NO! Actually, my family eats because of one of these tools: Mule ESB and all the related solutions built for Anypoint Platform are part of these post-2006 tools.

What I’m making is a plea for developers not to lose their curiosity. Just because it doesn’t make sense to reinvent the wheel doesn’t mean that from time to time we shouldn’t stop and ask ourselves what it is that makes the wheel so great and why it spins; only if you understand why it spins can you know what size of wheel you need.

So this is my advice to any developer who is bored enough to have read this far: don’t lose your curiosity. To whoever is about to create a DB index, please take a minute to read about how indexes work, which types are available and which suits you best. To whoever is using Hibernate, when you auto-complete and see a little attribute called fetchStrategy, go ahead and Google what it is. Use frameworks, but look at their code, see how they work, and experiment with the impact of small or big configuration changes. Be curious. Be bold. Even attempt your very own version of those products. It doesn’t matter if your product sucks big time; you will learn quite a few things you never would have from just using an existing one. Be curious. Do not forget that, as Plato puts it in Socrates’ mouth, “Philosophy begins in wonder.”

Thank you for reading!