API First Development with RAML and SoapUI [Webinar]

Reading Time: 3 minutes

Tackling API development with an API first approach allows companies to focus on designing APIs that deliver business value, rather than on the nuts and bolts of implementing those APIs. With API first design, businesses can create an application programming interface optimized for adoption and, once it is finalized, use a platform to rapidly implement it by connecting it to backend services. Moreover, teams can work on different elements of an API solution simultaneously, ensuring that it meets your technical and business expectations.

Starting with an API description is core to this API first approach – a clearly written service description makes it much easier for team members to collaborate from the very beginning. API mocking then enables your team to build the tests, clients and server implementation for your API in parallel. From design and build through management and testing, approaching application programming interfaces with an API first strategy is crucial to creating successful APIs.
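
As a concrete illustration of mocking, here is a minimal sketch in plain Java using only the JDK’s built-in HttpServer; the /accounts resource and its payload are hypothetical stand-ins for whatever your API description defines, and in practice you would more likely generate the mock from the RAML file itself.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;

public class MockApiServer {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);

        // Canned response standing in for the not-yet-built implementation.
        // Clients and tests can be written against this while the real
        // service is developed in parallel.
        byte[] body = "[{\"id\": 1, \"name\": \"Acme Corp\"}]"
                .getBytes(StandardCharsets.UTF_8);

        server.createContext("/accounts", exchange -> {
            exchange.getResponseHeaders().add("Content-Type", "application/json");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) {
                os.write(body);
            }
        });

        server.start();
        System.out.println("Mock API listening on http://localhost:8080/accounts");
    }
}
```

Once the real implementation is ready, clients and tests simply point at it instead of the mock.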

Continue reading

Why You Should Care That Netflix Is Shuttering Its Public API

Reading Time: 5 minutes

Netflix has decided to shut down public API support for third-party developers. An interesting decision, and in my opinion a bad one.

Launched six years ago, the Netflix API gave developers a way to access content from the company’s streaming and DVD catalogs. It helped Netflix grow and let developers build new experiences around Netflix content (Flixster, for example).

Last year the company said it would stop issuing new API keys to developers, and last Friday it announced that the API will stop working on November 14th for most developers, with the exception of a few key partners and applications.

This seems like a pretty bad idea. Here’s why.

Netflix has a massive catalog of data. What it doesn’t have is a massive catalog of features and functionality. Sure, you can search, look up movies, review ratings, stream movies and so on – but that is more or less the extent of it. Netflix operates within a relatively closed feature set, and its primary focus is better streaming and playback and a growing catalog of movies.

It’s primarily for this reason that Netflix should be expanding its API and opening it up to developers, not the other way around. By allowing third-party developers to take your data, mash it up and come up with new ideas and features, you build a bigger ecosystem around your products, one that adds far more value to your business in the long run.

Twitter’s API handles over 13 billion API calls a day and there are over 750,000 developers around the world contributing applications to the ecosystem. There are over 9 million applications built on the Facebook Open Graph API. Neither of these companies could have built out the application ecosystem they have today without embracing API support and acknowledging that there are a lot of very smart developers out there who might just come up with better ideas than they can.

It’s unlikely that Netflix will be the first and only company to come up with every brilliant new feature around movie catalog management, social interaction, streaming and playback. And now it’s missing out on a whole community of developers who would build applications and features in areas that aren’t even on its radar.

With Amazon, Hulu, major TV carriers and soon Google nipping at its business model, the best thing Netflix could do is embrace the developer community, encourage thousands of third-party developers to build an ecosystem around its data and features, and let that ecosystem help propel the business into areas it didn’t even anticipate.

Netflix should be opening its API up to the developer community, not shutting it down.

Handle Errors in your Batch Job… Like a Champ!

Reading Time: 14 minutes

Fact: Batch Jobs are tricky to handle when exceptions are raised. The problem is the huge amount of data these jobs are designed to process. If you’re processing 1 million records you simply can’t log everything: logs would become huge and unreadable, not to mention the performance toll that much logging would take. On the other hand, if you log too little then it’s impossible to know what went wrong, and if 30 thousand records failed, not knowing what’s wrong with them can be a royal pain. This is a trade-off that is not simple to overcome.
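
One common way to live with this trade-off, sketched here in plain Java (the class and method names are hypothetical, not part of the Batch Module’s API), is to aggregate failures by error type and keep only a small sample of failing record ids per type, so the log stays readable whether 30 records fail or 30 thousand.

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class BatchErrorSummary {
    private static final int SAMPLE_SIZE = 5; // example record ids kept per error type

    private final Map<String, Integer> counts = new LinkedHashMap<>();
    private final Map<String, List<String>> samples = new LinkedHashMap<>();

    // Called once per failed record instead of logging a full stack trace each time.
    public void recordFailure(String recordId, Exception error) {
        String key = error.getClass().getSimpleName() + ": " + error.getMessage();
        counts.merge(key, 1, Integer::sum);
        List<String> ids = samples.computeIfAbsent(key, k -> new ArrayList<>());
        if (ids.size() < SAMPLE_SIZE) {
            ids.add(recordId);
        }
    }

    // Called once at the end of the job: one line per distinct failure, not per record.
    public void logSummary() {
        counts.forEach((error, count) ->
                System.out.printf("%d record(s) failed with [%s], e.g. %s%n",
                        count, error, samples.get(error)));
    }
}
```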

We took the feedback from the early releases around this issue, and now that Mule 3.5.0 and the Batch Module have gone GA with the May 2014 release, small but significant improvements are in place to help you deal with these errors. Let’s take a look at some simple scenarios.

Continue reading

Intro: Salesforce to Database Anypoint Templates

Reading Time: 14 minutes

I’d like to announce and introduce you to our second set of Anypoint Templates: Salesforce to Database. This set leverages the newly improved Database connector, which allows you to connect to almost any JDBC relational database using the same interface in every case. Our first set of templates, Salesforce Org to Org integration, is a good base for any “Salesforce to X” or “X to Salesforce” integration.
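
To give a feel for the uniform interface the connector builds on, here is a minimal plain-JDBC sketch; the URL, credentials, table and column names are placeholders, and switching databases means swapping only the driver and URL while the calls below stay the same.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;
import java.time.Instant;

public class JdbcQueryExample {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details; point at any JDBC-compatible database.
        String url = "jdbc:mysql://localhost:3306/crm";

        try (Connection conn = DriverManager.getConnection(url, "user", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                     "SELECT id, name FROM account WHERE last_modified > ?")) {
            // e.g. fetch accounts touched in the last hour, as a sync might.
            stmt.setTimestamp(1, Timestamp.from(Instant.now().minusSeconds(3600)));
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getLong("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}
```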

If you are new to our templates, they are essentially Anypoint Studio projects (a.k.a. Mule Applications) that were built with the intention of being configured, customized, extended and reused. They’re built on top of five base data integration patterns:

  • Migration
  • Broadcast
  • Bi-directional sync
  • Correlation
  • Aggregation

Business Cases for Salesforce and Database Integration

Below are some of the key use cases that we built this set of templates around. Note that each template can serve as a good base even if you are integrating more than just Salesforce and Databases.

Continue reading

The Entities Graph Inconvenience

Reading Time: 10 minutes

It seems like everyone is talking about APIs lately.

We can find tons of them out there. More and more cloud-based and on-premises services are exposing themselves to the outside world through APIs. Many of these systems are fairly complex, so they need a complex object model to reach their full potential. By complex I mean deep object graphs with many relations between the objects. While this is easy to achieve in any object-oriented programming language, it is not as easy to serialize those graphs or to deal with them afterwards.

For instance, we all have to deal, at some point or another, with big XML documents describing an object graph.

Here is where the API world starts getting a little more complicated. How do these services expose such complexity to the outside world and at the same time offer an easy enough way to operate on the model?
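
As a minimal illustration in Java (the Customer and Order entities are hypothetical), consider a graph with a back-reference: trivial to build in memory, but a naive serializer would recurse through the cycle forever, which is why APIs often flatten such relations into plain ids.

```java
import java.util.Arrays;
import java.util.List;

public class EntityGraphExample {
    // In memory, entities can reference each other freely.
    static class Customer {
        long id;
        String name;
        List<Order> orders;      // Customer -> Order ...
    }

    static class Order {
        long id;
        Customer customer;       // ... -> back to Customer: a cycle
    }

    // Over the wire, one common way out: replace nested objects with ids,
    // so the serialized document is finite and shallow.
    static class OrderDto {
        long id;
        long customerId;         // reference by id instead of by object
    }

    public static void main(String[] args) {
        Customer c = new Customer();
        c.id = 1; c.name = "Ada";
        Order o = new Order();
        o.id = 10; o.customer = c;       // back-reference
        c.orders = Arrays.asList(o);     // cycle closed: c -> o -> c
        System.out.println("Graph built; naive recursive serialization would never terminate.");
    }
}
```

The cost of that flattening, of course, is that clients must make extra requests to reassemble the graph.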

There are a few issues to deal with; let’s take a look at them.

Continue reading

Hybrid Integration Platforms – What Are They and How Do You Use One?

Reading Time: 2 minutes

Applications, systems, and services used by businesses have evolved over the years, complicating the enterprise ecosystem. With the growing need to connect heterogeneous endpoints in various locations, businesses end up with a divided ecosystem – with systems on-premises needing to communicate with applications in the cloud.

Enabling seamless connectivity between systems and services across the enterprise requires a Hybrid Integration Platform (HIP). An HIP leverages both an Enterprise Service Bus (ESB) and cloud-based integration technology, such as an iPaaS solution, to support applications that use both cloud and on-premises resources. To accelerate the establishment of an HIP in your organization, Gartner provides best practices that highlight how your organization can best use a hybrid integration platform supporting cloud-to-cloud, cloud-to-ground and on-premises integration.

In this Gartner report, “How to Use Hybrid Integration Platforms Effectively”, analyst Jess Thompson covers:

  • What is an HIP?
  • How to effectively establish an HIP in your organization
  • Analysis of a large pharmaceutical company’s implementation of an HIP

Download the entire report »

Continue reading

Enabling Transactions in Node.js using Domains – Part 2

Reading Time: 5 minutes

Background

In the first blog post of this two-part series, we reviewed how our data access layer was built and how multi-tenancy data was passed around using domains. We also hinted at how difficult this was to actually get off the ground.

We had to do some fairly deep code dives to get domains to work for our purposes, since we quickly discovered that requests’ domains were getting lost somewhere in our code paths. We started opening the hood on Node.js and the libraries we use, and after a lot of debugging we found a pair of critical issues that will affect just about any real-world system.

Open Source Collaboration

First, the knex library, which we use to talk to our back-end databases, was not domain compatible. Because it uses a connection pool, connections aren’t created in the scope of a domain, so they must be explicitly attached to the domain when acquired from the pool and detached when returned to it. We resolved this issue by adding domain support to the underlying generic-pool library and submitting that patch back to the community.

Second, the bluebird library, which we use extensively to support asynchronous programming patterns, holds a reference to Node.js’ setTimeout implementation. The deeply tricky issue is that when you use domains, setTimeout is replaced with a new function, so suddenly one finds oneself in a world where bluebird uses a different version of setTimeout than your own code!
We discussed this with the Node.js core team, who graciously fixed the issue: they no longer override the actual setTimeout reference, but an internal reference instead.

Conclusion

A lot of work went into making domains work in our system – the two changes above required a lot of investigation and a lot of collaboration with the open source community. And the approach did, in fact, finally work. But in the end we decided to swallow our pride and remove domains from our codebase. Instead, we manually pass around a context object which holds the information we previously stored in the domain. This is a bulletproof approach: there is no risk of us having missed some crucial detail inside one library or another, and no risk of a new library dependency causing issues due to a lack of proper domain integration.
This switch to manually passing around contexts has turned out to be less cumbersome than we expected, although certainly not as elegant as the domain-based approach. The decision was further validated when the Node.js team began discussing deprecating domains.

The lesson we take away from this is to avoid overly exotic constructs and to work with Node.js’ single-threaded grain rather than against it. We would still love to see a context-like construct made available, but only one that does not require active support from third-party libraries to work properly.

API Manager – Simple Java Client Access Example

Reading Time: 11 minutes

I recently had a customer who wanted to build a simple UI to maintain additional filtering data associated with a defined “Contract” in API Manager. This code would have to run outside of the MuleSoft ecosystem, as a service, within a Java data-layer container environment.

My goal was to develop a very simple Java API Manager client access example, a concept prototype that could be used as the basis for a mashup of API Manager resources and custom client-oriented resources. A primary emphasis is understanding the OAuth2 authentication exchange requirements.

Requirements

  • API Manager Account
  • Java Development Environment
  • Maven

API Manager

To begin, you can review the API Manager Console located at https://anypoint.mulesoft.com/api-platform/api/console/#

Here we can see all of the protected resources available from API Manager’s REST API.
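
As a rough sketch of the shape such a client takes, here is a minimal Java example; the endpoint paths, JSON payloads and token field below are illustrative placeholders rather than the documented Anypoint API, and a real client would use a proper JSON library instead of string surgery.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ApiManagerClientSketch {
    public static void main(String[] args) throws Exception {
        // Step 1: exchange username/password for an access token (placeholder path).
        String token = requestToken(
                "https://anypoint.mulesoft.com/accounts/login", "myUser", "myPassword");

        // Step 2: present the token as a bearer credential on a protected resource
        // (placeholder path).
        HttpURLConnection conn = (HttpURLConnection) new URL(
                "https://anypoint.mulesoft.com/api-platform/some/protected/resource")
                .openConnection();
        conn.setRequestProperty("Authorization", "Bearer " + token);
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            in.lines().forEach(System.out::println);
        }
    }

    static String requestToken(String loginUrl, String user, String password)
            throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(loginUrl).openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/json");
        byte[] body = String.format(
                "{\"username\":\"%s\",\"password\":\"%s\"}", user, password)
                .getBytes(StandardCharsets.UTF_8);
        try (OutputStream os = conn.getOutputStream()) {
            os.write(body);
        }
        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            String json = in.lines().reduce("", String::concat);
            // Naive extraction of an "access_token" field; fine for a sketch only.
            int start = json.indexOf("\"access_token\":\"") + "\"access_token\":\"".length();
            return json.substring(start, json.indexOf('"', start));
        }
    }
}
```

The key takeaway is the two-step exchange: authenticate once, capture the token, then attach it to every subsequent request against the protected resources.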

Continue reading