Even as SaaS adoption explodes and business processes move to the cloud, organizations still have crucial data locked in on-premises and legacy systems, and those systems aren't going anywhere.

More and more companies are seeking new integration strategies to deal with these hybrid environments, and struggling with how and where to integrate them. This Thursday, join MuleSoft's Chris Purpura, VP of Cloud Integration, and Dan Diephouse, Director of Product Management, for a live webinar, "Top 3 Considerations for Integrating Hybrid Environments", where they'll share guidance to help you form a hybrid strategy.

This is the third post in the Gradle Plugin Series, and a lot has happened to the plugin since the first article of the series was published. Today, I'm announcing exciting new features useful for tackling enterprise users' needs. For more information on how to get started building apps with Gradle, please check the previous blog post and especially the project's README file. Now let's get started.

Fine-tuning Mule Dependencies

This plugin is designed to be future-proof while remaining concise, so we've introduced a DSL for customizing the Mule dependencies included as part of the build. It lets you fine-tune, in a very concise way, the modules, transports, and connectors included when unit-testing, compiling, and running your app.
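To give a feel for what such a build-script DSL looks like, here is a hypothetical sketch; the block and method names below are illustrative assumptions, so check the project's README for the plugin's actual DSL.

```groovy
// Hypothetical shape of the dependencies DSL -- names are illustrative,
// not the plugin's confirmed API.
mule {
    components {
        // extra Mule modules to compile and test against
        module 'json'
        module 'scripting'

        // transports the app needs at runtime
        transport 'jms'

        // a connector, pinned to a specific version
        connector name: 'mule-module-sfdc', version: '5.0.0'
    }
}
```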

Fact: Batch jobs are tricky to handle when exceptions are raised. The problem is the huge volume of data these jobs are designed to take. If you're processing 1 million records, you simply can't log everything; the logs would become huge and unreadable, not to mention the performance toll. On the other hand, if you log too little, it's impossible to know what went wrong, and if 30 thousand records failed, not knowing what's wrong with them can be a royal pain. This trade-off is not simple to overcome.

We took the feedback we received from the early releases around this issue, and now that Mule 3.5.0 and the Batch Module have gone GA with the May 2014 release, we've made small but significant improvements to help you with these errors. Let's take a look at some simple scenarios.
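The logging trade-off above can be illustrated with a generic cap-and-count approach (a Java sketch of the idea, not the Batch Module's actual implementation): log the first few failures in full detail, then only count the rest, so a million-record job stays debuggable without producing an unreadable log.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch: keep detailed diagnostics for the first N failures,
// then just count subsequent ones and report a summary.
public class CappedFailureLog {
    private final int detailLimit;
    private final List<String> detailedFailures = new ArrayList<>();
    private long totalFailures = 0;

    public CappedFailureLog(int detailLimit) {
        this.detailLimit = detailLimit;
    }

    public void recordFailure(String recordId, Exception e) {
        totalFailures++;
        // Only the first detailLimit failures get full detail.
        if (detailedFailures.size() < detailLimit) {
            detailedFailures.add(recordId + ": " + e.getMessage());
        }
    }

    public String summary() {
        return totalFailures + " failed (" + detailedFailures.size()
                + " logged in detail): " + detailedFailures;
    }
}
```

With a limit of, say, 100, a run with 30 thousand failures still produces a readable log plus an accurate total, rather than 30 thousand stack traces.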

I'd like to announce and introduce you to our second set of Anypoint Templates: Salesforce to Database. This set leverages the newly improved Database connector, which lets you connect to almost any JDBC relational database using the same interface in every case. Our first set of templates, Salesforce Org to Org integration, is a good base for any "Salesforce to X" or "X to Salesforce" integration.

If you are new to our templates, they are essentially Anypoint Studio projects (a.k.a. Mule Applications) that were built with the intention of being configured, customized, extended and reused. They’re built on top of five base data integration patterns:

[Image: Migrate pattern]

[Image: Broadcast pattern]

[Image: Aggregate pattern]

[Image: Synchronize pattern]

Business Cases for Salesforce and Database Integration

Below are some of the key use cases we built this set of templates around. Note that each template can serve as a good base even if you are integrating more than just Salesforce and databases.

Damian Sima on Tuesday, June 10, 2014

The Entities Graph Inconvenience


It seems like everyone is talking about APIs lately.

We can find tons of them out there. More and more cloud-based and on-premises services are exposing themselves to the outside world through APIs. Many of these systems are fairly complex, so they need a complex object model to reach their full potential. By complex I mean deep object graphs with many relations between the objects. While this is easy to achieve in any object-oriented programming language, it is not as easy to serialize those graphs or to deal with them afterwards.

For instance, we all have to deal, at some point or another, with big XML documents describing an object graph.

Here is where the API world starts getting a little more complicated. How do these services expose such complexity to the open world while at the same time offering an easy enough way to operate on the model?

There are a few issues to deal with; let's take a look at them.
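One concrete issue: a bidirectional relation makes naive recursive serialization loop forever. A minimal Java sketch (with hypothetical Customer/Order types) shows the standard remedy of tracking visited objects and emitting a reference marker instead of recursing again:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.IdentityHashMap;
import java.util.List;
import java.util.Set;

// Sketch only: Customer <-> Order form a cycle, so a naive recursive
// serializer would never terminate. We break the cycle with an
// identity-based "seen" set.
public class GraphSerializer {

    static class Customer {
        String name;
        List<Order> orders = new ArrayList<>();
    }

    static class Order {
        String id;
        Customer customer; // back-reference: this is what creates the cycle
    }

    public static String serialize(Customer c) {
        return serialize(c, Collections.newSetFromMap(new IdentityHashMap<Object, Boolean>()));
    }

    private static String serialize(Customer c, Set<Object> seen) {
        if (!seen.add(c)) {
            // Already serialized on this path: emit a reference, don't recurse.
            return "{\"$ref\":\"" + c.name + "\"}";
        }
        StringBuilder sb = new StringBuilder("{\"name\":\"" + c.name + "\",\"orders\":[");
        for (int i = 0; i < c.orders.size(); i++) {
            Order o = c.orders.get(i);
            if (i > 0) sb.append(",");
            sb.append("{\"id\":\"").append(o.id).append("\",\"customer\":")
              .append(serialize(o.customer, seen)).append("}");
        }
        return sb.append("]}").toString();
    }
}
```

Real-world serializers apply the same idea with reference identifiers or by pruning back-references, which is exactly the kind of decision an API exposing a deep model has to make.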

Applications, systems, and services used by businesses have evolved over the years, complicating the enterprise ecosystem. With the growing need to connect heterogeneous endpoints in various locations, businesses end up with a divided ecosystem: systems on-premises needing to communicate with applications in the cloud.

Enabling seamless connectivity between systems and services across the enterprise requires a Hybrid Integration Platform (HIP). An HIP leverages both an Enterprise Service Bus (ESB) and cloud-based integration technology, such as an iPaaS solution, to support applications that use both cloud and on-premises resources. To accelerate the establishment of an HIP in your organization, Gartner provides best practices highlighting how your organization can best use a hybrid integration platform that supports cloud-to-cloud, cloud-to-ground, and on-premises integration.

In this Gartner report, "How to Use Hybrid Integration Platforms Effectively", analyst Jess Thompson covers:

  • What is an HIP?
  • How to effectively establish an HIP in your organization
  • Analysis of a large pharmaceutical company’s implementation of an HIP

Download the entire report »

Background

In the first blog post of this two-part series, we reviewed how our data access layer was built and how multi-tenancy data was passed around using domains. We also hinted at how difficult this was to actually get off the ground.

We had to do some fairly deep code dives to get domains to work for our purposes, since we quickly discovered that requests' domains were getting lost somewhere in our code paths. We started opening the hood on Node.js and the libraries we use, and after a lot of debugging we found a pair of critical issues that will affect just about any real-world system.

I recently had a customer who wanted to build a simple UI to maintain additional filtering data associated with a defined "Contract" within API Manager. This code would have to run outside the MuleSoft ecosystem, as a service, within a Java data-layer container environment.

My goal was to develop a very simple Java API Manager client access example, a concept prototype that could serve as a basis for building a mashup of API Manager resources and custom client-oriented resources. A primary emphasis is understanding the OAuth2 authentication exchange requirements.

Requirements

  • API Manager Account
  • Java Development Environment
  • Maven

API Manager

To begin, you can review the API Manager console located at https://anypoint.mulesoft.com/api-platform/api/console/#
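The heart of the exchange is obtaining a bearer token before calling any API Manager resource. Below is a stdlib-only Java sketch; the `/accounts/login` path and the `access_token` response field are assumptions to verify against the console above, not a confirmed contract.

```java
import java.io.IOException;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

// Sketch of a username/password login that returns a bearer token.
// Endpoint path and JSON response shape are assumptions.
public class AnypointAuthSketch {

    // Build the JSON login payload.
    public static String loginBody(String user, String password) {
        return "{\"username\":\"" + user + "\",\"password\":\"" + password + "\"}";
    }

    // Minimal extraction of access_token from a JSON response,
    // avoiding a JSON library dependency for this sketch.
    public static String extractToken(String json) {
        String key = "\"access_token\":\"";
        int start = json.indexOf(key);
        if (start < 0) return null;
        start += key.length();
        return json.substring(start, json.indexOf('"', start));
    }

    // POST the credentials and return the bearer token.
    public static String login(String baseUrl, String user, String password) throws IOException {
        HttpURLConnection conn =
                (HttpURLConnection) new URL(baseUrl + "/accounts/login").openConnection();
        conn.setRequestMethod("POST");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(loginBody(user, password).getBytes("UTF-8"));
        }
        try (java.util.Scanner s =
                new java.util.Scanner(conn.getInputStream(), "UTF-8").useDelimiter("\\A")) {
            return extractToken(s.hasNext() ? s.next() : "");
        }
    }
}
```

Subsequent calls to API Manager resources would then carry the token in an `Authorization: Bearer <token>` header.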

Priya Sony on Friday, May 30, 2014

CONNECT 2014 Recap!


[Image: MuleSoft CONNECT 2014]

That’s a wrap everyone! CONNECT 2014, the integration conference of the year, has come to an end! With speakers from companies like Salesforce, Tesla, Box, Cisco, and Stripe, in addition to our very own MuleSoft experts, there were plenty of great sessions for everyone.

Fun facts:

  • Over 1,200 registrants between CONNECT and its sister event APIcon
  • Attendees from 36 countries
  • Over 300 unique companies represented

It wasn’t all about being serious and talking business though – we made sure to have some fun along the way!

We're very excited to announce the May 2014 release of Anypoint Platform. This new release, which includes the GA release of Mule 3.5.0, provides a faster, easier way to deliver data to business applications, whether that data is event-driven, real-time, or batch. The Anypoint Platform May 2014 release also includes templates to solve the most common Salesforce integration challenges, plus new security certifications for CloudHub, speeding time to market for SaaS integration initiatives.

Just over a year in the making, this release simplifies our core connectivity, increases performance, enhances the DataSense experience, and adds new capabilities requested by our community and customers: