“Logs are like car insurance. Nobody wants to pay for it, but when something goes wrong everyone wants the best available.” – Pablo Kraan
The quote above captures why logs are important and why we need to be able to log as much information as possible without impacting performance. Because logging usually implies an I/O operation, it is naturally slow.
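One common way to keep slow log I/O off the request path is to buffer messages and write them from a background thread. Here is a minimal sketch of that idea; the `AsyncLogger` class and its methods are illustrative, not part of any real logging library, and `System.out` stands in for the actual slow destination (file, socket, etc.):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Minimal sketch of asynchronous logging: callers enqueue messages and
// return immediately; a background thread performs the slow I/O.
public class AsyncLogger {
    private final BlockingQueue<String> queue = new LinkedBlockingQueue<>();
    private final Thread writer;
    private volatile boolean running = true;

    public AsyncLogger() {
        writer = new Thread(() -> {
            try {
                // Keep draining until shutdown is requested AND the queue is empty.
                while (running || !queue.isEmpty()) {
                    String msg = queue.poll(100, TimeUnit.MILLISECONDS);
                    if (msg != null) {
                        System.out.println(msg); // stand-in for the real I/O
                    }
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        writer.start();
    }

    // Non-blocking for the caller: the request thread never waits on I/O.
    public void log(String message) {
        queue.offer(message);
    }

    public void close() throws InterruptedException {
        running = false;
        writer.join();
    }

    public static void main(String[] args) throws InterruptedException {
        AsyncLogger logger = new AsyncLogger();
        logger.log("request received");
        logger.log("request processed");
        logger.close(); // flush remaining messages before exiting
    }
}
```

The trade-off is durability: messages still sitting in the queue can be lost on a crash, which is why production async loggers offer explicit flush and shutdown hooks.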
Enterprise integration poses huge challenges for developers, and with so many tools and solutions to choose from, finding the right one can be tricky. DZone’s 2014 Guide to Enterprise Integration, its latest research report and guide, provides coverage of the enterprise integration landscape and offers key insights from 500+ developers and experts. The guide considers web services, integration architecture, SOA governance, enterprise service bus products, message queues, integration frameworks and suites, iPaaS solutions,
We’ve all been there. Sooner or later, someone asks you to periodically synchronize information from one system into another. No shame in admitting it; it happens in the best families. Such an integration should start with getting the objects that have been modified since the last sync (or all of them in the case of the very first sync). Sure, this first part sounds like the easiest step of the whole sync process (and in some cases it actually is),
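The first step described above is usually implemented with a "watermark": remember the timestamp of the last successful run and fetch only records modified after it. A minimal sketch, with illustrative names (`Record`, `modifiedSince`) and an in-memory list standing in for the source system:

```java
import java.time.Instant;
import java.util.ArrayList;
import java.util.List;

public class WatermarkSync {
    // A record in the source system, with its last-modified timestamp.
    static class Record {
        final String id;
        final Instant lastModified;
        Record(String id, Instant lastModified) {
            this.id = id;
            this.lastModified = lastModified;
        }
    }

    // Returns the records changed since the watermark; a null watermark
    // means "very first sync", so everything is returned.
    static List<Record> modifiedSince(List<Record> source, Instant watermark) {
        List<Record> changed = new ArrayList<>();
        for (Record r : source) {
            if (watermark == null || r.lastModified.isAfter(watermark)) {
                changed.add(r);
            }
        }
        return changed;
    }

    public static void main(String[] args) {
        List<Record> source = new ArrayList<>();
        source.add(new Record("a", Instant.parse("2014-01-01T00:00:00Z")));
        source.add(new Record("b", Instant.parse("2014-06-01T00:00:00Z")));

        // First sync: no watermark, so both records come back.
        System.out.println(modifiedSince(source, null).size());
        // Later sync: only records modified after the watermark.
        System.out.println(modifiedSince(source, Instant.parse("2014-03-01T00:00:00Z")).size());
    }
}
```

In a real integration the watermark must be persisted only after the target system confirms the write, otherwise a crash between fetch and store silently drops changes.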
We often see the need to reuse a component configuration between two applications deployed to Mule. For example, let’s say you have two Mule applications deployed in your Mule server. The first one, called AppA, gets information from Salesforce and stores it in a database. The second one, AppB, modifies your Salesforce campaign information. Both applications share the same Salesforce credentials, so what happens if those credentials change?
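The underlying idea is to externalize the shared values so both applications read the same source of truth instead of each hard-coding the credentials. A minimal sketch of that pattern; the property names and the inline string (standing in for a shared properties file on the server) are illustrative only:

```java
import java.io.StringReader;
import java.util.Properties;

// Sketch of externalized shared configuration: both AppA and AppB would
// load the same properties instead of duplicating the credentials, so a
// change in one place reaches both applications.
public class SharedConfig {
    public static void main(String[] args) throws Exception {
        // In practice this content would live in a file both apps can read.
        String shared = "salesforce.username=integration@example.com\n"
                      + "salesforce.password=secret";

        Properties props = new Properties();
        props.load(new StringReader(shared));

        System.out.println(props.getProperty("salesforce.username"));
    }
}
```

With the credentials in one shared location, rotating them means editing a single file rather than redeploying every application that uses them.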
Last month we released Mule 3.3 M1, our first milestone on the way to Mule 3.3. While you should use Mule 3.2.1 for production, we hope these milestones are a great way to play around with the latest and greatest features. This is a great opportunity to provide feedback and have an impact on what we are doing for the Mule 3.3 release.
In Mule 3.2, a group of stand-alone Mule instances can be configured to act as a cluster. One or more applications run in each instance – or node – and the cluster processes requests as a single unit. If a node goes down, the application is still running; the more nodes, the more throughput. And the more nodes, the greater the headache. How many PuTTY sessions are you already running, let alone a group of new sessions to manage all those nodes?
It’s Saturday night. You realize you don’t have your cell phone and won’t be able to check on your fulfillment system. Chuckling, you remember without nostalgia the electric panic that used to set in over such a conundrum. Now you don’t give it a moment’s thought. After the weekend you arrive to work and try to avoid that blinking little red light, always in the periphery, nagging from the telephone set. Twenty-three more irate complaints demanding tedious,
This is my final post in my “to ESB or not to ESB” series of articles, in which I have attempted to shed some light on what an ESB really is and to show some alternative architectures for performing integration. I’ve given an overview of the four main architectures I saw most often and provided some context about the benefits and considerations of each.
In my last post, “ESB or not to ESB revisited – Part 1”, I put some definition around what an ESB really is. Today I’m going to describe two integration architectures, ESB and hub-and-spoke, providing the benefits and considerations for each.
MuleSoft provides the most widely used integration platform for connecting any application, data source or API, whether in the cloud or on-premises. With Anypoint Platform™, MuleSoft delivers a complete integration experience built on proven open source technology, eliminating the pain and cost of point-to-point integration. Anypoint Platform includes CloudHub™ iPaaS, Mule ESB™, and a unified solution for API management, design and publishing.