Meet a Muley – Max the Mule!

Reading Time: 5 minutes

It’s been a while since we’ve had a Meet a Muley post, but we thought we’d bring it back this week and introduce you to a very special Muley – Max! Read on to hear more from the integration hero – from his favorite band to what exactly it is that he does here at MuleSoft.

First thought that came to mind when you looked into the mirror today?

  • Silos beware.

Favorite Band?

  • Haulin’ Oates, definitely.

How did you find MuleSoft?

  • I felt a disturbance in the integration force.

How did you first get interested in your field?

  • You could say I was born into it. Integration is in my DNA. With the strength and stamina of a donkey to haul data, and the speed of a thoroughbred for faster connections, I was born to connect the world!

What exactly do you do?

  • Battle the tyranny of point-to-point, to build faster connections and help integration haul SaaS.

Best perk of being at MuleSoft?

  • Being with people who understand the value of a Mule! Oh, and free lunches. Definitely free lunches.

Biggest challenge since you’ve been at MuleSoft?

  • Preparing for the Internet of Things: so many connections, so little time.

What’s a typical weekend like for you? 

  • Being a superhero keeps you pretty busy. When I’m not connecting the world, I’m usually horsing around with the family!

Most embarrassing moment?

  • That time I went to the board room and my harness was undone.

What three words would your friends use to describe you?

  • You’d have to ask SaaSy, but I’d say industrious, fast, and constantly hungry. Being a Mule isn’t easy!

Most memorable moment at MuleSoft?

  • Well, you always remember your first connection. For me it was one of the world’s largest banks. We still connect from time to time to reminisce about the good times.

What is your weakness?

  • I can’t deny a good salt lick.

What’s something about you that often catches people off guard?

  • Most people don’t expect to see a Mule leaping through the clouds and wearing an awesome outfit.

Anything else you’d like to share with us?

  • My great-great-grandfather was in the Pony Express.

That’s it for this week’s Meet a Muley! Be sure to follow us on Twitter @MuleSoft, Facebook, and LinkedIn to stay up to date on all things SOA, SaaS and API!

At CONNECT this week? Stop by, say hi, and take a picture with Max!

Building Mule apps with Gradle and Mule Studio – Part 2

Reading Time: 5 minutes

Recently, I discussed how to build Mule integrations using Gradle. This follow-up post covers how to use the plugin with Mule Studio, along with some relevant enterprise features. It assumes you already know how to do the basic setup of the Gradle plugin (covered in my previous post), so if you have not done that before, please go ahead and read it before continuing.

Creating a Mule Studio Project

You can easily change your Gradle project into a Mule Studio type just by applying the ‘mulestudio’ plugin and selecting the appropriate Mule version.
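Here is a minimal sketch of what that build.gradle could look like, assuming the plugin setup from Part 1 of this series (the plugin coordinates and version numbers below are illustrative, so check the plugin’s own documentation for the current ones):

```groovy
// build.gradle - a minimal sketch; plugin coordinates and versions are illustrative
buildscript {
    repositories { mavenCentral() }
    dependencies { classpath 'org.mulesoft.build:mule-gradle-plugin:1.0.0' }
}

// switch the project from the plain 'mule' plugin to the Studio-aware one
apply plugin: 'mulestudio'

// pick the Mule runtime version your Studio installation targets
mule.version = '3.5.0'
```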

Now you can simply run ‘gradle studio’ from the command line and it will create the necessary files so you can import the project into your workspace.

Importing it into Studio

Continue reading

Prototyping with HTML

Reading Time: 14 minutes

“I believe that if we think first about people and then try, try, and try again to prototype our designs, we stand a good chance of creating innovative solutions that people will value and enjoy.” – Bill Moggridge, Designing Interactions

Prototyping is a key practice of design; it allows designers to visualize, evaluate, and communicate.

To explore design ideas, the prototype must be quick and inexpensive. It must suggest and explore these ideas, rather than confirm them. That’s why some HCI researchers, like Bill Buxton, prefer the term sketch over prototype.

Paper prototypes have a unique feature: the return on investment, in terms of learning, is extremely high. They take hours, or even minutes, to create, and in exchange you receive valuable feedback.

If you prefer digital means, tools like Balsamiq or Pencil imitate the sketchy nature of a paper prototype. The term low-fi prototype is used to refer to this kind of prototype, regardless of whether it is on paper or not.

Tip: to quickly clean up scanned sketches, use the adjust levels option, included in most of the photo editing applications.

As a design matures, more details are needed and low-fi prototypes cease to be enough. The good news is that you don’t need to prototype the whole application, only the areas that need more detail. For example, a team from Autodesk, in one of the first articles about combining Agile and user-centered design, described how they used small prototypes to test parts of the interaction.

Continue reading

In a vulnerable online world, what should you expect from a SaaS provider?

Reading Time: 12 minutes

Last month the massive Heartbleed security vulnerability was exposed. Three weeks later a security flaw in Microsoft Internet Explorer was revealed. It seems as though every few months there is news of a security breach or vulnerability. As more and more business is done online, in the cloud, and through SaaS providers, how can you be sure the applications you and your business use are safe? Using the Heartbleed vulnerability as a case study, this article will examine what went wrong, as well as what you should expect from a SaaS provider before, during, and after a security event.

What is Heartbleed?

Heartbleed is the commonly recognized name of an exposure identified in a critical Internet security software package called OpenSSL, the most common transmission encryption software package used by Internet servers worldwide. This vulnerability allows an attacker to craft keepalive messages in such a way as to force a server to disclose its short-term memory space. Since a server’s memory often contains personal, confidential information, such as user passwords or credit card numbers, the attacker could obtain that information. More severely, the server may also inadvertently disclose its own private encryption key to the attacker, which then allows the attacker to listen to all communications with that server, even without using the Heartbleed vulnerability. The nature of this attack is such that it leaves no traces, and is practically invisible to common detection mechanisms (although now that it has been exposed, signatures for it are becoming available for popular intrusion detection software).
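At its core, the bug was a missing bounds check: the server trusted the length field inside the attacker’s heartbeat message. Below is a purely hypothetical model of that logic in Java (OpenSSL’s real code is C, and Java’s own bounds checks would actually stop this; the sketch only mirrors the flawed reasoning):

```java
// Hypothetical model of the Heartbleed flaw; not OpenSSL's actual code.
// A heartbeat message says "echo back N bytes of my payload", and the buggy
// server trusts N without comparing it to the payload it actually received.
public class HeartbleedModel {
    static byte[] respondToHeartbeat(byte[] receivedPayload, int claimedLength,
                                     byte[] processMemory) {
        // The post-disclosure fix was essentially one guard like this:
        // if (claimedLength > receivedPayload.length) return null; // discard silently
        byte[] response = new byte[claimedLength];
        // BUG: without the guard, the copy runs past the real payload. C performs
        // no bounds checks, so up to ~64 KB of adjacent process memory (passwords,
        // session tokens, private keys) is echoed back to the attacker per request.
        System.arraycopy(processMemory, 0, response, 0, claimedLength);
        return response;
    }
}
```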

Who was affected by Heartbleed?

Continue reading

Near Real Time Sync with Batch

Reading Time: 17 minutes

The idea of this blog post is to give you a short introduction on how to do near real time sync with Mule ESB. We’ll use several of the newest features that Mule has to offer – like the improved Poll component with watermarking and the Batch Module. Finally, we’ll use one of our Anypoint Templates as an example application to illustrate the concepts.

What is it?

Near real time sync is the term we’ll use throughout this blog post to refer to the following scenario:

“When you want to keep data flowing constantly from one system to another”

As you might imagine, the keyword in the scenario definition here is constantly. That means the application will periodically move whatever entity/data you care about from the source system to the destination system. The periodicity really depends on what the application is synchronizing. If you are moving purchase orders for a large retailer, you could allow the application a few minutes between each synchronization. If you are dealing with banking transactions, however, you’ll probably want to change that to something on the order of a few seconds or even a hundred milliseconds, unless you really like dealing with very angry people.

The nice thing about the template we’ll use as an example is that such a change in the application’s behaviour is very simple: just a small change to a single line of a properties file.
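As a purely illustrative example (the property name below is hypothetical; check the template’s own properties file for the real key), the tweak could look like this:

```properties
# How often the Poll component queries the source system, in milliseconds.
# Lower it for near real time behaviour; raise it for more relaxed syncs.
poll.frequencyMillis=60000
```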

Continue reading

APIs Can Be Copyrighted… Now What?

Reading Time: 8 minutes

The battle between software giants Oracle and Google hit close to home again last week, when a Federal Court of Appeals overturned a 2-year-old ruling by a lower court and established that APIs are entitled to copyright protection. The first reaction from most people is: how strange! After all, APIs are just the specification of how a software system or service behaves – call it this way, send it this information, get back that information – so how is that possibly something copyrightable? It’s not like a book, or a painting, or even software… And the second reaction, from many in the API world, is: no!!!

Do we really need to worry about being sued for adopting some popular API pattern to which developers have grown accustomed, and that’s been tried and tested by a major player? Should a new service provider be barred from providing an API that’s interoperable with an established player so it can compete with minimal developer friction? What if I build a mobile app to an API and that app embeds the specification of the API in order to call it correctly – can the API provider now control what I do with my app? And do we really want to scare the numerous enterprises now opening themselves up via APIs into coming up with arcane API designs just to avoid the specter of legal challenges?

These concerns are all very real and very immediate, especially to those of us in the API ecosystem working to solve problems of interoperability, usable and consistent and predictable API design, and accelerating value creation in the API economy. But before pushing the panic button, I took some time to really read the 69-page ruling of the appellate court, which – fortunately for someone like me with no legal background and even less patience – is remarkably well-written and clearly thought out. These are not lawyers and judges missing the gist of technology, confusing interface with implementation, mixing code with specs. They get it, they focus on the right issues, and they state unequivocally: copyright happens when an API is created, not later when you may want interoperability; copyright is appropriate when the API author explicitly chooses from many possible expressions of the interface’s desired functionality; and copyright isn’t invalidated just because the interface also embodies a functional process (which itself is not copyrightable).

Fortunately, they don’t end there. Two rays of hope also emerge from that tome:

  1. The copyright claim is based on the author of an API having a broad choice in expressing the functionality they’re providing. That’s like a book’s author having lots of choices for how to put a story down in words, and it’s that combination of words which receives copyright protection. Take away the choice, and there may be no copyright. If the words are generated by a program following a predetermined procedure, a recipe, there is no choice. So… if API design becomes even more prescriptive, if we come up with recipes that automatically lead to usable, consistent, predictable APIs in any domain, perhaps we also avoid legal hurdles? That would be sweet, providing even more motivation to pursue the course we’ve embarked on.
  2. Even if APIs are subject to copyright, that need not mean you cannot copy them. The court of appeals laid out the argument that could be used to establish fair use of APIs, and sent the case back to the lower court to make a determination on fair use. Now, fair use may not be as clean as non-copyrightability in avoiding legal battles, but it may be the best weapon we have, and it’s been very effective in other areas. When we quote a copyrighted work in our own writing, or when a parody is created of another work, we’re not paralyzed by fear of infringement, because there’s a broad legal precedent establishing this as legitimate fair use. I anticipate such a precedent will now be set for APIs. Indeed, we should strive to test the boundaries of fair use sooner rather than later, to establish a safe zone of operation in which the API ecosystem can grow unimpeded.

And finally, we can do something about this now: establish lots of API patterns, examples, best practices, and full specs, put them in the public domain, and promote them, to not only improve and facilitate API creation but to preempt future copyright challenges; work harder on prescriptive approaches, such as practically-RESTful web APIs, and express them in ways that explicitly reference patterns, as you can do with RAML; and encourage anyone with a good API or pattern to open-source their interface and help us all press on the gas pedal and not the brake pedal.

You can read more details about all of this on my Wired Insights article »

Anatomy of an Anypoint Template

Reading Time: 8 minutes

Templates are simple solutions that help you start building your own integration applications and accelerate ‘time to value’ for your company. We focused on creating templates in a particular way to meet a high quality bar, and to help you maintain that bar when you choose to extend our templates or build your own. In this post, I want to run you through our principles and structure, and provide some advice on using our templates.

A template is:

  • Complete for an atomic use case: In order to build something truly useful, the most important criterion was to develop a comprehensive solution focused on the main, atomic value of the use case. Atomic means that it’s the base unit of value, which can be compounded either by adding other flows in parallel or in serial order.
  • Reusable: It’s surprising how often different people have slightly different requirements for the same base use case. Even though our templates are designed to solve particular use cases, they conform to base patterns which can be leveraged in many variations of the same base problem.
  • Extendible: Templates are designed to grow into a complete enterprise SaaS integration. We built them with limited field mappings, data scopes, insert statements, definitions of ‘same’, and transformations, knowing that your specifics will be different from what we can predict. We provide all of the above knowing that you will need each of those processing steps, but will want to customize each of them.
  • High Quality: Knowing that our templates could be expanded to be deployed to your production environments, we built and tested the templates with production quality in mind and document any gaps that we find which may affect your decision to take one to production.
  • Elegant: We aim to make the templates read like an integration story so that the flow is easily understood.
  • Documented: As with any piece of software, documentation is helpful in realizing its value. We provide documentation to help you understand and get going with our templates as part of the git repository for each template.

Template Components & Best Practices:

When you first import a template into Studio, you will notice that it is nothing more than a Mule project, built with you, the person who is going to take it further, in mind. We set up the template’s Mule project in the following structure for your convenience:

  • Folder Structure: Since Mule apps are Java-based, the structure is similar to that of a Java application.

  • Property Files: Property placeholders are used to parametrize the values that must be defined for each template, such as credentials and options. To keep things organized, our templates provide several files, each with the properties needed to get the template running in a different environment (Development, QA, User Acceptance Test, and Production). When you are ready to run the template, you choose which properties file is used by setting the mule.env environment variable; the files themselves are found under src/main/resources.

  • XML Files: Even though all Mule code can be packaged in one XML file, our templates ship with the Mule flows broken out into four XML files, so that you start with good organizational practices from the beginning (see the sketch after this list for how they fit together). The files can be found in src/main/app.

  • businessLogic.xml is the most interesting file, as it contains the code that solves the use case. This is where you will make modifications to implement your own solution.
  • endpoints.xml is the place for the inbound and outbound flows of the application.
  • config.xml is where all the configuration elements (connectors, databases, etc.) are kept.
  • errorHandling.xml is where you define how your integration reacts to the different exceptions. This file holds a Choice Exception Strategy that is referenced by the main flow in the business logic.
  • Custom Java Classes: These are the way to implement custom logic that Mule does not provide out of the box. Templates that use custom logic keep these classes in src/main/java.

  • Tests: Unit tests should be stored under src/test/java. We highly encourage building integration tests and unit tests; this is something that we are looking to add to the templates in future releases.

  • Maven-based project: Templates are Maven-based projects, which is why you will find a pom.xml in the root directory. Maven is not required, but it is a useful and standard tool for managing and building Java applications.
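To make the relationship between the XML files concrete, here is a simplified, hypothetical sketch (the flow names and scheduler frequency are made up, and namespace declarations are omitted; a real template’s flows are considerably richer):

```xml
<!-- endpoints.xml: the inbound flow polls the source system and hands off -->
<flow name="triggerFlow">
    <poll>
        <fixed-frequency-scheduler frequency="60000"/>
        <flow-ref name="querySourceSystemFlow"/>
    </poll>
    <flow-ref name="businessLogicFlow"/>
</flow>

<!-- businessLogic.xml: the use-case logic, wired to the shared exception
     strategy defined in errorHandling.xml -->
<flow name="businessLogicFlow">
    <logger message="Processing #[payload]" level="INFO"/>
    <exception-strategy ref="defaultChoiceExceptionStrategy"/>
</flow>
```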

Get started

To get started with Anypoint Templates, visit our Salesforce to Salesforce integration page, where you will find our first set of templates and links to each of them on GitHub. We look forward to hearing what you think!

5 (Internet of) Things you can Hack

Reading Time: 4 minutes

There may well be 50 billion devices coming, but the most exciting things in the Internet of Things are the ones you can hack. I’ve developed a new weekend hobby of connecting and hacking devices. Here are my top 5:

Philips Hue
These connected light bulbs have an HTTP API that is really easy to use and allows you to control single lights and groups of lights. You can control the colour range, brightness and, of course, hue. Furthermore, your partner will love the pretty colours and you’ll convince your kids you can do magic.
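For example, switching a light on and tinting it blue is a single PUT to the bridge’s API. Here is a minimal Java sketch (the bridge address, app key, and light id are placeholders for your own setup):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HueDemo {
    public static void main(String[] args) throws Exception {
        // placeholders: your bridge IP, your registered app key, light number 1
        String url = "http://192.168.1.10/api/yourAppKey/lights/1/state";
        String body = "{\"on\": true, \"hue\": 46920, \"bri\": 200}"; // 46920 ~ blue

        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .header("Content-Type", "application/json")
                .PUT(HttpRequest.BodyPublishers.ofString(body))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body()); // the bridge answers with a JSON result array
    }
}
```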

Google Glass
Yes, this is the device that is, paradoxically, both cool and not cool. It has a REST API that lets you post items to the Glass timeline and subscribe to location and timeline updates from it. The REST API is pretty limited, but luckily Glass runs Android and has a GDK for writing apps for the device itself, which greatly extends the possibilities.

Continue reading

Intro to Data Integration Patterns – Aggregation

Reading Time: 20 minutes

In this post I want to close the loop by introducing you to the last of the five initial patterns that we are basing our Anypoint Templates on. I’m sure we’ll continue creating templates, and we’re going to keep discovering new data integration patterns. If you are just arriving at this post, I recommend that you look through the previous four posts to understand the other patterns. I generally do not repeat things that overlap between the patterns, so I encourage you to work your way through all five posts if you are interested.

Pattern 5: Aggregation

What is it?

Aggregation is the act of taking or receiving data from multiple systems and inserting it into one. For example, let’s say I have my customer data in three different systems, and I want to generate a report which uses data from all three of those systems. I could create a daily migration from each of those systems to a data repository and then query against that database. But then I would have a whole other database to worry about and keep synchronized. As things change in the three other systems, I would have to constantly make sure that I am keeping the data repository up to date. Another downside is that the data would be a day old, so if I wanted to see what was going on today, I would have to either initiate the migrations manually or wait.

If I set up three broadcast applications instead, I could achieve a situation where the reporting database is always up to date with the most recent changes in each of the systems. Still, I would need to maintain this database, whose only purpose is to store replicated data so that I can query it every so often. Not to mention the number of wasted API calls needed to ensure that the database is always up to x minutes from reality.

This is where the aggregation pattern is really handy. If you build an application on it, or use one of our templates that is built on it, you can query multiple systems on demand, merge the data sets, and do as you please with the result. So in our example above, you can build an integration app which queries the various systems, merges the data, and then produces a report. This way you avoid having a separate database, and you can have the report arrive in a format like .csv, or another format of your choice. Similarly, if there is a system where you store reports, you can place the report there directly.
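As an illustration of the idea, here is a hypothetical Java sketch (not template code; in a real Mule app the per-system queries would be connector calls) that queries several customer sources on demand and merges records sharing a key:

```java
import java.util.*;

public class AggregationSketch {
    // Stand-in for one system's query; a Mule app would use a connector here.
    interface CustomerSource {
        List<Map<String, Object>> fetchCustomers();
    }

    // Merge every source's records on a shared key (email, in this sketch),
    // producing one combined data set ready to be written out as a report.
    static Collection<Map<String, Object>> aggregate(List<CustomerSource> sources) {
        Map<Object, Map<String, Object>> merged = new LinkedHashMap<>();
        for (CustomerSource source : sources) {
            for (Map<String, Object> customer : source.fetchCustomers()) {
                merged.merge(customer.get("email"), customer, (a, b) -> {
                    Map<String, Object> combined = new LinkedHashMap<>(a);
                    combined.putAll(b); // later sources fill in or override fields
                    return combined;
                });
            }
        }
        return merged.values();
    }
}
```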

Why is it valuable?

Continue reading

Intro to Data Integration Patterns – Correlation

Reading Time: 14 minutes

So far in this series we have covered Migration, Broadcast, and Bi-Directional Sync, and today we are going to cover a new integration pattern: Correlation. In an effort to avoid repeating myself for those who are reading through the whole series, I will omit a lot of relevant information that is shared between the patterns I have previously covered. I urge you to read at least the previous post about bi-directional sync, as correlation can be viewed as a variation of it. Also, note that this is the only one of the five patterns around which we have not released any templates; this was done in the interest of time, and because we believe it may be the least common pattern for Salesforce to Salesforce integration. We are, however, looking to create and release templates using the correlation pattern in the next few months.

Pattern 4: Correlation

What is it?

The correlation pattern is a design that identifies the intersection of two data sets and does a bi-directional synchronization of that scoped dataset, but only for items that occur in both systems naturally. Where the bi-directional pattern synchronizes the union of the scoped dataset, correlation synchronizes the intersection. Notice in the diagram below that the only items which will meet the scope and be synchronized are the items that match the filter criteria and are found in both systems. The bi-directional sync, by contrast, will capture items that exist in either one or both of the systems and synchronize them.

In the case of the correlation pattern, the items that reside in both systems may have been manually created in each of them, like two sales representatives entering the same contact in both CRM systems, or they may have been brought in as part of a different integration. The correlation pattern does not care where those objects came from; it will agnostically synchronize them as long as they are found in both systems. Another way to think about correlation is that it is like a bi-directional sync that only updates existing matches, rather than creating or updating anything else.
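To make the intersection idea concrete, here is a hypothetical Java sketch (not template code; the key choice and conflict rule are made up for illustration) that syncs only the records present in both systems:

```java
import java.util.*;

public class CorrelationSketch {
    // Each system is modeled as: record key (e.g. contact email) -> field map.
    static void correlate(Map<String, Map<String, Object>> systemA,
                          Map<String, Map<String, Object>> systemB) {
        // The intersection: keys that occur naturally in BOTH systems.
        Set<String> shared = new HashSet<>(systemA.keySet());
        shared.retainAll(systemB.keySet());

        for (String key : shared) {
            // Update the existing match only; a real implementation would compare
            // last-modified timestamps and sync both directions, as
            // bi-directional sync does.
            systemB.get(key).putAll(systemA.get(key));
        }
        // Records found in only one system are deliberately left alone:
        // correlation never creates new records.
    }
}
```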

Continue reading