
Building APIs around Microsoft Azure SQL Databases

Reading Time: 23 minutes
In this blog post, we will:

  • Connect to Microsoft Azure SQL Database using Mule ESB
  • Expose a simple API around our Microsoft Azure SQL Database (for the remainder of this post I will call it Azure SQL)
  • Demonstrate the symmetry between Mule ESB On-Premise and its iPaaS equivalent, CloudHub

For those not familiar with Azure SQL, it is a fully managed relational database service in Microsoft’s Azure cloud.  Since this is a managed service, we are not concerned with the underlying infrastructure of Azure SQL.  Microsoft has abstracted all of that for us, and we are only concerned with managing our SQL Instance.  For more info on Azure SQL, please refer to the following link.

Prerequisites

To complete all of the steps in this blog post, you will need a Microsoft Azure account and a MuleSoft CloudHub account.  A free Microsoft trial account can be obtained here, and a free CloudHub account can be found here.

To enable the database connectivity between MuleSoft and Azure SQL, we will need to download the Microsoft JDBC Driver 4.0 for SQL Server which is available here.

Provisioning a SQL Instance

  1. From the Microsoft Azure portal, we are going to provision a new Azure SQL Instance by clicking on the + New label.

  2. Click on Data Services – SQL Database – Quick Create.

  3. Provide a DATABASE NAME of muleAPI, select an existing SERVER and click CREATE SQL DATABASE.  Note: if you do not have an existing SQL Server, you can create a new one by selecting New SQL database server from the SERVER dropdown list.

  4. After about 30 seconds, we should discover that our new Azure SQL instance has been provisioned.

  5. Click on the muleapi label to bring up our home page for our database.
  6. Click on the View SQL Database connection strings label.

  7. Note the JDBC connection information; we will need it later in the Mule ESB portion of this blog post.

  8. Next, we want to create a table where we can store our Product list.  To do so, we need to click on the Manage icon.

  9. Next, we will be prompted to include the IP Address of the computer we are using.  By selecting Yes, we will be able to manage our Database from this particular IP Address.  This is a security measure that Microsoft puts in place to prevent unauthorized access.

  10. A new window will open where we need to provide our credentials.

  11. We will be presented with a Summary of our current database.  To create a new table, we need to click on the Design label.

  12. Click on the New Table label and then provide a name of Products. We then need to create columns for ProductName, Manufacturer, Quantity, and Price.  Once we have finished adding the columns, click the Save icon to commit the changes.

  13. We now want to add some data to our Products table and can do so by clicking on the Data label.  Next, we can add rows by clicking on the Add row label.  Populate each column and then click on the Save icon once all of your data has been populated.

This concludes the Azure SQL portion of the walkthrough.  We will now focus on building our Mule Application.

Building Mule Application

  1. The first thing we need to do is to create a new Mule Project, and we can do so by clicking on File – New – Mule Project.
  2. For the purpose of this blog post, we will call our project SQLAzureAPI.  This blog post will take advantage of some of the new Mule 3.5 features and as a result, we will use the Mule Server 3.5 EE Early Access edition and click the Finish button to continue.
  3. A Mule Flow will automatically be added to our solution.  Since we want to expose an API, we will select an HTTP endpoint from our palette.
  4. Next, drag this HTTP Endpoint onto our Mule Flow.
  5. Click on the HTTP endpoint and set the Host to localhost, Port to 8081 and Path to Products. This will allow Mule ESB to listen for API requests at the following location: http://localhost:8081/Products
  6. Next, search for a Database Connector within our palette.  Notice there are two versions; we want the one that is not deprecated.
  7. Drag this Database Connector onto our Mule Flow next to our HTTP Endpoint.
  8. As mentioned in the Prerequisites of this blog post, the Microsoft JDBC Driver is required for this solution to work.  If you have not downloaded it, please do so now.  We now need to add a reference to this driver from our project.  We can do so by right mouse clicking on our project and then selecting Properties.
  9. Our Properties form will now load.  Click on Java Build Path and then click on the Libraries tab. Next, click on the Add External JARs button.  Select the sqljdbc4.jar file from the location where you downloaded the JDBC driver to and then click the OK button to continue.
  10. We are now ready to configure our Database connection and can do so by clicking on our Database connector so that we can specify our connection string. Next, click on the green plus (+) sign to create a new Connector configuration.
  11. When prompted, select Generic Database Configuration and click the OK button.
  12. In the portion of this blog post where we provisioned our Azure SQL Database, it was suggested to make a note of the JDBC connection string.  This is the portion of the walkthrough where we need that information.  Put this value in the URL textbox.  Please note that you will need to update this connection string with your actual password, as it is not exposed on the Microsoft Azure portal.  For the Driver Class Name, find com.microsoft.sqlserver.jdbc.SQLServerDriver by clicking on the button.  Once these two values have been set, click on the Test Connection button to ensure you can connect to Azure SQL successfully.  Once this connection has been verified, click on the OK button.  Note: for the purpose of this blog post the connection string was embedded directly within our configuration.  In certain scenarios, this obviously is not a good idea.  To learn how these values can be set within a configuration file, please visit the following link.
  13. With our connection string established, we need to specify a query that we want to execute against our Azure SQL Database.  The query that we want to run will retrieve all of the products from our Products table.  We want to add the following query to the Parameterized Query text box: Select ID, ProductName, Manufacturer, Quantity, Price from Products
  14. For the purpose of this API, we want to expose our response data as JSON.  In the Mule ESB platform, this is as simple as finding an Object to JSON transformer from our palette.
  15. Once we have located our Object to JSON transformer, we can drag it onto our Mule Flow.  It is that easy in the Mule ESB platform; no custom pipeline components or 3rd party libraries are required for this to work.  In the event we want to customize our JSON format, we can use the Anypoint DataMapper to define our specification and transform our Database result into a more customized JSON format without any custom code.
  16. That concludes the build of our very simple API, but the key message to take away is how easy it was to build this with the Mule ESB platform and without any additional custom coding.  A sketch of the resulting flow configuration is shown below.
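
For reference, here is a minimal sketch of roughly what Anypoint Studio generates for this flow.  It assumes the Mule 3.5 generic database configuration and the pre-3.6 HTTP inbound endpoint, and the namespace declarations are omitted; the server name, user, and password in the JDBC URL are placeholders you would replace with the values from your Azure portal.

    <db:generic-config name="AzureSQL_Configuration"
        url="jdbc:sqlserver://yourserver.database.windows.net:1433;database=muleAPI;user=youruser@yourserver;password=yourpassword;encrypt=true"
        driverClassName="com.microsoft.sqlserver.jdbc.SQLServerDriver"
        doc:name="Generic Database Configuration"/>

    <flow name="productsFlow">
        <!-- Listens at http://localhost:8081/Products -->
        <http:inbound-endpoint exchange-pattern="request-response"
            host="localhost" port="8081" path="Products" doc:name="HTTP"/>
        <!-- Retrieves all products from the Products table in Azure SQL -->
        <db:select config-ref="AzureSQL_Configuration" doc:name="Database">
            <db:parameterized-query>
                Select ID, ProductName, Manufacturer, Quantity, Price from Products
            </db:parameterized-query>
        </db:select>
        <!-- Serializes the result set as JSON for the API response -->
        <json:object-to-json-transformer doc:name="Object to JSON"/>
    </flow>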

Testing our Mule Application

  1. For now, we will just deploy our application locally and can do so by clicking on Run – Run As – Mule Application.

  2. To call our API, launch Fiddler, a web browser, or any other HTTP-based tool and navigate to http://localhost:8081/Products.  As you can see, the contents of our Azure SQL Database are returned in JSON format…pretty cool.

Not done yet…

One of the benefits of the Mule ESB platform is that there is complete symmetry between the On-Premise version of the ESB and the Cloud version.  What this means is that we can build an application and run it locally, On-Premise.  If we decide that we do not want to provision local infrastructure, we can take advantage of MuleSoft’s managed service.  There are no code migration wizards or migration tools.  We simply deploy our Mule Application to a different endpoint.

In MuleSoft’s case, we call our integration platform as a service (iPaaS) CloudHub.  There are too many details to share in this post, so for the time being please visit the CloudHub launch page.

Deploying to CloudHub

  1. To deploy our application to CloudHub there is one configuration change that we want to make.  Within our src/main/app/mule-app.properties file we want to specify a dynamic port for our HTTP Endpoint.  Within this file, we want to specify http.port=8081.

  2. Next, we want to update our HTTP Endpoint to use this dynamic port.  To enable this placeholder, we need to click on our HTTP endpoint and then update our Port text box to include ${http.port}. This will allow our application to read the port number from configuration instead of it being hard coded into our Mule Flow.  Since CloudHub is a multi-tenant environment, we want to drive this value through configuration, as sketched below.
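
As a rough sketch (using the same pre-3.6 HTTP inbound endpoint as earlier in this post), the property entry and the updated endpoint would look something like this:

    <!-- src/main/app/mule-app.properties contains the single line:
         http.port=8081
         CloudHub overrides this value with the port it assigns at deployment time. -->
    <http:inbound-endpoint exchange-pattern="request-response"
        host="localhost" port="${http.port}" path="Products" doc:name="HTTP"/>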

  3. With our configuration value set, we can now deploy to CloudHub by right mouse clicking on our Project and then selecting CloudHub – Deploy to CloudHub.

  4. We now need to provide our CloudHub credentials and provide some additional configuration including Environment, Domain, Description, and Mule Version.  For the purpose of this blog post I am using the Early Access Edition, but prior versions of Mule ESB are capable of running in CloudHub.  Also, note that our dynamic port value has been carried over from our local configuration to our CloudHub configuration.  Once we have completed this configuration, we can click on the Finish button to begin our Deployment.

  5. Within a few seconds, we will receive a message indicating that our Application has been successfully uploaded to CloudHub.  Note that this does not mean that it is ready for use; the provisioning process is still taking place.
  6. If we log into our CloudHub portal, we will discover that our application is being provisioned.

  7. After a few minutes, our application will be provisioned and available for API calls.

  8. Before we try to run our application, there is one more activity that is outstanding.  Earlier in this walkthrough, we discussed how Microsoft Azure restricts access to Azure SQL Databases by providing a firewall.  We now need to ‘white list’ our CloudHub IP Address in Microsoft Azure.  To get our CloudHub IP Address, click on Logs and then set our All Priorities dropdown to System.  Next, look for the lines that indicate our “…application has started successfully.”  Copy this IP Address and then log back into the Azure Portal.

  9. Once we have logged back into the Microsoft Azure Portal, we need to select our MuleAPI database and then click on the DASHBOARD label.  Finally, we need to click on Manage allowed IP Addresses.

  10. Add a row to the IP Address table, include the IP Address from the CloudHub logs, and click the Save icon.

So a question that you may be asking yourself is: what happens if my CloudHub IP Address changes?  The answer is you can provision a CloudHub instance with a Static IP Address by contacting MuleSoft Support.  Another option would be to specify a broader range of IP Addresses to whitelist within the Microsoft Azure portal.  Once again, MuleSoft Support can provide some additional guidance in this area if this is a requirement.

Testing our API

  1. We can now test our API running in MuleSoft’s CloudHub instead of on our local machine.  Once again, fire up Fiddler or whatever API tool you like to use and provide your CloudHub URL this time.  As you can see, our results are once again returned in JSON format, but we are not using any local infrastructure this time!

Telemetry

A mandatory requirement of any modern-day Cloud environment is some level of telemetry for the services that are being utilized.  While the purpose of this post is not to get into exhaustive detail, I did think it would be interesting to briefly display the CloudHub Dashboard after our API test.

 

Similarly, we can also see some Database analytics via the Microsoft Azure portal.

In the introduction of this blog post we discussed a few concepts, including how to:

  • Connect to Microsoft Azure SQL Database using Mule ESB
  • Expose a simple API around our Azure SQL Database
  • Demonstrate the symmetry between Mule ESB On-Premise and its iPaaS equivalent, CloudHub

These different concepts highlight some of the popular trends within the computing industry.  We see the broader adoption of Cloud-based Database platforms.  We are then seeing an explosion of APIs being introduced, and finally we see the evolution of integration platform as a service offerings.  As demonstrated in this blog post, MuleSoft is positioned very well to support each of these different trends. Another important consideration is that we were only ‘scratching the surface’ of the features that are available in the MuleSoft platform, including our comprehensive Anypoint Platform for APIs, which wasn’t even discussed.

Intro to Data Integration Patterns – Bi-Directional Sync

Reading Time: 14 minutes

In this post I will continue talking about the various integration patterns that we used as the basis for our Anypoint Templates. The next pattern to discuss is bi-directional sync. Since bi-directional sync can also be accomplished as two 1:1 broadcast applications combined and pointed in opposite directions, I would recommend reading my last post on the broadcast pattern before digging into this one, since I will omit a lot of the same content.

Pattern 3: Bi-Directional Sync

What is it?

Bi-directional sync is the act of unioning two datasets in two different systems to behave as one while respecting their need to exist as different datasets. The main driver for this type of integration need comes from having different tools or different systems for accomplishing different functions on the same data set. For example, you may have a system for taking and managing orders and a different system for customer support. For one reason or another, you find that these two systems are best of breed and it is important to use them rather than a suite which supports both functions and has a shared database. Using bi-directional sync to share the dataset will enable you to use both systems while maintaining a consistent real time view of the data in both systems.

Why is it valuable?

Continue reading

Dinosaurs in the Land of APIs – Top Integration and API Articles of the Week

Reading Time: 3 minutes

Here’s our weekly roundup of the top 5 integration and API articles of the week.  Take a look, let us know if we missed any, and share your thoughts in the comments.  Don’t forget to follow @MuleSoft to stay up-to-date on integration & APIs!

If you’re interested in Integration and APIs, don’t miss CONNECT 2014 – the event behind the integration revolution!


Un-API Dinosaurs Can’t Leap The Legacy Chasm

How can long-established enterprise IT vendors adapt to a world of nimble startups and avoid extinction? They can’t.


Transform Your Bicycle Into A Connected E-Bike

Bicycles have joined the Internet of Things, providing valuable data for both riders and cities.

Continue reading

Intro to Data Integration Patterns – Broadcast

Reading Time: 25 minutes

In my post yesterday, we did a brief introduction to the migration pattern. Today we are going to do a similar overview of the broadcast pattern which is a kinetic version of the migration pattern.

Pattern 2: Broadcast

What is it?

Broadcast can also be called “one way sync from one to many”, and it is the act of moving data from a single source system to many destination systems on an ongoing, near real time or real time, basis. Typically “one way sync” implies a 1:1 relationship, and to us it is just an instantiation of the broadcast pattern, which is a 1:many relationship; hence we chose the name broadcast, even though it will manifest itself as a 1:1 in many integration applications, like our Salesforce to Salesforce templates that we recently made available.

Whenever there is a need to keep your data up to date between multiple systems, across time, you will need either a broadcast, bi-directional sync, or correlation pattern. The distinction here is that the broadcast pattern, like the migration pattern, only moves data in one direction, from the source to the destination. Now, I know what you are going to ask next: “What is the difference between the broadcast pattern and the migration pattern which is set to automatically run every few seconds?” The main distinction to keep in mind is that the broadcast pattern is transactional, meaning that it does not execute the logic of the message processors for all items which are in scope; rather, it does so only for those items that have recently changed. So you can think of broadcast as a sliding window that only captures those items which have field values that have changed since the last time the broadcast ran. Another major difference is in how the implementation of the pattern is designed. Migration will be tuned to handle large volumes of data, process many records in parallel, and have a graceful failure case. Broadcast patterns are optimized for processing the records as quickly as possible and being highly reliable to avoid losing critical data in transit, as they are usually employed with low human oversight in mission critical applications.

Why is it valuable?

The broadcast pattern is extremely valuable when you have any generic situation where system B needs to know some information in near real time that originates or resides in system A. For example, you may want to create a real time reporting dashboard which is the destination of multiple broadcast applications where it receives updates so that you can know in real time what is going on across multiple systems. You may want to immediately start fulfilment of orders that come from your CRM, online e-shop, or internal tool where the fulfilment processing system is centralized regardless of which channel the order comes from. You may run a ticket booking system that has sales channel partners that broadcast bookings on their sites to your booking system. You may have video game servers that need to publish the results of a game to a player’s account profile management system. You may want to send a notification of the temperature of your steam turbine to a monitoring system every 100 ms. You may want to broadcast to a general practitioner’s patient management system when one of their regular patients is checked into an emergency room. There are countless examples of when you want to take an important piece of information from an originating system and broadcast it to one or more receiving systems as soon as possible after the event happens.

When is it useful?

The broadcast pattern’s “need” can easily be identified by the following criteria:

  • Does system B need to know as soon as the event happens? – Yes
  • Does data need to flow from A to B automatically, without human involvement? – Yes
  • Does system A need to know what happens with the object in system B? – No

The first question will help you decide whether you should use the migration pattern or broadcast, based on how real time the data needs to be. Anything less than approximately every hour will tend to be a broadcast pattern. However, there are always exceptions based on volumes of data. The second question generally rules out “on demand” applications; in general, broadcast patterns will either be initiated by a push notification or a scheduled job and hence will not have human involvement. The last question will let you know whether you need to union the two data sets so that they are synchronized across two systems, which is what we call bi-directional sync and will cover in the next blog post. Different needs will call for different patterns, but in general the broadcast pattern is much more flexible in how you can couple the applications, and we would recommend using two broadcast applications over a bi-directional sync application. However, sometimes you need to union two datasets across two systems and make them feel like one, in which case you would use a bi-directional sync.

What are the key things to keep in mind when building applications using the broadcast pattern?

Input Flow:

You generally have two ways to trigger the broadcast application, and two ways to pass the data between the source system and the integration application. Below I walk through some of the pros and cons of each of the three combinations that make logical sense.

Trigger via a notification and pass the data via payload of the message notification.

  • Pro: minimal number of API calls since the integration application is not asking the system about changes, rather it waits to be notified.
  • Con: the code and/or feature that sends the notification needs to be configured or developed differently, with unique knowledge and skills, for each system, and is managed from a different place for each source system in your environment.
  • Con: if the integration application is not running properly or is down you could lose data, so a high availability or queue type approach is needed to make sure you don’t lose the unprocessed events.

Trigger via a notification message, but have integration application pull the data from the source system.

  • Pro: much easier to achieve a highly reliable system than with the option above, since the pull will only run when the application is healthy and the watermark is only updated once the items are successfully processed. In other words, you get auto retry.
  • Pro: less coding is required; essentially just an HTTP call to initiate the pull, which will do the job of grabbing and moving the data.
  • Con: can become really chatty when the source system has a lot of changes.
  • Con: still requires the coding or configuration of the outbound message notifying the integration application that there is data waiting to be pulled.

Use a scheduled job or poll element which runs the application every X units of time and have the integration app pull the data from the source system.

  • Pro: easy to create a highly reliable system.
  • Pro: only need the skill of using the integration platform and the API or connector. For example, we provide Anypoint Studio and our connector suite to make this even easier.
  • Pro: in very dynamic systems, where changes are on the order of seconds, you can use a polling approach and save API calls since you can batch process many records at a time.
  • Con: in low frequency systems, you will use a lot of API calls to find out that nothing has changed, or risk data sitting for a while before being synchronized if your polling interval is too large.

We designed and built all of our templates using the last approach due to the fact that the full application needed to accomplish the use cases is contained inside the Mule application and does not require any specific code on the source systems. This enables us to keep a much cleaner divide between the systems while still providing the processing logic for anyone who wants to change to a push system by adding an HTTP endpoint that would capture the payload and feed it into our business logic flow. That said, whether a pull or push can be used effectively also depends on the source systems. In our first set of templates they are all Salesforce, which gives you both options. I am certain that future systems will require us to use a push notification model to trigger and/or receive the payload when building templates based on best practices.

Objects and Fields:

One key concept to keep in mind when designing and using broadcast applications is that objects are being moved, but the value is the data in the fields. Usually a flow or process of an application will be built around a specific object. Our templates are designed and built per object to maintain their atomic nature, and can be compounded to create larger applications. Since a flow handles an object, it’s the object that is being moved. This is the only practical way to build the integration, because trying to make an application that moves field values as payloads would be very expensive in terms of development cost and performance given the nature of the system APIs that are used to get this data. So you are generally stuck moving objects, but the values and changes occur at the field level. This means that you have to synchronize whole objects even though only one field has changed, which exposes you to a problem when you have multiple broadcasts feeding into the same system. For example, if you have CRM A and CRM B both broadcasting contacts into CRM C, you will have to handle cases where both systems change the same or different fields of the same contact within the time interval of the poll, or prior to the two messages being processed. The elegant solution here is to have detection logic that notices that there are two messages being processed and merges the values of the two payloads, with the later one winning if the same field was affected. This is something that we have not yet incorporated into our templates, which risks potential data loss in the case that the same objects are modified in two systems within a polling interval (that is usually on the order of a few seconds), but it is something that we are looking to address. Another mitigation to this problem is to define a master in the integration application configuration properties at either the system, object, or field level. Having a clear master means the result will still have minor data loss (or inconsistency between the origin system and destination system) of those field values, but the behavior will always favor one of the systems, so at least you will have consistency, which is something that you can correct periodically or via an MDM solution. Again, this is only a problem at small intervals when you have multiple highly dynamic systems that produce broadcast messages into one common system, or a bi-directional synchronization between two systems. For example, the chance that you would have your sales reps update the same account with two different values within seconds is very low.

Exception Handling:

Ideally the broadcast integration application would never encounter problems: all data would be consistently formatted, no null values would be provided for required fields, APIs would not change, and APIs on both sides would be at 100% availability. The truth is that unfortunately this is not the case. One of the really nice things about our batch module is that it has built-in error handling, meaning that it will tell you which messages errored out and will even pass them to an exception management flow where you can further process the messages to either fix them or appropriately categorize them and report on them. All of this without locking up, meaning that it will continue processing the other records. When we built our templates out, we did not yet capitalize on this feature, but it is something that you should definitely extend before taking a template to production. A simple exception management strategy can be to log all the issues to a file where someone can review them, or pass them via an email to someone to take a look at. A more sophisticated approach would look for the common problems that are known, like a null value in a required field, replace it with dummy data like “Replace me” for a string, and resubmit that message. This would allow the message to pass and be processed correctly, avoiding the loss of the value generated from the rest of the fields in that message. Similarly, you can create a notification workflow based on the values in the payload so that different people are notified based on different errors for those exceptions that cannot be fixed by a prescriptive design time solution. A sketch of what this looks like with the batch module follows below.
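
As a rough illustration only (the job, step, and flow contents are hypothetical, and the exact expressions depend on your Mule 3.5 batch configuration), routing failed records to a dedicated step might look something like this:

    <batch:job name="broadcastAccountsBatch">
        <batch:process-records>
            <batch:step name="pushToDestination">
                <!-- Replace with the connector call that pushes each record to the destination -->
                <logger level="INFO" message="Processing record: #[payload]" doc:name="Process record"/>
            </batch:step>
            <!-- Records that failed in earlier steps are routed here instead of halting the job -->
            <batch:step name="handleFailures" accept-policy="ONLY_FAILURES">
                <logger level="ERROR" message="Failed record: #[payload]" doc:name="Log failure"/>
            </batch:step>
        </batch:process-records>
        <batch:on-complete>
            <logger level="INFO"
                    message="Batch finished: #[payload.successfulRecords] succeeded, #[payload.failedRecords] failed"
                    doc:name="Summary"/>
        </batch:on-complete>
    </batch:job>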

Watermarking

For our broadcast templates we leverage our watermarking feature, which keeps track of when the last poll was done so that we grab only the newest set of objects that have been modified in that time window. This is something that is only needed in the case where the polling mechanism is employed, and not needed when processing a notification that was pushed. In general, using a last modified timestamp as the query parameter in the systems that provide it is the cleanest way to pull only the latest content. In the systems that don’t have something like this, you may have to either process all the records, or first scan a transaction record from which you can derive the objects that need to be updated, or come up with a clever way to capture only those items to be processed.
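
As a sketch of the polling variant only (the flow name and Salesforce query are hypothetical, and the attribute names follow the Mule 3.5 poll/watermark syntax, so your configuration may differ), a watermark-driven poll looks roughly like this:

    <flow name="broadcastAccountsFlow">
        <poll doc:name="Poll">
            <fixed-frequency-scheduler frequency="10" timeUnit="SECONDS"/>
            <!-- Persists the highest LastModifiedDate seen, so the next poll only
                 picks up records changed since the previous successful run -->
            <watermark variable="lastModifiedDate"
                       default-expression="#[server.dateTime.format('yyyy-MM-dd HH:mm:ss')]"
                       selector="MAX"
                       selector-expression="#[payload.LastModifiedDate]"/>
            <sfdc:query config-ref="Salesforce"
                        query="SELECT Id, Name, LastModifiedDate FROM Account WHERE LastModifiedDate &gt; #[flowVars.lastModifiedDate]"
                        doc:name="Query changed Accounts"/>
        </poll>
        <!-- ...transform and push the changed records to the destination system... -->
    </flow>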

Scoping, Transforming the Dataset, and insert vs update.

Given that we covered these topics in the migration pattern post, and that the considerations here are effectively the same, please refer to that post for some additional comments on those themes.

In summary, broadcast is another critical integration pattern and is defined as the act of moving data from one system to one or more other systems in as near real-time a fashion as you want. It is valuable when you have data in one system and would like to have that data in other systems as soon as possible. We also walked through some things to keep in mind when building or using an application based on the broadcast pattern. Check out our integration templates that use the broadcast pattern. In the next post I will walk you through a similar set of information for the bi-directional sync pattern – stay tuned!

Intro to Data Integration Patterns – Migration

Reading Time: 21 minutes

Hi all, in this post I wanted to introduce you to how we are thinking about integration patterns at MuleSoft. Patterns are the most logical sequences of steps for solving a generic problem. Like a hiking trail, patterns are discovered and established based on use. Patterns always come in degrees of perfection, with much room to optimize or adapt based on the business needs they are meant to solve. An integration application comprises a pattern and a business use case. You can think of the business use case as an instantiation of the pattern, a use for the generic process of data movement and handling. In thinking about creating templates, we had to first establish the base patterns, or best practices, to make templates atomic, reusable, extensible, and easily understandable.

When thinking about the format of a simple, point-to-point, atomic integration, one can use the following structure in communicating a Mule application:

Continue reading

What’s your connected world?

Reading Time: 3 minutes

We asked you to describe a connected world, and received some great responses!

Here are a few of our favorites:

In a connected world, my alarm clock would trigger my coffee maker #connect14 

— Sarah Burke (@scbsays) April 4, 2014

In a connected world, my doctor, dentist or surgeon knows my entire history #Connect14 

— Robert Anderson (@RobAnderson505) April 4, 2014

In a connected world, I don’t have to order pizza. It arrives before I know I want it. 😉 #Connect14 

— Marc Calder (@MCalder) April 3, 2014

In a connected world, my sprinkler system self-regulates based on weather forecasts. #Connect14 

— Karan Singh Malhi (@KaranSinghMalhi) April 9, 2014

In a connected world, products are crowdsourced and have zero marginal costs. should join http://t.co/196BgLod9H #Connect14 

— Via Voottoo (@Voottoo) April 10, 2014

In a connected world, a baby’s diaper will send a message to parents when it’s time to be changed.. #Connect14 #smartdiapers 

— James Donelan (@jdonelan) April 3, 2014

In a connected world being reactive is a thing of the past #futureproof #Connect14 

— Nic Clark (@nic4riptide) April 3, 2014

In a connected world your waiter will receive a txt when your glass is empty! #smartbeer #Connect14 

— Melissa Narvaez (@MelissaNarvaez) April 3, 2014

In a connected world, the worlds largest democratic election results can be computed immediately after polls -Good luck India #Connect14 

— Karan Singh Malhi (@KaranSinghMalhi) April 10, 2014

In a connected world, integration is not achieved, it is assumed. #Connect14 

— Marc Calder (@MCalder) April 3, 2014

You can still enter to win MuleSoft gear!

Simply tweet, “In a connected world … #Connect14” and let us know what your connected world is like.


Connect Legacy .NET to Salesforce – Part 1

Reading Time: 22 minutes

A Step-By-Step Guide

SaaS applications like Salesforce have proven to be highly disruptive for many .NET architectures.  In this multi-part blog series, we’ll demonstrate step-by-step how .NET-centric organizations can use existing assets (e.g. legacy .NET applications, SAP) to maximize the value of their Salesforce investment without disrupting underlying code.

Scenario:

In Part 1 of this blog post series we will explore a scenario that has an Auto Insurance company building quotes that allow customers to evaluate their premiums.  As part of this process there are some complicated quote calculations that must take place. These calculations take into consideration a customer’s age, driving record, level of education and location. These calculations are proprietary to the organization, provide a competitive advantage and have broad impacts if mistakes are made while building the calculations in the Quote system.

 In Part 2 we will implement a Quote to Cash scenario that will allow us to integrate our Salesforce instance with SAP and automate a cross boundary business process that increases productivity.

Citing the need for more agility, the Quoting department has identified the existing Customer Information System (CIS) as a risk to the overall business.  The custom CIS is a legacy application that is not flexible and is not accessible from outside the company’s firewall.  This creates challenges for agents that are meeting face to face with customers to discuss requirements and is introducing lag to the sales cycle.

The Quoting department has decided to move the CIS functions to Salesforce where the agility and accessibility issues will be eliminated.  However, moving the CIS functions to the cloud will create an integration challenge for the Quoting System.  Currently the Quoting system leverages the CIS system for customer master data. The team that supports the existing Quoting system does not have any experience integrating with Salesforce nor any real integration expertise.

 Replacing the Quoting system at this point in time is not realistic due to the customized business rules and lack of documentation and internal knowledge of the system.  In order to address this challenge, a decision has been made to migrate the CIS web services from the custom application to the Mule ESB platform.  This will allow the company to take advantage of the capabilities that Salesforce provides while preventing any disruptions to the Quoting system.

The Web Service contracts (or WSDLs) that are provided by the custom CIS system will be moved to Mule ESB where the endpoints will now be hosted. This ultimately means the custom .NET application will not have to be re-coded.  Only the Web Service URL needs to be changed within the Quoting system’s configuration.
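
For example, if the Quoting system consumes the CIS service through a WCF client (an assumption for illustration; the endpoint name, contract, and URL below are hypothetical), the switch could be as small as editing the client endpoint address in its app.config:

    <system.serviceModel>
      <client>
        <!-- Previously pointed at the legacy CIS host; only the address changes -->
        <endpoint name="CustomerService"
                  address="http://muleserver:8081/customerService"
                  binding="basicHttpBinding"
                  contract="CIS.ICustomerService" />
      </client>
    </system.serviceModel>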

 Instead of using the custom CIS for customer master data, Salesforce will be used.  MuleSoft’s Anypoint Connector for Salesforce will allow the Quote system to leverage this master data.  By using this connector, the organization can accelerate the development cycles involved in integrating with Salesforce and deliver value to the business faster than it could by developing point to point interfaces from the .NET application or by using BizTalk Server where significant customization would be required.

Build

The following steps will provide a high level walk through of this build process.

  1. A WSDL (Web Service Description Language) acts as a contract between a consuming application and a server-based web service.  It describes the different message types and data structures that will be passed between these two systems.  To import the existing WSDL into Mule ESB, we can copy it from the CIS system to disk so that we can import it into Anypoint Studio; alternatively, if our WSDL is available via a URL, we can simply make a note of it, as we can import the WSDL using that method as well.

  2. With our WSDL in place we can now create our Mule Project by clicking on File – New – Mule Project from within Anypoint Studio and we will call it InsuranceServiceModernization. For the purpose of this blog post we are going to use Mule ESB 3.5 early access but this also works with the Generally Available version of Mule ESB 3.4.

     

  3. Click the Next button and the Finish button in order to create the Mule Project.

  4. Next, we want to add a Mule Flow to manage our process.  We can do so by right mouse clicking on our flows folder and then selecting New – Mule Flow.

  5. We will call this Mule Flow CreateAccountFlow.

  6. We want to expose this Mule Flow through a SOAP endpoint.  The first step in this process is to drag an HTTP Endpoint from the palette onto the Anypoint Studio canvas.

  7. In order for the HTTP endpoint to be accessed from a consuming application we need to configure the endpoint to include the Host, Port, and Path.

  8. We now need to be able to process the incoming message.  In order to do so we need to include the CXF SOAP Component which is available in our palette.

  9. We can select this component from our palette and drag it onto our Mule Flow.

  10. With our endpoint in place we now want to take advantage of the existing WSDL that we obtained from the legacy .NET service.  We have the opportunity to import this WSDL so that Mule ESB can support the same contract that our legacy .NET Service provided.  In order to import this WSDL we need to click on our CXF SOAP Component and then click on the Generate from WSDL button.

  11. We now need to point our Generate from WSDL wizard at the location of our WSDL.  In this particular case our WSDL has been stored locally on disk, but we could also point it at a URL if necessary.  We also need to provide a Package Name that will be used within all of the Java classes that this wizard will generate.

  12. Once the wizard has completed we will discover that our Service Class field has been populated.  We need to ensure that the class provided represents our Interface contract and not our implementation class.

  13. We will also discover that the Generate from WSDL wizard has created all of the Java classes that represent our Request and Response Objects.

  14. Since we are going to call Salesforce we do not need to implement a Web Method within our Java implementation class.  Instead we are now going to connect to Salesforce to generate the contract required to interface with the Salesforce API.  In order to do this we need to find the Salesforce connector in our palette.

  15. We can now drag this connector onto our Mule Flow.

  16. In order to populate the Salesforce request message we need to connect to our Salesforce instance.  We can create a connection by clicking on the + button.

  17. We now need to make a choice in terms of what type of Salesforce connection we would like to make.  In this case we will choose the Salesforce connection type. This will allow us to use standard Salesforce credentials. The OAuth Global Type addresses scenarios where users have access to a Salesforce resource for a limited period of time. For the purpose of this blog post using the standard configuration is sufficient.  For more information regarding the OAuth Global type please refer to the following documentation.

  18. Valid credentials must be provided in order to connect to our Salesforce instance. These credentials include a Username, Password, and Security Token. If you do not have Salesforce credentials you can sign up for a free developer or trial account here.

  19. To ensure we have provided the right credentials, we can click the Test Connection button to validate them.

  20. After we click the OK button, Mule ESB will connect to Salesforce and generate all of the related metadata from the Salesforce API.

  21. We now want to select the type of Operation and the sObject Type.  By doing so we are instructing the Salesforce connector to perform a Create operation on the Account sObject.  Using this operation will allow us to create a new Account within our Salesforce instance.

  22. Next we want to perform a transformation from the SOAP request message that we received from our Quote application and map it into our Salesforce request.  To do so, we want to drag our Anypoint DataMapper onto our Mule Flow.

  23. Dropping our DataMapper onto our Mule Flow is not enough; we now need to configure it and can do so by double clicking on the shape. We then need to specify the Type of input, which in this case is Pojo.  We also have to provide the name of our Class that represents our request message.  In this case it happens to be com.mulesoft.blogs.Account.  Our Output will automatically be configured since Anypoint Studio has determined that we are connecting to Salesforce based upon the configuration of our Salesforce connector. The final action in this step is to click on the Create mapping button.

  24. DataMapper will now load and we have the ability to create our detailed mapping from our SOAP request message to our Salesforce request message.  DataMapper will automatically derive some mappings and we must create any mappings that are not derived.  It is also worth noting that any custom Salesforce attributes that have been configured within our Salesforce system will appear in this mapping as well.  For example, the DateOfBirth, IsCollegeGraduate and IsUpSell are all custom attributes.

  25. With our SOAP request to Salesforce request mapping complete we will now focus on the response mapping.  But before doing so we will also include a Logger Component on our Mule Flow which will give us some additional visibility into the state of our interface.  We will also drag another DataMapper onto our canvas.  This DataMapper will be responsible for transforming our Salesforce response message into our SOAP response message.

  26. To configure our DataMapper double click on it. This time around our Input will be from the Salesforce Connector and we will select the create Operation and List<SaveResult> Object. Our Output will be of Type Pojo and our Class will be com.mulesoft.blogs.AccountResponse.  Once we have configured this we can click on the Create mapping button to complete our configuration.

  27. This time around DataMapper has anticipated my mapping and there is no additional mapping that is required. This concludes the configuration of our Mule Application; a rough sketch of the assembled flow is shown below.
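
For orientation, here is a minimal sketch of roughly how the assembled flow hangs together.  Element names are approximate for Mule 3.5, the DataMapper configurations are represented by placeholder references, and the endpoint path and service class are hypothetical values for illustration.

    <flow name="CreateAccountFlow">
        <http:inbound-endpoint exchange-pattern="request-response"
            host="localhost" port="8081" path="customerService" doc:name="HTTP"/>
        <!-- Exposes the imported CIS contract so the legacy .NET client sees the same WSDL -->
        <cxf:jaxws-service serviceClass="com.mulesoft.blogs.ICustomerService" doc:name="CXF"/>
        <!-- Maps the SOAP request Pojo to the Salesforce Account request -->
        <data-mapper:transform config-ref="Account_To_Account" doc:name="Request DataMapper"/>
        <sfdc:create config-ref="Salesforce" type="Account" doc:name="Create Account">
            <sfdc:objects ref="#[payload]"/>
        </sfdc:create>
        <logger level="INFO" message="#[payload]" doc:name="Logger"/>
        <!-- Maps the Salesforce SaveResult list back to the SOAP response Pojo -->
        <data-mapper:transform config-ref="SaveResult_To_AccountResponse" doc:name="Response DataMapper"/>
    </flow>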

Deploy

  1. In order to test our new Mule interface we need to deploy it by clicking on Run – Run As – Mule Application.

Testing

  1. After we have updated our configuration within our Legacy .NET application we can launch the application and create an Account.

  2. If we navigate to Salesforce we will discover that our new Account has been created.

     

Conclusion

In this blog post we modernized a legacy .NET application without making any disruptive code changes to the application itself.  In this particular instance we used an existing Web Service contract, or WSDL, and we imported it into Anypoint Studio.  The Anypoint Platform is flexible enough to host this contract, allowing the custom Quote application to function without any code changes.  All we had to do was update the URL, pointing it away from the legacy Customer Information Service and to our new endpoint on the Mule platform.

Not only did we move the legacy Web Service from a .NET application to Mule ESB, but we also integrated our custom Quoting application with a leading SaaS application: Salesforce.  We now have a Quoting Department leveraging an industry leading platform.  We demonstrated how easy it was to build this interface and did not need to write any code while doing so.

We also discovered there are interoperability scenarios that Mule ESB supports with .NET applications. This was demonstrated by taking an existing .NET application and having it consume a Web Service hosted on Mule ESB.  Even though Mule ESB is built upon Java, there were no interoperability issues or work-arounds required. Look for additional interoperability scenarios between .NET applications and Anypoint Platform in the near future as MuleSoft is making significant investments in this area.

Stay tuned for the second post in this series where we will implement a Quote to Cash scenario that will allow us to integrate our Salesforce instance with SAP and automate a cross boundary business process that increases productivity.

Delivering SOA, SaaS integration, and APIs on a single platform will facilitate future business opportunity and decrease resistance in integrating new services. This loosely coupled, frictionless architecture is made possible only with MuleSoft’s Anypoint Platform.

Download the free whitepaper to learn how to unlock the value of your .NET architecture with MuleSoft.

Optimize Resource Utilization with Mule Shared Resources

Reading Time: 5 minutes

In Mule 3.5 we introduced the concept of domains in the Mule container. You can now set up a domain and associate your Mule applications with it. Within a domain project you can define a set of resources (and the libraries required by those resources) to share between the applications that belong to the domain.

Within the mule-domain-config.xml you can define a JMS broker connector that can be reused by all the Mule applications that belong to that domain.

This way, you can share a JMS caching connection factory and reuse JMS client resources, with the consequent optimization of resource consumption.
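
As a rough sketch (assuming an ActiveMQ broker; the exact schema locations and namespaces depend on your Mule 3.5 distribution, so treat this as illustrative rather than copy-paste ready), a mule-domain-config.xml sharing a JMS connector could look like this:

    <mule-domain xmlns="http://www.mulesoft.org/schema/mule/domain"
                 xmlns:jms="http://www.mulesoft.org/schema/mule/jms">
        <!-- Shared connector, visible to every application associated with this domain -->
        <jms:activemq-connector name="sharedJmsConnector"
                                brokerURL="tcp://localhost:61616"
                                maxRedelivery="5"/>
    </mule-domain>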

How to deploy a domain in Mule

Once you have your domain fully configured, you need to deploy it in Mule. You deploy domains in the same manner you deploy Mule applications, but using the domains folder. You can create a zip file with the domain folder content or just move the domain folder to the domains folder.

Using domain level resources from within an application

Now that we have our domain defined, you can use the resources defined in that domain from your applications. To do that you need to specify in your Mule application which domain you want to use. You can only use one domain per application, and you just need to specify the domain in the application’s mule-deploy.properties file.

Then, in the mule application you can start using the shared resources declared by the domain.

As you can see in the application configuration file sketched below, we are using the JMS connector sharedJmsConnector declared in the domain.
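
A minimal sketch, assuming the domain above and a mule-deploy.properties entry along the lines of domain=my-shared-domain (the domain name, queue name, and flow are hypothetical):

    <flow name="ordersFlow">
        <!-- connector-ref points at the connector declared in the domain, not in this app -->
        <jms:inbound-endpoint queue="orders" connector-ref="sharedJmsConnector" doc:name="JMS"/>
        <logger level="INFO" message="Received: #[payload]" doc:name="Logger"/>
    </flow>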

Tips & Tricks

  • By sharing a VM Connector you can consume services exposed by one Mule application from another Mule application that belongs to the same domain. This is the most performant way of consuming internal services.

  • Sharing an HTTP connector allows you to use the same port from several applications.

  • If there’s only one connector defined for a particular transport in the combination of the domain config and the app config files then you don’t need to reference them explicitly using connector-ref.

Mule ESB keeps growing, and with domains it becomes a complete container for applications. Sharing resources is really simple and easy to use since it relies on standard Mule configuration. There is no need to learn a new syntax.

There are several companies using Mule and deploying hundreds of applications within a single Mule container. With shared resources you can optimize the use of connections to JMS brokers and databases or decrease the number of ports in your firewall. You can even achieve better service reuse with shared VM queues.

This is only the beginning. We expect to increase the components that you can share between applications in later releases.

Want to learn more?

If you want to learn more about this feature you can take a look at the early access documentation for 3.5.0 or take a look at the maven archetypes and plugins that we provide to simplify domain creation.

Mule Meets Zuul: A Centralized Properties Management – Part II, Client side

Reading Time: 2 minutes

Before reading on, please take a look at Part 1 of this post.

Connecting a Mule application to the Zuul server requires two additional jars in the application class path. One of them is the Jasypt library, which can be downloaded here. The second one is the zuul-spring-client. You can download the source and build the jar using Maven.

To configure the Zuul client, first add the zuul namespace to the mule tag. You will also need the spring and context namespaces.

Next, configure the zuul spring bean and the spring context referencing this bean:

Note that the value of the config attribute – config="AcmeProperties" – is the name of the properties set that we created on the Zuul server.

Finally, edit the MULE_HOME/conf/wrapper.conf file on each environment and set the variables for environment name and the password used for encryption:

Mule Meets Zuul: A Centralized Properties Management – Part I, Server side

Reading Time: 8 minutes

It is always recommended to use Spring properties with Mule, to externalize any configuration parameters (URLs, ports, user names, passwords, etc.). For example, the Acme API from my previous post connects to an external database. So instead of hard-coding connectivity options inside my application code, I would create a properties file, e.g. acme.properties, as follows:
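
As an illustration (the property names and values below are hypothetical), the properties file holds the connectivity options, and a placeholder declaration in the Mule configuration loads them:

    <!-- acme.properties (hypothetical names and values):
         acme.db.url=jdbc:mysql://test-db.acme.internal:3306/acme
         acme.db.user=acme_app
         acme.db.password=changeme
    -->
    <context:property-placeholder location="acme.properties"/>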

Obviously, as a developer, I would use a test instance of the Acme database to test my application. I’d commit the code to the version control system, including the properties file. Then my application would begin its journey from the automated build system to the Dev environment, to QA, Pre-Prod, and finally Prod – and fail to deploy on production because it wouldn’t be able to connect to the test database! Or even worse, it would connect to the test database and use it, and no one would notice the problem until customers placed a $0 order for an Acme widget which would normally cost $1000, all because the test database didn’t contain actual prices!

Sure, I could just follow the recommendations on our web site and create multiple sets of properties, e.g. acme.dev.properties, acme.qa.properties, acme.prod.properties etc. But instead of solving the problem, it would create a few new ones.

Continue reading