Working with Databases (JDBC) in Mule Studio

In this blog post, I’ll give you some background information about JDBC, explain what Mule ESB and Studio do with JDBC, and demonstrate how you can use it in a simple example.

A little reference for JDBC:

JDBC, which stands for Java Database Connectivity, is an API that lets you execute operations over a data source using the Java programming language. This API allows you to connect to almost any data source system, from relational databases to spreadsheets and flat files, and, using the proper SQL syntax, you can perform queries, updates, and deletes, or even execute stored procedures.

What Mule ESB and Studio do with JDBC

Now let’s see how this is architected in Mule ESB. Mule ESB makes this Java code layer transparent to you: simply import a jar file with the driver for a specific data source (MySQL, Oracle, etc.) and write some simple XML code, and you will be able to connect to a data source and manipulate the data in it. Studio adds a friendly user interface that makes Mule XML code very easy to create and edit. The image below gives you a better idea of how all this works:

At the very end of the line is your data source, which can be fed by any other application. Next comes the JDBC driver: as mentioned earlier, this is the Java API implementation provided by the data source vendor that allows Mule to connect to the data source and manipulate its data. Next is our Mule ESB instance, the service that executes the Mule XML code. And finally we have Mule Studio and you.

Studio gives you the framework to easily create the XML code you need and will allow you to test it by executing the code in an embedded Mule ESB instance. So by using Studio, the other layers will be transparent to you.

My kingdom for a Driver!

Before configuring a JDBC connection, the first thing we need is the driver. If you want to keep your kingdom, go to the vendor's website and look for a JDBC driver file, which should come in jar format. Keep in mind that some vendors, like Oracle, may require a license to use the driver. NOTE: On www.jarvana.com you can look for the driver class you need and download the jar file from there. In the example explained below we are going to work with a MySQL database. You can download the driver file from here (registration required) or look for the connector class in jarvana.

Putting our hands to work

Open a new Mule project in Studio, and then follow these steps to get your flow working: a. import the driver, b. create a datasource, c. create a connector that uses our datasource, and finally d. create a simple flow that uses our connector.

a. Import the Driver

Once you have the jar file, the next steps are very simple:

  1. In the Package Explorer, right-click on the project folder (in this case “jdbcprj”).
  2. Look in the menu for Build Path > Add External Archives…
  3. Look for the jar file in your hard drive and click Open.

Now you should see in the Package Explorer that the jar file is listed under “Referenced Libraries.” This will allow you to create an instance of the driver object you will need.

b. Creating a Datasource

Mule and Studio come with predefined configuration elements for the most common datasources: Derby, MySQL, Oracle and PostgreSQL. If you want to use another datasource, you can do so by creating a bean object with the configuration and using that bean as the datasource. Now let’s create a MySQL datasource for our connector:

  1. Go to the Global Elements tab and click on the Create button, which will display a new window.
  2. Look for Data Sources > MySQL Data Source and click the OK button.
  3. In the Data Source configuration window, only three things are needed to make this work: the database name in the URL, the user and the password. Enter those attributes according to your database configuration and click OK (see the generated XML below).
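
For reference, the global element that Studio generates for this datasource looks like the one below (taken from the example flow at the end of this post); adjust the database name, user and password to your own setup:

<jdbc:mysql-data-source name="MySQL_Data_Source" user="root" password=""
    url="jdbc:mysql://localhost:3306/StudioQA" transactionIsolation="UNSPECIFIED"
    doc:name="MySQL Data Source"/>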

c. Create the Connector

Now that we have the datasource with its driver, we need a connector.

  1. From the Global Elements tab, click on Create and look for Connector > Database (JDBC). Then click OK.
  2. The only thing we need to do here is tell the connector which datasource to use. To do this, click on the ‘Database Specific’ drop-down list and look for the datasource created in the previous step. Then click OK.

Optionally, you can go to the Queries tab now and create the queries or SQL statements that you want. If you don’t do this now you will have to do it when configuring an endpoint.
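
In XML terms, the connector and the query created under the Queries tab end up looking like this (again taken from the example flow at the end of this post):

<jdbc:connector name="jdbcConnector" dataSource-ref="MySQL_Data_Source"
    validateConnections="false" transactionPerMessage="true" queryTimeout="10"
    pollingFrequency="10000" doc:name="JDBC">
    <jdbc:query key="Users" value="SELECT * FROM Users"/>
</jdbc:connector>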

d. Creating a flow

Now we have half of the work done. To use our datasource in a flow, we need an inbound endpoint or an outbound endpoint, depending on what we want to do: use a JDBC inbound endpoint if you want the information from a database to feed your flow and do some processing, or use an outbound endpoint if you want to write the information processed in your flow to a database. In either case, you need to do the following:

  1. In the Studio Message Flow view, add a JDBC endpoint (either inbound or outbound) to the flow, and open the configuration window by double-clicking on the endpoint. Note: to add the endpoint, just look for it in the palette and drag and drop it onto the canvas. If you drop it on the canvas outside of any flow, a flow scope will be created and your endpoint will be an inbound endpoint; if you drop it into a flow after any other element, you will get an outbound endpoint. Studio performs these conversions automatically, since flows should always start with inbound endpoints.
  2. Go to the References tab and, in the connector drop-down list, look for the JDBC connector created in step c. By specifying a reference to a connector, we are telling the endpoint how to connect to the data source. The connector configuration is global, so it can be reused by as many endpoints as you want.
  3. Go to the General tab and select the Query Key you want to use in this endpoint. The JDBC endpoint can execute one SQL statement. If you have not created the query in the connector, you can do it now by going to the Queries tab. (Screenshots: the Queries tab with a new query; the query selected in the Query Key drop-down list.)

Following these steps, you are ready to feed your flow by querying your database, create new database records with the information processed in your flow, or execute any statement you need against your data source. Here is an example flow. To use it, just copy the configuration, paste it into the XML Configuration tab and save the project. You should see a flow like this in the Message Flow view:

<?xml version="1.0" encoding="UTF-8"?>

<mule xmlns="http://www.mulesoft.org/schema/mule/core" xmlns:mulexml="http://www.mulesoft.org/schema/mule/xml" xmlns:file="http://www.mulesoft.org/schema/mule/file" xmlns:jdbc="http://www.mulesoft.org/schema/mule/jdbc" xmlns:doc="http://www.mulesoft.org/schema/mule/documentation" xmlns:spring="http://www.springframework.org/schema/beans" xmlns:core="http://www.mulesoft.org/schema/mule/core" xmlns:http="http://www.mulesoft.org/schema/mule/http" xmlns:scripting="http://www.mulesoft.org/schema/mule/scripting" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="CE-3.2.1" xsi:schemaLocation="
http://www.mulesoft.org/schema/mule/xml http://www.mulesoft.org/schema/mule/xml/current/mule-xml.xsd 
http://www.mulesoft.org/schema/mule/file http://www.mulesoft.org/schema/mule/file/current/mule-file.xsd 
http://www.mulesoft.org/schema/mule/jdbc http://www.mulesoft.org/schema/mule/jdbc/current/mule-jdbc.xsd 
http://www.mulesoft.org/schema/mule/core http://www.mulesoft.org/schema/mule/core/current/mule.xsd 
http://www.mulesoft.org/schema/mule/http http://www.mulesoft.org/schema/mule/http/3.1/mule-http.xsd 
http://www.mulesoft.org/schema/mule/scripting http://www.mulesoft.org/schema/mule/scripting/3.1/mule-scripting.xsd ">
    <jdbc:connector name="jdbcConnector" dataSource-ref="MySQL_Data_Source" validateConnections="false" transactionPerMessage="true" queryTimeout="10" pollingFrequency="10000" doc:name="JDBC">
        <jdbc:query key="Users" value="SELECT * FROM Users"/>
    </jdbc:connector>
    <jdbc:mysql-data-source name="MySQL_Data_Source" user="root" password="" url="jdbc:mysql://localhost:3306/StudioQA" transactionIsolation="UNSPECIFIED" doc:name="MySQL Data Source"/>
    <flow name="flows1Flow1" doc:name="flows1Flow1">
        <jdbc:inbound-endpoint queryKey="Users" connector-ref="jdbcConnector" doc:name="JDBC"/>
        <mulexml:object-to-xml-transformer doc:name="Object-to-Xml"/>
        <file:outbound-endpoint path="/Users/myUser/myFolder" doc:name="File"/>
    </flow>
</mule>


We'd love to hear your opinion on this post

52 Responses to “Working with Databases (JDBC) in Mule Studio”

  1. Hello,

    Good Tutorial
    BUT
    I have concerns:

    1. Is it supposed to read the records “one by one” and output them? I was hoping it would create one .xml file with all the data.
    2. Why does it read the table repeatedly? E.g. I had 7 records in my table, so it output those, and after the “pollingfrequency” another 7 records, and so on…

    Happy Integration,
    Danny

    • Hi Danny, good questions!

      About 1: not exactly. The transport will create a map with the result of the query (consisting of column/value pairs) and send the map in a message; you can then take this map and convert it to an XML file by using the EE Maps-to-XML transformer.

      Regarding 2: you can use an acknowledgement query to update the table so that the query you are using does not read the same records again. For example, your query can include the condition ‘AND processed = 0’, and your ack query can update the processed attribute and set it to 1, so that when the query is executed again the records already read no longer match. When an inbound SELECT query is invoked by Mule, the ACK statement is invoked for each record returned by the query; typically the ACK statement is an UPDATE, INSERT, or DELETE.
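
      As a rough sketch (assuming your table has an id column and a processed flag; the “.ack” key suffix and the #[map-payload:...] expression follow the conventions in the JDBC transport reference linked below), the connector queries could look something like this:

      <jdbc:connector name="jdbcConnector" dataSource-ref="MySQL_Data_Source" doc:name="JDBC">
          <!-- read only records that have not been processed yet -->
          <jdbc:query key="Users" value="SELECT * FROM Users WHERE processed = 0"/>
          <!-- ack query, executed for each record returned by the read query -->
          <jdbc:query key="Users.ack" value="UPDATE Users SET processed = 1 WHERE id = #[map-payload:id]"/>
      </jdbc:connector>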

      You can find more information about these points here: http://www.mulesoft.org/documentation/display/MULE3USER/JDBC+Transport+Reference

      Hope this helps you.
      Cheers!

  2. Hi Matias,

    Greetings from Mexico.

    I’m new to Mule Studio and I’m trying to do a similar example using an Oracle data source. I have the CE 3.2.1 version and the EE 3.2.2 trial version. The project runs OK in the CE version, but when I try to run the same flow in my Studio with Mule Server EE it throws the following exception:

    org.xml.sax.SAXParseException: cvc-complex-type.2.4.a: Invalid content was found starting with element ‘jdbc:oracle-data-source’.

    Could you help me with this error please? Is there a bug or something with datasources in EE?

    This is the complete xml of my flow:

    Any help will be appreciated!

    • Hi Irasema, thanks for your post.

      There is a known issue regarding the datasources in the EE version, which is already fixed; you can check it in our public Jira: http://www.mulesoft.org/jira/browse/STUDIO-1375
      This will work out of the box in the next release, which will be out soon. The workaround for now is to add the JDBC CE namespace to your EE project; that should allow you to use the Oracle datasource.

      Please let me know if you need more help on this.

      Cheers!

  3. I couldn’t attach my XML flow, but it is almost the same as yours, just with an Oracle datasource. Thank you.

  4. Thanks for the article Matias. I am trying to read records out of SQL Server and I have a flow very similar to the one you used in the article as an example.

    The problem I’m having is that Mule reads about 150 records out of 25k records when I go to output the data. I’m using the CE version of Mule.

    I read on the forums that this might be a memory constraint issue.

    What’s the best way to get 25k records out from SQL Server using JDBC and then written somewhere else?

  5. As it turned out it seemed to be a restriction on where I was sending the records. I was reading from SQL Server, transforming to JSON and then writing to Mongo. A sort of relational to document thing. =)

    I guess I was overwhelming Mongo, so I decided to send the data to JMS first to act as a buffer. Then I can send data to Mongo in a more controlled fashion.

    As a note, writing via JMS handled the query results just fine.

    • That explains why you didn’t receive an exception 🙂

      Good to know, thanks for posting your resolution.

  6. I am trying to use JDBC in Mule Studio EE, but I am getting the error ‘invalid content was found starting with element ‘jdbc:oracle-data’. I checked http://www.mulesoft.org/jira/browse/STUDIO-1375 but I am not sure exactly how to implement this.
    Below is the XML:

    Can some expert please let me know what I need to change in the above source to get JDBC working?

  7. Thanks Matias. I am using the latest downloaded version of Mule ESB with Studio (MuleStudio-EE-for-Windows-32bit_20120315.zip).
    Below is the XML code:

  8. Hi Matias,
    Thanks for this clear explanation. I had just one problem. What I get as a result is a series of .dat files in the output folder. What happens to the resultset after querying the table? Using the Community edition, I want to implement the transformers myself. Could you give me some guidelines on how to gather those small column messages and send them as a serialized XML message?
    Best regards,
    Sherry

    • Hi Sherry, I am not sure I understood correctly, but the .dat files are being created because I used a file outbound endpoint and didn’t configure much in it; you can take the query result and do whatever you need with it. If you want to create a custom transformer, you can pick a Script transformer from the palette, which will allow you to write the code for your custom transformer.

      The query will return a list of maps (key/value pairs), and the Object to XML transformer will map those values into XML but will not be able to create the exact XML structure you want. To do that you can use DataMapper, which will be available in the next Studio EE version; otherwise you can write your own code to map it. Hope this helps!

      Thanks for your comments!
      Regards.

  9. Hi again,
    Thanks for the information. I have coded the JDBC result set to XML conversion in Java. How can I get the result set from the JDBC endpoint and pass it to a custom transformer that translates the result set object into proper XML to be passed to JMS? The code that I wrote takes the result set object and then generates XML out of it.
    It would be good if I could have a guideline on that as well.

    • Hi Sherry, sorry for the delay in the answer, I’ve been a little busy lately.

      I think that to address your problem, the first thing you need in your flow is a JDBC endpoint with its connector and a query to get the result you want. Then you need a Custom transformer in which you reference the class you created, and after the transformer you will need a JMS endpoint with a connector configured for the JMS service you are using. Something like the sketch below should be enough.
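
      Just as an illustration (the flow name, transformer class and queue name are made up, and the jms namespace has to be declared in your configuration):

      <flow name="dbToJmsFlow">
          <jdbc:inbound-endpoint queryKey="Users" connector-ref="jdbcConnector" doc:name="JDBC"/>
          <custom-transformer class="com.example.ResultSetToXmlTransformer" doc:name="Java"/>
          <jms:outbound-endpoint queue="results" connector-ref="jmsConnector" doc:name="JMS"/>
      </flow>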

      You can check the JMS reference configuration: http://www.mulesoft.org/documentation/display/MULE3USER/JMS+Transport+Reference

      But it’s pretty similar to JDBC: you will need to configure an endpoint with the queue, and the connector with the information about your broker.

      Hope this helps you.
      Cheers!

  10. Hi,
    Thanks for your explanation. But could you give more details on how I can get the hashmaps from JDBC and use them in my custom class? I have seen some custom transformers, but nothing that works on the hashmaps and transforms that type of data. I need this for my thesis and cannot use DataMapper because I should implement this myself. I even thought maybe you have some sample code for this type of transformer.

    I am near the deadlines, hope you reply soon.
    Regards

    • Hi Sherry, if I understand correctly, I think the problem is that Mule is returning a map as the result of the query while you are expecting a JDBCResultset. One option is to modify your class to receive the map instead of the JDBCResultset; each record from the query result set is converted into a Map (consisting of column/value pairs).
      Another option, possibly faster to try, is to go to the Advanced tab of the JDBC endpoint configuration and check the “Disable Transport Transformers” checkbox; this should prevent Mule from applying any transformation over the result set and return you the raw object (see the snippet below).
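
      In the XML, that checkbox should translate into something like the endpoint attribute below (just a sketch; double-check the attribute name your Studio version generates):

      <jdbc:inbound-endpoint queryKey="Users" connector-ref="jdbcConnector"
          disableTransportTransformer="true" doc:name="JDBC"/>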

      Hope this helps you.
      Regards!

  11. Hi Matias,
    Thanks for your reply. In the documentation about the JDBC endpoint, it describes a “List of Maps” as the input for the transformer that comes with Mule EE. This transformer is called Maps-to-XML; I want to write the same transformer myself. As input it gets a java.util.List, which is a
    list of maps (each Map will be converted into a “record” in the XML). I want to know: if I have a 5-column table with 100 records, what will the structure for each record in the table be? Like what I wrote below?
    Record1—-> (id,value),(name,value),(lastname,value),(title,value),(Tel,value)
    When we query the table with a SELECT statement, I think it returns each row separately; it polls the database, right? So do I need to gather all those maps in a list and then do the transformation, or do I already get the list from the JDBC endpoint?
    How can I test what output I get from JDBC?
    I need this for my Thesis, please give me feedback ASAP 🙂
    Cheers,
    Sherry

  12. Hi Matias,

    I am getting multiple output files, i.e. my SQL query should return only 4 records, but the query is getting executed over and over and my output folder is full of files now.

    Please suggest how to make the query run only one time.

  13. Also, it should generate an XML file, but it generates .dat, i.e. a flat file. Why?

  14. My company is evaluating Mule ESB and Mule Studio, but we are a .NET development house, and some of the intricacies of Java elude us. Can you create a blog post about using Mule ESB with Mule Studio and connecting to an MS SQL Server?

    • Hi Joe, there shouldn’t be much difference between this blog, which uses MySQL, and using SQL Server. The main difference is that you will need to import the SQL Server driver jar and then create a Spring bean to use as a datasource; in the bean you have to specify the driver class name and other values such as the URL, user and password.

      Here is an example of a bean:

      
      <spring:beans>
          <spring:bean id="jdbcDataSource"
            class="org.springframework.jdbc.datasource.DriverManagerDataSource"
            doc:name="Bean">
            <spring:property name="driverClassName" value="com.mysql.jdbc.Driver" />
            <spring:property name="url"
              value="jdbc:mysql://localhost/Test?user=root&amp;password=" />
          </spring:bean>
      </spring:beans>
      
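      You can then point the JDBC connector to that bean through its dataSource-ref attribute, for example:

      <jdbc:connector name="jdbcConnector" dataSource-ref="jdbcDataSource"
          validateConnections="false" doc:name="JDBC"/>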

  15. Hi Matias
    Is it possible to capture the return value from a stored procedure? I can see we can use output parameters but just thinking about calling existing stored procs that may return an error status.

  16. […] with Mule Studio 3.3.1. If you haven’t already, you can download Mule here. Enjoy! […]

  17. Hi rayan, does Mule support 4GL databases? Is there any option for it?

  18. Hey Matias, thanks for the tutorial. One question: I’m using a PostgreSQL DB, and I want to return the ID of the row I just inserted, i.e. INSERT INTO test(id,name) VALUES (DEFAULT,’Name’) RETURNING id;
    But I keep getting the error: org.postgresql.util.PSQLException: ERROR: syntax error at or near “RETURNING”
    Do you know if this is possible? Thanks for the help

    • Hi hgrayb

      The RETURNING statement is not yet supported by the JDBC connector provided by the ESB, but it is supported by the JDBC driver. You can create a Java component to execute the query using the RETURNING statement instead of using the JDBC endpoint, or another way would be to add a second JDBC endpoint that executes a query to get the last id. If you are using the Community version and you are experienced with Java, you can also contribute to the JDBC connector to add the support.

      Hope this helps you.
      Cheers

  19. minor detail: you don’t have to register to download the MySQL driver– just scroll down and you’ll find the “No thanks, just give me the download” link 🙂

  20. Could you expand on how to use a Java Component to execute custom queries?

  21. Hi Matias,
    two times you said something like this:
    “[…] or other way would be to add a second JDBC endpoint to execute a query to get the last id”.
    This is a really bad idea!
    In a high-concurrency environment you may have several other inserts with the stored procedure before the second JDBC endpoint executes, and you get the wrong last inserted id back.
    This is a task for a stored procedure with a transaction.
    And because of the lack of the feature to return data from a stored procedure in the Community edition of Mule, this is an impossible task.

    Do you know how to get the JDBC Driver (the configured database connector) within a custom component to execute the stored procedure by myself to get the result of it back?

  22. Hi,
    Can I get user and password for Global CMIS configuration information by database and not by properties file?
    Thanks in advance.

  23. I have a problem using my own QueryRunner with the EEJDBCConnector. I want to improve performance by changing the prepared statement to set the fetchSize property:

    public Object query(Connection conn, String sql, ResultSetHandler rsh,
            Object[] params) throws SQLException {

        PreparedStatement stmt = null;
        ResultSet rs = null;
        Object result = null;

        try {
            System.out.println(sql);
            stmt = this.prepareStatement(conn, sql);
            // custom fetch size to improve performance on large result sets
            stmt.setFetchSize(512);
            this.fillStatement(stmt, params);
            rs = this.wrap(stmt.executeQuery());
            result = rsh.handle(rs);
        } catch (SQLException e) {
            this.rethrow(e, sql, params);
        } finally {
            // release JDBC resources even if the query fails
            try {
                if (rs != null) { rs.close(); }
            } finally {
                if (stmt != null) { stmt.close(); }
            }
        }

        return result;
    }

    The EEJDBCConnector just never calls the overridden code. Can someone give me some advice? Where can I find the source code of the EEJDBCConnector?

    Thanks

    • Hi Tommy, if you are using our EE version I believe you should be able to contact our support team; they will be able to give you a better and faster answer for this.

  24. I use Mule Studio Version 3.3 with an Oracle database connection.
    My problem is: I execute an SQL SELECT statement with a database connector and get the result as the message payload, but I need the result’s columns and values as (session) variables instead of overriding my payload.
    Do you have a solution? Thanks

    Best regards
    Caro

  25. Hello

    I use Mule Studio Version 3.3 with an Oracle database connection.
    My problem is: I execute an SQL SELECT statement with a database connector and get the result as the message payload, but I need the result’s columns and values as (session) variables instead of overriding my payload.
    Do you have a solution? Thanks for your help

    Caro

  26. Hi matias.pentreath,

    Nice post indeed. Very helpful.

    I have a small question: do you have any idea how we can create a database connection (MySQL) and fetch data inside a Mule script using Groovy?

    If yes, could you please share sample code for that?

    Thanks a lot

  27. Hi Matias, Thank you for the useful content
    I’ve recently started working with Mule. Can you help me with how I can define tables in a database and read/write data to them?
    Thanks

  28. Thank you for the post. When I use global endpoints, can I avoid saving the database password in clear text in the final Mule XML file? This is mandated for audit reasons. Is there a simpler way to dynamically set JDBC connection properties, especially when I move from one database instance to another during the development cycle?

    • Hi Balajee, yes, you can do that by using property placeholders and the mechanisms provided by Spring, which let you use encrypted properties (see the sketch below).
      When developing, you can also create different properties files for different environments. It is not a short topic to summarise here, so I’d suggest searching for more info about those Spring features.
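
      A minimal sketch (assuming a db.properties file on the classpath and that the Spring context namespace is declared in your configuration; encrypted values would need an additional Spring/Jasypt setup):

      <context:property-placeholder location="db.properties"/>

      <jdbc:mysql-data-source name="MySQL_Data_Source"
          user="${db.user}" password="${db.password}"
          url="${db.url}" doc:name="MySQL Data Source"/>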

  29. I am sure this paragraph has touched all the internet users;
    it’s a really, really good post on building up a new blog.

    • Hi, thanks a lot for the feedback. I really want to write a new, more up-to-date blog post around this topic 🙂 I just need to find some time.

  30. Hi Matias,
    I need a Mule example to pull data from a database to a flat file. Could you suggest a link, please?

  31. Hi Matias,

    I have just started to use Mule and implemented the example given in this blog. I have a few queries, as below:
    1) After running the example, .dat files are generated with random names. How can I give them a custom name and extension?
    2) A series of .dat files keeps getting generated. How do I stop that?

    Please resolve my queries at the earliest.

    • Hi Ananad,

      1) You can use the outputPattern property on the File endpoint to set the names of the files, and you can use MEL (Mule Expression Language) expressions there, as in the sketch below.
      2) The .dat files are generated on each query execution, so you need to configure your flow so that it executes the query only when you want. You don’t really need the .dat files; they are here mostly for learning purposes. What you do after the query execution depends on what you want to do with the result: you can transform the payload, do an insert, hit a website, whatever you need for your integration or application.
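
      For example (the pattern itself is just an illustration), the File endpoint from the post could become:

      <file:outbound-endpoint path="/Users/myUser/myFolder"
          outputPattern="users-#[function:datestamp:yyyyMMdd-HHmmss].xml" doc:name="File"/>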

      Hope this helps you.

      If you want to check the latest version of Mule, there is a new, simplified way to use databases: /release-mule-3-5-early-access/

  32. I configured an AS400 bean and database to use it as a datasource. I added the external archive as well. When I run the test within Studio it runs fine, successfully. But when I start the project, it gives me the error “No suitable driver found for jdbc:as400…”. What could be the problem? Thank you!

    • Hi Arkadiy, what version of Studio are you using? It is odd, as Studio builds the same classpath.
      Here is what you can check: run your application and go to the workspace directory. There you should have a .mule folder and, inside it, the apps folder where your application is deployed. There you can check that the jar is present in the project structure.

  33. It’s getting to be the end of my day, but before I finish I am reading this fantastic
    article to increase my knowledge.
