Introducing Hadoop (HDFS) Connector v5.0.0

October 18, 2016


View our Hadoop Connector page here.

According to a recent survey conducted by DNV GL – Business Assurance and GFK Eurisko, 52% of enterprises globally see big data as an opportunity, and 76% of all organizations are planning to increase or maintain their investment in big data over the next two to three years. In line with the survey, there is growing interest in big data from MuleSoft’s ecosystem, which we are happy to support with our Anypoint Connector for Hadoop (HDFS) v5.0.0.

The Hadoop (HDFS) Connector v5.0.0 is built on Hadoop 2.7.2 and is tested against Hadoop 2.7.1 / 2.7.2 and Hortonworks Data Platform (HDP) 2.4, which includes Hadoop 2.7.1. In this blog post, I’d like to walk you through how to use the Hadoop (HDFS) Connector v5.0.0 with a demo app called “common-operations”.

Before we start, please make sure you have access to Hadoop v2.7.1 or newer; if not, you can easily install it from the Apache Hadoop website. For the following demo, I’m going to use Hadoop 2.7.2 installed locally on my Mac. After I run Hadoop 2.7.2 and hit localhost:50070, I can see the following page. (You might see a slightly different view depending on your Hadoop version.)


Before you try the connector, please make sure you have Hadoop (HDFS) Connector v5.0.0 installed in Anypoint Studio. If not, please download it from the Exchange.


Once you download the common-operations demo app from this page and import it into Studio, you will see the following app, which demonstrates CRUD operations on files and directories.


After you import the demo app, select “Global Element” and open the “HDFS: Simple Configuration” by clicking on “Edit”.


You can specify your HDFS configuration directly here, but I recommend externalizing the values in a properties file instead. In the properties file, configure the following keys:

config.nameNodeUri=hdfs://localhost:9000 (Yours can be different)
config.sysUser= (I have not set up any sysUser.)
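With those placeholders in place, the global element can reference them instead of hard-coded values. Below is a sketch of what that looks like in the app’s XML; the attribute names follow the connector’s simple configuration and may differ slightly between connector versions:

```xml
<!-- Sketch only: global element backed by property placeholders.
     Attribute names may vary by connector version. -->
<hdfs:config name="HDFS__Simple_Configuration"
             nameNodeUri="${config.nameNodeUri}"
             username="${config.sysUser}"
             doc:name="HDFS: Simple Configuration"/>
```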

If you start the demo app in Studio and hit localhost:8090/ with your browser, you will see a simple HTML page that helps you play with the operations supported by Hadoop HDFS Connector v5.0.0.


You can simply create a file with the “Create File” form. I created the hellohdfs.txt with the following information:

Path: connectordemo/hellohdfs.txt
Content: Connect anything. Change everything.


As you can see below, hellohdfs.txt is created under /connectordemo.
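Under the hood, creating a file in HDFS can also be done over WebHDFS, Hadoop’s REST API, which is handy for sanity-checking what the demo app just did. The sketch below builds the WebHDFS CREATE URL for the same path; the host and port are assumptions matching the local setup above, and actually writing the file requires a live cluster with WebHDFS enabled:

```python
# Sketch: the WebHDFS equivalent of the "Create File" form above.
# Assumes a NameNode at localhost:50070 with dfs.webhdfs.enabled=true.
from urllib import parse

NAMENODE = "http://localhost:50070"

def webhdfs_create_url(path, overwrite=True):
    """Build the WebHDFS CREATE URL for `path` (step 1 of the two-step PUT)."""
    query = parse.urlencode({"op": "CREATE", "overwrite": str(overwrite).lower()})
    return "{}/webhdfs/v1/{}?{}".format(NAMENODE, path.lstrip("/"), query)

url = webhdfs_create_url("/connectordemo/hellohdfs.txt")
# Against a live cluster, the NameNode answers this PUT with a 307 redirect
# to a DataNode, and the file body is then PUT to that redirect location, e.g.:
# urllib.request.urlopen(urllib.request.Request(url, method="PUT"))
```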


While you can try out other operations, I’d like to highlight a new operation called “Read from path”, which we added in Hadoop HDFS Connector v5.0.0. With this new version, the connector can read the content of a file designated by its path and stream it to the rest of the flow. You no longer have to drop a Poll component in the flow’s source to periodically fetch a file. To try this out, first specify the path (e.g. /connectordemo/hellohdfs.txt) and change the initial state of the flow from “stopped” to “started”.
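Outside of Mule, the closest analogue to “Read from path” is the WebHDFS OPEN operation, which also streams a file’s bytes by path. The sketch below builds the OPEN URL for the demo file; the host and port are assumptions matching the local setup earlier in the post, and the streaming part needs a live cluster:

```python
# Sketch: approximating "Read from path" with the WebHDFS OPEN operation.
# Assumes a NameNode at localhost:50070 with dfs.webhdfs.enabled=true.
from urllib import parse

NAMENODE = "http://localhost:50070"

def webhdfs_open_url(path, offset=0, length=None):
    """Build the WebHDFS OPEN URL for reading `path`, optionally a byte range."""
    params = {"op": "OPEN", "offset": offset}
    if length is not None:
        params["length"] = length
    return "{}/webhdfs/v1/{}?{}".format(NAMENODE, path.lstrip("/"), parse.urlencode(params))

url = webhdfs_open_url("/connectordemo/hellohdfs.txt")
# With a live cluster you could stream the content in chunks, e.g.:
# with urllib.request.urlopen(url) as resp:
#     for chunk in iter(lambda: resp.read(8192), b""):
#         handle(chunk)  # hypothetical per-chunk processing
```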


For new users, try the above example to get started, and for others, please share with us how you use or are planning to use the Hadoop HDFS Connector! Also, explore the Anypoint Exchange to see other resources you can leverage today.



6 Responses to “Introducing Hadoop (HDFS) Connector v5.0.0”

  1. Does this also work on @cloudera ? They lead the Hadoop market in innovation, security, management, performance, and much more.

    • Kevin,

      It should work since HDFS in Cloudera distribution wouldn’t be different from HDFS in Apache Hadoop.


  2. Hi,

    One of our clients is using MuleSoft and wants to integrate with Hortonworks Hadoop.

    We wanted to check which versions of Hortonworks Hadoop MuleSoft supports.

    I can see from an Oct 2016 blog post that MuleSoft is compatible with Hortonworks HDP 2.4.

    Can you please let us know the plan for Hortonworks HDP 2.5, specifically 2.5.3?

    Sreehari Takkelapati,
    Sprint Telecommunications

    • Sreehari,

      Glad to know our mutual customers are interested in using this connector to integrate with Hortonworks (HDP). Since HDP 2.5 includes Hadoop 2.7.3, the HDFS Connector should be able to work with HDP 2.5 as well.


  3. We are not able to establish a secure connection from CloudHub to Impala on a public IP with the default port 21050, though we can SSH to it.

    The customer does not want to whitelist the CloudHub IP, as any change in core configuration or a redeployment would change the IP. Any suggestions?

  4. I am working on a client requirement where a Mule ESB (REST API) needs to be integrated with Hadoop to consume data in real time (request/response). How would WebHDFS be integrated with Mule ESB? What about the YARN configuration needed to meet performance requirements? And what about the Knox gateway?