Lean Startup…meet Enterprise

There is a lot of talk about the lean startup and whether it works or not. Some proclaim it is critical to the success of any startup, even the DNA of any modern startup. Others claim that it’s unproven and unscientific, and that it gets your product to market in a haphazard way with little grounding in quality.

But the lean startup model, when you boil it down, simply says that when you launch any new business or product, you do so based on validated learning, experimentation and frequent releases, which allow you to measure results and gain valuable customer feedback. In other words: build fast, release often, measure, learn and repeat.

Real-World Example: Webvan vs. Zappos

Sometimes the best way to look at the lean startup approach is through examples.

Webvan went bankrupt in 2001 after burning through $1 billion on warehouses, inventory systems and a fleet of delivery trucks. Why? They didn’t validate their business model before investing so much in it, and they underestimated the “last mile” problem.

Contrast this with Zappos. Zappos could have gone off and built distribution centers and inventory systems for shipping shoes. But instead, Zappos founder Nick Swinmurn first wanted to test the hypothesis that customers were ready and willing to buy shoes online. So rather than building a website and a large database of footwear, Swinmurn approached local shoe stores, took pictures of their inventory, posted the pictures online, bought the shoes from the stores at full price, and sold them directly to customers who purchased through his website. Swinmurn validated that customer demand was present, and Zappos would eventually grow into a billion-dollar business based on the model of selling shoes online.

Guess who took the lean startup approach? 

Lean Startup Principles

Lean Startup methodology is based on a few simple key principles and techniques:

  • Create a Minimum Viable Product (MVP) that is just feature-rich enough to let you test it in the market. This doesn’t mean the product is inadequate or of poor quality; it means you launch with enough features to collect the maximum amount of data. 
  • Use a continuous delivery model to release new features quickly, with minimal friction and short cycles in between.
  • A/B test different versions of the same feature on different user segments to gain feedback on which is more valuable or easier to use.
  • Act on metrics: if you can’t measure it, you can’t act on it, so instrument the product to ensure that you are always improving it.

Lean Startup Engineering

Lean Startup engineering seems to work for consumer products. Facebook does it: they push new code to their platform at least twice a day, including changes to their API, which over 40,000 developers use to build apps. But what if I’m building an enterprise product or platform? Can I move at the same fast pace? Absolutely.

There are lots of product companies out there that have applied the lean startup model successfully, including Dropbox, IMVU and Etsy. I’ve also been involved in many startups, and I’ve seen the lean startup model work. I think the engineering philosophy behind it makes total sense: move fast, build quickly, automate testing, validate your decisions through data, leverage open source when you can, build MVPs, and get as close to continuous deployment as you can. Not only does it make sense, it’s also a fun and enjoyable way for engineers and product teams to work together.

How We Apply It At MuleSoft

MuleSoft is no longer a startup, but as a high-growth company we’re releasing new software and features at a very fast pace. This stems from our open source roots of releasing early and validating with our community. We have kept that culture today: all our teams use agile development, iterate quickly and make builds available every night for other teams to try out and give feedback on. We beta test new features with early adopters and our community to gain valuable feedback before getting too far into development.

When we launch a new product we define an MVP, focusing on a well-defined set of customer needs, and then expand its capabilities based on value to users without bloating the product with unnecessary features. We continually release products internally, and we release new versions to our customers every 1-2 months, which is pretty much unheard of in the enterprise software space. Having a cloud platform also means we can push silent updates at a much faster pace. To do all of this you need a solid automated testing process and system health monitoring in place, so you can roll back changes if any issues are identified.

We think the approach we take is a win-win for us and our customers. Happy iterating…

Getting started with JPA and Mule

Working with JPA managed entities in Mule applications can be difficult.  Since the JPA session is not propagated between message processors, transformers are typically needed to produce an entity from a message’s payload, pass it to a component for processing, then serialize it back to an un-proxied representation for further processing.

Transactions have been complicated too. It’s difficult to coordinate a transaction between multiple components that are operating on JPA entity payloads. Finally, the lack of support for JPA queries makes it difficult to load objects without resorting to raw SQL and the JDBC transport.

Mule Support for JPA Entities

The JPA module aims to simplify working with JPA managed entities in Mule. It provides message processors that map to an EntityManager’s methods. The message processors participate in Mule transactions, making it easy to structure JPA transactions within Mule flows. The JPA module also provides a @PersistenceContext implementation, which allows Mule components to participate in JPA transactions.

Installing the JPA Module

To install the JPA module, click “Help” followed by “Install New Software…” in Mule Studio. Select the “MuleStudio Cloud Connectors Update Site” from the “Work with” drop-down list, then find the “Mule Java Persistence API Module Mule Extension.” This is illustrated below:

Installing the JPA Module in Mule Studio

Fetching JPA Entities

JPA query language (JPQL) or criteria queries can be executed using the “query” MP (message processor). Supplying a query statement will execute it and return the results to the next message processor, as in the sketch below.
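
A minimal sketch of such a flow, assuming the module exposes a jpa:query element with a statement attribute (the element and attribute names here are illustrative, not confirmed module syntax):

<flow name="findDogsByName">
    <!-- Execute a JPQL statement; the message payload supplies the
         query parameters and the results become the new payload -->
    <jpa:query statement="from Dog dog where dog.name = :name"
               queryParameters-ref="#[payload]"/>
</flow>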

The queryParameters-ref attribute defines the query parameters; in this case, the message’s payload supplies them. The following sketch illustrates how a Map payload could be used to populate named query parameters:
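
Under the same assumptions about element names, a Map payload whose keys match the named parameters might be used like this:

<flow name="findDogsByNameAndBreed">
    <!-- Assumes the payload is a Map, e.g. {name: 'Cujo', breed: 'St. Bernard'};
         its keys populate the :name and :breed parameters -->
    <jpa:query statement="from Dog dog where dog.name = :name and dog.breed = :breed"
               queryParameters-ref="#[payload]"/>
</flow>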

The query processor also supports criteria queries: set queryParameters-ref to an instance of a CriteriaQuery, as sketched in the snippet below.
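
The XML side of such a test might look like the following; the CriteriaQuery itself would be built in Java and stored somewhere an expression can reach it, such as a flow variable (all names here are illustrative):

<flow name="criteriaQueryFlow">
    <!-- flowVars['criteria'] is assumed to hold a
         javax.persistence.criteria.CriteriaQuery built earlier in Java code -->
    <jpa:query queryParameters-ref="#[flowVars['criteria']]"/>
</flow>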

You can use the “find” MP to load a single object if you know its ID, as in the sketch below:
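
A sketch, assuming a jpa:find element (the entityClass and id-ref attribute names, and the Dog class’s package, are placeholders):

<flow name="findDogById">
    <!-- Load the single Dog whose primary key is the message payload -->
    <jpa:find entityClass="com.example.Dog" id-ref="#[payload]"/>
</flow>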

Transactions and Entity Operations

The default behavior of most JPA providers, like Hibernate, is to use proxies for entity relationships to avoid loading full object graphs into memory. When these objects are detached from the JPA session, however, attempts to access relations in the object will often fail because the proxied session is no longer available. This complicates using JPA in Mule applications, as JPA objects pass between message processors and between flows, and the session subsequently becomes unavailable.

The JPA module allows you to avoid this by wrapping your operations in a transactional block. Let’s first look at how to persist an object and then query it within a transaction. The sketch below assumes the message’s payload is an instance of the Dog domain class.
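
A sketch, assuming a jpa:persist element with an entity-ref attribute and that the JPA processors enlist in Mule’s transactional scope:

<flow name="persistAndQueryDog">
    <transactional action="ALWAYS_BEGIN">
        <!-- Persist the Dog instance currently in the payload -->
        <jpa:persist entity-ref="#[payload]"/>
        <!-- Query it back inside the same transaction, while the session is still open -->
        <jpa:query statement="from Dog"/>
    </transactional>
</flow>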

Now let’s see how we can use the merge processor to attach a JPA object to a new session.  This can be useful when passing a JPA entity from one flow to another.
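
Again with assumed element names, merging re-attaches the detached entity carried in the payload:

<flow name="mergeDog">
    <transactional action="ALWAYS_BEGIN">
        <!-- Re-attach the detached Dog in the payload to the current session -->
        <jpa:merge entity-ref="#[payload]"/>
    </transactional>
</flow>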

Detaching an entity is just as simple:
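
A sketch under the same assumptions:

<flow name="detachDog">
    <!-- Evict the Dog in the payload from the JPA session -->
    <jpa:detach entity-ref="#[payload]"/>
</flow>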

Component Operations with JPA

The real power of using JPA with Mule is letting your business services participate in Mule-managed JPA transactions. A @PersistenceContext EntityManager reference in your component class will cause Mule to inject a reference to the transactional flow’s current EntityManager for that method.

We can then wire the component up in a flow:
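
A sketch, with a hypothetical DogService component; the transactional scope gives Mule an active EntityManager to inject:

<flow name="adoptDog">
    <transactional action="ALWAYS_BEGIN">
        <!-- DogService declares a field such as:
                 @PersistenceContext EntityManager entityManager;
             Mule injects the flow's current EntityManager before invoking it -->
        <component class="com.example.DogService"/>
    </transactional>
</flow>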

Conclusion

JPA is an important part of the Java EE ecosystem, and hopefully this module will simplify your use of JPA managed entities in Mule applications.

Installing Mule Studio 3.4 via Update Site or Eclipse Marketplace

Eclipse users have always felt at home in Mule Studio, but users have often asked for Studio to “play well with others” — specifically, that it support plugin-style installation into existing Eclipse environments they already use every day.

With Mule Studio 3.4, we have delivered this wish list item. Specifically, users of Eclipse 3.8 can now install Mule Studio as plugins into their existing environments.

The old-fashioned way to do this is via the Eclipse Update Manager, using the update site http://studio.mulesoft.org/3.4/plugin:

Screenshot of Eclipse Update Manager with Mule Eclipse Plugin Install Site
Using the Mule Eclipse Plugin Install Site

There’s nothing unfamiliar about the install process: tick off all the options (you can omit connectors you don’t plan to use), accept the license, and go. You will receive one warning about installing unsigned content:

Click OK to accept the unsigned content. The plugins install, and once Eclipse restarts, you have Mule Studio via new Mule and Mule Debug perspectives, and all the usual views and menu commands available.

Mule Perspectives and Views in Eclipse 3.8

For a more app-store-like installation process, use the Eclipse Marketplace, where Mule Studio has its own listing.

The Eclipse Marketplace is a cool way to find lots of different plugins for your Eclipse environment without chasing down update sites and managing installation details manually.

If you don’t already have the Eclipse Marketplace plugin, install it using Help->Install New Software:

Installing Eclipse Marketplace Client

Once you have Marketplace Client, the installation is simple:

  1. Visit the Mule Studio listing on marketplace.eclipse.org.
  2. Find the “Install” button on the page, to the left of the product description:
    Mule Studio in Eclipse Marketplace - Screenshot
  3. Drag the “Install” button into an open Eclipse instance, and drop it on the toolbar (above any open tabs):
  4. The Marketplace window opens and identifies the Mule plugins and their dependencies. When the process completes, click Next. Accept the license terms, and click Finish.
  5. As with the update site-based install, you will receive one warning about installing unsigned content.

    Click OK.
Once the installation completes, Eclipse will restart, and Mule Studio is there, mixed in with the rest of your Eclipse tools.
Happy Muling!

Using continuous deployment with CloudHub

Introduction

After creating a basic Mule App, you might be wondering how to automate the process of deploying it to CloudHub. In this post, we introduce a Maven plugin that enables exactly that: the Mule App is deployed automatically to CloudHub after a Maven build, using the cloudhub-deploy goal of the Mule AppKit Maven Plugin.

In an ideal development workflow, each time the project builds, the Mule Application is deployed to the cloud, providing a cutting-edge instance that can be used for QA of the latest snapshot. Either Bamboo or Jenkins can be configured to run Maven and deploy the Mule App to CloudHub.

Show me the code

Given an existing Mule App (created using the Mule Application Archetype), we have a Maven pom.xml file. Check that the project’s packaging type is mule. Then, add the following to the build > plugins section of the pom.xml:

<plugin>
    <groupId>org.mule.tools.appkit</groupId>
    <artifactId>mule-appkit-maven-plugin</artifactId>
    <version>3.4</version>
    <extensions>true</extensions>
    <executions>
        <execution>
            <!-- This can be changed to any Maven phase -->
            <phase>deploy</phase>
            <goals>
                <goal>cloudhub-deploy</goal>
            </goals>
            <configuration>
                <!-- Where the app will be deployed -->
                <domain>${cloudhub.domain}</domain>
                <!-- Max wait time in millisecs before timeout -->
                <maxWaitTime>180000</maxWaitTime>
            </configuration>
        </execution>
    </executions>
</plugin>

The property cloudhub.domain must be set in the properties block. This is where the app is going to be deployed:

<properties>
    <!-- This is the domain where the app will be 
        deployed: i.e. mydomain.cloudhub.io -->
    <cloudhub.domain>mydomain</cloudhub.domain>
</properties>

In the settings.xml file, a server entry must be added with valid CloudHub credentials so that the deploy can take place. These are the credentials used for the deploy, as sketched below.
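
A sketch of that settings.xml entry; the server id shown here is an assumption, so check the plugin’s documentation for the exact value it looks up:

<settings>
  <servers>
    <server>
      <!-- The id must match the server the plugin resolves credentials for -->
      <id>cloudhub.io</id>
      <username>my-cloudhub-user</username>
      <password>my-cloudhub-password</password>
    </server>
  </servers>
</settings>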

Include the plugin repository (where the AppKit Maven Plugin is hosted) in the pom.xml file:

<pluginRepositories>
  <pluginRepository>
    <id>mulesoft-releases</id>
    <name>MuleSoft Release Repository</name>
    <url>http://repository.mulesoft.org/releases/</url>
  </pluginRepository>
  <pluginRepository>
    <id>mulesoft-snapshots</id>
    <name>MuleSoft Snapshot Repository</name>
    <url>http://repository.mulesoft.org/snapshots/</url>
  </pluginRepository>
</pluginRepositories>

After that, run a Maven build through the deploy phase:

$ mvn clean deploy

and the app will be deployed to CloudHub.

Wait! I don’t want to deploy my artifacts yet!

Since the deploy Maven phase also triggers artifact deployment to a remote repository, it can be better to bind the plugin to the verify phase instead. That way, running mvn clean verify achieves the same result without uploading the resulting Maven artifact to a remote repository.
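
The change is just the phase element in the execution shown earlier:

<execution>
    <!-- verify runs after the tests but before install/deploy,
         so nothing is uploaded to a remote repository -->
    <phase>verify</phase>
    <goals>
        <goal>cloudhub-deploy</goal>
    </goals>
    <!-- configuration block unchanged -->
</execution>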

That’s great but where can I find some usage examples?

Some working examples can be found in the Mule AppKit integration tests: here and here.

Happy Hacking!

Data as a Service: An OData Primer

It’s pretty common to hear and read about how everything in the IT business is going “as a service…”. So you start hearing about Software as a Service (SaaS), Platform as a Service (PaaS) and even Integration Platform as a Service (iPaaS, which is where our very own CloudHub platform plays). But what about data?

APIs, they’re everywhere

If you’re an avid reader of this blog, you’ve probably read countless posts about how APIs are everywhere, making the integration of cloud services possible. Sometimes those APIs expose services and behavior, like when the Facebook API lets you change your status or the Box API lets you store a file. But what happens in cases where I plainly and simply want to expose data? What if I don’t need to expose explicit behavior the way Facebook does when sending a friendship request? What if allowing consumers to query and optionally modify my data is enough?

For example, consider President Obama’s Open Data Policy. In case you’re not aware of it, President Obama ordered all government public information to be made openly available in a machine-readable format. That’s A LOT of data feeds to publish. Let’s make a quick list of the things government IT officials would need to carry this out:

  • APIs: In order to consume these feeds, there has to be a way to connect to them. Just publishing the government’s databases on the Internet wouldn’t work, for many reasons (from security to scalability). Some level of communication/scalability/governance layer is also necessary.
  • Standardization: With so many feeds to publish, a common standard for consumption is required. You don’t want to build and maintain a different infrastructure for each feed.
  • Compatibility: It should be easy for existing systems to interact with these feeds.

All of the above is what OData stands for. Initially created by Microsoft and later opened to the public, OData is a REST-based protocol that defines a standard way to expose and consume data feeds. Among its features we can mention:

  • REST based
  • Compatible with ATOM and JSON
  • Metadata support to discover data catalogs
  • Query language including aggregation functions
  • Full CRUD capabilities
  • Batch processing

Open Data Policy is just the tip of the iceberg. Many governments all around the world are taking on similar initiatives. In case you feel that government data is a little bit out of the ordinary compared to your average day at work, let’s take a look at other services that use OData:

  • Microsoft Dynamics CRM uses OData to expose its data catalog. You can query and modify its data and even execute some functionality using navigations.
  • Microsoft Azure uses OData to expose table information
  • Splunk: This Big Data company lets you integrate through an OData API
  • Netflix & eBay: Although recently deactivated, these two were using OData to allow remote queries against their databases.

Where does Mule fit in?

Well, as usual, we have a connector for it. Since OData is a standard protocol, we were able to develop an OData connector that will let you connect to any service that uses it. As of today, the connector supports:

  • V1 and V2 protocol specifications
  • The full set of CRUD operations, including search functions
  • ATOM and JSON feeds
  • Batch operations
  • Marshalling / unmarshalling to your own POJO model

A quick demo

Although the goal of this post is not to dive deep into the connector, let’s take a quick look at the connector’s demo app just to illustrate how it works. This app consumes the OData feed from the city of Medicine Hat in Alberta, Canada. It’s basically an OData API listing public information, such as a list of the city’s buildings. So, let’s see how to consume that!

First, open up Mule Studio and install the connector from the Cloud Connectors update site:

Then, let’s start a flow with an HTTP inbound endpoint. Its configuration should look like this:
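
In Mule 3 XML, that endpoint boils down to something like this (host, port and path are illustrative values):

<http:inbound-endpoint exchange-pattern="request-response"
                       host="localhost" port="8081" path="odata-demo"/>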

Then, drop the OData connector onto the canvas and create the connector’s config:

Notice that V1 and ATOM were selected as the protocol version and format merely because that’s what the team at Medicine Hat used.

Once the config is created, use the Get Entities operation to retrieve all the buildings in the city:

In the screen above, you can see how the CityBuildings catalog was selected for querying, and how you can add filters and projections to the query (although we won’t show that in this demo). Also, notice that we’re specifying a class as the return type. If one isn’t provided, the connector returns an object model that represents the OData model, which works but isn’t really easy to work with. By specifying your own return type, you can easily make an object that carries just the info you need and that is easier to integrate with other components such as DataMapper; in this case, our return type is a simple POJO carrying the building information.

Finally, we just add a Choice Router so that if no results come back, we show a message saying so. If results were found, we transform them to JSON format and print them in the browser. This is what the final flow looks like:

And the Mule XML config looks roughly like this:
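
The full config isn’t reproduced here; in the sketch below, the OData operation and its attribute names are assumptions about the connector’s schema, while the HTTP, choice and JSON elements are standard Mule 3:

<flow name="odataDemoFlow">
    <http:inbound-endpoint exchange-pattern="request-response"
                           host="localhost" port="8081" path="odata-demo"/>
    <!-- get-entities, config-ref and the attributes below are assumed names -->
    <odata:get-entities config-ref="MedicineHat_OData" entitySetName="CityBuildings"
                        returnClass="com.example.CityBuilding"/>
    <choice>
        <when expression="#[payload.isEmpty()]">
            <set-payload value="No buildings were found"/>
        </when>
        <otherwise>
            <!-- Serialize the list of CityBuilding POJOs to JSON for the browser -->
            <json:object-to-json-transformer/>
        </otherwise>
    </choice>
</flow>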

That’s it! Try and enjoy!

Additional resources

Here are a few helpful links:

  • The OData page
  • The Medicine Hat City feed
  • Source code for the OData connector and the sample app shown in this post
I hope you found this post helpful. As always, your comments are very welcome.
Thanks for reading!