Mule is already running on premises and in the cloud, and soon it will run virtually everywhere thanks to a new project called Anypoint Edge. This initiative focuses not only on allowing Mule to run on lightweight platforms, like the Raspberry Pi, but also on supporting protocols and devices relevant to the embedded world, like MQTT and ZigBee. The “Internet of Things” had its rabbit; now it’s going to have its Mule!
If you follow this blog, you may remember that we’ve discussed MQTT before. We introduced the Mule connector for this lightweight publish/subscribe protocol with an example that used MQTT for conference booth QR Code scanners. In this post, we’re circling back to that example and adding a new twist: we’re taking it to the Edge!
In the original architecture, Mule ran on a laptop, using its webcam to scan QR Codes. Now that Mule is able to run on the Raspberry Pi, we will revisit the example to run on this tiny device. We will also use the brand-new blink(1) connector, which will allow us to use this smart USB RGB LED to flash a visual confirmation when a scan has completed successfully.
So let’s start by reviewing our architecture diagram and unroll the example from there.
Architecture
We’re going to build on the architecture and implementation of the previous incarnation of this scenario, so if you haven’t read it already, take a moment to review it.
The main components in this architecture are:
- A USB webcam that captures QR Codes (using the excellent ZBar toolkit) and writes their content into files,
- A blink(1) USB LED device that provides visual feedback to attendees that their QR Codes have been successfully scanned,
- Anypoint Edge running on the same Raspberry Pi that controls the webcam,
- A single Mule flow that picks up the generated files, transforms their CSV content into JSON, publishes the captured data to an MQTT broker and flashes an animation sequence on the blink(1),
- Any number of remote systems subscribed to the relevant MQTT topic in order to be informed of all attendee QR Code scans that occur.
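As a sketch of that last component, a remote system could subscribe with a Mule flow along the lines below. The broker URI, topic name and the exact MQTT connector element names are assumptions for illustration, not the actual values or verbatim configuration used at the booth:

```xml
<!-- Hypothetical subscriber configuration; broker URI and topic name are made up -->
<mqtt:config name="mqttConfig" brokerServerUri="tcp://broker.example.com:1883" />

<flow name="attendeeScanSubscriber">
    <!-- Receive every scan record published by the booth scanners -->
    <mqtt:subscribe config-ref="mqttConfig" topicFilter="conference/booth/scans" />
    <!-- Log the JSON payload describing the scanned attendee badge -->
    <logger level="INFO" message="Attendee scan received: #[message.payloadAs(java.lang.String)]" />
</flow>
```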
Implementation
The Raspberry Pi runs on an ARM processor, so to run our application we will use:
- Oracle’s Java SE Embedded virtual machine — This version of the JVM provides all the features of Java Standard Edition, which is needed by Mule, while being able to run on the ARM architecture. It can be downloaded here.
- Anypoint Edge — A special distribution of Mule that contains the sub-set of transports and modules most relevant to an embedded environment. Anypoint Edge also comes with service wrapper libraries that are compatible with the ARM architecture, allowing Mule to run as a service just like in any regular production environment. Anypoint Edge is still in development, but please reach out if you would like early access.
We will also use the blink(1) connector to control the LED device. Thanks to this connector, we will program a pattern of flashing green light to indicate success, a pattern that we will trigger whenever a message has been successfully dispatched over MQTT.
Before going any further, and to please the hidden geek in you (don’t deny it), here is a picture of the actual contraption, which would only need a 3D-printed custom case to be truly fantastic.
Yep, this is what embedded Mule looks like! Do not panic: the software side of things is way more elegant.
Since we’ve already covered the Mule configuration needed to make this work (minus the blink(1)) in the aforementioned blog post, we’ll only cover here what’s different. Note that running on Anypoint Edge induces no difference at all: the same configuration works the same way!
Note that some transports, like JDBC, and some modules, like jBPM, are not available in the Anypoint Edge distribution because they are too heavyweight or unnecessary for embedded-type scenarios.
The first notable difference consists of the flows used for setting up and resetting the blink(1) device. They are shown hereafter:
We have wired these flows to the notifications Mule sends when its configuration is fully loaded and when it starts to unload (CONTEXT_STARTED and CONTEXT_STOPPING, respectively), so these custom flows run at the right moments. We define only two steps (“green” and “off”) in the pattern stored on the blink(1): that’s enough, since the device will loop it automatically for us. We will play this pattern for one second after each successful dispatch over MQTT. How is this done? Take a look at the following flow:
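In sketch form, the setup and reset flows look roughly like this. The blink(1) operation and attribute names below are assumptions about the connector’s XML, not verbatim configuration, and the notification wiring is summarized in comments:

```xml
<!-- Triggered on CONTEXT_STARTED: store the two-step pattern on the device -->
<flow name="setupBlink1">
    <blink1:store-pattern-line position="0" color="green" />  <!-- step 1: green -->
    <blink1:store-pattern-line position="1" color="off" />    <!-- step 2: off -->
</flow>

<!-- Triggered on CONTEXT_STOPPING: switch the LED off -->
<flow name="resetBlink1">
    <blink1:off />
</flow>
```

Because the device loops the stored pattern on its own, two steps are all the configuration ever needs to hold.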
The old tricks are often the best: a “sleep” statement sitting right between starting and stopping playback of the stored pattern is all we need to achieve a nice visual confirmation that the QR Code has been processed successfully.
The full flow looks like this:
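In XML form, the complete flow is roughly the following sketch. The file path, topic name, the `CsvToJsonTransformer` class and the blink(1)/MQTT element names are all hypothetical, chosen for illustration:

```xml
<flow name="qrCodeScanPublisher">
    <!-- Pick up the files written by the ZBar-driven webcam scanner (path is made up) -->
    <file:inbound-endpoint path="/var/mule/scans" pollingFrequency="500" />
    <object-to-string-transformer />
    <!-- Turn the CSV scan record into JSON (hypothetical transformer class) -->
    <custom-transformer class="com.example.CsvToJsonTransformer" />
    <!-- Publish the captured data to the shared topic -->
    <mqtt:publish topicName="conference/booth/scans" />
    <!-- Flash the stored green/off pattern for one second, then stop -->
    <blink1:play />
    <expression-component>Thread.sleep(1000)</expression-component>
    <blink1:stop-playing />
</flow>
```

The sleep between starting and stopping playback is what turns the looping stored pattern into a one-second flash.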
The source code is on GitHub here, and the full flow is here.
And that’s all there is to it! Our initial application is now ready to be deployed on Anypoint Edge, Java SE Embedded and the Raspberry Pi. Picture a hundred of these puppies: conference booth QR Code scanners have never looked so good!
At the edge… and beyond
The initial promise of the Internet was that it would be resilient through distribution, decoupling and smart routing. But the trend of the past few years has been toward the opposite: more centralization, more aggregation… mainly for convenience’s sake. What if our tools allowed us to make the Internet of Things a reality? What if producing and consuming the Internet became truly distributed again?
With the capacity to run on limited platforms and to interact with the physical world through smart devices, Anypoint Edge, the first embedded integration platform, has all it takes to deliver on distribution, decoupling and smart routing… So where would you take Anypoint Edge? Let us know what cool things it will allow you to bring to the Internet of Things.