Managing the migration to ISO 20022 XML

Reading Time: 4 minutes

Financial services industry standards and file formats are unique and complex. There are different standards for low value payments, high value payments, international payments, bank statements, securities trades, derivatives, and stock custody transactions. In an effort to unify disparate formats based on geographic boundaries, industry utilities and market participants, the International Organization for Standardization (ISO) developed the ISO 20022 XML standard. The ISO 20022 standard was originally developed in 2004 and is an internationally agreed upon, global set of common standards for the development of financial services messages using a standardized XML syntax.
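
To give a sense of the syntax, here is a heavily abbreviated, illustrative fragment of an ISO 20022 customer credit transfer initiation (pain.001) message; the element names are real, but the values are made up and many mandatory elements are omitted, so this is not a complete, schema-valid message:

<!-- Illustrative, abbreviated pain.001 fragment; values are examples and many mandatory elements are omitted -->
<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pain.001.001.03">
	<CstmrCdtTrfInitn>
		<GrpHdr>
			<MsgId>MSG-2015-0001</MsgId>
			<CreDtTm>2015-03-01T09:30:47</CreDtTm>
			<NbOfTxs>1</NbOfTxs>
			<InitgPty><Nm>Example Corp</Nm></InitgPty>
		</GrpHdr>
		<PmtInf>
			<Dbtr><Nm>Example Corp</Nm></Dbtr>
			<DbtrAcct><Id><IBAN>DE89370400440532013000</IBAN></Id></DbtrAcct>
			<CdtTrfTxInf>
				<Amt><InstdAmt Ccy="EUR">1000.00</InstdAmt></Amt>
				<Cdtr><Nm>Example Supplier Ltd</Nm></Cdtr>
				<CdtrAcct><Id><IBAN>FR7630006000011234567890189</IBAN></Id></CdtrAcct>
			</CdtTrfTxInf>
		</PmtInf>
	</CstmrCdtTrfInitn>
</Document>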

ISO 20022 Migration Initiatives Underway

Adoption of ISO 20022 has reached a critical mass and implementation efforts continue to evolve. The European Union’s SEPA credit transfers and direct debits migration, completed in 2014, was the first high profile ISO 20022 standardization project. There are many other ISO 20022 migration initiatives underway across payments, securities, treasury, trade services, cards, and foreign exchange, including:

Continue reading

Missing at SXSW 2015: Conversations Between Engineers and Designers

Reading Time: 2 minutes

UX, meet Engineering. Engineering, meet UX. You two should talk.

MuleSoft’s very own Mason Foster, Director of User Experience, checked out this year’s SXSW Interactive festival and made an interesting observation. In a recent post on re/code, he points out that, “there were myriad opportunities for engineers to learn the ins and outs of the hot technologies, and countless rooms packed with user experience (UX) professionals discussing the latest trends in design. But you didn’t see these folks talking to each other, and there weren’t many structured opportunities to do so.”

He goes on to point out, “At the heart of every great technical innovation is a deeply entrenched partnership between design and development, yet we don’t spend much time figuring out how to nurture that relationship and maximize innovation.”

When UX folks and engineers come together, it opens the door to taking products to the next level.

Read the entire article on re/code »

What web APIs can do for government

Reading Time: 8 minutes

This post is brought to you by Christopher Jay. The original article can be found on The Australian Financial Review.


In the ever-evolving world of the internet, transferring data between different internet locations and organisational databases is becoming a lot easier with the dramatic spread of a particular set of software modules designed to standardise routine connection tasks.

In previous times, a lot of laborious effort in connecting various systems has been done with hand-built strings of software – basically re-inventing the wheel in different shapes, sizes and colours.

Enter a 21st-century set of software companies specialising in providing standardised tools for connecting different applications, data and devices using common software sets to eliminate the endless duplication of effort.

This involves the familiar concept of application programming interfaces, or APIs. These are bundles of separate routines, protocols, security software and software assembly tools designed to interact with computer devices, internet communication systems and organisational databases to speed up and simplify communications.

A conversation about the fabulous new opportunities these approaches are providing is typically peppered with references to particular APIs and the huge advances in convenience and services they are providing.

From a government point of view, there are three major implications for communications and internet policy.

Integration

The first is the need to ensure the internet specialists in each public-sector department and agency are thoroughly briefed on advances in connection APIs which can allow big improvements in the range of services from government sources, and refinement of operating procedures for computer service installations.

The second is continuing effort to extend the effectiveness of security systems to counter the constantly expanding activities of foreign governments, criminal enterprises and local computer pests bent on information theft, monetary fraud and operational disruptions, such as distributed denial of service attacks.

The third is steadfastly maintaining or even increasing the rate of broadband modernisation to cope with the sizeable increases these spreading connection or Web APIs will bring to overall levels of traffic.

The current, slightly bizarre Wi-Fi arrangements are no way to run a long-term, fast internet system in a high-tech economy: a melange of multiple possible carriers, an anarchic chaos of access passwords to systems (some of which charge, many of which don’t), and the whole thing often clogging up in the evenings when the computer-games-playing crew cuts in.

An example of how web APIs can work would be a social interaction or business presentation exercise which can incorporate photographs, video clips, verbal material, print or graphical presentations, user information, dynamic live feeds (for example, from a conference, speech or broadcast) and environmental information, such as current temperature, wind speed and humidity.

All this has been notionally possible for some time, using the previously-developed set of routines. But the practical difficulties and impediments to date are readily apparent in the constant complaints on social media about links that don’t work, video that doesn’t play, audio that stays silent and presentations that run in fits and starts.

Catalyst

An example of the current set of Web API proselytisers is the San Francisco-based multinational MuleSoft, founded by Ross Mason as recently as 2006 and now running to nearly 500 people with the usual string of additional offices in Atlanta, New York, Buenos Aires, London, the Netherlands, Munich, Sydney, Hong Kong and Singapore.

Instead of custom-coding by hand, MuleSoft developed platforms which provided a range of pre-prepared functions which could be quickly and efficiently strung together to provide desired connections.

“Mobility, cloud, big data and the internet of things are transforming business and creating new opportunities,” MuleSoft company material points out.

“Yet companies are only starting to tap the vast potential. To truly realise the promise of this new era, these disparate technologies must all be connected.

“APIs are the catalyst for this change, unleashing information and eliminating the friction of integration for unprecedented speed and agility.”

These big advances from the newly-emerging web API companies will be particularly important in coping with the rapid continuing spread of interactions between central databases and mobile equipment in the field, or even in the office, in the form of smartphones, tablets and laptop computers.

Chief Technical Officers implementing improved web communications need to distinguish between simple publication of standardised data (for example, train timetables, city populations, addresses) and applications where the device, or the user separately, needs to be identified and authenticated as cleared for the relevant access.

Read the entire article on The Australian Financial Review »

From Deloitte’s Tech Trends 2015: The fusion of business and IT

Reading Time: 8 minutes

Each year, Deloitte’s Tech Trends reports take a look at the technology landscape and examine the trends that have the potential to transform business, government, and society and impact organizations – across industries, geographies, and sizes today and in the future. The theme for this year’s report is the fusion of business and IT.

MuleSoft’s very own Ross Mason, Founder and Vice President, Product Strategy, and Uri Sarid, Chief Technology Officer, contributed to the report, discussing how CIOs are using APIs to help drive innovation from the inside out, turning integration into a competitive advantage.

Ross’ thoughts

In order to survive, companies need to open up their digital channels. More and more, businesses large and small are recognizing the important opportunities being created by establishing an open approach to data. The common way to accomplish this is through APIs, which allow for the fluid exchange of information between internal systems and those belonging to third parties. Adopting an open approach to sharing data through digital channels will be a driving force for companies of all sizes this year.

Read the excerpt below for more information on what Ross and Uri had to say, and be sure to download the entire Tech Trends report to explore all the trends.

Over many years, companies have built up masses of valuable data about their customers, products, supply chains, operations, and more, but they’re not always good at making it available in useful ways. That’s a missed opportunity at best, and a fatal error at worst. Within today’s digital ecosystems, business is driven by getting information to the right people at the right time. Staying competitive is not so much about how many applications you own or how many developers you employ. It’s about how effectively you trade on the insights and services across your balance sheet.

Until recently, and for some CIOs still today, integration was seen as a necessary headache. But by using APIs to drive innovation from the inside out, CIOs are turning integration into a competitive advantage. It all comes down to leverage: taking the things you already do well and bringing them to the broadest possible audience. Think: Which of your assets could be reused, repurposed, or revalued— inside your organization or outside? As traditional business models decline, APIs can be a vehicle to spur growth, and even create new paths to revenue.

Viewing APIs in this way requires a shift in thinking. The new integration mindset focuses less on just connecting applications than on exposing information within and beyond your organizational boundaries. It’s concerned less with how IT runs, and more with how the business runs.

The commercial potential of the API economy really emerges when the CEO champions it and the board gets involved. Customer experience, global expansion, omnichannel engagement, and regulatory compliance are heart-of-the-business issues, and businesses can do all of them more effectively by exposing, orchestrating, and monetizing services through APIs.

In the past, technical interfaces dominated discussions about integration and service-oriented architecture (SOA). But services, treated as products, are what really open up a business’s cross-disciplinary, cross-enterprise, cross-functional capabilities. Obviously, the CIO has a critical role to play in all this, potentially as the evangelist for the new thinking, and certainly as the caretaker of the architecture, platform, and governance that should surround APIs.

The first step for CIOs to take toward designing that next-generation connected ecosystem is to prepare their talent to think about it in the appropriate way. Set up a developer program and educate staff about APIs. Switch the mindset so that IT thinks not just about building and testing and runtimes, but about delivering the data—the assets of value. Consider a new role: the cross-functional project manager who can weave together various systems into a compelling new business offering.

We typically see organizations take two approaches to implementing APIs. The first is to build a new product offering and imagine it from the ground up, with an API serving data, media, and assets. The second is to build an internal discipline for creating APIs strategically rather than on a project-by-project basis. Put a team together to build the initial APIs, create definitions for what APIs mean to your organization, and define common traits so you’re not reinventing the wheel each time. This method typically requires some adjustment, since teams are used to building tactically. But ultimately, it forces an organization to look at what assets really matter and creates value by opening up data sets, giving IT an opportunity to help create new products and services. In this way, APIs become the essential catalyst for IT innovation in a digital economy.

Download the entire report »

The making of: Mule 3.6 Next-Gen HTTP Connector

Reading Time: 15 minutes

You might have read Dan Diephouse’s post last month announcing the release of Mule 3.6, and if you haven’t – go read it! And then come back to this. Seriously.

Ok, so now that you’ve read about the new HTTP connector in Mule 3.6 and seen the cool demo that Dan put together it’s my turn to drill down into some of the more interesting details – why we built the new connector, how we went about it and how we ensured that it’s performant.

Why?

The HTTP transport available prior to this new connector has been around for a long time – since Mule 1.x in fact! We’ve obviously updated and improved it a lot over time, but it has continued to rely on the same underlying technology. It has served us well and has been used to build countless integrations, but not only was it in need of some love (and long overdue for it), it was also critical to have killer HTTP connectivity in the brave new world of APIs.

Usability and APIs

You may have noticed that our new HTTP support is now called the “HTTP Connector”. This is because it doesn’t stick to the more rigid transport configuration model using endpoints, but instead uses a connector-style configuration that you may be familiar with from using our Anypoint Connectors. The other major usability improvement, which you will have seen in Dan’s video, is the support for API definitions, with initial out-of-the-box support for RAML.
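
As a rough sketch of the connector-style configuration (the element names match the Mule 3.6 HTTP connector, but the flow name, path and port below are made up for illustration), a listener is defined against a shared listener-config rather than as an inbound endpoint:

<!-- Minimal sketch of the connector-style configuration; flow name, path and port are illustrative -->
<http:listener-config name="exampleListenerConfig" host="0.0.0.0" port="8081" doc:name="HTTP Listener Configuration" />

<flow name="exampleFlow">
	<!-- The listener references the shared config instead of defining a full inbound endpoint -->
	<http:listener config-ref="exampleListenerConfig" path="/api/*" doc:name="HTTP" />
	<set-payload value="Hello from the new HTTP connector" doc:name="Set Payload" />
</flow>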

Scalability

The old HTTP transport uses Apache Commons HttpClient for outbound connectivity, while using a home-grown implementation built on top of the TCP Transport for inbound. We were still using version 3.1 of HttpClient for outbound because comparative performance tests didn’t show much improvement in 4.3, so we decided to wait and do this bigger refresh of HTTP support instead. In terms of inbound, while the HTTP transport was great in terms of functionality and performed well, we had been recommending the use of the Jetty Transport for some time because the HTTP transport is based on the thread-per-connection model. This is OK if you can configure your thread pool size based on the number of concurrent connections you expect, but it becomes a notable issue if you expect significant or unknown levels of concurrency. This is where Jetty comes in. Jetty uses non-blocking IO, enabling it to handle many, many more connections without requiring a thread for each.

Confused? Exactly! That’s why in 3.6, the HTTP and Jetty transports are now deprecated and a single new, performant, scalable HTTP connector with improved configuration has been introduced.

How?

I won’t go into details on the usability improvements or RAML support in this post – and anyway, I’m sure what we did under the hood is much more interesting.

We started nearly from scratch in terms of the underlying technology choices; this meant deciding whether we should build our own HTTP support from the ground up or use an existing open-source HTTP framework. Building our own made no sense given the number of great libraries out there, so we started playing around with HttpComponents, Jetty 9, Netty 4 and Grizzly. We compared their performance as well as their APIs while considering the feasibility of integrating with Mule. We also wanted to use the same framework for inbound and outbound connectivity.

Performance

For performance testing, we were interested in understanding each library’s performance profile for different concurrencies and message sizes, but we also wanted to understand how each library behaved with different processing models – both processing requests in the selector thread and handing request processing off to a pool of worker threads. This is important for Mule, as we support a wide variety of different integration scenarios, from high-throughput HTTP proxying to slower, more complex service implementations which may involve database queries or composing multiple other services.

We performed tests in our dedicated performance testing lab using two servers – one to execute the JMeter test plan and the other to run the HTTP library under test – with the following specifications:

  • PowerEdge R620 with two Intel(R) Xeon(R) CPU E5-2630 processors running at 2.30GHz
  • 10Gb/s dedicated network
  • RHEL 6.5 / Java HotSpot 1.7.0_45

We tested multiple scenarios:

  • Payload size: 100B to 100KB
  • Concurrency: 10 to 8000
  • Processing Model: Selector Thread vs. Worker Thread.

The graphs below include results for both processing models and you can see that in general, with a smaller payload size and such a simple (echo) operation, the use of worker threads is simply an overhead, whereas with larger payload sizes the difference is much less noticeable.

[Graphs: HTTP GET throughput for 100B and 100KB payloads, selector thread vs. worker thread processing]

In terms of raw performance, the top performers were Netty and Grizzly. We got very similar numbers from both of these frameworks in our tests, with Jetty and HttpComponents lagging behind. We didn’t have time to dig into the Jetty or HttpComponents results in detail, and it should be noted that performance wasn’t our only criterion, but here are some observations on the different frameworks’ performance:

  • HttpComponents did really well with a small (100B) body, but lagged behind significantly as soon as the body size was increased (10KB). Also, it did poorly when worker threads were used, potentially as a result of mechanical sympathy issues, because the worker thread was only used after the HTTP message had already been parsed.
  • Jetty scaled very well, but TPS was lower than the other frameworks with both small bodies (100B) and low concurrencies. I assume this is due to Jetty always using a worker thread for processing, but Grizzly/Netty still managed to do better with a worker thread pool.
  • Initially Grizzly was slightly slower than Netty, but enabling Grizzly’s ‘optimizedForMultiplexing’ transport option improved performance giving Grizzly marginally better numbers. We also found that Grizzly appeared to scale better than Netty (up to 8000 client threads).

In order for us to perform a fair comparison, we did the following:

    1. Configured all frameworks to use the same number of acceptors
    2. Ensured all frameworks used the same tcp socket configuration (soKeepAlive, soReuseAddress etc.)
    3. Implemented exactly the same test scenario for each framework
    4. Performed tests with the same test plan on the same hardware (detailed above) after a warm-up of 30s load.
    5. Ran all implementations with Java 7 (default GC) and a 2GB heap.

API/Extensibility

In terms of API and extensibility the primary things we were looking for were:

    1. Fully configurable in terms of socket options, number of selectors etc.
    2. Easy switch between processing requests in selector or worker threads
    3. Easy to integrate into Mule

All of the libraries we looked at were fully configurable and wouldn’t have been hard to integrate into Mule; the most obvious difference was how they varied in terms of allowing different types of request processing.

For both HttpComponents and Jetty, easily switching between processing requests in selector or worker threads was an issue: Jetty always uses a ThreadPoolExecutor for processing requests, and HttpComponents processes in selector threads by default and doesn’t provide a mechanism whereby a ThreadPoolExecutor can be used. Netty was low-level enough to support this, but we found Grizzly’s IOStrategies to be the most elegant and flexible solution, allowing us to easily support a single instance of Grizzly in Mule with different thread pools for different HTTP listeners.

We also found Grizzly to have better abstractions, such as the MemoryManager interface, which make it generally easier to integrate and extend. Another simple thing was that Grizzly uses the standard Java Executor interface, whereas Netty requires a Netty EventExecutorGroup. Also, while we didn’t use it in the end, Grizzly has a higher-level, servlet-like server API, which is even easier to use.

One disadvantage of Grizzly compared to Netty is that while Grizzly is already fairly mature, its community is much less active. Having said that, Oleksiy Stashok has been very responsive on the Grizzly user list, and multiple changes have already been incorporated based on our feedback.

The Result

Wondering if everything we did was worth it? So were we. Once integrated, we went ahead and performed some comparative performance tests between the different versions of Mule and the different connector/transport implementations.

[Graph: throughput of the Mule 3.5 HTTP and Jetty transports vs. the new Mule 3.6 HTTP connector, 100 Byte payload]

As you can see from the graph above, the new Grizzly-based implementation in Mule 3.6 outperforms both the HTTP transport in Mule 3.5 at low concurrencies and the Jetty transport available in Mule 3.5 at higher concurrencies, meeting our goal of having a single HTTP connector that can be used, and be high performing, in all scenarios. This graph only shows the results with a 100 Byte payload, but the results with larger payloads are very similar, with the delta between implementations reducing significantly when testing with a 100KB payload.

I didn’t cover the HTTP connector’s outbound performance in this blog, otherwise it would have been twice as long! Perhaps I’ll cover it in a follow-up post.

So, if you aren’t already using Mule 3.6, you now have another very good reason to go and download it!

SOAP & REST Attachments in Mule

Reading Time: 5 minutes

I was recently working on a project where we had to handle SOAP attachments. Working with SOAP attachments is the kind of thing that you work on every 3-5 years, and then 10 seconds after you are done you forget all about them. All the information required is available in our docs, but it can still be good to have a complete end-to-end example as a reference. Esteban Robles Luna’s (a former MuleSoft colleague) 2011 blog post, Working with SOAP attachments using Mule’s CXF module, was most helpful.

When working on this project I also wanted to see how this could be implemented using RAML and REST services instead of SOAP. When researching the topic, it seems there is no complete consensus on how to do this, but I found this discussion – How do I upload a file with metadata using a REST web service? – quite interesting.

The use case is very straightforward, sending and receiving a PDF file as a SOAP attachment and as a REST attachment. The application has four different flows:

  • Read a PDF file from disc and then add it as a SOAP attachment to the request.
  • Expose a SOAP service that is capable of receiving a SOAP request with an attachment.
  • Read a PDF file from disc and add that as an attachment to a REST call.
  • Expose a REST service using APIkit and RAML that is able to handle a request with an attachment.

1. SOAP Attachment – Client

This is probably the most complicated flow, since it requires a few lines of Groovy code to create the attachment and uses the CXF module to configure the client piece. Trigger the flow by copying the file src/test/resources/esb.pdf to the folder src/test/resources/soap/attachment/in. The configuration file can be found here:

<flow name="file2soap" doc:description="Reads a file and sends that as a SOAP attachment to a SOAP service.">
	<file:inbound-endpoint path="src/test/resources/soap/attachment/in" responseTimeout="10000" doc:name="Read File" />
	<processor-chain doc:name="Processor Chain">
		<scripting:transformer doc:name="Create SOAP Attachment">
			<scripting:script engine="Groovy"><![CDATA[def attachment = new org.apache.cxf.attachment.AttachmentImpl(originalFilename)
				def source = new org.apache.axiom.attachments.ByteArrayDataSource(payload.getBytes(),'application/pdf');
				attachment.setDataHandler(new org.apache.axiom.attachments.ConfigurableDataHandler(source));
				message.setInvocationProperty('cxf_attachments',[attachment])
				return payload
			]]></scripting:script>
		</scripting:transformer>
 
		<set-payload value="#[['FirstName', 'LastName', '123'].toArray()]" doc:name="Create Payload Map" />
		<cxf:jaxws-client operation="contact" serviceClass="org.mule.demo.soap.Contact" doc:name="SOAP Client">
			<cxf:outInterceptors>
				<spring:bean class="org.mule.module.cxf.support.CopyAttachmentOutInterceptor" />
			</cxf:outInterceptors>
		</cxf:jaxws-client>
		<http:request config-ref="SOAP-Service" path="contacts" method="POST" doc:name="Call SOAP Service" />
	</processor-chain>
</flow>

 

2. SOAP Attachment – Server

This flow exposes a SOAP web service, retrieves the SOAP attachment from the request, and writes it to disk. The service is triggered by the client above; if you want to test it individually, just use SOAP UI and point it to the endpoint (http://localhost:8883/contacts). The configuration file for this web service can be found here:

<http:listener-config name="SOAP-in" host="0.0.0.0" port="8883" doc:name="HTTP Listener Configuration" />

<flow name="soap2file">
	<http:listener config-ref="SOAP-in" path="/contacts" doc:name="Receive SOAP Request" parseRequest="false" />
	<cxf:jaxws-service serviceClass="org.mule.demo.soap.Contact" doc:name="Parse SOAP Request">
		<cxf:inInterceptors>
			<spring:bean class="org.mule.module.cxf.support.CopyAttachmentInInterceptor" />
		</cxf:inInterceptors>
	</cxf:jaxws-service>
	<choice doc:name="Choice">
		<when expression="#[flowVars.containsKey('cxf_attachments')]">
			<set-payload value="#[cxf_attachments.iterator().next().getDataHandler().getContent()]" doc:name="Retrieve Attachment" />
			<file:outbound-endpoint path="src/test/resources/soap/attachment/out" outputPattern="#[server.dateTime.toString()].pdf" responseTimeout="10000" doc:name="Write File to Disc" />
		</when>
		<otherwise>
			<logger message="********************** No SOAP Attachment Found! **********************" level="INFO" doc:name="Log Missing Attachment" />
		</otherwise>
	</choice>

	<set-payload value="Success" doc:name="Generate Response" />
</flow>

 

3. REST Attachment – Client

This implementation is very straightforward: just use the attachment message processor and the outbound HTTP request to make the call. You can trigger the flow by copying the file src/test/resources/esb.pdf to the folder src/test/resources/rest/attachment/in. The configuration file for this can be found here:

<http:request-config name="HTTP_Request_Configuration" host="localhost" basePath="api" port="8884" doc:name="HTTP Request Configuration"/>
<flow name="file2rest" doc:description="Reads a file from your desktop and sends that to a rest service.">
	<file:inbound-endpoint path="src/test/resources/rest/attachment/in" responseTimeout="10000" doc:name="File"/>
	<file:file-to-byte-array-transformer doc:name="File to Byte Array"/>
	<set-attachment attachmentName="#[originalFilename]" value="#[payload]" contentType="multipart/form-data" doc:name="Attachment"/>
	<http:request config-ref="HTTP_Request_Configuration" path="contact/abc/datasheet" method="POST" doc:name="HTTP" parseResponse="false"/>
</flow>

 

4. REST Attachment – Server

The REST service accepting the attachment is autogenerated using a RAML file that can be found here. The API has some additional methods not used in this example. The example can be triggered by the client described in step 3 by copying the file to the right folder, or by any REST console that can post an attachment to this URL: http://localhost:8884/api/contact/abc/datasheet. The configuration for this API can be found here:

<flow name="main">
	<http:inbound-endpoint address="http://localhost:8884/api" doc:name="HTTP" exchange-pattern="request-response" />
	<apikit:router config-ref="apiConfig" doc:name="APIkit Router" />
</flow>

<flow name="post:/contact/{contactId}/datasheet:apiConfig">
	<set-payload value="#[message.inboundAttachments]" doc:name="Retrieve Attachments"/>
	<foreach doc:name="For Each">
		<set-payload value="#[payload.getInputStream() ]" doc:name="Get Inputstream from Payload"/>
		<file:outbound-endpoint path="src/test/resources/rest/attachment/out" responseTimeout="10000" doc:name="File" outputPattern="#[server.dateTime.toString()].pdf"/>
	</foreach>
	<set-payload value="{&quot;status&quot;:&quot;success&quot;}" doc:name="Generate JSON Response" />
</flow>

The complete project is available on GitHub:
https://github.com/albinkjellin/soap-rest-attachments

How to: New Mule Agent

Reading Time: 9 minutes

This blog post is the third in the new Mule agent blog post series.

You can access the first two blogs here:

  1. Introducing the new Mule agent
  2. Mule Enterprise Management and the new Mule agent

In this post, I’m going to talk about some of the common use cases we envisioned for the agent as we were developing it.

The vision we had when building the agent was that it would be the primary interface to access the Mule runtime. It facilitates platform agility (it can release faster than the Mule runtime) without compromising backward compatibility (it will maintain proper API versioning). 

The ability to interface with the Mule runtime via APIs is a very powerful capability. Through clearly defined, documented, secure APIs, users who leverage the agent can tie in cleanly to their existing SDLC (software development lifecycle) processes involving Mule, for example Mule application deployment and runtime operational visibility.

Before we talk about the possible use cases for the agent, it is important to understand the agent’s architecture, which you can read about in our documentation. To supplement the information in our docs, I’ve also created a diagram (see below) describing the different components of the agent.

[Diagram: Mule agent architecture and components]

In the diagram, the new Mule agent sits inside the Mule instance and interfaces with the Mule ESB (or API gateway) instance at runtime.

Each of the components described in the diagram above is extensible; users can create new components by implementing agent interfaces. The components are also dynamically configurable; many component attributes can be turned on/off or modified at runtime.

Here are some of the common use cases we see for the new agent:

Publishing Mule metrics to existing monitoring systems

The new agent can be configured to publish Mule metrics directly to other systems. This is done with a combination of an appropriate agent service that collects information from Mule (e.g. the JMX service) and an appropriate agent internal handler (aka publisher) that pushes metrics to other systems. We’ve written and open sourced several internal handlers. At the time of writing, the following publishers are available for download:

  • Cloudwatch
  • Graphite
  • Nagios
  • Zabbix

Note that at this time MuleSoft does not officially support these publishers. We can only help if there are problems with the agent APIs, not with the publisher code.

If you don’t see your monitoring system listed above, don’t worry. The steps for creating your own publisher are listed in the documentation. We also encourage users to actively contribute new publishers to the mule-agent-modules Github repository.

Currently, the metrics available for publishing are those exposed by the JMX service. That said, users of the new agent can also build their own services that collect other metrics from Mule. In the future, the agent engineering team may also look into building other services that collect different sets of metrics.

Deploying Mule applications automatically via APIs

The new agent provides a mechanism to deploy a Mule application via a REST or WebSockets API. For a quick demo on how to deploy via REST, watch this short video I created. Building a client for WebSockets is a bit more complex, but WebSockets is a full-duplex communication channel, which means that as an application is being deployed, clients can get real-time feedback about its deployment state.

The ability to automate Mule application deployment is a feature targeted at operations teams. Currently, the only other way to deploy a Mule application with an API is to leverage the Mule Enterprise Management Console (aka MMC) APIs, but that requires standing up a separate app server for MMC. Otherwise, deployment has to be done by SSHing into the box where Mule is running, copying the Mule application into the /apps folder, and waiting until the Mule application anchor file appears. This process is more tedious and error prone. The Mule agent now allows you to deploy a Mule app with an API call made directly against the Mule instance.

Tracking detailed events in Mule applications

Many of our customers have asked about detailed event tracking to inspect the state and contents of a Mule message as it goes through a flow. Customers want that data to be accessible directly from the Mule instance outside Mule Enterprise Management. With the new Mule agent, this is now possible. You can read all about how to do it here. Basically, users can configure various debugging levels to specify the level of granularity for capturing event information from Mule applications. Information captured can then be used as input for other operations applications or processes in the enterprise, thus increasing runtime visibility for Mule applications.

Furthermore, users can create and customize different publisher buffering strategies for controlling the flow of event tracking information from the agent into third party systems. Proper configuration of a buffering strategy allows for handling unexpected events like network or system outages. Read more about configuring publisher buffering strategies here.

Thanks for reading! We hope reading about the agent inspires you to try it out and send us feedback!

Demo: Anypoint Exchange

Reading Time: 2 minutes

In our latest installment in the MuleSoft webinar series, we’ll introduce you to Anypoint Exchange! We’ll walk you through a demo showcasing both the public and private exchange. In the public exchange, you can access hundreds of templates, examples, and connectors made available to you by MuleSoft. In the private exchange, you can expose your own internal assets so that your organization can get the most benefit from each asset or project that you create. This is extremely helpful for onboarding new users to Anypoint Platform™ inside your organization, providing best practices for integrating with or consuming internal services, and promoting a culture of sharing and reusing insights and learnings with team members.

Check out the webinar and learn:

  • What is Anypoint Exchange, where to use it, and why it’s valuable
  • The difference between public and private assets in Anypoint Exchange
  • How to set up, control access to, and create and edit entries in your Anypoint Exchange
Continue reading

Developers, You’re Invited!

Reading Time: 2 minutes

Are you a developer using or interested in using Mule ESB, Anypoint Studio, Anypoint Platform™, or RAML? If so, we’d like to invite you to come hang out and meet the MuleSoft team next Thursday, March 12 at 6pm, at the Cartoon Art Museum in beautiful San Francisco!

Registration is limited and required, so make sure to register soon! We’ll be kicking things off with plenty of food and drinks, a few surprise guests, and a chance to meet and chat with MuleSoft’s founder, Ross Mason.

With the party’s theme centering on being a connectivity hero – and cartoon art – you can be sure there will be some fun activities, a few surprises, and a HUGE reveal at the end. This is one party you don’t want to miss!

Register today using access code: TOTHEMAX

Looking forward to seeing you there!

Disrupting your Asynchronous Loggers

Reading Time: 6 minutes

A little while ago we decided that it was time to include the option of asynchronous loggers in Mule ESB.

The asynchronous implementation is faster than the synchronous one because it waits for IO on a separate thread instead of on the main execution thread, allowing the main thread to continue its execution.

After some research, we decided to use either Logback or Log4j2, both of which are successors of the Log4j framework.

To choose between the two, we ran performance tests in our performance lab comparing Log4j, Log4j2, Logback and no logging. The following Mule app was created and used in all the tests.
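
For reference, asynchronous logging in Log4j2 is configured in log4j2.xml using the AsyncLogger and AsyncRoot elements. The sketch below is a minimal illustration of the idea; the appender name, file path and pattern are made up, and this is not the exact configuration used in our tests:

<!-- Minimal log4j2.xml sketch; appender name, file path and pattern are illustrative -->
<Configuration>
	<Appenders>
		<File name="file" fileName="logs/app.log">
			<PatternLayout pattern="%d [%t] %-5p %c - %m%n" />
		</File>
	</Appenders>
	<Loggers>
		<!-- AsyncRoot hands log events off to a background thread, so callers are not blocked on IO -->
		<AsyncRoot level="INFO">
			<AppenderRef ref="file" />
		</AsyncRoot>
	</Loggers>
</Configuration>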

Continue reading