Thanks for connecting with MuleSoft!

Reading Time: 5 minutes

Well, it’s that time of the year again! This is when we put our phones down (well, mostly) and give thanks for everything and everyone around us. We at MuleSoft have a lot to be thankful for, and a lot of people to thank. Business is going well (see our Record Q3 Performance announcement), we’re gaining industry recognition (see our Awards page and, in particular, the latest Deloitte Technology Fast 500 report), and we keep growing. But it’s in times like these that we want to thank everybody who has helped us get where we are today. It’s our customers who believed in our products and always gave us the brutally honest feedback that allowed us to improve and build a better product. It’s all of the Muleys who have poured their energy, day in and day out, into building this company and making our vision a reality. But I would like to take a few moments to thank some people who rarely get all the credit they deserve for the work they do: developers.

We provide the tools, but developers are the ones who make something beautiful and amazing with them. Developers are the people who bring the magic of the MuleSoft platform to life in the most diverse and challenging environments. They’re the ones who connect the world’s apps, data, and devices to create a sum greater than its parts. When an emergency room is able to access medical records from the other side of the world, when we check our phones to see when the next bus is arriving, when a plane reservation is completed in seconds, when the green power of a wind turbine is remotely managed… we have to thank the developers who built those connections and make the world a better place for us all.

This is why it is so important to have a vibrant developer community. Some developers go the extra mile and are passionate about advancing the projects they believe in and helping other developers. I am especially thankful for the amazing developer-contributors we’ve met on our journey from the first steps we took with Mule ESB to a fully fledged integration platform spanning SOA, SaaS, and APIs. Beyond simply using our tools, these developers help build the very tools themselves, contributing to Mule ESB and related projects on GitHub from the very start. This community has provided invaluable contributions to all of our open source projects. This community has kept us honest, submitting bug reports whenever new versions were released. This community is always ready to help fellow developers on the MuleSoft community forums, on Stack Overflow, or anywhere there’s a need for a connection. This is why today I want to thank our developer community for all they have done.

While Thanksgiving may be an American tradition, this global community is the heart and soul of MuleSoft. So to our developers we want to say, “Thank you.”

Anypoint Platform for APIs – November release

Reading Time: 6 minutes

I am excited to announce that our November release of the Anypoint Platform for APIs is now live. This update to the platform includes a variety of new features and workflow enhancements aimed at streamlining the process of managing APIs. These new features include the following:

  • Improved proxy configuration & support for HTTPS and load balancer scenarios
  • API version deprecation
  • Swagger file import & folder support in API Designer
  • Analytic tracking of policy violations

Proxy Configuration Improvements

As part of the November release of the Anypoint Platform for APIs, we have also released a new version of the API Gateway, version 1.3, which you can download here. The new API Gateway includes enhancements that make it possible to easily deploy API proxies behind a load balancer, as well as to use a shared port for HTTP and HTTPS endpoints. Shared ports allow you to deploy multiple API proxies to a single gateway. As a result, we’ve modified the proxy generation interface in the platform. Please note that to take advantage of these updates to the API proxies, you will need to use the latest API Gateway, version 1.3. To learn more about configuring and deploying proxies in the Anypoint Platform for APIs, you can find full documentation here.



API Best Practices: Spec Driven Development (Part 2)

Reading Time: 9 minutes

This is part two of the API design best practices series.

Define Your API in a Flexible, but Standard Spec

I cannot stress the importance of spec-driven development enough. One of the quickest ways to kill your API is to define the API in your code instead of coding to its definition. By utilizing an API modeling spec such as RAML, you can quickly build out your API in a consistent manner using code and pattern reuse.

Utilizing pattern design and code reuse helps to ensure that your API remains uniform across the full interface, keeping resources and methods alike standardized and easily implemented by your developers.

Tools like API Designer allow you to view your API as it is being designed and watch for inconsistencies or dependencies you might have missed otherwise. And perhaps most importantly, once your spec is in place it keeps everyone on the same page, ensuring that your API works exactly the way you want it to.

RAML provides a quick, powerful, semantic, yet human readable format for describing your API.  The API Designer makes it simple to get started and even shows you what your API looks like as you describe it.
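As an illustration, a minimal RAML sketch might look like the following. The API title and resources here are hypothetical, shown only to illustrate the human-readable format and the `resourceTypes` pattern-reuse mechanism:

```yaml
#%RAML 0.8
title: Books API            # hypothetical example API
version: v1
baseUri: https://api.example.com/{version}
resourceTypes:
  - collection:             # a reusable pattern for list endpoints
      get:
        description: List all <<resourcePathName>>
        responses:
          200:
            body:
              application/json:
/books:
  type: collection          # inherits the GET definition above
```

Because `/books` applies the `collection` resource type, every new collection you add gets the same standardized GET behavior for free, which is exactly the uniformity described above.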


Mock Your API and get User Feedback

Another huge advantage of tools like RAML or Swagger is that they allow you to mock your API.  This means that you can not only build your API in a visual interface and take advantage of the very same best practices we utilize in development, but you can also share a mock version of your API with potential clients.

Using MuleSoft’s API Designer, you can easily turn on a mocking service that gives you a URL to share with other developers. This allows your clients to “test out” your API by making real calls, just as they would from their application. By utilizing the example responses defined in the RAML file, developers can quickly identify issues and inconsistencies, helping you eliminate the majority of design-related issues before development even starts. And by passing along tools like the API Notebook, developers can interact with your mock API through JavaScript without having to code any calls themselves; they can also send specific use cases back to you, giving you real examples of what your developers are trying to accomplish.
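To make the idea concrete, here is a minimal sketch, in Python, of what a mocking service does under the hood: it simply serves the example response defined in the spec, so clients can make real HTTP calls before any implementation exists. The resource path and payload are hypothetical; the hosted mocking service handles this for you.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# Example response, as it might appear under an `example:` key in the spec
EXAMPLE_USER = {"id": 1, "name": "Ada"}

class MockHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Serve the spec's example payload for the mocked resource
        if self.path == "/users/1":
            body = json.dumps(EXAMPLE_USER).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet

# Bind to an ephemeral port and serve in the background
server = HTTPServer(("127.0.0.1", 0), MockHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client "tests out" the API with a real HTTP call
url = f"http://127.0.0.1:{server.server_port}/users/1"
with urlopen(url) as resp:
    data = json.loads(resp.read())
server.shutdown()
```

The client receives the example response exactly as the spec defines it, so feedback on the design can start immediately.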

The API Notebook handles complex negotiations like OAuth while providing tool tips to walk users through the different API resources and methods, letting them try out and explore your API without server-side coding or having to dig through extensive documentation.

This process can be repeated as necessary, as modifying the spec and creating a new mock only takes minutes, empowering you to perfect the design of your API and ensure that it not only meets your developers’ needs, but also provides a solid, strong foundation for the future of your API.

After all, the nemesis of a long-lived API is not the code, nor the system architecture, but the API design itself. No matter how careful you are with your API, without a solid foundation it will crumble quickly, costing you thousands to hundreds of thousands of dollars down the road. It’s better to take the time now, up front, and ensure that your API is well designed.

Code to the Spec… and Don’t Deviate

I cannot emphasize the importance of coding to the spec enough. If you have taken advantage of the above steps and carefully laid out your API, carefully designed your spec, user tested, and perfected your API, there is nothing worse than throwing all that work away by deviating from the spec and doing one-off things.

One of the main reasons for REST was to focus on long-term design. As Dr. Roy Fielding pointed out, we as developers are very good at short-term design but horrendous at long-term design. What may seem like a good solution in the short term, if not carefully thought out and tested for the long term, is likely to create big problems down the road.

Think of it like this: how many times have you written code only to look back at it three months later and wonder, “What was I thinking?!” Your API is a contract, and unfortunately the one thing you cannot fix is poor design.

For that reason it’s important to avoid editing your spec during the development cycle.  Instead, if you find an issue with the spec, go back to the design phase where you can visualize the changes, prototype them, and get user feedback.  Then once you are sure your spec provides a solid foundation you can continue with development.

After all, you worked hard to involve your users and create the perfect spec, let’s make sure you use it.

Go to Part 3: Nouns, CRUD, and More →

  • The importance of using Nouns as Resources
  • CRUD and the HTTP Action Verbs
  • Accept & Content-Type Headers
  • JSON vs XML

You can learn more about RAML at RAML.org, or start using MuleSoft’s FREE API Design and Interaction tools with the Anypoint Platform for APIs.

API Best Practices: Plan Your API (Part 1)

Reading Time: 13 minutes

This is part one of the API design best practices series.

Understand WHY you are building an API

Perhaps the foundation of the foundation: understanding why you are building an API is a crucial step toward understanding what data and methods your API should make accessible, and how your users will utilize it. Unfortunately, API is a buzzword right now, and many companies are rushing to build APIs with the idea that “we’re going to make our data accessible and our users will love it!” There’s probably some truth to that, but it is not a good enough reason. What exactly are you making accessible, and why? Who are your API users: are they your customers, third-party services, or developers looking to extend your application for their customers? Understanding the market you are serving is vital to the success of any product or service.

Key questions to ask:


New Series: API Design Best Practices

Reading Time: 5 minutes

By now, you’ve probably already seen the image of the iceberg cross section showing just how many APIs are available out in the world. With over 13,000 public APIs available for use across the web, and hundreds of thousands more being used privately and in-house, the possibilities are endless.

The demand for flexibility and extensibility has driven the development of APIs and tools alike, and in many regards it has never been easier to create an API than it is today with multitudes of frameworks (such as JAX-RS, Apigility, Django REST Framework, Grape), specs (RAML, Swagger, API Blueprint, IO Docs), and tools (API Designer, API Science, APImatic) available.

However, despite the predictability of the demand for APIs, this tidal wave has taken many by surprise. And while many of these tools are designed to encourage best practices, API design seems to be constantly overlooked in favor of development efficiency. The problem is that while this lack of focus on best practices allows for rapid development, it is nothing more than building a house without a solid foundation. No matter how quickly you build the house, or how nice it looks, without a solid foundation it is just a matter of time before it crumbles to the ground, costing you more time, energy, and resources than it would have to simply build it right the first time.


Performance Impact of an IO-Intensive Application

Reading Time: 11 minutes

If you’ve ever worked on performance issues with an IO-intensive app, there is a good chance you already know that application performance degrades when the disks are stressed. This fact is usually well known, but the reasons behind it aren’t always clear. I’d like to try to clarify what’s going on behind the scenes.

In a typical scenario, when data is written to a file, it is first written to a memory area reserved as the page cache. A page holding newly written data is considered dirty. After a period of time, per the kernel’s IO policy, the kernel flushes the dirty data to the device queue to be persisted to the hard disk. Once the data gets to the queue, the rest is mechanical: the device driver reads the IO requests, then spins, seeks, and writes to the physical disk block where the file is. The journal file is written first if journaling is enabled, then the actual file.
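This buffered write path is visible from user space: a plain write() returns as soon as the data reaches the page cache, and it is fsync() that forces the dirty pages out to the device. A small Python sketch (the file name is arbitrary):

```python
import os
import tempfile

# Create a scratch file to write to
path = os.path.join(tempfile.mkdtemp(), "data.log")
fd = os.open(path, os.O_WRONLY | os.O_CREAT, 0o644)

os.write(fd, b"hello")  # returns once the data is in the page cache (dirty)
os.fsync(fd)            # blocks until the dirty pages reach the device
os.close(fd)

# Read the data back to confirm it was persisted
with open(path, "rb") as f:
    content = f.read()
```

The gap between how fast write() returns and how long fsync() blocks is exactly the latency the page cache hides, which is why a stressed disk shows up as application slowdown only once flushing can no longer keep up.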

In a recent discussion with a few other engineers, the idea of disabling file system journaling came up as a way to improve disk write latency. While this does eliminate one disk operation per write, the actual time gained is negligible because the journal file is in the same block as the file to be written. The benefits of having the journal to recover the disk after a crash far outweigh the little latency saved.
