Tag: Performance

Our RESTx project – a platform for the rapid and easy creation of RESTful web services and resources – is largely written in Python. Python is a dynamic, duck-typed programming language, which puts very few obstacles between your idea and working code. At least that's the feeling I had when I started to work with Python several years ago: never before was I able to be so productive, so quickly, with so few lines of code. I was hooked. No wonder some people describe it as 'executable pseudo code': just write down what you want and, for the most part, it will actually work.

Now there's a price to pay for this increased developer productivity: the interpreter has to figure out at run time what it is you actually want to do, and how to let your code deal with all sorts of different types. Python is interpreted and dynamic, and this means that for the most part it's slower than compiled languages.

In this article, I will talk about an interesting optimization technique for Python, one that will come as a surprise to many Python developers.
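I won't spoil the article's actual technique here, but as a taste of what run-time dynamism costs, here is a sketch of one classic Python micro-optimization (the function names are my own, not from RESTx): hoisting an attribute lookup out of a hot loop.

```python
# Attribute lookups such as result.append are resolved dynamically on
# every loop iteration. Caching the bound method in a local variable
# performs that lookup just once.

def double_all(n):
    result = []
    for i in range(n):
        result.append(i * 2)   # attribute lookup happens n times
    return result

def double_all_fast(n):
    result = []
    append = result.append     # look the bound method up once
    for i in range(n):
        append(i * 2)
    return result

# Both produce identical results; the second avoids n-1 lookups.
assert double_all(1000) == double_all_fast(1000)
```

The speedup is modest per call, but in tight inner loops it adds up – precisely because the interpreter can't know ahead of time what `result.append` will resolve to.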

As an integration framework and broker, Mule ESB is the platform of choice for implementing and deploying enterprise integration solutions and related services. In this series of posts, I’m going to look at situations beyond the traditional integration scenarios where using Mule ESB has enabled the implementation of effective and elegant solutions.

In this first installment, I will talk about using Mule ESB as a front-end load-throttling middleware for applications that cannot handle sudden peaks of requests, as illustrated in the following diagram.
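Mule itself is configured in XML, but purely as a conceptual sketch (all names below are illustrative, not Mule APIs), the load-throttling pattern boils down to "accept bursts into a queue, then drain them at a pace the backend can sustain":

```python
import collections
import time

class Throttle:
    """Absorb request peaks in a queue; deliver to the backend at a fixed pace."""

    def __init__(self, backend, max_per_second):
        self.backend = backend
        self.interval = 1.0 / max_per_second
        self.pending = collections.deque()
        self.next_allowed = 0.0

    def submit(self, request):
        # Accepting a request is just an enqueue, so a sudden peak
        # never hits the backend directly.
        self.pending.append(request)

    def pump(self, now=None):
        """Deliver at most one queued request if the pacing interval has passed."""
        now = time.monotonic() if now is None else now
        if self.pending and now >= self.next_allowed:
            self.backend(self.pending.popleft())
            self.next_allowed = now + self.interval
```

In the Mule deployment described in the post, the queue and the pacing live in the middleware tier, so the fragile application behind it only ever sees a smooth, bounded request rate.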

I’m proud to announce that we have released the new Mule ESB Management Console (MMC) — this is an important step forward for Mule ESB.

We built MMC based on significant feedback from our customers, and we put the product through two early access pre-releases to incorporate feedback from real users. I must say that I’m pretty pleased with the end result — I hope that you find it as useful as we do.

We’re pleased to announce the immediate availability of our newest release of Tcat Server 6. This new release includes many fixes, in addition to bundling the Apache Software Foundation’s official release binaries of the newest Tomcat release, version 6.0.26.

Here is a summary of the changes and fixes included in this new version of Tomcat since our last release of Tcat Server 6:

jasonb on Thursday, January 21, 2010

Apache Releases Tomcat 6.0.24 – What's New


The new stable release of Tomcat 6.0.24 represents six months of open source software development. Version 6.0.24 includes a small number of new features, plus a large number of important bug fixes and enhancements. It is an incremental bug-fix release, but the number of fixes it includes is high.

jasonb on Thursday, September 24, 2009

Recompiling Tomcat May Cause Runtime Problems


It's a very good thing that Tomcat is open source software. Because it is open, it enjoys broad stand-alone adoption, and it has been incorporated into many other application server products, both commercial and open source. Why reinvent the wheel when Tomcat works great as a generic web container and the source code is free? Many smart application server teams have chosen to embed Tomcat as their web container. They pull a copy of the Tomcat source code that they know works well, put it into their own source tree, hook Tomcat's Ant build system into their own, and rebuild Tomcat as part of their project.

jasonb on Wednesday, September 16, 2009

Tomcat Performance Tuning Tips


I often get questions about how to tune Tomcat for better performance. It is usually best to answer this only after first spending some time understanding the Tomcat installation, the web site's traffic level, and the web applications it runs. But there are some general performance tips that apply regardless of these important details. In general, Tomcat performs better when you:

The promise of a monitoring solution that will pinpoint application problems and give you exact steps to fix them has remained a dream. In addition, monitoring systems have become notorious for being expensive and difficult to maintain. Diagnosing application performance problems requires application-specific diagnostic information that general-purpose monitoring tools often do not provide.

While system monitoring products are useful for triaging a problem and assigning responsibility to a particular team (for example, the application server team), they often do not provide the details needed to determine the problem and fix it. Monitoring products are described by their users as "mile wide, inch deep" – great for providing high-level visibility into broader systems such as browsers, web servers, app servers, network devices, databases, storage, etc., but not so great for the specific diagnostic information you need to fix problems.

Instead, it often takes specific diagnostic tools tied to the application container to be able to drill down into the data effectively.

In this article, we will use Apache Tomcat as an example, and explore a few scenarios where Tomcat administrators need more information to help determine the problem.

We've been busy working on Mule releases recently, so this blog hasn't had as much developer voice as it deserves. Working on things like WebSphere MQ can be demanding, which is one more reason to appreciate the all-new, shiny WebSphere MQ connector in Mule Enterprise 2.2.1 – it makes one's life much, much easier.

That is not to say we haven't been scratching our (and your) itch for new features. Many great ideas are currently being born, killed, and re-born again, and I'm happy to announce the official user-facing kick-off of Mule 3.x (yes, it's our third one already!) with the availability of the bleeding-edge 3.0 Milestone 1 build.

Many features in this build aren't obvious on the surface, like our massive private Bamboo infrastructure behind the firewall – "Octopus" would be a more precise name for this highly distributed build monster, and it's spawning more and more offspring, OMG! :) But although the build may look the same at first glance, there's a subtle twist. A hot one – or, more precisely, a hot-deployment one!

Kevin Depew on Monday, March 30, 2009

Galaxy on EC2 in one hour!


We have been running Galaxy successfully on our in-house servers and laptops for demo purposes for some time now and decided that having a running image of Galaxy on Amazon’s EC2 was the next logical step. Galaxy in the cloud gives us the opportunity to expose a running instance to a much wider audience than might otherwise interact directly with the product.