Nginx (pronounced ‘engine x’) is an HTTP and reverse proxy server, well known for its high performance and stability. It is feature-rich yet simple to configure. Nginx hosts nearly 12.18% (22.2M) of active sites across all domains. Nginx uses an event-driven architecture to handle requests; compared to a thread-per-request model, this scales far better and keeps a low, predictable memory footprint.
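To make the reverse-proxy role concrete, here is a minimal illustrative nginx configuration that forwards requests to a single backend; the backend address and port (127.0.0.1:8080) are assumptions for the example, not part of the original post:

```nginx
# Minimal reverse-proxy sketch; backend address/port are illustrative.
events {}

http {
    server {
        listen 80;

        location / {
            # Forward everything to the assumed backend app server.
            proxy_pass http://127.0.0.1:8080;
            # Preserve the original host and client address for the backend.
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }
    }
}
```

With this in place, nginx accepts client connections using its event loop and proxies them to the backend over plain HTTP.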
Front-ending Apache Tomcat with Apache Web Server or IIS is sometimes thought of as a way to improve performance. However, standalone Tomcat performance is already known to be very good. So why add IIS or Apache Web Server in front of it? The answer is scalability and maintenance: front-ending Tomcat with such web servers lets you add more Tomcat instances when load increases, and take instances down for maintenance or upgrades without an outage.
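As a sketch of what this front-ending looks like with Apache Web Server, the fragment below load-balances two Tomcat instances with mod_proxy_balancer; the ports, context path, and member list are assumptions for illustration:

```apacheconf
# Illustrative httpd.conf fragment: requires mod_proxy,
# mod_proxy_http, and mod_proxy_balancer to be loaded.
<Proxy "balancer://tomcats">
    # Two Tomcat instances; add or remove members as load demands.
    BalancerMember "http://localhost:8080"
    BalancerMember "http://localhost:8081"
</Proxy>

# Route the webapp's context path through the balancer.
ProxyPass        "/app" "balancer://tomcats/app"
ProxyPassReverse "/app" "balancer://tomcats/app"
</IfModule>
```

Adding capacity is then a matter of starting another Tomcat instance and adding a `BalancerMember` line, and an instance can be drained for maintenance by removing (or disabling) its member entry.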
This blog provides detailed, end-to-end instructions on how to set up such an environment. It should take you about an hour to configure this setup yourself. Follow the instructions carefully to make sure you do not skip any steps. Here is a high-level view of the setup.
There is no shortage of well-known reasons for wanting to migrate your Java EE web application to open source Tomcat. But without development experience with both your current Java EE application server as well as with Tomcat, it isn’t clear what you must change in your Java EE application to get it to run properly on Tomcat. The benefits of being able to run it on Tomcat are significant — for example, Tomcat is free to run in production, and Tomcat is faster at tasks such as redeployment.
Tomcat saves a significant percentage of developer time compared to Java EE app servers (Source: ZeroTurnaround.com)
It’s easy to migrate your Java EE app to Tomcat as long as it’s mainly a web container app and you know what you might need to change in your app’s code to get it running on Tomcat. Even if your Java EE app uses other Java EE server components, you can still migrate it to run on Tomcat by adding the open source counterparts of those Java EE components to Tomcat; you just need to know which open source components to add, and how to make them work with Tomcat. For example, if your app uses EJB, you could add OpenEJB to Tomcat.
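One common migration step is moving resources that the Java EE server managed, such as a JDBC DataSource, into Tomcat's own configuration so the app's standard `java:comp/env` JNDI lookups keep working. The fragment below is an illustrative `META-INF/context.xml` for that purpose; the resource name, driver, URL, and credentials are all placeholders:

```xml
<!-- Illustrative Tomcat context.xml: a container-managed DataSource
     replacing one previously defined in the Java EE app server.
     Driver class, URL, and credentials are placeholders. -->
<Context>
    <Resource name="jdbc/AppDB"
              auth="Container"
              type="javax.sql.DataSource"
              driverClassName="org.postgresql.Driver"
              url="jdbc:postgresql://localhost:5432/appdb"
              username="appuser"
              password="secret"
              maxActive="20"/>
</Context>
```

The application can then continue to look up the pool under `java:comp/env/jdbc/AppDB`, provided the matching `resource-ref` is declared in `web.xml` and the JDBC driver jar is placed on Tomcat's classpath.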
For the purposes of this blog, I’ll focus on migrating from Weblogic to Tomcat, and on migrating from WebSphere to Tomcat. But, it’s a similar process if you’re migrating from other app servers such as JBoss or Glassfish.
Tomcat 7.0.6 has just been voted the first stable Tomcat 7 release! This makes Tomcat 6.0.x a supported stable release, but no longer the latest stable release, as it had been for several years. A little more than half a year ago we saw the first 7.0.0 beta release, which was exciting, but now the first stable release is ready to use.
A major branch stable Tomcat release is an infrequent event — it’s been four years since the last major branch, Tomcat 6, was voted stable in February of 2007, and six and a half years since Tomcat 5.5 was voted stable in November of 2004.
We’re very happy to announce Tcat 6 R4.3. This latest release of Tcat Server builds on Tcat 6 R4, making life even easier for Tomcat users. Enhancements in this release include:
- Solaris support: Driven by customer demand, Tcat Server now includes a Solaris installer and deeply integrates with the Solaris 10 Service Management Facility (SMF), supporting standard service querying, stops, starts, and restarts.
- Server metrics on the global dashboard: Track critical statistics in one place with the new server metric portlet. You can get an instant view into statistics like requests per second or your favorite JMX metric, either for a single server or across all your servers. With this addition to the global dashboard, administrators now have a truly complete view of their infrastructure on a single screen.
- Alerts for server groups: Easily watch for SLA violations, log file warnings, or servers going down by creating alerts which apply to server groups. This means that you can more easily manage performance at the cluster or application level, rather than just for individual servers.
We all recognize the need for both server and application monitoring in a production environment and Tcat Server makes this easy. However, the development and QA process can also benefit from this feature.
At MuleSoft I’m often asked to write small one-off webapps for different parts of our internal infrastructure. Often they are interim solutions or somewhat experimental. Since these are less critical applications, at best I’ll create some unit tests, create a plan on our CI server, and do some “developer QA,” which amounts to clicking around the basic flow and cajoling some co-workers into basic sanity and acceptance testing. Since I’m also responsible for provisioning the server and deploying the application, I like to take as many shortcuts as I can. Fast and informal is the name of the game here.
However, lately I’ve been using Tcat’s Alerting with these applications in both the production and QA cycles. This has allowed me to spend less time tracking issues and to catch obscure errors and performance issues before I deploy into production.
In my previous post, I talked about what devops is and the need for devops tools around Tomcat. In this post, I want to go in depth around collaboration between devs and ops and how it applies to Tcat Server.
A key concept of the devops movement is that not only are there developers and operations, but there are also lots of people in between. Perhaps there is an ultimate authority on the operations team, but there are still many things you might want to enable developers or devops staff to do outside the operations team:
You wouldn’t necessarily be very excited about reliable, graceful app server restarts — unless you go to restart your server and it doesn’t restart, or unless the restart script corrupted your webapp data. There are times when a reasonably fast, fully reliable restart is a very important feature. Some examples:
- You found that your webapp has a new memory leak, and you've just fixed it in development and finished testing it, and you’re about to deploy the fixed version. But first, you want to undeploy and restart the server to be completely sure the memory leak code is gone. While you’re doing this, your server is offline, and you want to get it serving again as soon as possible, so you run the restart command, but it doesn’t stop. Tomcat stays running, and while you spend time trying to figure out why, your webapp remains undeployed.
- You have more traffic on your site, and now your memory utilization is climbing, and you’ve decided you should increase your Tomcat’s heap memory allocation. You make the configuration change, and you run the restart command, which runs and happily completes, but Tomcat doesn’t budge — it’s still running. You spend the next hour or two trying to figure out why.
- You wrote a shell or batch script that changes your web site in a way that also requires restarting Tomcat for all the right changes to take effect. Your script runs Tomcat’s stop command, and then Tomcat’s start command. But after using it a few times, you find that the script isn’t reliably restarting Tomcat either, due to an error. You spend lots of time looking for the cause of the problem…
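A naive stop-then-start script fails in exactly these ways because Tomcat's stop command returns before (and regardless of whether) the JVM actually exits. One defensive pattern is to watch the process ID and escalate if it hangs around; below is a sketch, where `CATALINA_HOME`, the PID file location, and the 30-second timeout are all assumptions for illustration:

```shell
#!/bin/sh
# Sketch of a restart wrapper that verifies the old JVM actually
# exits before starting a new one. Paths and timeout are illustrative.

# Wait for a PID to exit; escalate to SIGKILL after $2 seconds.
# Returns 0 if the process exited on its own, 1 if it was force-killed.
wait_for_exit() {
    pid=$1
    timeout=$2
    waited=0
    while kill -0 "$pid" 2>/dev/null; do
        if [ "$waited" -ge "$timeout" ]; then
            kill -9 "$pid" 2>/dev/null
            return 1
        fi
        sleep 1
        waited=$((waited + 1))
    done
    return 0
}

# Typical usage (illustrative; assumes CATALINA_HOME and a PID file):
# "$CATALINA_HOME/bin/shutdown.sh"
# wait_for_exit "$(cat "$CATALINA_PID")" 30 || echo "had to force-kill" >&2
# "$CATALINA_HOME/bin/startup.sh"
```

This catches the "restart command completed but Tomcat didn't budge" case: the script knows whether the old process is truly gone before it attempts to start the new one.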
If stock Tomcat restarts integrated well with the operating system and were also fully reliable, it would save you time in cases like these, and it would allow you to automate more. We've seen cases like these with stock Tomcat, and we have improved server restarts as part of Tcat Server.
I’m pretty sure that if Dante were in IT, there would be at least one circle of hell devoted to getting developers and operations to work together well. Horror stories abound. One of my favorite recent ones was about a company where the operations team wouldn’t let the developers expose any UI they could access to manage their applications. The developers decided to get around this by building an API, and then running a local UI against it that was outside the realm of the operations team. This was a ridiculous waste of time and effort.
Granted, not all companies are like this, but you can certainly understand why a division has arisen between the two groups. Developers are on the hook to deliver new functionality. Operations is on the hook to maintain a stable, secure, well-run environment. Developers push change. Operations resists it.
With its focus on production-ready features, Tcat Server has become the leading enterprise Apache Tomcat offering in the world. As more and more leading organizations adopt Tcat Server to run their most demanding applications, they have been telling us which features they need to gain better visibility into their applications and to manage them in production.
We are pleased to announce General Availability of Tcat Server 6 R4, which represents the result of this feedback and is the most comprehensive product in the market for organizations using Tomcat in production.
This release includes:
- Global dashboards: Our customers have told us that while Tcat Server provides a single pane of glass for managing their Tomcat instances, they wanted a summary of all of their instances, the status of their applications, and actionable information, all without drilling into each server instance. Global dashboards address this by providing a view of your entire Tomcat environment, web applications, the status of current deployments, and potential issues on a single screen. The same release adds per-server dashboards for viewing important metrics on an individual server. Many of our customers’ applications expose data via JMX metrics, and these server dashboards can now graph them to show trends.