Scalable NIO Servers – Part 3 – Features

April 13, 2009

We have now analyzed various open source NIO servers for performance and memory consumption.  Per my quick initial testing, only Grizzly, Mina, and Netty were comparable.  Now, let’s analyze features and how each of these frameworks provides them.  For my purposes, I am going to look at the following features, which I consider most important for my project:

  • Intercepting Pattern (ie: Filters)
  • Access to high level, yet efficient, buffers rather than lower level byte buffers
  • Protocol independence and abstraction
  • Socket independence and abstraction
  • Custom protocol support
  • POJO support for encoding/decoding
  • Custom thread model support
  • HTTP support
  • User documentation (user guide, javadoc, source code, examples)

Intercepting Pattern: Filters/Handlers

Let’s look at the first feature: the intercepting pattern.  The intercepting pattern is a classic J2EE pattern (e.g., servlet filters) in which an incoming request and/or outgoing response passes through components that perform cross-cutting logic such as compression, security checks, etc.  These components abstract specific functionality away from the true business logic.  This is a very good idea, as the code that implements your business logic should not have to care about performing compression, performing security checks, etc.  Intercepting filters provide that abstraction.  For NIO servers, this pattern also fits very well because you can define protocol stacks using filters.  For example, you might have an encryption filter that provides SSL translation, a compression filter that performs GZip compression, an authentication filter that performs authentication, and finally an application handler that performs the business logic.  Since each filter is its own implementation with its own purpose, you can easily swap filters in and out, re-order them, or temporarily disable them.
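
To make the idea concrete, here is a minimal sketch of the intercepting pattern in plain Java.  The names (Filter, FilterChain, InterceptDemo) are illustrative, not taken from any of the three frameworks; real chains pass buffers or messages rather than strings.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative types, not framework API.
interface Filter {
    String process(String message);
}

class FilterChain {
    private final List<Filter> filters = new ArrayList<>();

    FilterChain addLast(Filter f) {
        filters.add(f);
        return this;
    }

    // Each filter transforms the message and passes the result to the next.
    String run(String message) {
        for (Filter f : filters) {
            message = f.process(message);
        }
        return message;
    }
}

public class InterceptDemo {
    public static void main(String[] args) {
        FilterChain chain = new FilterChain()
            .addLast(msg -> msg.trim())          // stand-in for a decompression filter
            .addLast(msg -> msg.toUpperCase());  // stand-in for the business handler
        System.out.println(chain.run("  hello  ")); // prints "HELLO"
    }
}
```

Because each step is its own object, re-ordering or disabling a filter is just a change to the chain-building code, which is the property the frameworks exploit.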

Netty provides this functionality through channel handlers, via ChannelUpstreamHandler and ChannelDownstreamHandler.  The handlers get added to a pipeline, and their order determines how they are applied to incoming or outgoing data.  Handlers in Netty can serve several kinds of functions.  First, they can merely perform a check on the data, such as an authorization or session check, and pass the buffer up or down the stream.  Second, they can remove a portion of the buffer, such as a protocol header, and pass the remaining data to subsequent handlers.  Third, they can translate the data into POJOs and pass the POJOs on to higher level handlers.  This allows you to build any number of helpful handlers, as we will see in a future post on building protocol stacks.  For example:

ChannelFactory factory = new NioServerSocketChannelFactory(
        Executors.newCachedThreadPool(),   // boss (acceptor) threads
        Executors.newCachedThreadPool());  // worker threads
ServerBootstrap bootstrap = new ServerBootstrap(factory);
bootstrap.getPipeline().addLast("compressor", new CompressionHandler());
bootstrap.getPipeline().addLast("authenticator", new AuthenticationHandler());
bootstrap.getPipeline().addLast("framer",
        new DelimiterBasedFrameDecoder(8192, Delimiters.lineDelimiter()));
bootstrap.getPipeline().addLast("handler", new ApplicationHandler());

Mina also provides the intercepting pattern, through actual filters.  The filters translate and handle data before the handler gets invoked.  In Netty, everything is a filter/handler; they are the same.  In Mina, filters get applied first and the result gets passed on to the handler.  For example:

NioSocketAcceptor acceptor = new NioSocketAcceptor();
acceptor.getFilterChain().addLast("compressor", new CompressionFilter());
acceptor.getFilterChain().addLast("authenticator", new AuthenticationFilter());
acceptor.getFilterChain().addLast("codec",
        new ProtocolCodecFilter(new TextLineCodecFactory(Charset.forName("UTF-8"))));
acceptor.setHandler(new ApplicationHandler());

If you are familiar with Java web applications and/or J2EE, Mina will feel very familiar.  Its filters and filter chains directly correspond to Filter and FilterChain in J2EE, and its handler directly corresponds to the Servlet.  In other words, in a typical web application, one or more filters process the data first (i.e., compression, authentication, etc.) and then the resulting stream/data gets passed to the servlet.  Mina uses this same technique.

Grizzly also uses the concept of a protocol filter chain, except that rather than having separate handlers and separate filters, it behaves like Netty in that everything is a filter.  In any case, all three frameworks provide a filtering mechanism to easily build protocol stacks and keep key functionality abstracted from the business logic.  For example:

// pic is a ProtocolChainInstanceHandler configured elsewhere
ProtocolChain protocolChain = pic.poll();
protocolChain.addFilter(new CompressionFilter());
protocolChain.addFilter(new AuthenticationFilter());
protocolChain.addFilter(new ApplicationFilter());

Support Handlers

Netty comes out of the box with handlers for Base64 encoding/decoding, delimiter-based codecs, fixed-length codecs, HTTP, logging, Java object serialization/deserialization, Google Protocol Buffer codecs, SSL, simple string codecs, and bandwidth control/traffic shaping.  There are also several utility handlers that may be used to build custom handlers, such as a replay handler, a timeout handler, and a frame decoder.  Mina comes with support for blacklist filters, compression filters, connection-throttling filters, SSL filters, logging filters, protocol codecs such as delimiter-based codecs, and HTTP.  Grizzly provides support for SSL, custom protocol codecs, logging, and HTTP.  They essentially offer similar handlers; however, I personally prefer the handlers and architecture of Netty and believe it provides slightly better support for building custom handlers on top of its existing handlers and utilities.

High Level Buffers

Next, let’s look at how the various frameworks handle byte buffers in NIO.  Byte buffers are a low-level construct in the NIO library, and working with them involves much complexity: maintaining proper positions and limits, handling fragmentation, etc.  Most libraries, including these three, provide a custom high-level object that wraps one or more byte buffers and provides access and utility methods for obtaining the data.
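
The bookkeeping that these wrappers hide can be seen with a raw java.nio.ByteBuffer, where you must flip() between writing and reading and manage mark()/reset() yourself:

```java
import java.nio.ByteBuffer;

public class RawBufferDemo {
    public static void main(String[] args) {
        ByteBuffer buf = ByteBuffer.allocate(64);

        // Write mode: put data, then flip() before reading.
        // Forgetting flip() is a classic NIO bug.
        buf.putInt(42);
        buf.putShort((short) 7);
        buf.flip();

        // mark()/reset() let a decoder peek ahead and rewind --
        // the same trick the framework buffers expose at a higher level.
        buf.mark();
        int first = buf.getInt();   // 42
        buf.reset();                // rewind to the mark
        int again = buf.getInt();   // 42 once more

        System.out.println(first + " " + again + " " + buf.getShort()); // prints "42 42 7"
    }
}
```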

Netty uses an interface named ChannelBuffer.  A ChannelBuffer can wrap multiple ByteBuffer instances and provides transparent zero-copy composition to reduce memory usage and improve performance: rather than creating a composite buffer by copying multiple fragmented buffers, Netty maintains references to the fragments and allows access as if they were a single buffer.  ChannelBuffers also support marking and resetting reader indexes, which is very helpful in custom protocol codecs, as well as searching, slicing, and reading/writing various data types.
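
A toy sketch of the zero-copy composite idea (plain Java, not Netty’s actual implementation; the CompositeView name is made up): a view reads across several fragments without first copying them into one array.

```java
import java.util.Arrays;
import java.util.List;

// Illustrative only -- Netty's real composite buffers do much more.
class CompositeView {
    private final List<byte[]> fragments;
    private int frag = 0, offset = 0;

    CompositeView(List<byte[]> fragments) {
        this.fragments = fragments;
    }

    // Read sequentially across fragment boundaries; no copying occurs.
    byte readByte() {
        byte[] current = fragments.get(frag);
        byte b = current[offset++];
        if (offset == current.length) {
            frag++;
            offset = 0;
        }
        return b;
    }
}

public class CompositeDemo {
    public static void main(String[] args) {
        CompositeView view = new CompositeView(Arrays.asList(
                new byte[] { 'a', 'b' }, new byte[] { 'c' }));
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < 3; i++) {
            sb.append((char) view.readByte());
        }
        System.out.println(sb); // prints "abc"
    }
}
```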

Mina provides an interface named IoBuffer that wraps NIO byte buffers.  An IoBuffer uses an underlying byte buffer instance and provides access and utility methods for interacting with that data.  It also supports auto-expansion by re-allocating the buffer.  As of version 2, however, it does not appear to support zero-copy operations, which are on the list for the upcoming version 3.  Otherwise, IoBuffers provide similar functionality for access, marking, skipping, etc.  One interesting thing to note, however, is that Mina may be moving to more of an InputStream-style interface that manages one or more byte buffers.

Socket and Protocol Independence

For socket and protocol independence, all three libraries built their architectures precisely on those principles.  For example, neither Netty nor Mina is built directly on NIO alone; both support the old-style blocking Java I/O as well, and Mina even supports transports such as RS-232 serial.  By not relying directly on NIO, future changes become easier.  For example, JDK 7 will introduce AIO (or NIO.2) for better asynchronous I/O support, and each of these libraries can add support for it without requiring programs to be rewritten.  Grizzly is already working on a new framework based on AIO.  The only place your program needs to rely on a particular implementation is when it sets up its connection, which is generally only one or two lines of code selecting which transport you want to use (TCP, UDP, NIO, AIO, OIO, etc.).  The rest of the program is completely independent of the underlying transport technology.
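
The “only the setup code picks the transport” idea can be sketched in plain Java.  The Transport, NioTransport, and OioTransport names below are illustrative, not framework API; the point is that everything after the first line codes against the interface.

```java
// Illustrative abstraction -- not any framework's actual types.
interface Transport {
    String name();
}

class NioTransport implements Transport {
    public String name() { return "nio"; }
}

class OioTransport implements Transport {
    public String name() { return "oio"; }
}

public class TransportDemo {
    public static void main(String[] args) {
        // The single line that changes when you swap NIO for OIO, AIO, etc.
        Transport transport = new NioTransport();

        // Everything below is transport-agnostic.
        System.out.println("serving over " + transport.name());
    }
}
```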

Custom Protocols and POJO

In terms of custom protocol support and POJO support, both are easily handled through custom filters/handlers.  As filters/handlers pass data up and down the chain, they can translate the data as needed.  This includes stripping header data or translating the data into a POJO and passing the POJO on.  For example, a simple Java application can serialize and deserialize objects on the wire by using a handler that deserializes incoming bytes into objects and serializes outgoing objects into bytes.  Each of these frameworks supports this by passing Object as the message in the message-received and message-sent callbacks; it is up to the filters/handlers to define how that Object gets translated and handled.
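
What such an object codec does under the hood can be sketched with standard Java serialization.  The encode/decode helper names are made up, and a real codec would also handle framing and length prefixes, which are omitted here:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class PojoCodecDemo {
    // What an encoder handler does on the outbound path: POJO -> bytes.
    static byte[] encode(Serializable pojo) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(pojo);
        out.close();
        return bytes.toByteArray();
    }

    // What a decoder handler does on the inbound path: bytes -> POJO.
    static Object decode(byte[] wire) throws IOException, ClassNotFoundException {
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(wire));
        return in.readObject();
    }

    public static void main(String[] args) throws Exception {
        byte[] wire = encode("ping");   // bytes written to the socket
        Object pojo = decode(wire);     // message handed to the application handler
        System.out.println(pojo);       // prints "ping"
    }
}
```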

Threading Models

Threading support is key to any successful NIO framework.  In the old I/O frameworks, there were generally one or more acceptor threads that merely accepted incoming connections and then created a worker thread per connection.  However, this failed to scale because it required thousands of concurrent threads.  In NIO, everything is asynchronous.  As a result, you have to manage data differently and ensure you have enough threads to handle the data without causing long blocks for other clients.

Since every protocol and server is different, the right number of acceptor threads and worker threads varies widely, and tuning them is a fine-grained process that requires an extensible API.  Further, as data may jump from one worker thread to another, it requires a very well designed API and architecture.  Each of these frameworks has undergone extensive testing, analysis, and trials to ensure it properly handles threads and the NIO model.

Netty provides this API by accepting java.util.concurrent thread pools to specify the threading model, and it also provides handlers for shaping traffic and controlling bandwidth.  Mina provides a similar facility by letting you specify the number of acceptor threads and the thread pools for worker and processor threads; it also provides executor filters to handle threading, bandwidth, etc.
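
The boss/worker split these frameworks expose can be sketched with plain java.util.concurrent pools.  The pool sizes below are illustrative starting points, not recommendations; tuning them for your protocol is exactly the fine-grained process described above.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ThreadModelDemo {
    public static void main(String[] args) throws Exception {
        // A small pool that only accepts connections...
        ExecutorService bossPool = Executors.newFixedThreadPool(1);
        // ...and a larger, tunable pool that does the actual work.
        ExecutorService workerPool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors() * 2);

        // The boss "accepts" and hands the work off to a worker.
        Future<String> result = bossPool.submit(() ->
                workerPool.submit(() -> "handled").get());

        System.out.println(result.get()); // prints "handled"
        bossPool.shutdown();
        workerPool.shutdown();
    }
}
```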


Documentation

In my opinion, Netty has the best documentation, source code, and developer guide available, though Mina is close behind and may eventually overtake it.  I prefer the API style of Netty, so I am a bit biased, I guess.  I struggled to find good documentation for Grizzly apart from the JavaDocs, a few examples, and a few blog postings.  Netty and Mina both make helpful examples readily available.


Conclusion

Overall, I have come to prefer Netty over both Mina and Grizzly for performance, memory consumption, and features.  Note that you should perform your own analysis to decide which library best suits your requirements.

4 Responses to “Scalable NIO Servers – Part 3 – Features”

  1. In regards to documentation, to be honest I’m struggling to find anything… If you go to the Netty home page, there’s only one link to documentation, and it only includes a quick guide, of which I’m confused. Could it be because they just released version 3.0 and the docs aren’t as complete as the last version?

  2. I’ve also used XLightWeb, with success, particularly to create an asynchronous reverse proxy (which is a mere prototype at the time of writing, but it holds up pretty well in tests so far).

  3. Thanks for very good review, it was helpful when choosing between grizzly and netty.

  4. Nice work.

    I am not aligned to any of these options though I have been tracking Grizzly and Netty for a while.

    Grizzly has improved their docs since you wrote your piece. Check it out: