The Node.js Event Loop
Node has quickly become one of the most popular environments for web development. It lets developers write server-side code in JavaScript, runs on Google's V8 engine, and uses an event loop to make efficient use of a single thread. With just a few lines of code, developers can have a web server up and running with minimal configuration.
While its ease of use has contributed greatly to Node's popularity, many developers use Node.js without understanding the underlying event-driven architecture that makes it so effective. This article gives a basic overview of Node's event loop, its supporting architecture, and how it differs from the alternative run-time environments in use today.
Understanding Threads and Blocking I/O
Whenever you query a database, make a REST call, or interact with the file system, you are performing an operation that is considered 'blocking'. This is because the program cannot perform further work until the request completes. For example, if you query a database, your program waits while the database returns its data. If the next function depends on the results of that query, execution cannot continue until the response arrives.
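To make the contrast concrete, here is a minimal sketch using Node's built-in fs module and a hypothetical config.json file: the synchronous read blocks until the file is fully loaded, while the asynchronous read hands the work off and resumes later in a callback.

```js
const fs = require('fs');

// Blocking: execution stops here until the whole file has been read.
// (config.json is a hypothetical file used only for illustration.)
const config = fs.readFileSync('./config.json', 'utf8');
console.log('synchronous read finished');

// Non-blocking: the read is handed off, and the callback runs once data is ready.
fs.readFile('./config.json', 'utf8', (err, data) => {
  if (err) throw err;
  console.log('asynchronous read finished');
});

console.log('this line runs before the asynchronous read completes');
```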
These blocking operations are run on threads. A thread is the smallest unit of execution that the operating system can schedule independently. It's important to remember that while separate threads can run in parallel, they share CPU time and memory. This isn't much of a concern with traditional synchronous programming; however, when you start dealing with hundreds of concurrent threads, the overhead of scheduling and memory starts to hurt performance.
Node's 'Single Threaded' Approach
Most traditional web servers are multi-threaded, meaning a separate thread is spawned or assigned for every request. While this has proven sufficient for handling hundreds of 'blocking' operations simultaneously, it can result in unnecessary wait times and poor CPU allocation.
Node takes a different approach. It uses a single main thread to process I/O by way of an event loop. This thread listens for events and triggers callback functions when those events are detected. If a particular operation is blocking, Node hands it off to a separate thread pool while the main thread keeps running. This way, blocking operations don't hold up non-blocking ones.
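As a rough sketch of that flow, the snippet below starts a repeating timer on the main thread and then kicks off a file read. The read is delegated (file I/O goes through libuv's thread pool), and when it completes its callback is queued back onto the main thread; the timer keeps firing in the meantime.

```js
const fs = require('fs');

// A repeating timer on the main thread: it keeps firing while I/O is in flight.
const tick = setInterval(() => console.log('event loop still responsive'), 100);

// The file read is handed off to the thread pool; its callback is queued back
// onto the main thread once the data is available.
fs.readFile(__filename, 'utf8', (err, contents) => {
  if (err) throw err;
  console.log(`read ${contents.length} characters without blocking the loop`);
  clearInterval(tick);
});
```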
But Why?
This single-threaded approach sounds great, but isn't Node technically multi-threaded? After all, additional threads are still used to process blocking tasks. The main difference lies in how Node utilizes those additional threads. While a traditional multi-threaded web server allocates a separate thread for every request, Node only uses additional threads for operations that are actually blocking. It further dissects each request so that only the truly blocking pieces are sent to the thread pool, while non-blocking I/O stays on the single main thread, as the sketch below illustrates. This results in much more efficient memory allocation and CPU usage, making Node more performant.
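One way to see this split is with crypto.pbkdf2, which Node offloads to the libuv thread pool (four threads by default, adjustable via the UV_THREADPOOL_SIZE environment variable), while a timer on the main thread keeps running. The sketch below assumes default settings; exact timings will vary by machine.

```js
const crypto = require('crypto');

const start = Date.now();

// CPU-heavy hashing is sent to the libuv thread pool, so the four calls below
// can run in parallel without tying up the main thread.
for (let i = 1; i <= 4; i++) {
  crypto.pbkdf2('password', 'salt', 100000, 64, 'sha512', () => {
    console.log(`hash ${i} done after ${Date.now() - start} ms`);
  });
}

// Meanwhile the main thread stays free to service other callbacks.
// unref() lets the process exit once the hashes are done.
setInterval(() => console.log('main thread tick'), 50).unref();
```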
Conclusion
It should be noted that multi-threaded solutions are still widely used today. Proponents of Java-based web servers (such as JBoss and Apache Tomcat) argue that running multiple threads safeguards against failures: if one thread fails, the other independent threads keep processing. With Node, if one callback becomes expensive, it can slow down the whole event loop.
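The snippet below illustrates that criticism under an assumed two-second burst of synchronous work: while the busy-wait runs, the timer callbacks cannot fire, because everything shares the one main thread (stop the demo with Ctrl+C).

```js
// A timer that should fire roughly every 100 ms.
const start = Date.now();
setInterval(() => console.log(`tick at ${Date.now() - start} ms`), 100);

// A CPU-heavy synchronous block: while it runs, no other callbacks (including
// the timer above) can be processed on the shared main thread.
setTimeout(() => {
  const begin = Date.now();
  while (Date.now() - begin < 2000) { /* busy-wait for ~2 seconds */ }
  console.log('expensive work finished; notice the gap in the ticks');
}, 500);
```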
Despite such criticism, Node's event-based model has proven highly efficient at handling I/O-heavy workloads with modest CPU and memory overhead. Its event-driven model uses callbacks to keep a single main thread active, relying on a separate thread pool only when necessary. Of course, the right choice depends on what you are trying to achieve with your architecture, but Node's extensibility means additional modules and libraries are continually addressing its weaknesses. For more on whether Node.js is right for your app, see Why Node.js.