Web server ‘hello world’ benchmark: Go vs. Node.js vs. Nim vs. Bun

The Web is a convenient interface to your software. Often, if you have an existing application, you may want to provide Web access to it over HTTP. Or you may want to build a small, specialized Web application. In such instances, you do not want to deploy a full-fledged Web server (e.g., Apache or IIS).

There are many popular frameworks for writing little web applications. Go and JavaScript (Node.js) are among the most popular choices. Reportedly, Netflix runs on Node.js; Uber moved from Node.js to Go for better performance. There are also less popular options such as Nim.

An in-depth review of their performance characteristics would be challenging. But if I just write a little toy web application, will I see a difference? A minimalist application provides a useful reference point, since more complex applications are likely to run slower.

Let us try it out. I want the equivalent of ‘hello world’ for web servers. I also do not want to do any fiddling: let us keep things simple.

A minimalist Go server might look as follows:

package main

import (
  "fmt"
  "io"
  "log"
  "net/http"
)

func main() {
  http.HandleFunc("/simple", func(w http.ResponseWriter, r *http.Request) {
    io.WriteString(w, "Hello!")
  })
  fmt.Printf("Starting server at port 3000\n")
  if err := http.ListenAndServe(":3000", nil); err != nil {
    log.Fatal(err)
  }
}
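If you save this file as main.go (the file name is my choice, not specified anywhere), you can start the server with:

go run main.go

and then point your browser or benchmarking tool at http://localhost:3000/simple.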

A basic JavaScript (Node.js) server might look like this:

const f = require('fastify')()
f.get('/simple', async (request) => {
  return "hello"
})
f.listen({ port: 3000})
  .then(() => console.log('listening on port 3000'))
  .catch(err => console.error(err))

It will work as-is in an alternative runtime such as Bun, but to get the most out of the Bun runtime, you may need to write Bun-specific code:

const server = Bun.serve({
  port: 3000,
  fetch(req) {
    const url = new URL(req.url);
    const pname = url.pathname;
    if (pname === '/simple') {
      return new Response('Hello');
    }
    return new Response('Not Found.');
  }
});

Nim offers a nice way to achieve the same result:

import options, asyncdispatch
import httpbeast
proc onRequest(req: Request): Future[void] =
  if req.httpMethod == some(HttpGet):
    case req.path.get()
    of "/simple":
      req.send("Hello World")
    else:
      req.send(Http404)
run(onRequest, initSettings(port=Port(3000)))
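Note that Nim performance depends on compiling in release mode. Assuming the file is named server.nim (my name, not from the post), a typical invocation might be:

nimble install httpbeast
nim c -d:release server.nim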

An interesting alternative is to use uWebSockets.js with Node:

const uWS = require('uWebSockets.js')
const port = 3000;
const app = uWS.App().get('/simple', (res, req) => {
  res.end('Hello!');
}).listen(port, (token) => {
  if (token) {
    console.log('Listening to port ' + port);
  } else {
    console.log('Failed to listen to port ' + port);
  }
});

We can also use C++ with the lithium library:

#include <lithium_http_server.hh>
int main() {
  li::http_api my_api;
  my_api.get("/simple") =
    [&](li::http_request& request, li::http_response& response) {
      response.write("hello world.");
    };
  li::http_serve(my_api, 3000);
}
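For reference, here is a plausible compile line, assuming the Lithium header is on the include path and OpenSSL and Boost are installed; the exact flags depend on your system:

g++ -O3 -std=c++17 server.cc -lpthread -lboost_context -lssl -lcrypto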

I wrote a benchmark; my source code is available. I ran it on a powerful IceLake-based server with 64 cores. As is typical, such big servers have relatively low clock speeds (a base frequency of 2 GHz, up to 3.2 GHz). I use a simple bombardier command as part of the benchmark:

bombardier -c 10 http://localhost:3000/simple

You can increase the number of concurrent connections to 1000 (-c 1000), as shown below. My initial tests used autocannon, which is a poor choice for this task.
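For instance, the higher-concurrency run uses the same command with more connections:

bombardier -c 1000 http://localhost:3000/simple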

My results indicate that Nim is doing quite well on this toy example.

system | requests/second (10 connections) | requests/second (1000 connections)
Nim 2.0 and httpbeast | 315,000 +/- 18,000 | 350,000 +/- 60,000
GCC 12 (C++) + lithium | 190,000 +/- 60,000 | 385,000 +/- 80,000
Go 1.19 | 95,000 +/- 30,000 | 250,000 +/- 45,000
Node.js 20 and uWebSockets.js | 100,000 +/- 25,000 | 100,000 +/- 35,000
Bun 1.04 | 80,000 +/- 15,000 | 65,000 +/- 20,000
Node.js 20 (JavaScript) | 45,000 +/- 7,000 | 41,000 +/- 10,000
Bun + fastify | 40,000 +/- 6,000 | 35,000 +/- 9,000

*Jarred Sumner, the author of Bun, said on X that fastify is not currently fast in Bun, but that Bun.serve() is more than twice as fast as node:http in Bun.

My web server does very little work, so it is an edge case. I have also not done any configuration: it is ‘out of the box’ performance. Furthermore, the server is probably more powerful than anything web developers will use in practice.

There is considerable noise in these results, and you should not trust my numbers entirely. I recommend you try running the benchmark for yourself.

I reviewed several blog posts on this topic, all concluding that Go is faster.

It would be interesting to add C, Rust and Zig to this benchmark.

Regarding the C++ solution, I initially encountered many difficulties. Using Lithium turned out to be simple: the most difficult part was ensuring that OpenSSL and Boost are installed on your system. My solution is just as simple as the alternatives. The author of Lithium offers a nice twist: he explains how to run a Lithium server from a script, using a Docker container. Doing it in this manner means that you do not have to worry about installing libraries on your system. Running a server in a Docker container is perfectly reasonable, but there is a performance overhead, so I did not use this solution in my benchmark.
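As an illustration, a containerized deployment amounts to publishing the server port; the image name below is hypothetical (see the Lithium documentation for the actual script):

docker run --rm -p 3000:3000 my-lithium-server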

While preparing this blog post, I had the pleasure of compiling software written in the Nim language for the first time. I must say that it left a good impression. The authors state that Nim was inspired by Python, and it does feel quite like Python. I will revisit Nim later.
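To give a flavour of the Python-like feel, here is a toy Nim snippet of my own (not part of the benchmark):

import std/strformat

proc greet(name: string): string =
  # the block is delimited by indentation, as in Python;
  # result is Nim's implicit return variable
  result = fmt"Hello, {name}!"

echo greet("world")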

Published by

Daniel Lemire

A computer science professor at the University of Quebec (TELUQ).

34 thoughts on “Web server ‘hello world’ benchmark: Go vs. Node.js vs. Nim vs. Bun”

  1. I have already explained my point on the expected performance of simple vs. complex cases, but I also suggest checking that all the frameworks are using the available CPU resources; they often do not come with decent ergonomics and defaults for that. Furthermore, if the load generator runs on the same machine, isolate the cores of the two processes, trying hard to make the server the bottleneck by constraining its resources.

  2. Not a fair comparison at all. Why did you use a slow third-party library for Bun/Node.js? It seems like you did not know what you were doing. Try HyperExpress or uWebSockets.js.

  3. Those numbers are hiding something interesting. I have never used bombardier.
    Could you clarify how long the benchmark ran for? This would let us know at how many real connections/s each one is capped, and whether any reached the 10/1000 marks.

  4. Only Nim is using a third-party library built for speed—it’s only 777 lines of code, and it doesn’t even support HTTP/2.

    For shame!

    1. httpbeast is not a third-party library; it is written by a Nim core team member. And, by the way, it does not pump up the speed itself: it relies on the quite speedy performance of Nim’s async. Its task isn’t speed per se but rather to provide HTTP-related functionality.

  5. I tried building the equivalent in C++, but it was so painful that I eventually gave up.

    I hear you. Just to capture some of the knowledge I’ve built up, I wrote a server entirely in C, which I call “sloop” (server loop):

    https://github.com/chkoreff/sloop/tree/main#readme

    It does the necessary buffering so it can print the “Content-length” header. At some point I could change it to chunked transfer encoding.

    The servers I actually use now are written in Fexl, so I can just call run_server with an arbitrary Fexl program to interact with clients, e.g.:

    https://github.com/chkoreff/Fexl/blob/master/src/test/server.fxl

    That Fexl code ultimately calls this “type_start_server” routine written in C:

    https://github.com/chkoreff/Fexl/blob/master/src/type_run.c#L355

  6. Interesting that the numbers are so low, given that you used such a big server.

    On an AMD Ryzen 9 5900HX, I get the following numbers for Go 1.21.1 and Nim 2.0.0:

    go 1.21.1

    bombardier -c 10 http://localhost:3000/simple
    Reqs/sec 246226.18
    bombardier -c 1000 http://localhost:3000/simple
    Reqs/sec 451854.02

    nim 2.0.0

    bombardier -c 10 http://localhost:3000/simple
    Reqs/sec 440969.81
    bombardier -c 1000 http://localhost:3000/simple
    Reqs/sec 799546.14

    I also tested this with fasthttp for Go:

    package main

    import (
      "io"
      "log"

      "github.com/valyala/fasthttp"
    )

    func main() {
      h := requestHandler
      if err := fasthttp.ListenAndServe(":3000", h); err != nil {
        log.Fatalf("Error in ListenAndServe: %v", err)
      }
    }

    func requestHandler(ctx *fasthttp.RequestCtx) {
      io.WriteString(ctx, "Hello World")
    }

    bombardier -c 10 http://localhost:3000
    Reqs/sec 351120.08
    bombardier -c 1000 http://localhost:3000
    Reqs/sec 601480.20

  7. I also tried Zig (I am not a Zig expert).

    It is based on the simple HTTP example from https://github.com/zigzap/zap:

    const std = @import("std");
    const zap = @import("zap");

    fn on_request_minimal(r: zap.SimpleRequest) void {
      r.sendBody("Hello World!") catch return;
    }

    pub fn main() !void {
      var listener = zap.SimpleHttpListener.init(.{
        .port = 3000,
        .on_request = on_request_minimal,
        .log = false,
        .max_clients = 100000,
      });
      try listener.listen();

      std.debug.print("Listening on 0.0.0.0:3000\n", .{});

      // start worker threads
      zap.start(.{
        .threads = 16,
        .workers = 16,
      });
    }

    Results:

    bombardier -c 10 http://localhost:3000
    Reqs/sec 312406.93

    bombardier -c 1000 http://localhost:3000
    Reqs/sec 470699.26

  8. I tried building the equivalent in C++, but it was so painful that I eventually gave up

    Did you ever try Seastar? (a C++ server framework)

  9. In the meantime, it is remarkable that a high-level language like Nim achieves such performance and scale. Also, the implementation in Nim seems very idiomatic.

  10. Regarding the C++ solution, I initially encountered many difficulties. Using
    Lithium turned out to be simple: the most difficult part was ensuring
    that OpenSSL and Boost are installed on your system.

    Since you are using boost::context, it means that you are using stackful coroutines instead of the stackless C++20 ones.

    If you want to reduce system dependencies, you could opt for standalone Asio (without Boost), which would allow you to couple it with co_* and awaitable (C++20, stackless only).

    1. Can you illustrate how you’d build a small specialized web server similar to the examples above using Nginx? Suppose you already have your software and you want to add a small HTTP server to it: how do you link against Nginx as a software library?

        1. Do you have code samples on how I can use openresty to embed a web server in my application?

          If you have an Apache, IIS or Nginx server, you can build web applications on top of it but that is not what my blog post was about. My blog post was about building small web applications (in different programming languages) using existing software libraries.

          I am considering the scenario where you have a program and, when you launch the program, it starts a web server.

      1. Just install nginx and edit /etc/nginx/sites-enabled/default with:

        http {
          server {
            listen 3000;
            location /simple {
              return 200 'Hello';
            }
          }
        }

        Save the file.

        sudo service nginx restart
        bombardier -c 10 http://localhost:3000/simple

        1. Let me restate what I wrote in the comment you are replying to:

          If you have an Apache, IIS or Nginx server, you can build web
          applications on top of it but that is not what my blog post was about.

  11. Nim compiles to either C or LLVM IR, right? I don’t see any details in the nimble code: what became of your code? Is each of these solutions building an executable?

    It’s impressive that Nim wins given that there isn’t serious optimization energy going into it, but I’m not yet convinced of the validity of this benchmark.

    You might also include actual web servers to compare, like nginx (written in C) and Microsoft’s Kestrel (maybe written in C++, possibly C#).

    I’d switch to HTTP/2 or HTTP/3, since those are dominant now on the web. For example, lemire.me defaults to HTTP/3.

    1. The use cases here are to add a web server to your application (whether it is written in JavaScript, C, C++, Nim), or to build a small specialized web server.

      Would you share your code… e.g., how do I do the equivalent of my C++ application (see code in the blog post), say in C, using nginx as a library?

      Or do you mean the reverse… You have a web server, and you integrate your code inside it (e.g., use CGI calls). That’s a whole other paradigm, and not really comparable.

  12. So, I just read on the uWebSockets.js GitHub page that it is the default server for Bun. So, I am curious as to why the Node.js version would be faster (maybe margin of error)?

    My stack is supposed to wrap the HTTP service. See copious-world repositories. E.g. in copious-transitions, one is supposed to be able to create a subclass of the lib/general-express.js and then run without changing much else. Otherwise, I am using JSON messaging on micro-services, and those are set up for TLS, UDP, etc. So, “just working” is a goal that has gone through a few renditions, but optimization paths into other languages is an eventual goal.

    So, one possibility is to work with components that are all about intrinsics. Perhaps the ones found at benchmarks (warning about brainf** and language). So, plugging in JSON and base64 libs might be good (maybe Bun FFI is better than LuaJIT). Also, SHA-256 intrinsics are out there, and blake3 is nice to have without them, but it matters less with them. You may see that V remains viable given that they did work on MatMul.

  13. Unfair benchmark: an obvious difference in the implementations is that only two of them return “Hello!” in the body, while the others each return their own variation.

    If the implementers didn’t even ensure this simple comparison of the implementations, how serious is this work?
