Ruby Rack servers benchmark

Facing the question of which Ruby Rack server performs best behind an Nginx front-end, and failing to google up any exact comparison, I decided to do a quick test myself.

The servers:

  • Unicorn
  • Thin
  • Passenger (Phusion Passenger)

Later I tried to test the uWSGI server too, as it now boasts a built-in Rack module, but dropped it for two reasons: (1) it required tweaking the OS to raise kern.ipc.somaxconn above 128 (which no other server needed) and later Nginx's worker_connections above 1024 as well, and (2) it still lagged far behind at ~130 req/s, so after a successful run at a concurrency of 1000 requests, I got tired of waiting for the tests to complete and gave up seeking its breaking point. Still, uWSGI is a very interesting project that I will keep my eye on, mostly because of its Emperor and Zerg modes and the ease of deployment for dynamic mass-hosting of Rack apps.
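For reference, raising the backlog limit on OS X/BSD is a one-line sysctl, and worker_connections lives in the events block of nginx.conf; the values below are only illustrative, not the exact ones from my test runs:

    # raise the kernel's listen backlog (BSD / OS X sysctl)
    sudo sysctl -w kern.ipc.somaxconn=1024

    # nginx.conf
    events {
      worker_connections 2048;
    }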

As uWSGI was originally developed for Python, I wasted a bit of time trying to get it working with some simple Python framework for comparison, but that failure was probably due to a lack of knowledge on my part.

Testing

The test platform consisted of:

To set up a basic test case, I wrote a simple Rack app that responds to every request with the request's IP address. I decided to output the IP because it involves at least some Ruby code in the app while still staying very simple.

ip = lambda do |env|
  [200, {"Content-Type" => "text/plain"}, [env["REMOTE_ADDR"]]]
end
run ip
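As a quick sanity check (outside of the benchmark itself), the app can be run standalone with rackup; assuming the file above is saved as config.ru, something like this should echo the client address back:

rackup config.ru -p 9292
curl http://127.0.0.1:9292/    # in another terminal; prints 127.0.0.1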

Tweaking the concurrency number N (see below) with a resolution of 100, I found the breaking point of each server (where it started returning errors) and recorded the previous throughput (the highest one that produced no errors).

Results

The results are as follows:

  1. Unicorn – 2451 req/s @ 1500 concurrent requests
  2. Thin – 2102 req/s @ 900 concurrent requests
  3. Passenger – 1549 req/s @ 400 concurrent requests

The following are screenshots from JMeter results:

Unicorn @1500 concurrent requests
Thin @900 concurrent requests
Passenger @400 concurrent requests

None of these throughputs are bad, but Unicorn and Thin still beat the crap out of Passenger.

Details

The JMeter test case:

  1. ramp up to N concurrent requests
  2. send a request to the server
  3. assert that the response contains an IP address
  4. loop all of this 10 times

Nginx configuration:

    # Passenger
    server {
      listen 8080;
      server_name localhost;
      root /Users/laas/proged/rack_test/public;
      passenger_enabled on;
      rack_env production;
      passenger_min_instances 4;
    }

    # Unicorn
    upstream unicorn_server {
      server unix:/Users/laas/proged/rack_test/tmp/unicorn.sock fail_timeout=0;
    }

    server {
      listen 8081;
      server_name localhost;
      root /Users/laas/proged/rack_test/public;

      location / {
        proxy_pass http://unicorn_server;
      }
    }

    # Thin
    upstream thin_server {
      server unix:/Users/laas/proged/rack_test/tmp/thin.0.sock fail_timeout=0;
      server unix:/Users/laas/proged/rack_test/tmp/thin.1.sock fail_timeout=0;
      server unix:/Users/laas/proged/rack_test/tmp/thin.2.sock fail_timeout=0;
      server unix:/Users/laas/proged/rack_test/tmp/thin.3.sock fail_timeout=0;
    }

    server {
      listen 8082;
      server_name localhost;
      root /Users/laas/proged/rack_test/public;

      location / {
        proxy_pass http://thin_server;
      }
    }

As is only logical, having the number of processes match the number of cores (dual-core with Hyper-Threading = 4 logical cores) gave the best results for both Thin and Unicorn (though the variations were small).

Unicorn configuration

Passenger required no additional configuration, and Thin was configured from the command line to use 4 servers and Unix sockets (see the example after the Unicorn config below), but Unicorn required a separate config file (I modified the Unicorn example config for my purpose):

worker_processes 4
working_directory "/Users/laas/proged/rack_test/"
listen '/Users/laas/proged/rack_test/tmp/unicorn.sock', :backlog => 512
timeout 120
pid "/Users/laas/proged/rack_test/tmp/pids/unicorn.pid"

preload_app true
if GC.respond_to?(:copy_on_write_friendly=)
  GC.copy_on_write_friendly = true
end
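For completeness, command lines matching the socket paths above would look roughly like this (the flags are standard Thin and Unicorn CLI options, not a record of the exact invocations used in the test):

thin start --servers 4 --socket /Users/laas/proged/rack_test/tmp/thin.sock --environment production --daemonize
unicorn -c unicorn.rb -E production -D

With --servers 4 and a base --socket path, Thin creates the numbered sockets thin.0.sock through thin.3.sock that the upstream block expects; the Unicorn line assumes the config above is saved as unicorn.rb.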

Disclaimer

I admit that this is an extremely basic test and that with better configuration much more can be squeezed out of all of these servers, but this simple test served my purpose and will hopefully be of help to others too.

git terminal graph with branch names

I have searched several times for how to produce a graph tree in the terminal similar to Gitk or other GUI visualizers. Compiling the knowledge from this StackOverflow question, I came up with the following command:

git log --graph --full-history --all --color --date=short --pretty=format:"%Cred%x09%h %Creset%ad%Cblue%d %Creset %s %C(bold)(%an)%Creset"

UPDATE: I added the author name to the end of the line in bold so that you can blame people quicker.

UPDATE 2: I changed the command to use Git color codes instead of ANSI escapes to make it easier to read.

This produces the graph shown in the image.

(Unfortunately the %d placeholder does not support separate colors for local and remote branches, as --decorate itself does, which would be even better.)

To make it useful, I have aliased all of this to the much shorter command git tree, which can be done with the following git config line:

git config --global alias.tree 'log --graph --full-history --all --color --date=short --pretty=format:"%Cred%x09%h %Creset%ad%Cblue%d %Creset %s %C(bold)(%an)%Creset"'

NB! Notice the two sets of quotation marks.
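Since Git aliases simply append any extra arguments, the usual git log options still work with the alias, for example limiting the output (the -20 here is just an illustration):

git tree -20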

Brolog is now CommitBlog

Today I woke up with a new name for my blog. It is now known as CommitBlog.

Simple as that.

The why

When starting something new, there is always the problem of the name. The worst part is that most of the time you need to come up with a name right in the middle of creation, when you least know whether the beast will walk, swim or fly. And the name sticks. And sometimes the name stinks too. After the initial moments, I never really liked the name Brolog (a not-so-clever wordplay on Prolog and blog). Given that I have never actually seen Prolog in action and know nothing of the language, it seemed a bit false. So today I woke up with a new name, one that relates more to what I do daily – commit to Git. Or write to the commit log, if you will.

So. There you have it. CommitBlog.

Inkscape CSV merge

I just uploaded the inkscape_merge gem v0.1.0.

This is a script to merge SVG files with CSV data files using Inkscape, producing one output file (e.g. PDF) per data row.

The script is inspired by and based on Aurélio A. Heckert's excellent InkscapeGenerator (wiki.colivre.net/Aurium/InkscapeGenerator).

Heckert's original script unfortunately broke for me several times, so I took the opportunity to rewrite it and make it more extensible for the future.

 

Usage

Install the gem

gem install inkscape_merge

Create files

Create a CSV data file with the first row as a header. The values from this row are used as keys for the substitutions in the SVG file.

Create an SVG file that contains variables of the form:

%VAR_name%

Where `name` is the name of a column in the CSV file created previously. These variables can be anywhere inside the SVG, from plain text nodes to color values. The script simply `gsub`s these placeholders as text, without any further parsing.
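As a minimal illustration (the greeting column and the file contents are made up for this example; only the %VAR_name% placeholder convention comes from the script), a names.csv could contain:

name,greeting
Alice,Happy birthday!
Bob,Season's greetings!

and somewhere inside postcard.svg there could be a text node like:

<text x="20" y="40">%VAR_greeting%, %VAR_name%!</text>

Each data row then produces one output file with the placeholders replaced by that row's values.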

Run the script

The script requires at least three arguments:

  • the input SVG file
  • the input CSV file
  • and the output file `pattern`

Note: the output pattern undergoes the same substitutions as the SVG file, which makes it easy to create unique file names. Additionally, the output pattern can contain `%d`, which is replaced with the current row number.

Example:

inkscape_merge -f postcard.svg -d names.csv -o postcards/card_%d.pdf

This produces files like:

  • postcards/
    • card_1.pdf
    • card_2.pdf