I have seen someone describe Docker as "chroot on steroids", and it really is the best description I can think of. Chroot changes the path resolution algorithm so that / resolves to something else, hiding everything above it from the processes running inside the chroot. Docker takes this a step further and also isolates processes, mount points, user ids, and probably a lot of other stuff. I can see it being useful because you can ship a Dockerfile with your application that packages the whole stack your application needs: installed packages, their configuration, and everything else. Another nice thing is that you can have your own repository of Docker images (basically prebuilt Docker applications), and when you release a new version of your application, you push it into the registry and have your servers pull down the new version. This makes rolling back to a previous version trivial.

Here is an example that sets up Syncthing (see later) in an isolated Debian box:

FROM debian:wheezy
# Version and platform of the Syncthing release to install; the download
# URL below assumes the GitHub release naming scheme.
ENV VERSION v0.10.30
ENV RELEASE linux-amd64
WORKDIR /home/root
RUN apt-get update && apt-get install -y wget ca-certificates && \
    apt-get clean && \
    useradd -m syncthing && \
    wget -O - https://github.com/syncthing/syncthing/releases/download/$VERSION/syncthing-$RELEASE-$VERSION.tar.gz | \
    tar -xzf - -C /usr/local && \
    ln -s /usr/local/syncthing-$RELEASE-$VERSION/syncthing /usr/local/bin
EXPOSE 8080 22000 21025/udp
USER syncthing
ENTRYPOINT [ "syncthing" ]

You save this as a file called Dockerfile, run docker build ., then docker run <the-id-that-docker-build-printed>, and you have Syncthing running in a "virtual machine". If you decide you no longer need it, you just stop the container and delete the image. Updating to a new version is just a matter of editing the ENV lines in the Dockerfile.


I love Ansible. I have now set up my dotfiles repo to use Ansible, and provisioned my own VPSes with it. It makes it trivial to go from special snowflake to phoenix server. I had tried playing around with Puppet before, but gave up (probably too quickly) on setting up the certificates it requires at the start. Ansible uses SSH as its transport layer, so all you need is an SSH account to use it. You can automate provisioning home servers, web applications with load balancers, or even deploying new versions of your application.

It's rapidly growing, and has support for newish stuff like Docker. If you do find yourself in need of something it does not support, it's trivial to write plugins for it; it's basically text processing all the way. I needed a module to interface with loginctl (systemd's login manager), and after an hour I had it ready. If you use Python (the recommended language for plugins), it has premade helpers for you. I used to lose quite a few small modifications to my system, because automating them with shell scripts was a pain and too brittle, but it's trivial to do with Ansible.


There are some Dropbox alternatives popping up, like Syncany, Seafile, SpiderOak, BTSync, etc. that you can set up for yourself. Naturally I wanted something too, so I could be independent of a third party, and I went with Syncthing (now called Pulse). It's open source, self-hosted, end-to-end encrypted, decentralized, and uses a BitTorrent-like protocol for syncing (so more nodes, more speed). It's currently at 0.10.30, but it's quite stable and growing rapidly, with 1526 closed issues and 136 open. I'm running 0.10.21 at the moment to synchronize my MP3 collection (49 GB of data) between 3 computers, and it's working like a charm. Unfortunately it is still missing some features that keep it from being a complete replacement for Dropbox. The whole GUI is a webpage, so there is no tray indicator, which makes me want to open the webpage after I make some changes, just to make sure it picked them up. There is no inotify support for listening to filesystem events, so it uses polling.

There is also an Android client, though I haven't tried it. This was also the first app that I Dockerized.

As a side note, take a look at the manifesto of the team behind Syncthing, called Indie. It's fucking awesome, and I'm definitely keeping a close eye on them; there is a real need to raise awareness of corporate surveillance.


Browserify is a JavaScript bundler, like RequireJS and Webpack. I had a fairly large project using RequireJS, but if I had to choose today, it would be Browserify without a doubt. They both do the same thing (work out the dependency graph between your JavaScript files and bundle them together). RequireJS uses AMD (define(['depA', 'depB'], function (depA, depB) {})), while Browserify uses CommonJS (var depA = require('depA');), with the latter being a bit more readable, because you don't keep piling arguments onto your callback, and your code stays at the margin instead of nesting inside it.
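To make the difference concrete, here is a runnable sketch of the two styles. The module names are made up, and the tiny define/require implementations are stand-ins so the snippet runs on its own (RequireJS and Node provide the real ones):

```javascript
// Fake module registry so this sketch is self-contained.
var registry = { depA: 40, depB: 2 };

// AMD (RequireJS): dependencies arrive as callback arguments,
// so your whole module body nests one level deeper.
function define(deps, factory) {
  return factory.apply(null, deps.map(function (d) { return registry[d]; }));
}
var amdResult = define(['depA', 'depB'], function (depA, depB) {
  return depA + depB;
});

// CommonJS (Browserify/Node): flat requires, code stays at the margin.
function require(id) { return registry[id]; }
var depA = require('depA');
var depB = require('depB');
var cjsResult = depA + depB;

console.log(amdResult, cjsResult); // 42 42
```

Notice how the AMD version's body sits one indent level deeper; with a dozen dependencies, the callback's argument list gets unwieldy fast.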

There is no async loading in Browserify out of the box; actually, there is nothing in it out of the box. It only does one thing: you give it one JavaScript file, and it spits out another JavaScript file. Nothing else. It traverses the dependencies of that one file, concatenates and deduplicates them, then outputs the bundle. If you need anything else, it's provided as a plugin. In contrast, RequireJS insists on async loading, for which you have to bundle the loader itself, and its r.js tool (which is used for concatenating the files together) is a mess to set up so that it works in both production and development. Partitioning (multiple entry points) is also much easier with Browserify's partition-bundle than with r.js.


Another JavaScript bundler. Not a fan so far. It provides 99% of the same things as Browserify, but it was harder to get working. Originally I needed it for React Hot Loader, because that only works with Webpack, but then I found out that Browserify has a similar plugin, so I converted to that and haven't looked back. I'm willing to take another look in the future, but I don't see what it offers that Browserify doesn't. There is some nice reading here and here on the (minor) differences if you are interested.


I love JavaScript. It's a good language, and everybody who is not an idiot has figured out that it's not the language that makes your code shit. Yeah, it has its fair share of quirks, but as Bjarne Stroustrup said, there are two kinds of programming languages: the ones everybody bitches about, and the ones nobody uses. JavaScript was PHP's only friend in school, because they both got abused by the other kids. With that said, I don't get why anyone would build a serious application in Node, because the tooling around it is so immature and fragmented. Everything is 0.1. On npm, 0.1 is legacy, 0.2 is stable, and 0.3 is bleeding edge. Expect to do a lot of googling and GitHub browsing, because things break. I can't for the life of me figure out why node-inspector (the debugger) insists on stopping on a line that does not have a breakpoint (some function called _tickDomainCallback), or why it does not let me create breakpoints on the first run but works OK when I open it a second time, and Google doesn't know either.
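For the record, here are a few of the classic quirks mentioned above; none of them are fatal once you know they exist, which is rather the point:

```javascript
// Three classic JavaScript quirks, all running as written in Node:
console.log(typeof null);        // "object" -- a long-standing spec wart
console.log(0.1 + 0.2 === 0.3);  // false -- IEEE 754 floats, like most languages
console.log([] + {});            // "[object Object]" -- implicit string coercion
```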

I did read several success stories about how someone (the last one, I think, was the BBC) rebuilt something in Node, and now it's oh so much better. My personal opinion is that if you get a clean slate to rebuild something (or just build something), you would get the same end result in most languages. It's not the language that made it so good, but the skill and experience of the programmers. I can see Node exploding (even more) in the future though, but right now I would not build something for a client with it unless they specifically requested it.

Oh, and by the way, Node has already been forked (io.js).