At the beginning of the year, I released Divide and map. Now. – the damn project. I present it as a proof of concept that the HOT Tasking Manager can be done better.

The damn project consists of multiple repositories – server, client, manager, plugin, and deploy. The damn deploy repository contains the setup used for running an instance of the server, client, and manager. The only changes to the master branch of the damn deploy repository are the secrets in the .env file and the email address in traefik.yml.

Why is there a separate repository for deployment? It follows the philosophy of "do one thing and do it well." The team of administrators deploying the project shouldn’t have to care about the development of the server or any client.

Why am I writing this? I already wrote about the improvements to the client since the damn project release. And the development continues! I like to improve things. I think that deployment is essential. Finally, there is Ray Kiddy’s comment on gitter:

Thinking more about it, I am interested in seeing something that is easy to provision and get running. So I used “flexibility” as a code word for the ability to bring up new damn instances, at will, and without much hassle.

So I was thinking about how to make deployment easier.

How to deploy

I believe that if I want to describe something, it is best to start from scratch. So, here we go.

Setting up virtual private server

At, I created a new Debian 10 droplet, just $5/mo. At the time of publication, this droplet is already down, and the testing instances do not work. See for the running instances I am not willing to shut down.

Then I added,, and DNS A records pointing to the IP address of the droplet. Now, I can ssh in.


All the deployment is in the damn deploy repository. The howto is in the readme file. So, I will just follow the readme.

The first command failed (git clone ...). There was no git command on my test-server and no info about that in the readme. Fixed!

So, install the prerequisites:

apt update && apt install -y git docker docker-compose

and clone the damn deploy repository:

git clone
cd damn_deploy

Set up the environment

In the env file (just a link to .env), I am setting up:

Then, I generate the passwords with dd if=/dev/urandom bs=8 count=8 | base64. Just take the base64 part of the output; dd prints its transfer statistics to stderr, so adding 2>/dev/null to the dd command silences them. (You see that some knowledge of the command line is necessary. Sorry.)
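Putting it together, the one-liner with stderr silenced looks like this (the tr -d '\n' is my addition to join base64's wrapped output lines into one string):

```shell
# 64 random bytes from the kernel, base64-encoded;
# dd's transfer statistics go to stderr and are discarded,
# tr joins base64's wrapped lines into a single line.
dd if=/dev/urandom bs=8 count=8 2>/dev/null | base64 | tr -d '\n'
```

Each run prints a fresh random base64 string; take as much of it as you need for a secret.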

POSTGRES_PASSWORD=Sjr0jqbhsjnzBEptfvvXMAfQs2mT5LFNnpOy1TSIR1xiMgb9szInRDtuBnqszzVMZXMVw5tsYmFw
JWT_SECRET=kX5s62Ecn0vju0h0V7Lyb63OC2RIz/eZND0T9stpEpwM0dyFPizq3LXLjxxSXQOug8Uj/URaF5NZ
SESSION_SECRET=lCuCrMSM8VHDhW3dQ9xPViu0osZXl3CJRqwv4YRJ2LaMVgRfX+05zp2t78oQrOe5L4pgTajbH68I

Time for the OpenStreetMap OAuth keys. See the env file for how to obtain them. I go to the page. You need to use your own OpenStreetMap username. I fill in Name (test-server-damn-project) and Main Application URL. Only the "read their user preferences" permission is necessary.

When successful, I copy the Consumer Key and the Consumer Secret into the env file.


NOTE: Do not forget that the testing instance is not running at the time of publication of this diary. And please, do not publish your passwords, secrets, and keys.

The last part of the env file is the DNS names of the clients:


I can stick with the default versions of the damn server and clients, so the environment configuration is done.

Finally, I set the right email address in traefik.yml and create acme.json with:

touch acme.json && chmod 600 acme.json
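Since traefik is picky about this file, a quick sanity check does not hurt (a sketch; stat -c is the GNU coreutils form, which is what Debian ships):

```shell
# acme.json must be a regular file with owner-only permissions,
# or traefik may refuse to store the Let's Encrypt certificates in it.
touch acme.json && chmod 600 acme.json
[ -f acme.json ] || echo "acme.json is not a regular file!"
[ "$(stat -c '%a' acme.json)" = "600" ] || echo "wrong permissions on acme.json"
```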

Autostart after the server restart

Here, I just copy and paste the Autostart with systemd section from the readme into the server’s terminal. I will not duplicate it here.


I will again copy and paste the code from the Damn upkeep section. There is only one script in upkeep now. Every 15 minutes, it checks for squares that have been locked for more than two hours and unlocks them.
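I will not duplicate that script either, but the pattern is simple: a job runs every 15 minutes and frees any square whose lock is older than two hours. A minimal sketch of the staleness check in shell (the function name, the epoch-seconds representation, and the crontab path are my own illustration, not the actual upkeep code):

```shell
# Hypothetical helper: succeeds when a lock timestamp (epoch seconds)
# is more than two hours (7200 s) old, i.e. the lock should be released.
is_stale() {
    lock_ts=$1
    now=$(date +%s)
    [ $(( now - lock_ts )) -gt 7200 ]
}

# Scheduled every 15 minutes, e.g. via a crontab entry like:
#   */15 * * * * /opt/damn_deploy/upkeep/unlock-stale-squares.sh
```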

Test the test instance

That should be all now. I can check whether the docker containers are running with docker ps.

Then, go to the test manager, authenticate with OpenStreetMap, add some areas, go to the test client, authenticate with OpenStreetMap, choose an area, map some squares, and so on.

I had one complication! I don’t know how, but the acme.json I created was a directory! I realized it when I checked the test client page and a Security Alert showed up. I just removed the directory with rm -r acme.json, ran touch acme.json && chmod 600 acme.json again, and then systemctl stop damn.service && systemctl start damn.service. After waiting a while for the certificates, nothing more was needed.


I knew what I was doing, so it’s not surprising that I finished the deployment in an hour, including a phone call, readme fixes, coffee preparation, some chatting, fixing some unrelated scripts for an unrelated project, and writing this diary. (In fact, I spent an additional half an hour on the diary review.)

The fun part is that even though I am responsible for the damn project, I don’t remember how I did many things. That’s why I wrote this step-by-step howto into the damn deploy readme: just to know how to fix the issues I ran into.

And I am not going to say that it’s awesome, because I am biased.
