Coding Notes

HTTPS With Free Auto-updated Certificates

Posted on

Not so long ago there was only one way to get a free certificate for basic HTTPS: StartCom. But then it was sold to a Chinese company that had been caught cheating with certificates, so StartCom is not trustworthy anymore, and trust in its root certificates has been removed from the major browsers. So what options do we have today?

There is a project called Let's Encrypt, which is open source, free to use, trusted by the major browsers, and fully automated; it provides an API so the certificates can be updated by a script. Below is the solution implemented on this site: CentOS 7.0 + Nginx.

Install certbot, the ACME client recommended by Let's Encrypt. On CentOS this can be done from the EPEL repository with yum.
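
On a stock CentOS 7 this might look like the following (assuming EPEL is not enabled yet):

yum install epel-release
yum install certbot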

Create a folder for verification over HTTP. In this folder we will put the challenge files from the certification server.

mkdir /var/www/acme
chmod 0750 /var/www/acme
chgrp nginx /var/www/acme

Prepare a file, acme.conf, which we will include in all virtual server configurations, and put it into the Nginx configuration folder (/etc/nginx). We need to tell Nginx where to look for the challenge files.

location ^~ /.well-known/acme-challenge {
    alias /var/www/acme;
    try_files $uri =404;
}

Now include it in the server section of the site we want to protect with HTTPS. The section itself must not be protected, because the challenge verification is done over plain HTTP.

server {
    listen  80;
    server_name omelchenko.as www.omelchenko.as;

    include acme.conf;
    location / {
        return 301 https://$server_name$request_uri;
    }
}

To start using certbot we need to create an account with Let's Encrypt. The command certbot register --rsa-key-size 4096 does that; it also creates a set of folders under /etc/letsencrypt. We are going to use certbot's manual mode. For that we will need three hook scripts: auth-put.sh, auth-clean.sh and post-renewal.sh. Where to put them is not important; for simplicity, put them into /etc/letsencrypt/scripts.

auth-put.sh

#!/bin/bash
printf "%s" $CERTBOT_VALIDATION > /var/www/acme/$CERTBOT_TOKEN
--------------------------------------------------------------
auth-clean.sh

#!/bin/bash
rm -f "/var/www/acme/$CERTBOT_TOKEN"
--------------------------------------------------------------
post-renewal.sh

#!/bin/bash
systemctl restart nginx
We need to restart Nginx because the reload command does not reload the certificates.

Now that the hook scripts are in place, we can create our first certificate with the following command. We need to repeat it for each group of domain names that will use a separate certificate.

certbot certonly --manual --strict-permissions --keep-until-expiring --agree-tos \
 --email <host master e-mail> --no-eff-email --rsa-key-size 4096 \
 --manual-public-ip-logging-ok \
 --manual-auth-hook /etc/letsencrypt/scripts/auth-put.sh \
 --manual-cleanup-hook /etc/letsencrypt/scripts/auth-clean.sh \
 --cert-name <certificate name> -d <your domain 1> -d <your domain 2>...
This command creates a private key, makes the certificate request, validates ownership of the domains through the HTTP challenge, downloads the certificate, and puts everything into the folder /etc/letsencrypt/archive/<certificate name>. It also creates symlinks in /etc/letsencrypt/live/<certificate name>: fullchain.pem, privkey.pem... We will use these links in the Nginx configuration:
server {
    listen 443 ssl;
    server_name omelchenko.as www.omelchenko.as;

    ssl_certificate /etc/letsencrypt/live/omelchenko.as/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/omelchenko.as/privkey.pem;
...

The last important step is to set up automated certificate renewal. For that we can add the following command to crontab:

certbot renew --post-hook /etc/letsencrypt/scripts/post-renewal.sh >>~/log.txt 2>&1
Certbot's developers recommend running it twice a day. The actual renewal is executed only when the certificate expires in 30 days or less.
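
A crontab entry implementing this recommendation might look like this (the exact times and the log path are arbitrary):

0 3,15 * * * certbot renew --post-hook /etc/letsencrypt/scripts/post-renewal.sh >>/root/certbot.log 2>&1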

Git Server Over HTTPS

Posted on

I want to do some programming experiments with Git. For that purpose I need a test environment that is simple but similar enough to GitHub: the server shall be accessible over HTTPS, basic HTTP authentication shall be required for git push, and read operations shall be permitted without authentication. The web server shall be Nginx.

Git comes with an HTTP protocol adapter implemented as a CGI program called git-http-backend, also known as the Smart HTTP transport. I install Git from the standard repository, create a user and a group for Git, and create a directory for the future repositories.

yum install git
useradd --no-create-home --shell /sbin/nologin git
usermod -G git -a nginx
mkdir -p /srv/git
chmod 750 /srv/git
chown git:git /srv/git

Now, as the git user, I create a bare repository test.git:

su -s /bin/bash git

# execute next commands as git
mkdir /srv/git/test.git
cd /srv/git/test.git
git init --bare

# to allow anonymous push for testing purposes:
git config http.receivepack true

I already know how to run CGI scripts on Nginx through uWSGI, so I create a new worker for Git: in the uWSGI emperor directory I create git.ini, set its ownership to git:git, and put the following content into it:

[uwsgi]
set-ph = runtime_dir=/var/run/uwsgi
set-ph = log_dir=/var/log/%U

uid = git
gid = git
plugins = cgi
socket = %(runtime_dir)/%n.sock
chmod-socket = 660
master = true
processes = 1
threads = 1
chdir = /srv/git
cgi = /usr/libexec/git-core/git-http-backend
env = GIT_HTTP_EXPORT_ALL=yes
env = GIT_PROJECT_ROOT=/srv/git
logto = %(log_dir)/%n.log

Here I use the cgi option. If it points to a file, that file will be run as the CGI script for processing all incoming requests. This is exactly what I need for Git.
I set the environment variables GIT_HTTP_EXPORT_ALL and GIT_PROJECT_ROOT here, so there is no need to set them in the Nginx configuration.
For basic authentication I need a passwd file, which can be created by several utilities; the standard one is htpasswd. htpasswd -c /srv/git/.htpasswd test creates the file and adds the user test. To add other users, run the same command without -c. Now I am ready to configure Nginx:

server {
	listen 443 ssl default_server;

	ssl_certificate /etc/pki/tls/certs/my.crt;
	ssl_certificate_key /etc/pki/tls/private/my.key;

	set $root_folder /var/www/vhosts/example.com;
	root $root_folder;

	location / {
		index index.html;
		try_files $uri $uri/ =404;
	}

	location ~ ^/git(/.*)$ {
		location ~ ^/git(/.*/git-receive-pack)$ {
			auth_basic "Restricted";
			auth_basic_user_file /srv/git/.htpasswd;
			include uwsgi_params;
			uwsgi_modifier1 9;
			uwsgi_param PATH_INFO $1;
			uwsgi_param REMOTE_USER $remote_user;
			uwsgi_pass unix:/run/uwsgi/git.sock;
		}

		include uwsgi_params;
		uwsgi_modifier1 9;
		uwsgi_param PATH_INFO $1;
		uwsgi_pass unix:/run/uwsgi/git.sock;
	}
}

It is important to set the PATH_INFO parameter; without it Git returns a 500 error. It is also important that PATH_INFO begins with / and contains the path relative to the Git root folder, because that is where git-http-backend will look for the repositories.
Here I put the Git URLs under the /git prefix, so a repository can be accessed as https://example.com/git/test.git from Git clients.
Since I want authentication only for push commands, I have to distinguish between the requests coming to /git/... For that I use nested location directives. If a request ends with git-receive-pack, Nginx will ask for a login and password; otherwise it will not.
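
From a client machine the setup can be tested roughly like this (example.com stands for your server; the test user was created above):

git clone https://example.com/git/test.git
cd test
echo hello > README
git add README
git commit -m "first commit"
# the clone worked anonymously; the push asks for login and password
git push origin master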

Running CGI behind Nginx on CentOS 7

Posted on

Nginx does not have native support for the CGI protocol, so I need something that will work as an application server. Probably the most widespread approach is to put an Apache server behind Nginx, using Nginx as a reverse proxy that forwards HTTP requests to it. I do not like this idea because I try to avoid Apache as long as possible. Another approach is to use Nginx's native FastCGI support and translate those requests into CGI script calls. I found a utility, fcgiwrap, which does the translation; it requires something to launch it and keep it running, such as spawn-fcgi. Neither of them is included in the CentOS 7 packages, so some people build them from the source distribution.

I prefer another solution called uWSGI. It uses the fast binary uwsgi protocol, and it looks very flexible and extensible. To serve CGI I need to install these packages: nginx, uwsgi and uwsgi-plugin-common, which contains the CGI plugin I need.

# The services can be enabled as usual
systemctl enable uwsgi
systemctl enable nginx

The default uWSGI configuration creates the /run/uwsgi folder and runs the service as uwsgi:uwsgi. The default /etc/uwsgi.ini config is good enough. It runs a master process, called the emperor, which monitors a specified folder for INI files matching a mask. When a new INI file appears, the master process creates a worker (a vassal) according to its settings. When a configuration file disappears, the emperor terminates the worker. When a file is modified, the master process re-spawns the worker with the new settings. Sounds appealing. The only thing I add here is the creation of a folder where all vassal logs will go.

[uwsgi]
set-ph = user=uwsgi
set-ph = group=uwsgi
set-ph = runtime_dir=/run/uwsgi
set-ph = log_dir=/var/log/uwsgi

master = true
uid = %(user)
gid = %(group)
pidfile = %(runtime_dir)/uwsgi.pid
emperor = /etc/uwsgi.d
stats = %(runtime_dir)/stats.sock
emperor-tyrant = true
cap = setgid,setuid
exec-as-root = mkdir -p %(log_dir)
exec-as-root = chmod 770 %(log_dir)
exec-as-root = chown %(user):%(group) %(log_dir)

I want to run the uwsgi master service as the uwsgi user, but the worker processing CGI as the nginx user. This is the interesting part. Although INI files can have the uid and gid parameters, they are completely ignored if the master service is running in emperor-tyrant mode. The author's idea was to allow multiple users on the system to create their own worker processes by putting INI files into the monitored directory. In that case you cannot let them pick any user or group they want; it would be a security hole. Instead, the master process runs the worker processes under the credentials of the INI file's owner. That is why, unlike most configuration files under /etc, I cannot keep root as the owner of those INI files. If a file's owner is root, there will be error messages in the logs like this:
[emperor-tyrant] invalid permissions for vassal test.ini

Here is my generic "vassal" configuration for CGI processing:

[uwsgi]
set-ph = runtime_dir=/var/run/uwsgi
set-ph = log_dir=/var/log/uwsgi
set-ph = vhosts_dir=/var/www/vhosts

uid = nginx
gid = uwsgi
plugins = cgi
#http-socket = :9090
#http-socket-modifier1 = 9
socket = %(runtime_dir)/%n.sock
chmod-socket = 660
master = true
processes = 1
threads = 1
chdir = %(vhosts_dir)/%n
cgi = %(vhosts_dir)/%n
;cgi-allowed-ext = .cgi
cgi-allowed-ext = .py
logto = %(log_dir)/%n.log

I still use the uid and gid parameters so that I can also run the worker from the command line as:
uwsgi --ini test.ini
I prefer Unix sockets, but HTTP sockets are supported as well. Make sure that Nginx can read and write to the socket; that is what the chmod-socket parameter is for. For testing without Nginx I use the http-socket parameter combined with http-socket-modifier1 and the magic number 9, which corresponds to the CGI protocol. The uWSGI documentation proposes using the http parameter instead of http-socket, but that does not work here. The magic variable %n expands to the name of the current INI file without the extension.
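
So a quick sanity check without Nginx might be (assuming the http-socket lines above are uncommented and a script index.py exists in the vassal's folder):

uwsgi --ini test.ini &
curl http://127.0.0.1:9090/index.py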

Configuration for Nginx looks like this:

server {
	listen 80 default_server;
	set $root_folder /var/www/vhosts/test;
	set $error_folder /usr/share/nginx/html;

	root $root_folder;
	recursive_error_pages on;
	error_page 404 /error-pages/404.html;
	error_page 500 502 503 504 /error-pages/50x.html;

	location ^~ /error-pages/ {
		alias $error_folder/;
	}

	location ~* \.(png|jpg)$ {
		root /;
		try_files $root_folder$uri $error_folder$uri =404;
	}

	location / {
		index index.py index.cgi index.html;
		try_files $uri $uri/ =404;
		if (-f $request_filename) {
			error_page 418 = @cgi_backend;
			return 418;
		}
	}

	location @cgi_backend {
		uwsgi_intercept_errors on;
		include uwsgi_params;
		uwsgi_modifier1 9;
		uwsgi_pass unix:/var/run/uwsgi/test.sock;
	}
}

I could avoid using if in the config by passing all requests to the application server, even when the file does not exist. In that case uWSGI would return the 404 status itself, but I prefer to do this check in Nginx. I do not know why there is no special instruction to jump to a named location inside an if-statement, so the workaround is to return an error code that is never reported back to the client and process it using error_page. I want Nginx to process all errors, even if they come from CGI; for that I need the
uwsgi_intercept_errors on;
directive. Since I have already generated an error state when jumping to the @cgi_backend named location, I have to enable recursive error processing:
recursive_error_pages on;
My error pages contain images, and I prefer to keep those images outside of the site document root, so I tell Nginx to look for all images in two folders.

Now I can put whatever CGI script I want into /var/www/vhosts/test/ and it will be executed.
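
For example, a trivial script like the following will do (a shell script, so it assumes the commented-out cgi-allowed-ext = .cgi line is enabled instead of .py; do not forget chmod +x):

#!/bin/sh
# /var/www/vhosts/test/index.cgi
# A CGI script prints the headers, an empty line, then the body.
printf 'Content-Type: text/plain\r\n\r\n'
echo "Hello from CGI, query string: $QUERY_STRING"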

Clipboard over VNC

Posted on

When working with Linux over VNC, there is one important thing to understand about the clipboard. The VNC client and server do synchronize their clipboards, but in order for that to work a separate application must be running:

vncconfig
So if your copy-paste does not work, check whether vncconfig is still running.

When I use a terminal, I prefer to select-copy with the left mouse button and paste with the right one. Unix gurus like to paste using their middle "button", or simulate it with a simultaneous two-button click when working on laptops. They say it is the standard Unix style. Of course, they are right.

I personally prefer to ignore the standard, because PuTTY-style clipboard operations are much more ergonomic. First of all, there is no such thing as a "middle button" on common mice. It is a wheel. It is not ergonomically designed to be pressed. It spins. And it scrolls the window. Scrolling is the last thing I want when I try to paste something. Middle-"button" emulation using a two-button click is even less ergonomic: to do that, on most devices you have to use twice as many fingers as otherwise.

To make XTerm behave like PuTTY, we need to override the default translations in the X resources. It can be done either locally in ~/.Xresources or globally. I prefer to add the following lines to the XTerm defaults file /usr/share/X11/app-defaults/XTerm:

*VT100.Translations: #override\
	~Ctrl ~Meta <Btn3Down>:ignore() \n\
	~Meta <Btn3Motion>:ignore() \n\
	~Ctrl ~Meta <Btn3Up>:insert-selection(PRIMARY, CUT_BUFFER0)
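
If the override goes into ~/.Xresources instead, it can be loaded into the running session with:

xrdb -merge ~/.Xresources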

When working with files in Midnight Commander, in order to use the terminal clipboard you have to hold the Shift key in addition to the mouse operations.

Minimalist GUI Server on CentOS 7

Posted on

Most VPS hosting providers offer CentOS as a template. I have successfully run CentOS 6.5 in 512 MiB of RAM; CPU speed is not that important at this point. Here I will set up CentOS 7 - there are some changes in comparison with 6.5 and earlier.

To protect my server from SSH brute-force attacks I decided to use iptables - it was pre-installed. But I had to install the additional packages iptables-services and policycoreutils in order to save the configuration. To enable the defence we can do:

# Clear everything from iptables
iptables -F

# Allow all packets for established connections
# (match packets by state where state = ESTABLISHED,RELATED)
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Record IP address of all new attempts to connect to SSH(22) port
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent --set

# Match new attempts to connect to SSH port from IP address recorded in
# last 180 seconds 4 times or more, update record and drop that attempt
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -m recent \
 --update --seconds 180 --hitcount 4 -j DROP

# Allow new SSH connections that were not matched above
iptables -A INPUT -p tcp --dport 22 -m state --state NEW -j ACCEPT

# Allow all packets from loop-back interface - connections from localhost
iptables -A INPUT -i lo -j ACCEPT

# Allow PING requests
iptables -A INPUT -p icmp --icmp-type echo-request -j ACCEPT

# Deny all other input and forward requests, allow all output
iptables -P INPUT DROP
iptables -P FORWARD DROP
iptables -P OUTPUT ACCEPT
To save the policies we will need two packages and one command:
yum install iptables-services policycoreutils

# Save firewall settings
service iptables save
Without policycoreutils I got the error "restorecon: command not found".

Next I will "protect" the system from IPv6 attacks. Since I do not need any IPv6 services, I will simply disable IPv6 networking. The right way to do it on CentOS 7 is to open /etc/sysctl.conf and add the following lines:

net.ipv6.conf.all.disable_ipv6 = 1
net.ipv6.conf.default.disable_ipv6 = 1

To apply changes and check the result:
systemctl restart network
ip addr show

Change the name of the server if you want. On CentOS 7 the right way to do it is:

hostnamectl set-hostname <new name> (--static|--transient|--pretty)

Set up the local time zone. By default the time zone of a VPS is usually set to the actual local time zone of the physical server. If that is not what you want, replace the symbolic link /etc/localtime so that it points to /usr/share/zoneinfo/<whatever you want>.
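
For example (Europe/Oslo is only a placeholder, pick your own zone):

ln -sf /usr/share/zoneinfo/Europe/Oslo /etc/localtime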

Set up the system locale, keyboard map, etc. With a minimal setup, something might be missing.

# List all supported locales
localectl list-locales
# Set locale
localectl set-locale LANG=en_US.UTF-8
# Install keyboard and tools for terminal fonts
yum install kbd kbd-misc
localectl set-keymap us

Next I set up the window manager. To make a truly minimalist server I will use the simplest window manager I know - dwm.
I cannot install it through yum, so I have to install the prerequisites manually. One way to do it is to install the whole group for the X Window System:

yum group install "X Window System"
but that would install a lot of unnecessary packages, so I prefer another trick. Install dmenu and xterm - I will need them anyway - and they are available through yum, though dmenu is in the EPEL repository. Enabling the EPEL repository on CentOS 7 is simpler than ever:
yum install epel-release
Then I can install dmenu and xterm with all their dependencies. Accept the GPG key for EPEL with the fingerprint 91e9 7d7c 4a5e 96f1 7f3e 888f 6a2f aea2 352c 64e5. After that I have to install the four remaining prerequisites for dwm: libX11-devel, libXinerama-devel and, of course, gcc and make. This installs only the necessary packages from the repositories. Now we have to compile dwm. It does not have any external settings; every customization is done through the config.h file in its source code. So get the source code, unpack it, make and install, as shown below.
wget http://dl.suckless.org/dwm/dwm-6.0.tar.gz
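
The remaining steps might look like this (dwm 6.0 as downloaded above; by default it installs into /usr/local):

yum install libX11-devel libXinerama-devel gcc make
tar xzf dwm-6.0.tar.gz
cd dwm-6.0
# customizations, if any, go into config.h before building
make
make install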

Now that I have the necessary GUI packages installed, I will set up the VNC server.

yum install tigervnc-server
I will need a user for the VNC service.
useradd someuser
passwd someuser
After that I need to set up a service for the user. The VNC protocol does not provide a way to choose a user name when you connect. Instead, you have to choose a service port and bind it to the user in the server settings. On CentOS 7 this is done through systemd service files. I have to copy the sample service file into place and then modify it.
cp /lib/systemd/system/vncserver@.service \
 /etc/systemd/system/vncserver@:1.service
Here I chose port 5901 (display #1). Then I modify the file's content:

[Unit]
Description=Remote desktop service (VNC)
After=syslog.target network.target

[Service]
Type=forking
# Clean any existing files in /tmp/.X11-unix environment
ExecStartPre=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'
ExecStart=/sbin/runuser -l someuser -c "/usr/bin/vncserver %i \
 -geometry 1280x950 -localhost -nolisten tcp"
PIDFile=/home/someuser/.vnc/%H%i.pid
ExecStop=/bin/sh -c '/usr/bin/vncserver -kill %i > /dev/null 2>&1 || :'

[Install]
WantedBy=multi-user.target

Here we can also change the display resolution. Next I have to set up the window manager for the VNC user.
systemctl daemon-reload
systemctl enable vncserver@:1
su someuser

# under someuser run server
vncserver

It will ask for a new password; this is the password the VNC server will require for remote connections. Now we will modify the ~/.vnc/xstartup file to set up dwm. On CentOS 6.5 I used this code:
#!/bin/sh
[ -x /etc/vnc/xstartup ] && exec /etc/vnc/xstartup
[ -r $HOME/.Xresources ] && xrdb $HOME/.Xresources
vncconfig -iconic &
xterm -geometry 80x24+10+10 -ls -title "$VNCDESKTOP Desktop" &
while true; do
	dwm >>$HOME/.vnc/.dwm.log 2>&1
done
The last loop was there to prevent a "dead" screen when the dwm process suddenly exits.

But on CentOS 7 this does not work and leads to CPU overuse. Instead, I leave the file untouched and make some modifications that will work for every user on the server. Return to the root session. Open or create /etc/sysconfig/desktop and set the PREFERRED variable:

#!/bin/sh
PREFERRED=dwm
The VNC startup script will run /etc/X11/xinit/xinitrc, which sources the desktop file and expects PREFERRED to be the command line of our window manager. Now we can restart VNC:
systemctl restart vncserver@:1

To use your favorite VNC viewer, we first need to close the root session and open one as someuser. We have set up TigerVNC to accept only localhost connections, so we have to set up SSH tunnelling, and for security it should not be opened under root. In our case vncserver expects connections on port 5901.
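
A typical tunnel from the client side might look like this (example.com again stands for the server):

ssh -L 5901:localhost:5901 someuser@example.com
# then point the VNC viewer at localhost:5901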

I do not like the default xterm colors, so I opened

/usr/share/X11/app-defaults/XTerm-color
and modified it according to my preferences. The problem is that the default CentOS 7 scripts do not use this file without additional work. I needed to force xinitrc to load the resources: in /etc/X11/xinit/xinitrc.d/ I created a file xtermrc, which is run automatically at startup, with the following content:
xrdb -merge /usr/share/X11/app-defaults/XTerm-color

For a usable system we need good fonts. I use Liberation Sans, Terminus and Gohu. Liberation Sans and Terminus can be installed through yum. Gohu is available at http://font.gohu.org/gohufont-2.0.tar.gz; to install it, just unpack it into /usr/share/fonts/. Unfortunately, CentOS 7 did something undocumented to its font system, so in order to make the fonts available to xterm we need to add a symlink to the new folder as /etc/X11/fontpath.d/gohufont:unscaled. Some standard CentOS 7 fonts require the same manipulation too.
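
Assuming the archive was unpacked into /usr/share/fonts/gohufont, the commands might be:

ln -s /usr/share/fonts/gohufont /etc/X11/fontpath.d/gohufont:unscaled
# if xterm still cannot see the fonts, the index may be missing:
mkfontdir /usr/share/fonts/gohufont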

Now we can restart vncserver, connect, see the good dwm interface, and hit Alt+Shift+Enter to run xterm. Enjoy.

unique_ptr vs auto_ptr

Posted on

The C++11 standard has a set of new smart pointer classes. unique_ptr is the one that makes auto_ptr deprecated, and there is a good reason for that. So what are the differences between these two classes from a user's point of view?

auto_ptr was designed to serve two scenarios: to take ownership of a dynamically allocated object, and to transfer the object by pointer in a function return, so that instead of
vector<int>* createVector(); we can write
auto_ptr< vector<int> > createVector(); and be sure that in case of any exception all destructors will be called and the memory will be deallocated.

Another thing worth mentioning is what auto_ptr was NOT designed for. It was not designed for storing objects in a container or for making copies of itself. It was not designed to be very useful for controlling the lifetime of objects inside other data types. But it is still possible to write code like this:

struct MyClass {
	std::auto_ptr<int> MyPtr;
	MyClass() : MyPtr(new int) {}
};

but it leads to effects that are hardly useful. auto_ptr does not forbid copy construction or assignment, because C++ before C++11 did not have any facility to forbid them and still support the intended scenarios at the same time.

Now we have rvalue references and the special new "move" semantics for operations with those references. All this makes it possible to create another smart pointer that serves the initial scenarios but is hard to misuse. unique_ptr does not have a copy constructor or a copy assignment operator. You still can transfer ownership of the pointer, but it requires an explicit statement: unique_ptr<int> myNewIntPtr = std::move(myOldIntPtr);

unique_ptr provides two additional bonuses: the ability to store a pointer to an array (with access to the elements by index through []) and the possibility to specify how the object has to be destroyed. The latter introduces one more scenario for the new smart pointer.
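
The array form does not appear in the example below, so here is a minimal sketch of it (the values are arbitrary; the point is operator[] and the automatic delete[]):

#include <memory>

int main()
{
	// The array specialization provides operator[] and calls
	// delete[] instead of delete in its destructor.
	std::unique_ptr<int[]> numbers(new int[3]);
	numbers[0] = 1;
	numbers[1] = 2;
	numbers[2] = numbers[0] + numbers[1];
	return 0;
}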

Consider some old-style C library with a function returning an old-style C string. The string is returned as a char* allocated with malloc. auto_ptr<char> is useless in this case: it can only destroy objects by calling the standard delete. The default version of unique_ptr<char> uses the same approach, because it uses default_delete<char> as the default second template argument. But we can specify another class.

unique_ptr is designed to have no overhead in size or runtime performance in comparison with a classic pointer. That depends on the deleter, so I am going to write a malloc_delete without any overhead as well, and show its usage with the example of demangling GCC type names.

#include <memory>
#include <cstdlib>
#include <cxxabi.h>
#include <stdexcept>
#include <iostream>
#include <typeinfo>

// The class does not have any member variables, therefore it adds
// no overhead in size or performance.
template<void (*fun)(void*)>
struct fun_delete
{
	void operator () (void* p) { return (*fun)(p); }
};

typedef fun_delete<std::free> malloc_delete;

std::unique_ptr<char, malloc_delete> demangle_type_name(const char* name)
{
	int status;
	// The buffer has to be allocated with malloc, or be null. If it is
	// null, the result string is allocated with malloc.
	std::unique_ptr<char, malloc_delete> real_name (
		abi::__cxa_demangle(name, 0, 0, &status));

	if( 0 != status )
		throw std::runtime_error("Failed to demangle");

	return real_name;
}

int main(int argc, char** argv)
{
	std::cout << demangle_type_name(typeid(1).name()).get() << std::endl;
	std::cout << demangle_type_name(typeid(2.0).name()).get() << std::endl;
	std::cout << demangle_type_name(typeid("42").name()).get() << std::endl;
}