Cowsay when logging on via SSH

To give my server a bit of personality I have it render words of wisdom every time I log on to it through SSH. It also shows its uptime and any running tmux sessions.

This is for Debian 8; I’m not sure about Red Hat or other distros.

First install fortune and cowsay (on Debian the fortune program is packaged as fortune-mod, with the fortunes package providing the cookie files):

# aptitude install fortune-mod fortunes cowsay

Place the following commands in ~/.ssh/rc:

fortune | cowsay -f $(ls /usr/share/cowsay/cows/ | shuf -n 1)
echo "$(uptime)"
echo ""

tmux list-sessions >/dev/null 2>&1
if [ "$?" = "0" ] ; then
  echo "Running tmux sessions:"
  echo "$(tmux list-sessions)"
else
  echo "No running tmux sessions."
fi

echo ""

Restart ssh (this will not disconnect any running ssh sessions):

# service ssh restart

Let’s see what happens…



Randomly follow your Twitter friends’ friends

Following a rather radical pruning of my Twitter followings (the people I follow), my timeline has become both better and less exciting. So I wanted to follow some new people, but not just random accounts.

I have a list of ununfollowables: the followings I like best. Those people are probably good judges of the people I should follow – or in any case better at it than random followings.

This script looks at who they are following and follows some of those accounts. The result is that I periodically and automatically follow some of my friends’ friends. It keeps a list of those I’ve followed automatically. If it doesn’t work out I unfollow them again. Of course nothing is keeping me from re-following them manually.

The script relies heavily on @sferik’s Twitter client for Ruby called ‘t’. It’s easy to install but it helps if you read the help file and get to know it a bit should you need to troubleshoot or adjust the script.

It is designed to be run from cron and written on a Debian 8 system but it should run on any Linux flavour. Be sure to add t’s path to your cron file.
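Cron runs with a minimal PATH, so the directory where gem installed t has to be added in the crontab. A sketch of a crontab entry (the PATH, the schedule and the script location are examples, not from the original post):

```
# crontab -e
PATH=/usr/local/bin:/usr/bin:/bin

# Run the follow script once a day at 06:30 (example path)
30 6 * * * /home/youruser/freshmeat.sh
```

Run `which t` once interactively to find out which directory actually needs to be on cron’s PATH on your system.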

The script is called on my system.


#!/bin/bash

#######################################################################
# Purpose of this script:                                             #
# Follow random followings from followings from the ununfollowables   #
# list, hopefully generating a bit more diversity in your timeline.   #
#                                                                     #
# The ununfollowables list is just a list of people you like.         #
# A 'following' is someone you are following. It is the reverse of    #
# a follower.                                                         #
#                                                                     #
# Requires:                                                           #
#  - Working and authenticated installation of Ruby gem t:            #
#                                      #
#  - Twitter list called 'Ununfollowables' (change as needed)         #
#    containing friends whose followings you might like to follow     #
#  - Twitter list called 'freshmeat' for the new followings           #
#                                                                     # 
# Feel free to use, change and redistribute this script as long as    #
# you include a link to and this text. Enjoy 🙂  #
#                                                                     # 
#  @Vorkbaard, 2017                                                   #

#  Settings                                                           #

# Ununfollowables list
UNUNFOLLOWABLES_LIST=Ununfollowables

# Freshmeat list
FRESHMEAT_LIST=freshmeat

# Number of days to wait before unfollowing if they're not interested
WAITDAYS=14
WAITSECONDS=$(($WAITDAYS * 24 * 3600))

# Number of new random followings to add - must be more than 0.
# Note that this script uses up your api access allowance.
# Also note that a list cannot contain over 5000 members.
NEWADDINGS=3   # example value

# Do not follow people with the following words in their bio.
NOTFOLLOW="spam|crypto"   # example egrep pattern, change to taste

## MAIL - Make sure your server can send out mail first.
## I'm using xmail but you may want to change it.

# Send mail here:
MY_EMAIL=you@example.com   # example address

# The mail's from address
FROMADDR=freshmeat@example.com   # example address

#  End of settings                                                    #

# Change to directory this script is in so we can use relative file locations when running from cron
cd "$(dirname "$0")";

# Retrieve Twitter handle
ME=@$(t whoami | head -4 | tail -1 | cut -d'@' -f2)
if [ "$?" != "0" ] ; then
	echo "Unable to retrieve Twitter handle, please make sure t is installed and working correctly."
	echo "Either that or we've run out of api calls."
	exit 1
fi

echo "---------------------- $(date) --------------------" >> freshmeat.log
echo "Twitter handle: $ME" >> freshmeat.log

if [ -e mailtext ] ; then rm mailtext; fi

# Ok, here's the gist.

# Unfollow those not interested
# 1. Get list of randomfollowings followed over two weeks ago.
# 2. If they're not interested then unfollow them.
# 3. Remove them from the freshmeat list and the inprocess file.

# Get new users
# 4. Select random user from ununfollowables list.
# 5. Select random following from selected user.
# 6. Check if we're not already following the random following. If we do, select another one.
# 7. Follow the random following.
# 8. Put following on the freshmeat list.

# Create list of leaders so we only need to retrieve it once.
# Leaders are people who you are following but who don't follow you.
echo "Creating list of leaders. Please hold." >> freshmeat.log
t leaders > leaderslist
if [ "$?" != "0" ] ; then
	echo "Error: probably not enough api calls left. Try again later." >> freshmeat.log
	echo "We ran out of api calls. Better luck next time!" | mailx -a "From: Freshmeat provider <$FROMADDR>" -s "Fresh meat!" $MY_EMAIL
	exit 1
fi

if [ -e freshmeat.inprocess ] ; then
	echo "WAITSECONDS: $WAITSECONDS" >> freshmeat.log
	echo "Getting list of randomfollowings followed over two weeks ago" >> freshmeat.log
	NOW=`date +%s`
	while read RANDOMFOLLOWING; do
		SINCE=$(echo $RANDOMFOLLOWING | cut -d' ' -f2)
		RAFO=$(echo $RANDOMFOLLOWING | cut -d' ' -f1)
		echo "Rafo: $RAFO" >> freshmeat.log
		echo "Since: $SINCE" >> freshmeat.log
		echo "NOW..: $NOW" >> freshmeat.log
		let DIFF=$NOW-$SINCE
		let HDIFF=$DIFF/3600
		echo "Time difference: $DIFF ($HDIFF hours)" >> freshmeat.log
		if [ "$DIFF" -lt "$WAITSECONDS" ] ; then
			echo "Time difference is less than $WAITDAYS days." >> freshmeat.log
			# Not doing anything with them now, just add them to the list for next time
			echo $RANDOMFOLLOWING >> freshmeat.inprocess_new.log
		else
			echo "Time difference is greater than $WAITDAYS days." >> freshmeat.log
			# Check if they're following back
			grep -iq "^$RAFO$" leaderslist
			if [ "$?" == "1" ] ; then
				# They're following back
				echo "$RAFO is following back!" >> freshmeat.log
				echo "$RAFO" >> newfollowers.log
				echo "New connection made: $RAFO" >> mailtext
			else
				# They're not following back
				echo "$RAFO is not interested so unfollowing them." >> freshmeat.log
				t unfollow $RAFO >/dev/null 2>&1
				echo "Unfollowed $RAFO." >> mailtext
			fi
			echo "Removing $RAFO from freshmeat list." >> freshmeat.log
			t list remove $FRESHMEAT_LIST $RAFO >/dev/null 2>&1
			if [ "$?" == "0" ] ; then
				echo "Removing $RAFO from random follower log file." >> freshmeat.log
			else
				echo "Error unfollowing $RAFO. Perhaps we've run out of api calls." >> freshmeat.log
			fi
		fi
		echo "----" >> freshmeat.log
	done <freshmeat.inprocess
	# Recreate freshmeat.inprocess file with only remaining (current) followings
	if [ -e freshmeat.inprocess_new.log ] ; then
		mv freshmeat.inprocess_new.log freshmeat.inprocess
	else
		# Everything was processed; nothing left to track
		rm freshmeat.inprocess
	fi
else
	# if not exist freshmeat.inprocess
	echo "freshmeat.inprocess doesn't exist; nothing to compare so skipping the unfollow part." >> freshmeat.log
fi

# Create list of ununfollowables so we only need to retrieve it once
echo "Creating list of ununfollowables. This may take a while." >> freshmeat.log
t list members $ME/$UNUNFOLLOWABLES_LIST > ununfollowableslist
if [ "$?" != "0" ] ; then
	echo "Error: probably not enough api calls left. Try again later." >> freshmeat.log
	echo "We ran out of api calls. Better luck next time!" | mailx -a "From: Freshmeat provider <$FROMADDR>" -s "Fresh meat!" $MY_EMAIL
	exit 1
fi

# Create list of followings so we only need to retrieve it once.
echo "Creating list of followings. Please hold." >> freshmeat.log
t followings > followingslist
if [ "$?" != "0" ] ; then
	echo "Error: probably not enough api calls left. Try again later." >> freshmeat.log
	echo "We ran out of api calls. Better luck next time!" | mailx -a "From: Freshmeat provider <$FROMADDR>" -s "Fresh meat!" $MY_EMAIL
	exit 1
fi

# Initialize the number of successful new followings
ADDINGS=0

printf "\n\n" >> mailtext
echo "New followings" >> mailtext
echo "------------------------------" >> mailtext
while [ $ADDINGS -lt $NEWADDINGS ] ; do
	echo "Successful addings: $ADDINGS" >> freshmeat.log

	# Get random ununfollowable
	UNUNFOLLOWABLE=$(cat ununfollowableslist | shuf -n 1)
	echo "Following from ununfollowableslist: $UNUNFOLLOWABLE" >> freshmeat.log

	# Reconstructed guard: only continue if we actually got a name
	if [ -n "$UNUNFOLLOWABLE" ] ; then
		echo "Getting random following from $UNUNFOLLOWABLE." >> freshmeat.log
		RANDOMFOLLOWING=$(t followings $UNUNFOLLOWABLE | shuf -n 1)
		echo "Randomfollowing: $RANDOMFOLLOWING" >> freshmeat.log
		if [ -n "$RANDOMFOLLOWING" ] ; then
			# Check if we're not already following this particular random following
			grep -iq "^$RANDOMFOLLOWING$" followingslist
			if [ "$?" == "1" ] ; then
				echo "Not already following, but is this a protected account?" >> freshmeat.log

				# Protected accounts don't have the 'Last update' bit set in their public profile.
				# Also, get to know them a bit by reading their bio.
				t whois $RANDOMFOLLOWING > tmpwhois

				# Check if the account is protected
				grep -q "Last update" tmpwhois
				if [ "$?" == "0" ] ; then

					# Check if they're not unwanted
					BIO=$(grep "Bio" tmpwhois)
					if echo $BIO | egrep -iqv $NOTFOLLOW; then

						echo "Not already following $RANDOMFOLLOWING AND not a protected account AND they're not unwanted so following them now." >> freshmeat.log
						t follow $RANDOMFOLLOWING >/dev/null 2>&1
						if [ "$?" == "0" ] ; then
							echo "Follow successful so adding them to $FRESHMEAT_LIST." >> freshmeat.log
							t list add $FRESHMEAT_LIST $RANDOMFOLLOWING >/dev/null 2>&1
							NOW=`date +%s`
							echo "$RANDOMFOLLOWING $NOW" >> freshmeat.inprocess
							# Increase addings number
							ADDINGS=$(( $ADDINGS + 1 ))
							SHOWBIO="$(echo $BIO|cut -c5-)"
							# Reconstructed: report the new following in the mail
							echo "Followed $RANDOMFOLLOWING: $SHOWBIO" >> mailtext
						fi
					else
						echo "Not going to follow $NOTFOLLOW." >> freshmeat.log
					fi
				else
					echo "Not going to follow protected account." >> freshmeat.log
				fi
			else
				echo "Already following $RANDOMFOLLOWING." >> freshmeat.log
			fi
			echo "---- done ----" >> freshmeat.log
		else
			echo "We've probably run out of api calls so let's call it a day." >> freshmeat.log
			break
		fi
	fi
done



# Mail logfile
# Get number of connections generated
if [ -e newfollowers.log ] ; then
	FreshFollNr=$(wc -l newfollowers.log | cut -d' ' -f1)
	echo "Freshmeat has generated $FreshFollNr connections." >> mailtext
else
	echo "Freshmeat has not generated any connections yet." >> mailtext
fi

# Get number of people in Freshmeat list
FreshmeatMembersNr=$(t list information $FRESHMEAT_LIST | head -5 | tail -1 | cut -d ' ' -f7)
echo "There are now $FreshmeatMembersNr people on the $FRESHMEAT_LIST list." >> mailtext

# Send mail
cat mailtext | mailx -a "From: Freshmeat provider <$FROMADDR>" -s "Fresh meat!" $MY_EMAIL
rm mailtext

# Clean up
if [ -e tmpwhois ] ; then rm tmpwhois; fi
if [ -e followingslist ] ; then rm followingslist; fi
if [ -e ununfollowableslist ] ; then rm ununfollowableslist; fi
if [ -e leaderslist ] ; then rm leaderslist; fi

Command-line TweetDeck clone

Today we’re making a command-line TweetDeck clone! I hope you’ve all brought your Twitter account, your phone and access to your Linux workstation or server. It’s supposed to also work on Mac devices but you’ll have to figure that out for yourselves. While it is perfectly possible to install Ruby on Windows I am not aware of screen or tmux possibilities there and that would take away half the fun.

So Linux it is. Pick any distribution; we’re not picky today. I’m using Debian 8 so this will most likely also work on the Ubuntus and Mint.



Our project consists of four parts:
1 – command-line Twitter client
2 – your custom Twitter application so you can use the Twitter api
3 – tmux for the columns
4 – PuTTY with customized colours

I assume some prior Linux experience; I won’t explain how to install Linux or connect with PuTTY.

We’ll be using @sferik’s excellent command-line power tool for Twitter: ‘t’. Instructions on that site are pretty clear; here I’m just dressing it up a bit.

If you haven’t done so yet, you MUST associate a cell phone number with your Twitter account for this. Do that first, then come back here.

Time to prepare your system for t!

# aptitude install build-essential ruby ruby-dev

If you’re on a system that uses yum instead of apt, try

# yum groupinstall 'Development Tools'

I haven’t tested that though; in any case you need stuff like ‘make’ to compile the Ruby gem.

Once completed you can install t by doing

# gem install t

If you get errors check if Ruby and its development environment are correctly installed.

Now, as your regular user, you must set up your Twitter account. Not to worry: t will give you clear instructions. It will even open the necessary urls in your browser. As I am doing this on a machine without a browser I want to see the urls but not have them opened; I will copy and paste them into a browser on a different machine. To that effect you can use the --display-uri parameter. If you’re on a machine with a gui, by all means leave that parameter out.



Now tell t about your account:

$ t authorize --display-uri

t will instruct you on how to proceed.



Press Enter. If you left out the --display-uri parameter then your browser will be opened. If you did provide the --display-uri parameter then you will be shown the url you should open. Copy the url and open it in a browser.

I’ll just assume you haven’t created any Twitter apps before. If you did you know the drill.



Provide a name, a description and a url for your app. Some Twitter apps show this data along with your tweets so keep that in mind.



Congratulations, you are now a Twitter app developer.



Go to the Permissions tab and set the correct permissions. Provide only the permissions you need, never more.



Go to the Keys and Access Tokens tab and copy the Consumer Key.



Paste the API key on the command line.



Copy the API secret key.



Paste it on the command line.



t will now instruct you to open the Twitter app authorization page. If you supplied the --display-uri parameter it will just show you the url you should open.



Authorize the app.



Copy the six digit code.



Paste the code on the command line.



If all is well t says ‘Authorization successful’.

Test it by sending a tweet:

$ t update 'Tweeting this from the command line!'

…or whatever tweet you like to send 😛



t provides the command for deleting the tweet if necessary. The numeric code is the unique tweet id.




Now for the cosmetic part!


# aptitude install tmux

if tmux isn’t installed on your system yet. It pays to read some documentation on basic tmux operations.

Start tmux and split the screen horizontally:

$ tmux
^b + "



Select the upper pane:

^b + q 0

or

^b + [up]

Split the upper pane into four separate vertical panes by repeatedly using

^b + %

Move around the panes with

^b + [up|down|left|right]

Select a specific pane by issuing

^b + q

Select the desired pane number.

You’ll get the hang of it 😉
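If you’d rather not type the key bindings every time, tmux accepts the same operations as commands, so the whole layout can be scripted. A sketch (the session name is made up, and the pane indexes assume tmux’s default base index of 0):

```shell
#!/bin/sh
# Build the layout non-interactively: four columns on top, one wide pane below
tmux new-session -d -s tweetdeck        # detached session; pane 0 fills the window
tmux split-window -v -t tweetdeck:0.0   # bottom pane for sending updates
tmux split-window -h -t tweetdeck:0.0   # start splitting the top pane into columns
tmux split-window -h -t tweetdeck:0.0
tmux split-window -h -t tweetdeck:0.2   # split the remaining wide column
# Then attach and run a t command in each pane:
#   tmux attach -t tweetdeck
```

Saved as a script, this gives you a fresh TweetDeck layout in one command.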



In the individual panes issue the t commands you want. I like to set it up like this:
– pane one: #dtv OR #durftevragen (search query)
– pane two: timeline
– pane three: mentions
– pane four: dms (not shown in this article, I forgot to set dm permissions for my app ;))

Of course you are free to set it up however you like. In my case these are the commands:

t stream search "#dtv OR #durftevragen"
t stream timeline
watch -d -n 600 t mentions

The third command is there because you can’t stream mentions. Or maybe you can, but it didn’t work for me. Anyway, the watch command ($ man watch) helps me out here.



If you’re connecting from PuTTY you can play around with the font, colours, size, etc.



If you’re done select Session, enter a name and click Save so your settings are saved.



First result:



‘watch’ is stripping colour from my mentions so I’m working around that with a manual watch replacement:

$ while true; do clear; t mentions -n 6; sleep 600; done



Victory! We can use the lower horizontal pane to send updates in.


Upgrading Roundcube from the Debian repositories to the current version

Debian 8’s repository holds Roundcube version 1.1.5. That is fine, because it’s a stable version and it works without issues worth mentioning. However, I wanted to install the current version (1.2.2) because it has a number of features I like. During installation I followed Roundcube’s own howto.

Basically I will just install the new Roundcube next to the old one and afterwards remove the older Roundcube.

I’m assuming your html files are in /var/www.

# cd /var/www

Download the source:
Head over to the Roundcube download page. I’m downloading the 1.2.2 Complete version because it includes the necessary third-party packages. On the server:

# wget

Extract the tar file:

# tar xvfz roundcubemail-1.2.2-complete.tar.gz

Rename the directory for typo reduction purposes:

# mv roundcubemail-1.2.2 newcube

Set permissions:

# chown -R www-data:www-data newcube

For security reasons, move the temp directory out of the publicly accessible directory (to whatever directory you fancy) and delete the logging directory:

# mv /var/www/newcube/temp /var/roundcubetemp
# rm -rf /var/www/newcube/logs

Because we don’t want to overwrite the old Roundcube version we’ll set up a new database. I’m using the same user as for the old database. Note that you might find the old user’s password in /etc/roundcube/debian-db.php. Of course if you find it easier feel free to create a new user for this database.

# mysql -u root -p
mysql> CREATE DATABASE roundcube122;
mysql> GRANT ALL PRIVILEGES ON `roundcube122`.* TO 'roundcube'@'localhost';
mysql> quit

Import the preconfigured MySQL database into the newly created one:

# mysql -u root -p roundcube122 < /var/www/newcube/SQL/mysql.initial.sql

Continue the installation from your browser by pointing it at the installer directory of the new Roundcube.
Because I had the repository version installed, all PHP and third-party requirements were already met. If some of them aren’t, install and/or configure them; most are just PHP modules or Debian packages. If everything is in order press the Next button.

This brings us to the configuration page. I suggest changing the following settings:

temp_dir: /var/roundcubetemp/
log_driver: syslog
db_dsnw: MySQL, localhost, roundcube122, roundcube, P4ssw0rd (use the old roundcube user’s password here)

The SMTP server details depend on how you set up your SMTP server (duh), so change them to your own settings.
smtp_server: localhost
smtp_user/smtp_pass: Use the current IMAP username and password for SMTP authentication
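For reference, the installer writes the database settings into the generated config file as a single DSN string. A sketch using the example values above (the password is the placeholder from this post, not a real one):

```
$config['db_dsnw'] = 'mysql://roundcube:P4ssw0rd@localhost/roundcube122';
```

The format is driver://user:password@host/database, so it is easy to adjust by hand later if you move the database.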

When you’re done click CREATE CONFIG.

If no errors occur click CONTINUE.

Delete the installer folder:

# rm -rf /var/www/newcube/installer

Disable the installer in /var/www/newcube/config/

$config['enable_installer'] = false;

Install curl and git:

# aptitude install curl git


Install Composer and copy the default package list:

# cd /var/www/newcube
# curl -s | php
# cp composer.json-dist composer.json

To install a plugin, first find one you like. I’ll be installing this one:

In /var/www/newcube/composer.json find this part:

    "require": {
        "php": ">=5.3.7",
        "pear/pear-core-minimal": "~1.10.1",
        "roundcube/plugin-installer": "~0.1.6",
        "": "~1.0.12",
        "": "~1.0.6",
        "": "~0.1.1",
        "": "~1.10.0",
        "": "~1.7.1",
        "": "~1.4.2",
        "roundcube/net_sieve": "~1.5.0"

Add a comma to the second last line and add the plugin and version. The string to add appears at the top of the page. It is also (at the time of writing) the part of the url after ‘’.

    "require": {
        "php": ">=5.3.7",
        "pear/pear-core-minimal": "~1.10.1",
        "roundcube/plugin-installer": "~0.1.6",
        "": "~1.0.12",
        "": "~1.0.6",
        "": "~0.1.1",
        "": "~1.10.0",
        "": "~1.7.1",
        "": "~1.4.2",
        "roundcube/net_sieve": "~1.5.0",         <---- NOTE THE COMMA HERE
        "sblaisot/automatic_addressbook": "~0.4.2" <-- NO COMMA HERE


Then install the plugin:

# php composer.phar install

When adding further plugins later, run

# php composer.phar update

Why Ziggo didn’t hand out a hostname, or: what you need to do to run a mail server on a Ziggo consumer line

For those who are only looking for the solution: it is in the second-to-last paragraph.

Some people climb icebergs without any gear, others go base jumping. Not me: I build mail servers. To each their own hobby.

A mail server doesn’t require much: the software is free; you usually already have the hardware (any old computer will do); and who doesn’t have an internet connection these days?

Grandpa remembers
It used to be quite common to run your own mail server, but I’m talking about the days when it was not at all common to have a permanent internet connection at home (permanent as opposed to dial-up with a modem). When internet via xDSL and cable took off, more people got a permanent connection with a fixed IP. At the same time, the number of machines that were mail servers without knowing it grew: infected with malware. This resulted in enormous piles of spam. At one point 98% of all incoming mail on my business mail server was spam.

To avoid being used as a spambot you could install antivirus software and a firewall, but people often had too little trouble from it themselves to want to do anything about it. Then Microsoft shipped Windows XP SP2 with the firewall enabled by default, and that already made a big difference. Later, providers took their responsibility too and blocked outgoing port 25 on their end users’ equipment by default. That helped even more.

Port 25
When I switched from XS4ALL to UPC because it saved me €600 a year, I found out that UPC did not have outgoing port 25 open on my cable modem. That was a pity, but I still had business servers to experiment with. After a while I wanted to run one at home again, so I asked UPC to open up outgoing port 25 for me.

My question through the website got a canned answer: a promotional link to UPC’s website. So I called them.

-“Do you mean port mapping?”
“No, I mean outgoing, not incoming.”
-“Oh, do you want to set up a firewall?”
“No, I want to allow outgoing traffic from my cable modem to port 25 on other internet addresses. That is currently blocked.”
-“We don’t block anything.”
“Yes you do; I can see it when I go out with telnet.”
-“Let me check with my colleague.”

-“We don’t support telnet.”
“It’s not about telnet, it’s about my mail server.”
-“Do you want to set up your email?”
“No, run a server.”
-“You are not allowed to run a server.”

Well, you can imagine it was not a productive conversation. After a great deal of back and forth I happened upon someone from the webcare team on Twitter who understood my question and opened port 25 for me. My mail server could go into production!

Incidentally, running a mail server is in fact allowed. What is not allowed is overloading the infrastructure. The fact that these often amount to the same thing doesn’t mean they do so by definition.

Not a complete amateur
Of course I’m not a complete amateur, and I had configured my firewall to block outgoing port 25 on my LAN for everything except my mail server. I have also set up the other machines on my network so that no day-to-day work is done under accounts with admin rights, and I use port mirroring on my switch so I can check with Wireshark that there is no botnet activity on my network. I don’t want my IP address to end up on blacklists.

In short: I handle my internet connection responsibly.

New modem, new problems
That went well for a year or two, and then my cable modem broke. The internet connection kept dropping. These things happen, so I called Ziggo, as UPC was called by then. No problem: I got a new cable modem and the old one had to go back.

First I connected the new one with a crossover cable so my old one wouldn’t immediately be disabled. It worked and had a port forwarding function, so I put it into my production network. Then Ziggo had a surprise for me.



Among other ports, port 25 is actively blocked.

Well, long story short: you can put this cable modem in “modem mode” or in “router mode”. The default is router mode (with port mapping, firewall, wifi, guest wlan, and so on), and “modem mode” is its own bridge mode: pass everything through 1-to-1 over physical ethernet port 1. Fine.

Incoming and outgoing mail worked again \o/

But because I had been given a different IP address, and the dynamic hostname is usually based on the public IP address, I also had to update the value of myhostname in my Postfix installation.

Myhostname is the value in the Postfix config where you state your hostname. This value (your public hostname) is sent along with outgoing mail, and receiving mail servers can resolve that hostname and check whether the IP address they find matches the IP address the mail is coming from. If those addresses match, all is well; if they don’t, chances are the sender is forged and the mail is marked as spam, or at least the probability that the mail is spam is scored higher. In other words: if your hostname doesn’t check out, the vast majority of the world will treat your mail as spam.

No problem either: just update the hostname. You find it by doing an nslookup on your IP address:


The value behind ‘name’ is your hostname, and that is what has to go into Postfix. Then came the next surprise:
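The myhostname change itself boils down to one line in the Postfix main config. A sketch (the hostname below is made up; use whatever the reverse lookup of your own IP returns):

```
# /etc/postfix/main.cf
# Example value only: use the hostname your provider's reverse DNS returns
myhostname = 203-0-113-42.example-isp.net
```

After editing main.cf, apply the change with `postfix reload`.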


Instead of resolving the address to a hostname, I was told that no hostname could be found at all. Now, it is customary, normal and useful for everyone that an internet provider hands out hostnames, certainly for dynamic addresses: it gives you an anchor point. This is not a new invention: hostnames have been around for as long as the internet has existed, and in various forms before that. I don’t know of any provider that doesn’t hand out hostnames.

That was awkward, because what do you do in a case like that? Filling in no hostname leads to error messages in Postfix and your mail is guaranteed to be marked as spam. You have to fill in a hostname.

Let’s just reset the cable modem then. In router mode perhaps? Hm, no. Back to modem mode. And my own router. Unfortunately, without result. So I called the helpdesk, where I was helped by someone who, I think, didn’t really understand the question.

-“We don’t support hostnames.”
“Er… I’ve always had one from you.”
-“We don’t support that. Never have.”
“Still, I would really like to have one. Can you tell me how to get one?”
-“Not through us. Good evening.”
“Hang on a minute. Could you maybe find someone who can help me further?”
-“No, sir. We don’t do that. Go look it up on the internet; I have to do that myself too.”

And the conversation went on like that for a while. The helpdesker didn’t understand the question and quickly fell back on the “we don’t support that” routine. When the helpdesker got ruder and ruder and started shouting that I should figure it out myself, I gave up. I have been doing helpdesk work for fifteen years myself and I know you sometimes get questions that are (in your eyes) unreasonable, but shouting at your customers is never the solution. I have never come across a helpdesker as rude as at Ziggo.

I wonder what the speed at XS4ALL would be like?


Crap: 2Mb up at most. That is not enough for my Nextcloud service, and with by now fourteen devices in the house sucking on the internet line, 38Mb down isn’t exactly luxurious either.

Maybe Webcare can help me. Public shaming is the new polite request for help, because you don’t help customers because you are good; you help them because you want to give others the impression that you are good.

Webcare was very friendly and promised that someone would call me. The person who called didn’t quite understand the question and would get back to me. It turned into a little loop:

10 promise to call back
20 sleep 2
30 goto 10

Because it was taking so long and I didn’t want to be without a mail server for that long (and because it’s very cool), I had meanwhile rented a server at Strato. With a fixed IP address, with a normal hostname. Everything worked on it right away, but the thing costs €9 per month. Not a fortune, but it is a bit of a shame that this bungling by Ziggo costs me €108 a year on top of my normal subscription fee.

DMs to Webcare that Webcare itself hadn’t first asked for were ignored by default. Only complaining publicly could provoke a response. That response would then be personal and friendly, I’ll give them that, but I find it a shame that it is so obvious that it is really only about limiting damage to their image rather than properly helping your customers.

Besides being a Ziggo customer, I have also been a member of the Consumentenbond (the Dutch consumers’ association) for years. I had a consumer issue, so I asked the Consumentenbond to mediate or provide legal advice. Although the Consumentenbond didn’t have the expertise at hand to understand the question either, they did say they were willing to help with mediation and, if necessary, legal assistance.

After a phone call with a friendly gentleman from the Consumentenbond I knew where I stood. I let the Ziggo Webcare know (publicly, of course, because I did want a response).

That evening there was a solution.

The solution
A friendly helpdesker called me and explained the following: because of the shortage of IPv4 addresses, Ziggo had decided to assign a range of addresses that was actually intended for voip to cable modems. Voip clients (apparently) don’t need a hostname, so no hostnames were attached to these addresses. My cable modem was placed in a different pool, got an address from that range, and the problem was solved.

All’s well that ends well? Well, no. Although the immediate technical problem has been solved, and the Webcare and helpdesk employees who eventually helped me did so in a friendly and very professional way, we are still stuck with a provider that apparently can only provide normal service if you threaten legal action. As with antibiotics, that is an undesirable situation: if it happens too often, resistance develops and soon Ziggo won’t be moved to provide service by that either, which is a very bad thing. As soon as there is another provider that can offer a decent speed at my address, I’m leaving Ziggo.

Blocking relay hammering on Postfix with Fail2ban

After installing Postfix on a new VPS I noticed that the server was under continuous attack from people trying to use it as an open relay. The server obviously was configured not to allow relaying for external parties, so they were politely shown the door by Postfix:

Oct 14 10:51:03 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>
Oct 14 10:51:04 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>
Oct 14 10:51:04 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>
Oct 14 10:51:04 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>
Oct 14 10:51:04 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>
Oct 14 10:51:05 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>
Oct 14 10:51:05 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>

However, since there were a lot of them (sometimes five per second) my logfiles were growing rapidly and Postfix was being kept quite busy. I had installed Fail2ban, a program that reads logfiles and takes action (mostly via iptables) upon certain repeated entries, for example repeated failed login attempts.

I wanted to use Fail2ban to block IPs that kept trying to relay mail from outside.

In /etc/fail2ban/jail.local find the Postfix section:


[postfix]
enabled  = false
port     = smtp,ssmtp,submission
filter   = postfix
logpath  = /var/log/mail.log

Change false to true.

In /etc/fail2ban/filter.d/postfix.conf find

failregex = ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 554 5\.7\.1 .*$
            ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.7\.1 : Helo command rejected: Host not found; from=<> to=<> proto=ESMTP helo= *$
            ^%(__prefix_line)sNOQUEUE: reject: VRFY from \S+\[<HOST>\]: 550 5\.1\.1 .*$
            ^%(__prefix_line)simproper command pipelining after \S+ from [^[]*\[<HOST>\]:?$

Directly underneath add a new regex, so that it reads:

failregex = ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 554 5\.7\.1 .*$
            ^%(__prefix_line)sNOQUEUE: reject: RCPT from \S+\[<HOST>\]: 450 4\.7\.1 : Helo command rejected: Host not found; from=<> to=<> proto=ESMTP helo= *$
            ^%(__prefix_line)sNOQUEUE: reject: VRFY from \S+\[<HOST>\]: 550 5\.1\.1 .*$
            ^%(__prefix_line)simproper command pipelining after \S+ from [^[]*\[<HOST>\]:?$
            ^%(__prefix_line)sNOQUEUE: reject: RCPT from (.*)\[<HOST>\]: 454 4\.7\.1 .*$

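As a sanity check, you can test the core of the new pattern against one of the log lines above with grep. Here Fail2ban's <HOST> tag is replaced by a plain pattern and the %(__prefix_line)s prefix is left out; the "unknown" hostname in the sample line is an assumption (the hostname before the bracket varies, "unknown" is typical):

```shell
# Sample rejection line, modelled on the log excerpt above
line='Oct 14 10:51:03 h2621265 postfix/smtpd[12328]: NOQUEUE: reject: RCPT from unknown[]: 454 4.7.1 <>: Relay access denied; from=<> to=<> proto=SMTP helo=<>'

# The core of the failregex, with <HOST> replaced by a plain wildcard
echo "$line" | grep -Eq 'NOQUEUE: reject: RCPT from (.*)\[.*\]: 454 4\.7\.1' \
  && echo "pattern matches"
```

If the pattern does not match your actual log lines, adjust it before restarting Fail2ban.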
Now restart Fail2ban:

# service fail2ban restart

Watch the offenders being blocked before they even hit Postfix:

# watch -d -n 10 fail2ban-client status postfix

Looks like this at my server:

Every 10,0s: fail2ban-client status postfix

Status for the jail: postfix
|- filter
|  |- File list:        /var/log/mail.log
|  |- Currently failed: 13
|  `- Total failed:     945
`- action
   |- Currently banned: 10
   |  `- IP list:
   `- Total banned:     72

Those addresses get a time out of ten minutes before they are allowed to try again.

Bonus: check out the recidive jail in Fail2ban. If an address is found to get blocked again and again it gets sentenced to longer jail time, like a week or a month.
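As a sketch of what that looks like (the values here are illustrative; check the jail.conf shipped with your Fail2ban version for the exact defaults), a recidive section in /etc/fail2ban/jail.local bans for a week (604800 s) any address that got banned five times within a day (86400 s):

```
[recidive]
enabled  = true
logpath  = /var/log/fail2ban.log
bantime  = 604800
findtime = 86400
maxretry = 5
```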

No more relay hammering storm on my server!

ASSP on Debian 8/Ubuntu 16.04


ASSP stands for Anti-Spam SMTP Proxy and that’s exactly what it is. You install it as a proxy between the internet and your mailserver and it filters out spam for you. For more information on ASSP check out my previous article on ASSP, which is much more verbal and also discusses in some detail how to operate your ASSP installation.

This article explains how to set up ASSP on Debian 8 “Jessie”. It will most probably work on any Debian derivative as well, like Ubuntu or Linux Mint.

Update: added notes for installing on Ubuntu 16.04.1.

Ubuntu doesn’t have aptitude installed by default so if you’re running Ubuntu you must either install it (# apt-get install aptitude) or use apt-get wherever this article says aptitude.

ASSP can use a number of database backends, including:
– flat text files (this is the default)
– BerkeleyDB (easy to set up but not managed)
– MySQL (a bit more complicated to set up but works better in the long run)
– many other databases

We’ll start out with text files and migrate to MySQL once everything is running.

You will need to set up your own mailserver (e.g. Exchange, Domino, Postfix) and have it relay outgoing mail through the ASSP server. We’ll set up ASSP on a separate machine. It’s perfectly possible to run ASSP on your mailserver itself but isolating it on a separate machine makes for easier troubleshooting.

System requirements
During setup ASSP may complain that it needs at least four processors and two DNS servers. It will work with less but the complaints are valid: in order to secure smooth operations you *will* need a decent server. Also the DNS servers need to be servers on your LAN, not on the internet. External DNS servers may return non-standard replies to dnsbl queries and even if they don’t they may start doing so in the future without warning. For this test setup I’m using just one DNS server but in a corporate environment in which you’re depending on DNS for your daily network operations (say Active Directory or any type of LDAP based network) your DNS server(s) must be sufficiently responsive to service both LDAP and ASSP queries.

Why not use Exim4 instead of Postfix? Because I am more familiar with Postfix. Use Exim4 if you like.

Network lay-out
For this article I will be using three machines:
– router
– ASSP + Postfix
– mailserver (Postfix) to which the end-users connect
Domain: testnet.lab
User: vorkbaard – but feel free to add your own

Do keep the IP addresses in mind when copying and pasting example code into your own server.
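With the ports configured later in this article (25 for ASSP, 125 for the local Postfix, 225 for ASSP's relay port), mail flows roughly like this; the sketch is my reading of the setup below, not part of the original diagram:

```
incoming: internet -> ASSP (:25) -> local Postfix (:125) -> internal mailserver
outgoing: internal mailserver -> ASSP (:225) -> local Postfix -> internet
```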

– Debian 8.6
– ASSP 2.5.1 16177

Installing the server

When installing Debian you’ll be asked which server roles you would like to install. Ubuntu and other derivatives may differ from Debian itself but in any case you do not need to select anything in particular. SSH will be handy, as will the so-called standard system utilities.

No particular software collections necessary:
[*] SSH Server
[*] Standard system utilities

Note for Ubuntu 16: you may install the Mailserver role here, which will install Postfix.

Debian Jessie’s Task Selector

If you’re using a VM it is advisable to install ntpdate so your mail and logs will use the correct time stamps:

# aptitude install ntpdate ntp
# service ntp stop && ntpdate && service ntp start

After exporting or importing the VM do

# ntpdate

to sync the time.
(Pick a close ntp server from the list at


# aptitude install postfix

If asked to replace Exim, choose Yes. Type of server: Internet Site. System mail name: as the on-screen explanation says, use your mail domain (e.g. example.org).

In the file /etc/postfix/ find:

smtp    inet    n   -   -   -   -   smtpd

and change it to:

125     inet    n   -   -   -   -   smtpd

Set message size limit
We need to take a careful look at the maximum allowed e-mail size because if ASSP and Postfix are not using the same size strange errors may occur. I have had mails stuck in the queue indefinitely because Postfix’s maximum size was smaller than ASSP’s.

For this article we’ll be going with a 20MB e-mail limit.

In the file /etc/postfix/ change the following value (or add it if it doesn’t exist):

message_size_limit = 26214400
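Note that 26214400 bytes is actually 25 MB, a bit above the 20 MB we will configure in ASSP later on; presumably (my assumption) the headroom keeps mail that ASSP has tagged or expanded from bouncing at Postfix. The arithmetic:

```shell
echo $((20 * 1024 * 1024))  # 20971520 - ASSP's maxSize later in this article
echo $((25 * 1024 * 1024))  # 26214400 - the message_size_limit above
```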

Secure Postfix so that only the ASSP server may use it: in /etc/postfix/ change mynetworks to:

mynetworks = [::ffff:]/104 [::1]/128

Take care to use your own server’s address in the above line where I wrote
At the end of the file add:

smtpd_client_restrictions = permit_mynetworks, reject
smtpd_delay_reject = no
transport_maps = hash:/etc/postfix/transport

This tells Postfix:
– to allow the addresses in the mynetworks value
– to reject immediately if not allowed
– to use a hash of the file /etc/postfix/transport to look for routing instructions.

Look for the value of mydestination and remove your mail domain (note: this is already done in Ubuntu 16):

mydestination =, assp.testnet.lab, localhost.testnet.lab, localhost


mydestination = assp.testnet.lab, localhost.testnet.lab, localhost

If you don’t do this Postfix will assume it is the final recipient of incoming mail for your domain, which it isn’t: it must be routed through ASSP and delivered to your ‘real’ mailserver.

Create the file /etc/postfix/transport and add to it: smtp:

This tells Postfix that mail to the domain should be routed to using the smtp protocol. Again, use your own domain name and mailserver IP address.
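For illustration only (the domain is this article's test domain, the address is a hypothetical stand-in), a transport file could look like this; the square brackets tell Postfix to skip the MX lookup for that destination:

```
# /etc/postfix/transport
testnet.lab    smtp:[]
```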

Load the transport file in Postfix and reload Postfix:

# postmap /etc/postfix/transport
# postfix reload

The postmap command creates a hash file for /etc/postfix/transport. Our file contains only one entry but if there were a lot more, the hashing would make lookups faster for the computer. It’s just the way Postfix does things.

Note that if you change the transport file you need to rehash it by rerunning the above postmap command.

Perl modules from Debian’s repositories

A lot of Perl modules exist in Debian’s repositories. The naming convention is usually: Net::DNS becomes libnet-dns-perl. To install the lot of them:

# apt-get install libnet-dns-perl libauthen-sasl-perl libmail-spf-perl \
          libregexp-optimizer-perl libfile-readbackwards-perl \
          libnetaddr-ip-perl libnet-cidr-lite-perl libmail-dkim-perl \
          libnet-ldap-perl libunicode-string-perl \
          libemail-mime-perl libtext-unidecode-perl \
          liblingua-stem-snowball-perl libsys-cpu-perl libthreads-perl \
          libschedule-cron-perl libdigest-sha-perl libmime-types-perl \
          libclamav-client-perl libarchive-zip-perl libberkeleydb-perl \
          liblingua-identify-perl libsys-cpuload-perl \
          libthreads-shared-perl libunicode-linebreak-perl
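The naming rule above can be sketched as a small shell helper. The helper is hypothetical, made up for this article; it illustrates the convention only, since not every module maps this cleanly:

```shell
# Lowercase the module name, turn :: into -, wrap in lib...-perl
perl_to_deb() {
  echo "lib$(echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/::/-/g')-perl"
}

perl_to_deb Net::DNS    # libnet-dns-perl
perl_to_deb Mail::DKIM  # libmail-dkim-perl
```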


We’ll be installing as many Perl modules as possible from Debian’s repositories. This ensures they will play nice with the rest of the OS and be updated automatically.

Some modules have dependencies that need to be fulfilled first.

Module:                            Dependency (needs):
OCR modules:                       libgd2-xpm-dev
Crypt::OpenSSL::AES:               libssl-dev
Image::OCR::Tesseract:             tesseract-ocr and imagemagick
PDF::OCR and PDF::OCR2:            xpdf
Perl modules from CPAN:            make and build-essential

# aptitude install make build-essential libgd2-xpm-dev libssl-dev tesseract-ocr imagemagick xpdf

Perl modules

Some Perl modules are not available as a Debian repository package. We need to install those with CPAN.

First, upgrade CPAN:

# cpan
Would you like to configure as much as possible automatically? [yes]
CPAN> install CPAN
CPAN> reload cpan

Now let’s install the modules.

cpan> install Digest::SHA1 LWP::Simple Net::IP::Match::Regexp Net::SMTP Net::SenderBase Net::Syslog Thread::State File::PathInfo LEOCHARRE::DEBUG LEOCHARRE::CLI Tie::RDBM Sys::CpuAffinity Sys::MemInfo Unicode::GCString Mail::DKIM::Verifier PDF::Burst PDF::GetImages Crypt::OpenSSL::AES Image::OCR::Tesseract PDF::OCR PDF::OCR2 Email::Send

This may take a while. Get coffee.

LWP will ask if you want to run tests. Answer No. Get more coffee.

Mail::SPF::Query is used for upgrade compatibility with ASSP V1. V2 only uses Mail::SPF. It is possible to force install Mail::SPF::Query and it will work but unless you’re upgrading from V1 (which we’re not; this is a clean install) it is better to disable useMailSPFQuery, not install Mail::SPF::Query and instead enable useMailSPF and install libmail-spf. Enabling and disabling these options can be done in ASSP’s web interface later on.

If useMailSPFQuery:=0 ASSP may become unstable; at least that’s what happened when I installed it. So force install Mail::SPF::Query and leave useMailSPFQuery:=1. I went with:

cpan> force install Mail::SPF::Query

If you’re going to use ClamAV for virus scanning, do

cpan> force install File::Scan::ClamAV

(My previous article explains in more detail how to set up ClamAV with ASSP.)

cpan> quit

Installing ASSP

# aptitude install unzip
$ wget -O

Extract to /usr/share/assp/. You could put it anywhere but I’m using this path.

# unzip -d /usr/share

Make the Perl scripts executable:

# chmod +x /usr/share/assp/*.pl

Create a dedicated system user for assp:

# useradd assp -r

Set file permissions

# chown -R assp:assp /usr/share/assp

ASSP will change the permissions a bit. That’s ok: it is started as root and its code then drops privileges to the assp user.

Start ASSP:

# perl /usr/share/assp/ &

Press Ctrl + C to fork the process to the background to free your console.
Watch for errors and warnings in /usr/share/assp/logs/maillog.txt.
Point your browser to The default username is root and the password is nospam4me.

Configuring ASSP

The bare minimum to get ASSP running:

[Network Setup / Incoming Mail / Network Flow]
SMTP Listen Port (listenPort): 25
SMTP Destination (smtpDestination): 125

[Relaying / Outgoing and Local Mail / relaying not allowed]
Relay Host (relayHost): (this is Postfix we set up earlier!)
Relay Port (relayPort): 225
Allow Relay Connection from these IP’s (allowRelayCon):

[SMTP Session Limits]
Max Size of Local Message (maxSize): 20971520
Max Size of External Message (maxSizeExternal): 20971520

[Recipients/Local Domains/Transparent Recipients and Domains ]
Local Domains (localDomains):

[ ] Enable Delaying/Greylisting (leave this off for now while we’re testing)

Prepend Spam Subject (spamSubject): [SPAM]
[v] Prepend Spam Tag (spamTag)
[v] All Test Mode ON (allTestMode)

Notification Email To (Notify):

[DNS Setup]
[v] Use Local DNS (UseLocalDNS)
DNS Name Servers (DNSServers):
Use at least two dns servers for production environments. Using one dns server will result in an error message ‘incorrect ‘DNSServers’ – possibly unchanged’. It will still work.

[Server Setup]
Run as UID (RunAsUser): assp
Run as GID (RunAsGroup): assp
My Name (myName):
Override the Server SMTP Greeting (myGreeting): MYNAME
[v] Set ASSP File Permission on Startup (setFilePermOnStart)
[v] Check ASSP File Permission on Startup (checkFilePermOnStart)

Do not forget to press Apply Changes after making changes. If you need to restart ASSP scroll all the way up and click Shutdown/Restart. Click the Proceed button and be patient. You can follow the shutdown process from your terminal.

If your browser says the page cannot be found ASSP has stopped and you may restart it.

Now is a good time to test connectivity. Tell your mailserver to relay outgoing mail to and verify that incoming and outgoing mail is working. In case of problems check /var/log/mail.log and /usr/share/assp/logs/bmaillog.txt.

Migrating the databases to MySQL

Install MySQL:

# aptitude install mysql-server mysql-client

The installer will ask you for a password. Choose a hard password and remember it.

Set up a database and a user for assp:

# mysql -u root -p
mysql> create database assp;
mysql> create user 'assp'@'' identified by 'pwd';
mysql> GRANT ALL PRIVILEGES ON `assp`.* TO 'assp'@'';
mysql> quit

Enable [Network Setup / Incoming Mail] > Disable all new SMTP and Proxy Network Connections (DisableSMTPNetworking). So check the checkbox.
[Apply Changes]

Monitor the ASSP database for changes:

# watch mysql -u root -pMySqlPassword -e \'show tables\' assp

If your MySql root password is pwd do

# watch mysql -u root -ppwd -e \'show tables\' assp

Initially this will not show any output because ASSP has not made any tables yet.

Set all needed DB parameters [File Paths and Database]
database hostname or IP (myhost):
database driver name (DBdriver): mysql
database name (mydb): assp
database username (myuser): assp
database password (mypassword): pwd (use the assp user’s database’s password here)

[Apply Changes], verify that ASSP is not throwing errors and that the database settings remain set in the web interface.

Set Email Whitelist Database File (whitelistdb) to: DB:
Press [Apply Changes].

Restart ASSP and verify that the watch command now shows the whitelist table (can take a few seconds).

If that works correctly then ASSP is working well with MySQL and you can set the other lists to DB: as well:
Email Redlist Database File (redlistdb)
Personal Blacklist Database File (persblackdb)
Delaying Database (delaydb)
LDAP Database (ldaplistdb)

Start ASSP

Uncheck the DisableSMTPNetworking checkbox.

Tips and tricks

* Subscribe to the ASSP mailing list.
* Troubleshooting Postfix:
– Postfix logs to /var/log/mail.log
– ASSP logs to /usr/share/assp/logs/maillog.txt and /usr/share/assp/logs/bmaillog.txt (errors). That is, if you installed ASSP in /usr/share/assp.
– # postconf -n shows all non-default settings in Postfix’s configuration
– Do not start the line ‘125 inet n - - - - smtpd’ with one or more spaces.
– Postfix does not care about the order of the directives in
* Check my previous article on ASSP for a more elaborate discussion of the software.

Install GNU social on Debian 8

GNU social is an open source social media platform. Users on your own server can follow users on other servers and vice versa, essentially making it a distributed social media platform. It doesn’t do a whole lot but what it does it does well.

I’m installing on Debian 8.

Install Debian with the web server role installed. If you are installing on an existing server without a web server do

# tasksel

then select the web server role and choose Ok.

Create the database

Create a database for GNU social:

# mysql -u root -p
CREATE USER gnusocial;
SET PASSWORD FOR gnusocial=PASSWORD("Passw0rd");
GRANT ALL PRIVILEGES on gnusocial.* TO gnusocial@localhost IDENTIFIED BY "Passw0rd";

Download the code

At the moment GNU social is being packaged for Debian so for the time being we should use git to download the code.

Install git:

# aptitude install git

Change to the web root and download GNU social:

# cd /var/www
# git clone

Configure Apache

In /etc/apache2/sites-available/000-default.conf change

# ServerName
DocumentRoot /var/www/html


DocumentRoot /var/www/gnu-social
<Directory /var/www/gnu-social/>
    AllowOverride All
    Order Deny,Allow
    Allow from all
</Directory>

Replace by your own domain name.

Enable pretty URLs: enable the rewrite mod in Apache and put the sample htaccess file in place:

# a2enmod rewrite
# mv /var/www/gnu-social/htaccess.sample /var/www/gnu-social/.htaccess

Set permissions on the GNU social folder:

# chown -R www-data:www-data /var/www/gnu-social

Reload Apache to reflect the changes:

# service apache2 reload

Configuration via browser

Open the site in your browser:

Database settings:
Hostname: localhost
Name: gnusocial
DB username: gnusocial
DB password: Passw0rd

After this setup GNU social throws me a bunch of errors but it works anyway.


To install a theme check /var/www/gnu-social/theme/ to see which themes are available. Pick one and in /var/www/gnu-social/config.php add the line

$config['site']['theme'] = 'yourthemename';

Install the latest version of SABnzbd for multiple users on Debian 8

This article explains how to set up SABnzbd on Debian 8. I suppose it will also work on Ubuntu and other Linux flavours since there aren’t a whole lot of dependencies.

We’ll install SAB’s Python code in a central location so all users will use the same code. Each user will have their own download queue and locations, preferences, history files, etc. We’ll encrypt the webinterface with SSL certificates from Let’s Encrypt.

If you found this article useful please click some of the ads around here 🙂

Here’s what we’ll cover:

  • Why the Python code instead of the Debian package?
  • Install dependencies and extra tools
  • Download and extract SABnzbd’s Python code
  • Create an ini file
  • Webbased setup
  • Secure web interface with Let’s Encrypt SSL certificates
  • Add users
  • Managing the SAB processes
  • Upgrading and tweaks
    Domain name
    I assume you have a registered domain name. If not: get one or skip the Let’s Encrypt part, since Let’s Encrypt only works with registered domain names; use self-signed certificates instead. In this article I use an example domain name; substitute your own.

    Port numbers
    By default the SABnzbd webinterface runs on port 8080. All port numbers I use in this article are arbitrary. By convention, don’t use the lower, privileged port numbers (<1024). Make sure you get your port mapping right.

    Because SABnzbd will be running under the users' respective accounts it is important to distinguish between things done as root and as a regular user. My useraccount is called vorkbaard and I'll be using it as an example. Substitute your own.

    Commands executed as root are indicated by #. Use sudo to run them or just log in as root or do

    $ sudo su


    $ su root

    to change user contexts.

    Commands executed as a regular user are indicated by $.

    Why the Python code instead of the Debian package?

    Because, while stable, the Debian package is rather outdated. At the moment of writing the Debian package is at version 0.7.18 while the current version is 1.0.1. To be fair, the functionality is alright in the Debian version. I’m in it for the eye candy, and SABnzbd is not the most critical part of my server, so if it breaks there’s no urgency to fix it.


    This page lists SAB’s dependencies:
    To install them in Debian, run

    # aptitude install python2.7 python-cheetah python-openssl python-support python-yenc unzip p7zip-full par2

    On my server SABnzbd was complaining about a “problematic UNRAR” because I had installed unrar-free. The non-free version of unrar stopped the complaints:
    In /etc/apt/sources.list make sure you have non-free added to your repositories:

    deb jessie main contrib non-free
    deb-src jessie main contrib non-free
    deb jessie/updates main contrib non-free
    deb-src jessie/updates main contrib non-free
    # jessie-updates, previously known as 'volatile'
    deb jessie-updates main contrib non-free
    deb-src jessie-updates main contrib non-free


    # aptitude update

    to update the package cache.

    Then install unrar:

    # aptitude install unrar

    Download and extract SABnzbd’s Python code

    Head over to and copy the Python Source location. I’m installing in /opt on my server because I feel it should go there. But any location with the correct permissions will do.

    Go to /opt and download SABnzbd:

    # cd /opt
    # wget

    Extract the tarball and rename the directory:

    # tar -xzf SABnzbd-1.0.1-src.tar.gz
    # mv SABnzbd-1.0.1 sabnzbd

    Create an ini file

    We should now create a stub ini file to tell SABnzbd it should a) not fire up a web browser when it starts and b) be accessible from outside, not just from the local server. The purpose of this is to start SABnzbd on the server but continue the webbased setup from your workstation.

    Now the sabnzbd.ini file is distinct for each user. So it goes in the user directory. We’ll cover adding more users later; this is just the initial setup.

    As a user, create a file called ~/sabnzbd.ini and add this to it:

    host =
    auto_browser = 0

    Later on you may prefer to use a hidden ini file. In that case just name it ~/.sabnzbd.ini.

    Start SABnzbd

    Then fire up SABnzbd. The -f switch specifies which ini file you want to use. The -d switch tells SAB to go into daemon mode. Personally I like to run it interactively when setting it up (so no -d switch) in a separate terminal session so I can keep an eye on it.

    $ python /opt/sabnzbd/ -f ~/sabnzbd.ini


    Keep an eye on the process for warnings, errors and missing dependencies. Press Ctrl+C if you need to stop the process – note that this will stop SABnzbd so it will no longer be available from its webinterface. If you stop SAB from the webinterface the process will stop as well.

    If all is well SAB will start up normally (if not: stop the process, fix what needs fixing and try again) and you will be able to access it on its default port number 8080. Follow the webbased wizard.


    Secure web interface with Let’s Encrypt SSL certificates

    Here is a fun part. I’m not familiar with Let’s Encrypt on non-Debian systems, but Let’s Encrypt keeps fairly good documentation. It is also still under development, so things may change.

    If you do not have a registered domain name you could use self-signed certificates (Let’s Encrypt will not work without a registered domain name) but you’ll be forever dodging browser warnings.

    Install Let’s Encrypt
    If you haven’t done so install Apache. If you prefer a different web server that’s ok, check Let’s Encrypt’s website.

    # aptitude install apache2

    In /etc/apache2/sites-available/000-default.conf change

    # ServerName

    to your own domain name:

    ServerName <-- use your own domain

    Reload Apache:

    # service apache2 reload

    Make sure you can reach it from outside.

    Let’s Encrypt is easily installed from Debian Jessie’s backport repository:

    # echo deb jessie-backports main > /etc/apt/sources.list.d/backports.list
    # aptitude update
    # aptitude install letsencrypt python-letsencrypt-apache

    The installation had some dependency issues on my server and aptitude offered to resolve them. It offered a bunch of solutions and I chose the one that did not keep anything at their current version and/or uninstalled stuff.

    Start Let’s Encrypt:

    # letsencrypt run

    Afterwards verify your site is accessible over https.

    Set permissions
    If all is well your certificates will now reside in /etc/letsencrypt/live/ We must copy the certificates to /opt/sabnzbd/admin/ and set the correct permissions. Why /opt/sabnzbd/admin? Because sabnzbd/admin is the original certificate location for SAB. Keeping things in their expected locations makes it easier to troubleshoot.

    Unfortunately this involves compromising security a bit because your SAB-enabled users will be able to read your server’s private key. This is not optimal but we can reduce the risk to a minimum.

    Create the admin folder:

    # mkdir /opt/sabnzbd/admin

    Create a group called sab and add all users that should be able to use SAB (rather than giving everyone access):

    # groupadd sab

    For each user:

    # usermod -a -G sab vorkbaard

    Note that the -G parameter must be a capital G. The lowercase g would change the user’s primary group, not add an extra group.

    Give the sab group read permissions on /opt/sabnzbd/admin:

    # chown root:sab /opt/sabnzbd/admin

    Set traverse rights for the sab users (i.e. they should be able to open the folder):

    # chmod 610 /opt/sabnzbd/admin
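Mode 610 can look odd on a directory, so here is a quick demonstration of what it expands to: read/write for the owner, execute (traverse) only for the group, nothing for others. The scratch directory name is made up for the demo:

```shell
mkdir -p /tmp/assp-admin-demo
chmod 610 /tmp/assp-admin-demo
stat -c '%a %A' /tmp/assp-admin-demo  # 610 drw---x---
```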

    Renewing the certificates
    Now we’ll create a little script to renew the certificates and copy them over to the admin folder:

    Create a file /root/

    #!/bin/sh
    # Renew the Let's Encrypt certificates
    /usr/bin/letsencrypt renew --agree-tos
    # Copy the new certs to SABnzbd
    cp /etc/letsencrypt/live/*.pem /opt/sabnzbd/admin
    chmod 440 /opt/sabnzbd/admin/*.pem
    chown root:sab /opt/sabnzbd/admin/*.pem

    Schedule it with cron:

    # crontab -e

    Add this line:

    @daily /root/

    In cron make sure you end your last line with a newline – just add a new, empty line. Otherwise the last line will not run, no error will get logged and you will spend countless hours troubleshooting. You are welcome.
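A quick way to check whether a file ends in a newline; the helper function and file names are made up for the demonstration:

```shell
# tail -c 1 prints the last byte; command substitution strips a trailing
# newline, so the result is empty exactly when the file ends with one
ends_with_newline() { [ -z "$(tail -c 1 "$1")" ]; }

printf 'line1\nline2\n' > /tmp/crontab-ok
printf 'line1\nline2'   > /tmp/crontab-bad

ends_with_newline /tmp/crontab-ok  && echo "ok file ends with a newline"
ends_with_newline /tmp/crontab-bad || echo "bad file is missing one"
```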

    Run the script while you’re at it and all should be in order.

    # /root/

    If you screw this up, SABnzbd will throw an error at you when starting its Python script.


    From the webinterface set the following options:
    [x] Enable HTTPS
    HTTPS Port: 8081 (note that this is arbitrary but it must be different from the HTTP port)
    HTTPS Certificate: /opt/sabnzbd/admin/cert.pem
    HTTPS Key: /opt/sabnzbd/admin/privkey.pem
    HTTPS Chain Certificates: /opt/sabnzbd/admin/chain.pem

    The chain certificate is not always necessary (depends on the browser and its support) but it does no harm.

    Save then restart SABnzbd. Keep an eye on the process for errors. If all went well you can now connect to the https version of your SABnzbd process!


    Add users

    If you’re going to add users you must keep a couple of things in mind:
    – You do not want your users to interfere with each other’s setups. In other words: they must be separated, so we’ll use a separate SAB process for each user. To accomplish this we must use the --new parameter when starting SAB.
    – Because two processes cannot run on the same port number we must dedicate distinct port numbers to each user.
    – Each user must have her own sabnzbd.ini file.
    – SAB users must be members of the sab group in order to read the certificates.

    Set up your first account in a generic way, then close SABnzbd and copy your sabnzbd.ini file to /opt/sabnzbd/ for easy access. Keep this file as a template and copy it to all SAB-enabled users. Make sure to change the port and https_port values. Afterwards also change the API and NZB keys from the web interface and perhaps reset their password, e-mail address, and so on.

    Just for inspirational purposes I will describe how to add a user and enable SABnzbd for her/him.

    Copy the ini file:

    # cp /home/vorkbaard/sabnzbd.ini /opt/sabnzbd/sabnzbd.ini.generic

    Clean out the password. In /opt/sabnzbd/sabnzbd.ini.generic set:

    password = ""

    Create a new user:

    # useradd -G sab -m tinus

    -G sab adds the new user to the sab group; -m creates the user’s home directory.

    Bestow unto Tinus a password:

    # passwd tinus

    Copy the generic ini file to Tinus’s home folder and set permissions:

    # cp /opt/sabnzbd/sabnzbd.ini.generic /home/tinus/sabnzbd.ini
    # chown tinus: /home/tinus/sabnzbd.ini
    # chmod 770 /home/tinus/sabnzbd.ini

    In /home/tinus/sabnzbd.ini change:

    https_port = 8082
    port = 8083

    The port numbers should be unique.

    After you start the new user’s SABnzbd process (see the next section) log in to her SAB webinterface and set a password and other personal options.

    Managing the SAB processes

    Create a file /etc/init.d/ and add:

    #!/bin/bash
    ### BEGIN INIT INFO
    # Provides: multisab
    # Required-Start: $remote_fs $syslog
    # Required-Stop: $remote_fs $syslog
    # Default-Start: 2 3 4 5
    # Default-Stop: 0 1 6
    # Short-Description: Start sab at boot time
    # Description: Sabnzbdplus for multiple users
    ### END INIT INFO

    # userlist format: "username|port number|api key [space] username|port number|api key"
    userlist="tinus|8083|CsZ2HCbpHd5z7XvDlp7QPfViqnc4rfyC vorkbaard|8081|q89pYfUkvGgbQvW5SQYkwR1Lj2FQjIz2"

    case "$1" in
      start)
        for userstr in $userlist ; do
          name=$(cut -d'|' -f1 <<< $userstr)
          /usr/bin/sudo -u $name -H /usr/bin/python /opt/sabnzbd/SABnzbd.py -d -f /home/$name/sabnzbd.ini --new
        done
        ;;
      stop)
        for userstr in $userlist ; do
          name=$(cut -d'|' -f1 <<< $userstr)
          port=$(cut -d'|' -f2 <<< $userstr)
          akey=$(cut -d'|' -f3 <<< $userstr)
          # Ask the user's SABnzbd to shut down gracefully through its API
          shutdownurl="https://localhost:$port/sabnzbd/api?mode=shutdown&apikey=$akey"
          /usr/bin/wget --no-check-certificate --delete-after "$shutdownurl"
        done
        ;;
      *)
        echo "Usage: $0 {start|stop}"
        exit 1
        ;;
    esac
    exit 0

    wget will complain about ‘localhost’ not being in the cert’s name, hence the --no-check-certificate switch. Using localhost will prevent problems with non-functional internet connections, nat reflection and changing domain names. You could use http://localhost:<http port number> but that way you would need to keep the non-ssl port opened. Anyway, I use localhost.

    Change the userlist string so it contains your own users and their own API keys. It is ok to use incorrect API keys to start the SABnzbd processes but to stop them (gracefully) you need the right ones. If you’ve just added a user then use a fictional API key, start the multisab service, log in to the user’s SABnzbd webinterface, find the API key and paste the key in the script.

    Make the script executable:

    # chmod +x

    Register it as a service that should start at normal boots and stop at poweroffs and such:

    # update-rc.d defaults

    You can now also control it with

    # service multisab start


    # service multisab stop

    Upgrading and tweaks

    When upgrading:
    – Make sure to read the release notes. You may very well be able to keep your old ini files but you never know.
    – Keep your /opt/sabnzbd/admin folder or recreate it.

    Stop the multisab service:

    # service multisab stop

    Change to /opt and download and extract the new version:

    # cd /opt
    # wget
    # tar -xzf SABnzbd-1.0.2-src.tar.gz

    Rename the old directory

    # mv /opt/sabnzbd /opt/sabnzbd.old

    Rename the new directory

    # mv /opt/SABnzbd-1.0.2 /opt/sabnzbd

    Copy the admin directory, preserving al permissions

    # cp -rp /opt/sabnzbd.old/admin /opt/sabnzbd/
    Start the multisab service:

    # service multisab start

    Reload SABnzbd in your browser and verify that everything works and the new version is active.

    - Turn off HTTP: once everything works I suggest you turn off unencrypted HTTP access by enabling HTTPS, leaving the HTTPS Port field empty and entering the HTTPS port value in the SABnzbd Port field.
    - Set a sensible cache size: if you have enough memory, set it to 500M or so.
    - There are a bunch of things you can do to make SABnzbd faster, have it play nicer and overall just behave better. Poke around in the settings and check the SABnzbd site.

    Enjoy 🙂

    Installing a mailserver on Debian 8 – Part 5: Web interface: Roundcube

    How to install a complete mailserver on Debian 8, featuring Postfix, Dovecot, MySQL, Spamassassin, ClamAV, Roundcube and Fail2ban.

    ~ the howto that actually works ~

    Part 1: Introduction
    Part 2: Preparations: Apache, Let’s Encrypt, MySQL and phpMyAdmin
    Part 3: MTA: Postfix
    Part 4: IMAP server: Dovecot
    Part 5: Web interface: Roundcube
    Part 6: Spam filtering: SpamAsasssin
    Part 7: Antivirus: ClamAV and ClamSMTP
    Part 8: Quota and other Roundcube settings
    Part 9: Using mail with a remote IMAP client (i.e. Thunderbird)
    Part 10: Counter brute-force attacks with Fail2ban
    Part 11: Sources, config files, colouring and comments

    On this page

    Tell the webserver about Roundcube
    Change the default session key
    Remove Server field from logon screen
    Set a user password
    Changing the password from Roundcube

    Comments are on the last page.



    For Roundcube’s installation remember that you need to have backports set up. Alternatively, download Roundcube yourself, but that will render the next part of this article partly invalid. At the time of writing the Debian backports repo contains the most recent stable Roundcube version (1.1.4).
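    If backports are not set up yet: on Debian 8 (jessie) that means adding a line like the one below to /etc/apt/sources.list (the mirror is an example; any Debian mirror carrying jessie-backports will do) and refreshing the package index:

```shell
# The backports repo line for /etc/apt/sources.list (mirror is an example)
line='deb http://httpredir.debian.org/debian jessie-backports main'
echo "$line"

# Afterwards refresh the index and, if needed, install from backports explicitly:
#   aptitude update
#   aptitude -t jessie-backports install roundcube roundcube-plugins
```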

    # aptitude install roundcube roundcube-plugins

    Configure database for roundcube with dbconfig-common? ==> Yes.


    Database type to be used by roundcube: mysql


    Database password: your MySQL root password


    I had a random password generated.


    Tell the webserver about Roundcube

    In /etc/roundcube/apache.conf uncomment

    Alias /roundcube /var/lib/roundcube

    Reload Apache’s config:

    # service apache2 reload

    Change the default session key

    In /etc/roundcube/ change the sample key used for remembering passwords:

    // this key is used to encrypt the users imap password which is stored
    // in the session record (and the client cookie if remember password is enabled).
    // please provide a string of exactly 24 chars.
    $config['des_key'] = '321UseYourOwnKeyHere4567';
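    Don’t keep the sample key. A quick way to generate a random 24-character key on the shell (a sketch; any source of 24 random alphanumeric characters will do):

```shell
# Generate 24 random alphanumeric characters for des_key
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 24; echo
```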

    Remove Server field from logon screen

    In /etc/roundcube/ change

    $config['default_host'] = '';

    to

    $config['default_host'] = 'localhost';

    This will remove the Server field on Roundcube’s logon screen since we’re only ever going to use it to view mail on the same server Roundcube is installed on.

    Monitor /var/log/apache2/error.log and /var/log/roundcube/error.log for errors.

    At this point you can browse to


    However your only account has no password set yet so let’s do that first.

    Set a user password

    doveadm is a command line Dovecot administration tool. Read man doveadm and man doveadm-pw for more information.

    # doveadm pw -s SHA512-CRYPT

    Enter your password, then confirm. Doveadm will generate a string that starts with “{SHA512-CRYPT}$6$”. Copy the entire string except for “{SHA512-CRYPT}”, so including the “$6$”, and in phpMyAdmin paste it into the password field for your user.
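    To illustrate which part of doveadm’s output goes into the database, here is a sketch using shell prefix stripping (the hash below is a made-up placeholder, much shorter than a real one):

```shell
# Placeholder doveadm output; a real SHA512-CRYPT hash is much longer
hash='{SHA512-CRYPT}$6$somesalt$somehashedvalue'

# Strip the "{SHA512-CRYPT}" scheme prefix; what remains (starting with $6$)
# is what goes into the password field
pwd_field="${hash#'{SHA512-CRYPT}'}"
echo "$pwd_field"
```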

    On the command line:

    # mysql -u root -p
    mysql> UPDATE `postfix`.`addresses` SET `pwd` = '$6$QohFKnpbY8fKjw0e923d0501zmhd7YlfQtyBFk6SXGu8GK7H8Vtt1poOs2x6hFPmwU7.z4g7ZCnvGk0yRU4vZGkDW/1hGT5dI82it51' WHERE `email` = "";
    mysql> quit

    Log in to the Roundcube web interface with the user’s full e-mail address as the username and the password you have just entered twice (not the encrypted version obviously, but the thing you typed).


    If you can’t log in, check /var/log/mail.log and /var/log/roundcube/errors. /var/log/roundcube/errors always contains some PHP errors on my machine. I think they’re the result of buggy PHP, but they don’t seem to prevent Roundcube from working properly. Point is: you can use Roundcube’s logfiles to troubleshoot, but don’t panic about errors if things work properly.

    Changing the password from Roundcube

    I would like my users to be able to change their own e-mail passwords from Roundcube. A plugin can be enabled for that. In /etc/roundcube/ find the plugin array and change it to:

    // List of active plugins (in plugins/ directory)
    $config['plugins'] = array(
        'password',
    );

    Save the file and now if you go to the Settings section in Roundcube’s web interface you’ll find that a Password button has appeared. It doesn’t work yet though. Open up /etc/roundcube/plugins/password/ and have it look like this:

    // See /usr/share/roundcube/plugins/password/ for instructions
    // Check the access right of the file if you put sensitive information in it.
    $config['password_driver'] = 'sql';
    $config['password_confirm_current'] = true;
    $config['password_minimum_length'] = 6;
    $config['password_require_nonalpha'] = true;
    $config['password_log'] = false;
    $config['password_login_exceptions'] = null;
    $config['password_hosts'] = array('localhost');
    $config['password_force_save'] = true;
    // SQL Driver options
    $config['password_db_dsn'] = 'mysql://roundcube:@localhost/roundcubemail';
    // SQL Update Query with encrypted password using random 8 character salt
    $config['password_query'] = 'UPDATE postfix.addresses SET pwd=ENCRYPT(%p, CONCAT(\'$6$\',SUBSTRING((SHA(RAND())), -16))) WHERE email=%u LIMIT 1';

    If this looks complicated, that’s because it is. I had countless hours of fun with this. Luckily I had Dovecot’s logging turned all the way up and I was monitoring /var/log/mail.log, so eventually I got it right. What we’re doing here is hashing the password the user typed with salted SHA512-crypt and prefixing it with “$6$” (remember, that’s how Dovecot identifies the SHA512-CRYPT scheme). The difficult part for me was the number and position of the brackets.
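    You can reproduce the same “$6$<salt>$<hash>” SHA512-crypt format on the command line to convince yourself it matches what Dovecot expects (a sketch; needs OpenSSL 1.1.1 or newer for the -6 option; the password and salt are placeholders):

```shell
# crypt(3) SHA512 scheme: output is $6$<salt>$<hash>
openssl passwd -6 -salt saltsalt 'MyPassword'
```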

    Also /usr/share/roundcube/plugins/password/ contained some useful hints.

    You may have noticed we’re using the roundcube MySQL user for this so that needs to have permissions to change users’ passwords:

    # mysql -u root -p
    mysql> GRANT SELECT (`email`), UPDATE (`pwd`) ON `postfix`.`addresses` TO 'roundcube'@'localhost';

    I cheated here; I did this from phpMyAdmin. The result is the same though: user roundcube must be able to select from the email field and update the password field.