iRedMail on Nginx

This is my experiment to get iRedMail working with Nginx. In the end I got everything to work other than awstats, although with some caveats. I don’t like awstats very much and it seemed quite troublesome to set up. There is a mode that lets awstats just generate static files, which seems to me a better solution. I tested only on Debian 6.0.7, although it should work just fine on Ubuntu too. Testing was also limited to brand new VMs.

So I am starting out with a brand new Debian 6.0.7 system. First things first, we set up our hosts and hostname files. For my test environment I used mail.debian.test as the hostname. Then I grabbed the latest iRedMail, which happened to be 0.8.3 at the time of writing. I did this via wget in an SSH session. I had to install bzip2 to “tar -xf” it, so a quick “apt-get install bzip2” resolved that. I then ran the iRedMail installer and let it complete.

Now to stop apache services for good:
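On Debian 6 this comes down to stopping the service and pulling it out of the boot sequence (commands reproduced from memory):

```
$ /etc/init.d/apache2 stop
$ update-rc.d -f apache2 remove
```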

Optionally we can run “apt-get remove apache2” to get rid of apache binaries as well.

Now, I needed Nginx and php5-fpm (as I prefer FPM). This takes a little work, as Debian 6.0.7 doesn’t have php5-fpm in its default sources. This would have been easier on Ubuntu.

What I did here is first install nginx and curl. Then I added dotdeb to the sources list, added its key and updated my sources. Finally I was able to install php5-fpm.
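Roughly the steps involved (the dotdeb URLs here are from memory, so double-check them against the dotdeb site before using):

```
$ apt-get install nginx curl
$ echo "deb http://packages.dotdeb.org squeeze all" >> /etc/apt/sources.list
$ curl http://www.dotdeb.org/dotdeb.gpg | apt-key add -
$ apt-get update
$ apt-get install php5-fpm
```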
Now that the applications are in place, I need to write their configuration files. Here is the list of files I will be using:
Nginx’s iRedMail site configuration file
php5-fpm’s iRedMail web pool file
iRedMail init.d file to launch the iredadmin Python webservice

During additional testing I uploaded the files and just used curl to put them into place. The init.d script is borrowed from the web (exactly where, I can’t remember, as I used bits and pieces from multiple places). However, I don’t feel the need to write out or explain in great detail all of the changes.
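For the curious, the init.d script boils down to something like the sketch below. This is not the exact file I used; the paths, user and port are guesses to adjust to your own install:

```
#!/bin/sh
# /etc/init.d/iredadmin - run iredadmin as a standalone service (sketch).
# Paths, user and port below are placeholders; adjust to your install.

DAEMON=/usr/bin/python
SCRIPT=/usr/share/apache2/iredadmin/iredadmin.py
PIDFILE=/var/run/iredadmin.pid
USER=iredadmin

case "$1" in
  start)
    start-stop-daemon --start --background --chuid $USER \
      --make-pidfile --pidfile $PIDFILE \
      --chdir "$(dirname $SCRIPT)" \
      --exec $DAEMON -- $SCRIPT 127.0.0.1:8080
    ;;
  stop)
    start-stop-daemon --stop --pidfile $PIDFILE
    rm -f $PIDFILE
    ;;
  restart)
    $0 stop
    sleep 1
    $0 start
    ;;
  *)
    echo "Usage: $0 {start|stop|restart}"
    exit 1
    ;;
esac
```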

You will need to modify the nginx file (/etc/nginx/sites-available/iRedMail) to contain the correct domain. You will also need an additional DNS entry for iredadmin.domain.tld (in my case iredadmin.debian.test). If this is your only/first SSL site, or you prefer it to be the default, you will need to adjust the ssl section; I added comments to explain that. Nginx expects a default website, and if none exists it won’t start.
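A stripped-down sketch of roughly what the site file contains (not my exact file; server names, certificate paths and ports here are placeholders to adjust):

```
# /etc/nginx/sites-available/iRedMail -- a stripped-down sketch
server {
    listen 443 ssl;                       # add "default_server" here if this
                                          # is your first/only ssl site
    server_name mail.debian.test;
    ssl_certificate     /etc/ssl/certs/iRedMail_CA.pem;
    ssl_certificate_key /etc/ssl/private/iRedMail.key;

    root /var/www/roundcubemail;          # wherever iRedMail put roundcube
    index index.php;

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass 127.0.0.1:9000;      # php5-fpm's default listen address
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}

server {
    listen 443 ssl;
    server_name iredadmin.debian.test;
    ssl_certificate     /etc/ssl/certs/iRedMail_CA.pem;
    ssl_certificate_key /etc/ssl/private/iRedMail.key;

    location / {
        proxy_pass http://127.0.0.1:8080; # the standalone iredadmin service
    }
}
```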

As for the additional domain, I tried my best, but it seems there is no way to make the script aware it’s in a subdirectory and have it pass the correct URLs to its output templates. Although the template has the capability to use a homepath variable, this seems to be set from ctx in the code, which from my limited knowledge I don’t believe is changeable via environment/server variables. I also didn’t see a way to change it in any setting. Hopefully the iRedMail developers can make this change in future versions.
The good news is the iRedMail developers had the foresight to set up the script to run very smoothly as a standalone Python web server via a CGI socket, so no additional work is needed to make that run. I had hoped to use the iredapd service to launch this, but it appears to crash and fail horribly. So I set up a second instance to do this.

Now just a little more work to activate the new service, link the file as a live nginx site and restart some services.
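From memory, that comes down to something like this:

```
$ ln -s /etc/nginx/sites-available/iRedMail /etc/nginx/sites-enabled/iRedMail
$ update-rc.d iredadmin defaults
$ /etc/init.d/iredadmin start
$ service php5-fpm restart
$ service nginx restart
```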

That’s it. Now when I hit mail.debian.test I get the webmail portal. When I access iredadmin.debian.test I get the admin portal. phpMyAdmin is also set up at mail.debian.test/phpmyadmin.

Setting this up on Ubuntu should be easier, as 12.04 has php5-fpm in its packages, so there is no need to add the dotdeb sources. Everything else would be the same.

Nginx has always been flaky for me when doing IPv6 services. I intended to include them, but it just wasn’t playing nicely enough. Sometimes just adding [::]:80 to a listen will make it listen. Other times I have to specify it twice (and it doesn’t complain). Then again, if I try it on 443 using [::]:443, nginx may not want to start at all, even though it accepted [::]:80 just fine. Because of how picky it can be at times, I opted to go with IPv4-only support here.
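For reference, these are the sorts of listen lines I was fighting with:

```
listen 80;
listen [::]:80;        # sometimes this alone is enough, sometimes it
                       # has to be there alongside the plain listen
listen 443 ssl;
listen [::]:443 ssl;   # this one occasionally stopped nginx from
                       # starting at all
```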

Convert TS3 from SQLite to a MySQL database

I run a TeamSpeak 3 server.  However, when I set it up, I didn’t bother getting any further than getting it running.  Now I find out that it’s using SQLite for a database, and that database is taking up a lot of space with useless logs.

First step was to figure out how to convert the database.  After some thankless google searches I found something that worked (after my own edits to it):

sqlite3 ts3server.sqlitedb .dump | egrep -vi '^(BEGIN TRANSACTION|PRAGMA|COMMIT|INSERT INTO "devices"|INSERT INTO "sqlite_sequence"|DELETE FROM "sqlite_sequence")' | perl -pe 's/INSERT INTO \"(.*)\" VALUES/INSERT INTO \1 VALUES/' | perl -pe 's/AUTOINCREMENT/auto_increment/' | perl -pe 's/varchar\)/varchar\(255\)\)/' > tsdb.sql

Basically it dumps the database, then removes the things that MySQL doesn’t understand or that are useless for it, and finally fixes some stuff up so it’s a proper script acceptable by MySQL.  Then I just went to importing it.  I set up a teamspeak database and user before I did this.

mysql -u teamspeak -p teamspeak < tsdb.sql

For the next part, I was just testing.  First I created a ts3server.ini file, then added the argument into it:
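The argument in question is the database plugin line, so a minimal ts3server.ini at this point only really needs:

```
# ts3server.ini
inidbplugin=ts3db_mysql
```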


I tried to start up the server, but it failed.  It seems from google searches that others are getting this error as well:

|CRITICAL|DatabaseQuery |   | unable to load database plugin library “”, halting

It turns out that the plugin needs a MySQL client library on the server.  You can find this out with the ldd command:

$ ldd libts3db_mysql.so
	linux-vdso.so.1 =>  (0x00007fffa27ff000)
	libmysqlclient.so.15 => not found
	libstdc++.so.6 => /usr/lib/libstdc++.so.6 (0x00007f272c3f8000)
	libm.so.6 => /lib/libm.so.6 (0x00007f272c174000)
	libgcc_s.so.1 => /lib/libgcc_s.so.1 (0x00007f272bf5d000)
	libc.so.6 => /lib/libc.so.6 (0x00007f272bbda000)
	/lib64/ld-linux-x86-64.so.2 (0x00007f272c919000)

So I hoped that I had some sort of libmysqlclient file already:
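Roughly what I ran (the output shown here is illustrative; on my box it turned up the .16 versions):

```
$ find /usr/lib -name 'libmysqlclient*'
/usr/lib/libmysqlclient.so.16
/usr/lib/libmysqlclient.so.16.0.0
```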


I had located those two files, but I couldn’t get them to work.  Suggestions from searching showed people symlinking the .15 version into their teamspeak home directory.  I tried to just use the .16, but no go.  Back to google to find out how to get that file for my version of ubuntu.  I tried an apt-get on “libmysqlclient15off”, a name suggested elsewhere, but no luck for my ubuntu version.  Then I found out I could just pull it right from the package server directly.  That works out for me 🙂  I use 64 bit, so I got the 64 bit version:

$ wget

$ dpkg -i libmysqlclient15off_5.1.30really5.0.83-0ubuntu3_amd64.deb

I tried to restart teamspeak, but still no luck.  So I tried the symlink suggestion (while working in my teamspeak install location):

$ ln -s /usr/lib/

Finally it worked, but it gave errors because I had never set up the ini file that contains the mysql user details (ts3db_mysql.ini).  So I created that and restarted teamspeak again.  The format of the file is as follows:
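With my values swapped out for placeholders, the file looks like this:

```
[config]
host=127.0.0.1
port=3306
username=teamspeak
password=YOUR_PASSWORD
database=teamspeak
socket=
```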


Finally, things were working :).  After that I also used the “createinifile=1” parameter when starting the server so it would dump all the current contents of my configuration into an ini file.

I set up my log folder for teamspeak via a symlink to a folder in /var/log (I called mine ts3), as you can’t move it to /var/log directly since the server runs as an unprivileged user.  I wanted to set up automatic rotation of the log files, since the server almost never goes down and I don’t want a 100 MB log file :P.  Alas, it seems to have gotten the best of me so far.  I haven’t had time to figure out how to get the log files to rotate out automatically.
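If anyone wants to try, I suspect a logrotate config along these lines would be close (untested; copytruncate because the server keeps its log files open):

```
# /etc/logrotate.d/ts3 (untested sketch)
/var/log/ts3/*.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
}
```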

The only other issue is that teamspeak also seems to write logs into the database (two places!).  I just ran this manually, but I may have to set up a cron script to do it for me later on:

DELETE FROM log WHERE log_timestamp < unix_timestamp() - 2592000

That little command deletes all logs older than 30 days, which is more than good for me.  I haven’t even read the logs since I set the server up.
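If I ever automate it, a crontab entry along these lines would do it (assuming the mysql credentials are in that user’s ~/.my.cnf so no password sits in the crontab):

```
# m h dom mon dow  command  -- run on the 1st of each month at 4am
0 4 1 * *  mysql teamspeak -e "DELETE FROM log WHERE log_timestamp < unix_timestamp() - 2592000"
```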

MySQL queries using offsets without limits

While working on a project, I came across the need for a script to loop through a table and process some commands.  However, due to the size of the table, this would surely take longer than the default 30 second execution limit set up in most configurations.  I didn’t want to use any limits, either.  I wanted my script to determine when it was nearing the timeout and then stop, otherwise keep processing.  So a standard LIMIT in MySQL wouldn’t do it.

Much to my surprise, MySQL doesn’t offer a way to do just an OFFSET.  You have to use LIMIT with OFFSET, or neither at all.  This was really annoying, as I thought I would have to go back to limiting the query size.

Well, then I realized that this could be solved another way.  I just added a column and populated it with incremental numbers.  Then I told my script to ORDER BY that column, ascending.  With that in place, I simply added a WHERE to my query and told it not to touch anything below a certain id.  That id comes from a variable that is passed from the user and cleaned up (safety first!).  After processing the needed commands, the script updates this variable.  Finally, when it is time to pause the script so it doesn’t time out, the variable is sent along with the forwarding url.  This lets the script pick up where it left off when it starts up again.
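In SQL terms, that looks something like this (table and column names here are made up):

```sql
-- one batch: everything after the last processed id, no LIMIT needed
SELECT * FROM queue WHERE id > 1500 ORDER BY id ASC;
-- the script works through the rows until it nears the timeout, then
-- redirects to itself with the last id it finished, e.g. ?last_id=2317
```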

Seems like a very simple workaround, although if I didn’t have the id column, it wouldn’t have worked.

Moving MySQL

Time to move my mysql data directory to another drive.  It’s just a few simple commands to get me started.

First, my my.cnf file.

$ sudo mv /etc/mysql/my.cnf /home/configs
$ sudo ln -s /home/configs/my.cnf /etc/mysql/

I should note that the way I installed mysql (apt-get), a debian.cnf file is created.  I haven’t even bothered to see if this file is actually used by ubuntu, but nonetheless I need to copy it, as it contains a mysql user/password for use by the system.  That isn’t really safe, considering it is a root-level account, although setting open_basedir restrictions helps with that.  A mysqld_safe_syslog.conf file also exists in the conf.d folder; I don’t use mysqld_safe, so I don’t care about it.

$ sudo mv /etc/mysql/debian.cnf /home/configs
$ sudo ln -s /home/configs/debian.cnf /etc/mysql/

Now for a quick test, I restarted mysql via the restart command.  A very helpful command, and easier to type than using init.d:

$ restart mysql

Everything still works.  So now for the final touch: moving the directory.

$ service mysql stop
$ mv /var/lib/mysql /home/data
$ ln -s /home/data/mysql /var/lib
$ service mysql start

Now I won’t lie, at this point something went horribly wrong.  I have yet to figure out why.  I have done this many times before and never had an issue.  After trying everything I could think of to get mysql started, get rid of the errors and even moving it back, I still had no luck.  I ended up restarting the entire box and after that things just worked.  So I tried again and then everything worked just fine the second time around.  I have no clue why it failed the first time.

Just to add a finishing touch, I edited /home/configs/my.cnf and changed datadir in it to point to /home/data/mysql.
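That’s a one-line change in the [mysqld] section, matching where the mv above put the data:

```
# /home/configs/my.cnf
[mysqld]
datadir = /home/data/mysql
```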

That takes care of that.  Next is to figure out all the configuration files I need to duplicate over for my mail setup.  Hopefully after all that, my web site should be able to easily switch from ubuntu to another operating system and be up and running in no time.

Securing database user credentials

A random thought has hit me.  Most people try to keep their MySQL user credentials secure.  But why?  If a server has been set up properly, it becomes a moot point.

The idea occurred to me when thinking about opening up a site’s source code.  If I opened the site up, I would be giving out access to my settings and configuration files.  These files also contain mysql user credentials.  So either I attempt to remove those, or I disallow access.  But then I wondered: why even worry?

I will use my own site as an example.  If I gave out the MySQL user credentials for my inactive forum, what good would they do someone?  phpMyAdmin is secured behind an HTTP_AUTH page (over SSL) before you can even supply the MySQL user credentials.  I have configured all my MySQL users to only allow localhost connections, so only connections from the server itself are allowed.
So if somebody had my MySQL user credentials, they would be completely useless.  If they managed to exploit the server and upload files that do malicious stuff, they would most likely be able to have that script find and read the settings file anyway; that is, if it was somewhere within the open_basedir restrictions for that site.  And if they managed to exploit the server, they could do more damage than just logging into mysql.  Since only I have a login to my site (secured behind SSH), there are very few web-accessible files that apache can edit or write to.  To fix any mysql damage they did, all I need to do is restore the mysql data (users as well) from a backup.  File damage is much worse, as it is easier to leave a backdoor into the system that way.

Although I don’t run any control panel and use phpMyAdmin simply for ease of access, the same applies to sites that run admin panels such as cPanel.  Unless an attacker has the cPanel login information, the user installed phpMyAdmin for some reason, or the mysql users were configured to accept outside connections, the credentials are useless.  The exception being if an attacker was able to upload a malicious file.

For shared servers, this could be a worry: if your MySQL credentials are publicly known and an attacker happens to also have a site on your shared server, my points above have little value.  Shared servers carry a risk, and that risk means protecting all your credentials more heavily, as an attacker could simply be on the same server as you.

Of course, this all depends on the server admin and webmaster having properly set things up beforehand, such as access to phpMyAdmin and other scripts.  However, I think this still makes a good point: even if MySQL credentials are publicly known, they still don’t offer much.