
This server was switched off on 28th February 2013 on ticket:500

This was the live server for, and, a list of the sites which were hosted on the server is available at

This is a Debian Xen virtual server with 3GB RAM, a 32GB HDD (single partition), 4 processors and one IP address.

Munin stats for the server are available on the Webarchitects monitoring server and on the Transition Network development server.

The notes about the old live server are here: LiveServer and the move to was done via ticket:147.

For admin related issues contact chris@….


  1. Optimise and monitor. Also, which PHP accelerator should we use? Filecache for the moment, because of problems encountered with both memcache and APC. Tweak the MySQL defaults?
  1. Install for generating nice usage graphs from the apache logs and exim logs, see ticket:160
  1. After testing on the dev server, install Varnish, see ticket:161


The server is running the default debian apache2:

/usr/sbin/apache2 -v
  Server version: Apache/2.2.16 (Debian)
  Server built:   Feb  5 2012 21:35:42
/usr/sbin/apache2 -l 
  Compiled in modules:

The main configuration file is /etc/apache2/apache2.conf and the virtual hosts are symlinked from /etc/apache2/sites-enabled. The key settings in apache2.conf relate to the maximum number of Apache processes allowed, which is limited by the available RAM:

<IfModule mpm_prefork_module>
    StartServers              6
    MinSpareServers           4
    MaxSpareServers           6
    MaxClients               25
    MaxRequestsPerChild   10000
</IfModule>

MaxClients was increased from 18 to 25 after additional RAM was made available to the server, see ticket:397.
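
A back-of-the-envelope sketch of where a figure like 25 comes from; the per-child and reserved sizes below are illustrative assumptions, not measurements from this server:

```shell
# Rough MaxClients estimate for a 3GB server.
# Assumptions (not measured here): ~100 MB resident per Apache child,
# ~500 MB reserved for MySQL, Varnish, memcached and the OS.
TOTAL_RAM_MB=3072
RESERVED_MB=500
PER_CHILD_MB=100
MAXCLIENTS=$(( (TOTAL_RAM_MB - RESERVED_MB) / PER_CHILD_MB ))
echo "MaxClients $MAXCLIENTS"
```

In practice you would measure the real average resident size of the apache2 processes (e.g. with ps) and round down rather than guess.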

After making any changes to the Apache configuration it is best to do a configtest first to make sure the configuration is OK:

sudo /usr/sbin/apache2ctl configtest

And then to restart the apache server:

sudo /usr/sbin/apache2ctl restart


Redirects for parked wiki:DomainNames are configured in /etc/apache2/sites-available/

Most domain names are listed in the main VirtualHost in this file, which redirects to

Redirect /

There are some additional VirtualHosts, one for

Redirect /

One for, with these redirects:

RedirectMatch permanent ^/Bellingen(.*)
RedirectMatch permanent ^/Lewes(.*)
RedirectMatch permanent ^/Totnes(.*)
RedirectMatch permanent ^/Brixton(.*)

Redirect /
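
The `(.*)` capture in the RedirectMatch patterns above carries the rest of the path through to the redirect target. A purely local illustration of that capture, using the same regex with sed (no server involved, the path is made up):

```shell
# Simulate what ^/Totnes(.*) captures from a request path.
echo "/Totnes/project/page" | sed -E 's|^/Totnes(.*)|captured:\1|'
# prints: captured:/project/page
```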

And one for, with this redirect:

Redirect /


The HTTPS VirtualHosts have the following directives (see ticket:409):

SSLEngine on
SSLProtocol all -SSLv2
SSLHonorCipherOrder On
SSLCertificateFile      /etc/ssl/
SSLCertificateChainFile /etc/ssl/

The file contains both the certificate and the key (these are the files from

cat >
cat >>

And the gandi.pem contains the chain of root certificates:

wget -O GandiStandardSSLCA.crt
wget -O UTNAddTrustServer_CA.crt
wget -O AddTrustExternalCARoot.crt

openssl x509 -inform DER -in GandiStandardSSLCA.crt -out GandiStandardSSLCA.pem
openssl x509 -inform DER -in AddTrustExternalCARoot.crt -out AddTrustExternalCARoot.pem
openssl x509 -inform DER -in UTNAddTrustServer_CA.crt -out UTNAddTrustServer_CA.pem

cat GandiStandardSSLCA.pem > gandi.pem
cat UTNAddTrustServer_CA.pem >> gandi.pem
cat AddTrustExternalCARoot.pem >> gandi.pem
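
The `openssl x509 -inform DER` invocations above convert Gandi's DER-encoded certificates to PEM before concatenation. A self-contained round-trip demonstration on a throwaway certificate (all filenames here are placeholders, not the real ones):

```shell
# Make a throwaway self-signed cert, convert PEM -> DER -> PEM, and
# confirm the round trip is lossless.
openssl req -x509 -nodes -newkey rsa:2048 -days 1 -subj "/CN=demo" \
  -keyout /tmp/demo.key -out /tmp/demo.pem 2>/dev/null
openssl x509 -in /tmp/demo.pem -outform DER -out /tmp/demo.der
openssl x509 -inform DER -in /tmp/demo.der -out /tmp/demo-roundtrip.pem
diff -q /tmp/demo.pem /tmp/demo-roundtrip.pem && echo "round trip OK"
```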

The above was documented as a result of ticket:165, see also wiki:SecurityInfo.

To generate a new certificate, follow the Gandi instructions (the only required field is the Common Name):

cd /etc/ssl/
mkdir 2011; cd 2011
openssl req -nodes -newkey rsa:2048 -keyout -out
  Generating a 2048 bit RSA private key
  Country Name (2 letter code) [AU]:
  State or Province Name (full name) [Some-State]:
  Locality Name (eg, city) []:
  Organization Name (eg, company) [Internet Widgits Pty Ltd]:
  Organizational Unit Name (eg, section) []:
  Common Name (eg, YOUR name) []:*
  Email Address []:
  Please enter the following 'extra' attributes
  to be sent with your certificate request
  A challenge password []:
  An optional company name []:
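
Before submitting the CSR it can be sanity-checked locally. A sketch with placeholder filenames, since the real key/CSR names are not recorded above; `-subj` is used here to skip the interactive prompts shown in the transcript:

```shell
# Generate a throwaway CSR non-interactively, then verify its signature
# and read the subject back out (wildcard CN as in the walkthrough above).
openssl req -nodes -newkey rsa:2048 -subj "/CN=*.example.org" \
  -keyout /tmp/example.key -out /tmp/example.csr 2>/dev/null
openssl req -noout -verify -in /tmp/example.csr
openssl req -noout -subject -in /tmp/example.csr
```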


See ticket:224.

Install Varnish 2.1 via the repository:

curl | apt-key add -
aptitude install lsb-release
echo "deb $(lsb_release -s -c) varnish-2.1" >> /etc/apt/sources.list.d/varnish.list
aptitude update
aptitude install varnish

Edit these things in the main config file, /etc/default/varnish :

#DAEMON_OPTS="-a :6081 \
#             -T localhost:6082 \
#             -f /etc/varnish/default.vcl \
#             -S /etc/varnish/secret \
#             -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,1G"

DAEMON_OPTS="-a :80 \
             -T localhost:81 \
             -f /etc/varnish/default.vcl \
             -S /etc/varnish/secret \
             -s file,/var/lib/varnish/$INSTANCE/varnish_storage.bin,512M"

The cache size was originally set to 256M and was increased to 512M after more RAM was added to the server, see ticket:397.

And in /etc/varnish/default.vcl the following:

backend default {
    .host = "";
    .port = "8080";
    .connect_timeout = 600s;
    .first_byte_timeout = 600s;
    .between_bytes_timeout = 600s;
}

acl purge {
}

acl local {
  "localhost";         // myself
  "";         // myself
  "";       // this machine's main ip address
}

# chris
sub vcl_recv {
    # remove all cookies
    unset req.http.Cookie;

    ## Pass cron jobs and server-status
    if (req.url ~ "cron.php") {
      if (client.ip ~ local) {
        return (pass);
      } else {
        error 403 "Access Denied";
      }
    }
    if (req.url ~ "/server-status$") {
      if (client.ip ~ local) {
        return (pass);
      } else {
        error 403 "Access Denied";
      }
    }
    if (req.url ~ "apc_info.php") {
      if (client.ip ~ local) {
        return (pass);
      } else {
        error 403 "Access Denied";
      }
    }

    # Normalize the Accept-Encoding header
    # as per:
    if (req.http.Accept-Encoding) {
      if (req.url ~ "\.(jpg|png|gif|gz|tgz|bz2|tbz|mp3|ogg)$") {
        # No point in compressing these
        remove req.http.Accept-Encoding;
      } elsif (req.http.Accept-Encoding ~ "gzip") {
        set req.http.Accept-Encoding = "gzip";
      } elsif (req.http.Accept-Encoding ~ "deflate") {
        set req.http.Accept-Encoding = "deflate";
      } else {
        # unknown algorithm
        remove req.http.Accept-Encoding;
      }
    }

    ## Default request checks
    if (req.request != "GET" &&
        req.request != "HEAD" &&
        req.request != "PUT" &&
        req.request != "POST" &&
        req.request != "TRACE" &&
        req.request != "OPTIONS" &&
        req.request != "DELETE" &&
        req.request != "PURGE") {
      # Non-RFC2616 or CONNECT which is weird.
      return (pipe);
    }
    if (req.request != "GET" && req.request != "HEAD" && req.request != "PURGE") {
      # We only deal with GET, PURGE and HEAD by default
      return (pass);
    }

    # Check the incoming request type is "PURGE", not "GET" or "POST"
    if (req.request == "PURGE") {
      # Check if the ip corresponds with the acl purge
      if (!client.ip ~ purge) {
        # Return error code 405 (Forbidden) when not
        error 405 "Not allowed.";
      }
      # Purge all objects from cache that match the incoming url and host
      purge("req.url == " req.url " && == ");
      # Return a http error code 200 (Ok)
      error 200 "Purged.";
    }

    # Grace to allow varnish to serve content if backend is lagged
    #set obj.grace = 5m;
}

# remove all cookies
# chris
sub vcl_fetch {
    unset beresp.http.set-cookie;
}

# chris
sub vcl_deliver {
    if (obj.hits > 0) {
        set resp.http.X-Varnish-Cache = "HIT";
    } else {
        set resp.http.X-Varnish-Cache = "MISS";
    }
}

To suppress the varnish purge output generated when Drupal flushes its cache from the logcheck emails, create /etc/logcheck/ignore.d.server/local-rules with the following in it:

# varnish
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ varnishd\[[0-9]+\]: CLI telnet

And then symlink it:

cd /etc/logcheck/violations.ignore.d
ln -s ../ignore.d.server/local-rules 
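
To check that a rule matches before waiting for the next logcheck run, you can test it locally with grep against a made-up syslog line (hostname, pid and timestamp below are illustrative):

```shell
# If the ignore rule matches, grep prints the line and exits 0.
echo 'Feb 28 21:55:08 quince varnishd[1234]: CLI telnet 6082 Rd ping' | \
  grep -E '^\w{3} [ :0-9]{11} [._[:alnum:]-]+ varnishd\[[0-9]+\]: CLI telnet'
```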


The security hole relating to things that Apache only made available to localhost was fixed on ticket:357#comment:11.


The php-apc package is installed and info about how it is performing is available at. It is protected using HTTP authentication; ask chris@… for the username / password if you need it.

The configuration is in /etc/php5/conf.d/apc.ini and the settings were taken from here
apc.enabled = 1
apc.shm_size = 256
apc.include_once_override = 0
apc.mmap_file_mask = /dev/zero

These were changed following problems that were caused for Piwik, see ticket:393

The shm_size was set to 128M but was increased to 256M when the available RAM was increased, see ticket:397.

The wiki:NewLiveServer#mediawiki site is set to use APC via this setting in /web/

$wgMainCacheType = CACHE_ACCEL;

Drupal can be set to use it via /web/ but it doesn't appear to improve performance over the filecache, and it also generates lots of errors in the Drupal logs like this:

unlink(/tmp/cache_views_lock) [<a href='function.unlink'>function.unlink</a>]: No such file or directory in /web/ on line 124.

See this thread for more on this problem:


The Mediawiki site at is running on (see ticket:147 and ticket:148 for the move).

There is also a wiki:DevelopmentServer#Mediawiki version of this site at -- when upgrading Mediawiki please test the upgrade on the dev server first.

Mediawiki is installed in /web/ and the apache VirtualHost configuration is in /etc/apache2/sites-available/

To upgrade the site to the latest version of Mediawiki you could follow the instructions from, or use the mediawiki-upgrade script, which takes the latest version of Mediawiki as an argument on the command line and then does everything for you, including upgrading the installed extensions using subversion:

kiwi:~# mediawiki-upgrade 1.16.0

Mediawiki was upgraded to 1.18.1 on ticket:394

The main configuration file for Mediawiki is /web/ and these are the settings that have been changed from their default values:

$wgScript           = "/index.php";
$wgRedirectScript   = "/redirect.php";
$wgArticlePath      = "/$1";

$wgLogo             = "/images/wiki.png";

$wgEmergencyContact = "";
$wgPasswordSender = "";

$wgRightsPage = "Copyright"; # Set to the title of a wiki page that describes your license/copyright
$wgRightsUrl = "";
$wgRightsText = "Creative Commons Attribution-Share Alike 2.0 UK: England & Wales";
$wgRightsIcon = "/images/cc-by-sa.png";

# file types for uploads
$wgUploadSizeWarning = 6000 * 3000;
$wgMimeDetectorCommand = "file -bi";
$wgFileExtensions = array( 'avi', 'mp3', 'rm', 'mpg', 'mpeg', 'mp4', 'svg', 'png', 'gif', 'jpg', 'jpeg', 'pdf', 'rtf', 'doc', 'txt', 'ppt', 'odp', 'odc', 'odf', 'odg', 'odi', 'odif', 'odm', 'ods', 'odt', 'otc', 'otf', 'otg', 'oth', 'oti', 'otp', 'ots', 'ott', 'psd', 'ai', 'eps', 'tif');

# No anonymous editing allowed -
$wgGroupPermissions['*']['edit'] = false;

# Prevent new user registrations except by sysops
$wgGroupPermissions['*']['createaccount'] = false;

# allow users to be banned
$wgSysopUserBans = true;


require_once( "$IP/extensions/FCKeditor/FCKeditor.php" );


The cron job for the site is set up for user chris and it contains:

# m h  dom mon dow   command
*/30 * * * * /usr/sbin/ab -n 1 >/dev/null 2>&1  
* */1 * * * /usr/sbin/ab -n 1 >/dev/null 2>&1  

ab is apachebench.


To back up the MySQL database and the files for the web sites to the wiki:DevelopmentServer, run the /usr/local/bin/backup2kiwi script. It puts the files in /home/live/quince on, and these files are used by the scripts on kiwi to update the Drupal and Mediawiki sites with the latest data from the live sites.

A copy of this script is attached to this page: attachment:backup2kiwi


/usr/bin/mysql_secure_installation has been run to secure the server:

In order to log into MySQL to secure it, we'll need the current
password for the root user.  If you've just installed MySQL, and
you haven't set the root password yet, the password will be blank,
so you should just press enter here.

Enter current password for root (enter for none): 
OK, successfully used password, moving on...

Setting the root password ensures that nobody can log into the MySQL
root user without the proper authorisation.

You already have a root password set, so you can safely answer 'n'.

Change the root password? [Y/n] n
 ... skipping.

By default, a MySQL installation has an anonymous user, allowing anyone
to log into MySQL without having to have a user account created for
them.  This is intended only for testing, and to make the installation
go a bit smoother.  You should remove them before moving into a
production environment.

Remove anonymous users? [Y/n] y
 ... Success!

Normally, root should only be allowed to connect from 'localhost'.  This
ensures that someone cannot guess at the root password from the network.

Disallow root login remotely? [Y/n] y
 ... Success!

By default, MySQL comes with a database named 'test' that anyone can
access.  This is also intended only for testing, and should be removed
before moving into a production environment.

Remove test database and access to it? [Y/n] y
 - Dropping test database...
ERROR 1008 (HY000) at line 1: Can't drop database 'test'; database doesn't exist
 ... Failed!  Not critical, keep moving...
 - Removing privileges on test database...
 ... Success!

Reloading the privilege tables will ensure that all changes made so far
will take effect immediately.

Reload privilege tables now? [Y/n] y
 ... Success!

Cleaning up...

All done!  If you've completed all of the above steps, your MySQL
installation should now be secure.

Thanks for using MySQL!


A MySQL backup script from is installed in /usr/local/bin and it is set to create backups in /var/backups/mysql/

It needed the libmime-lite-perl debian package to be installed.

To run it:


These lines have been changed from the original at :

$admin_email_to              = '';
$admin_email_from            = '';
$cnf_file                    = '/root/.my.cnf';
$site_name                   = '';
$mysql_backup_dir            = '/var/backups/mysql';
@skip_tables                 = qw[cache cache_block cache_content cache_emfield_xml cache_filter cache_form cache_hierarchical_select cache_location cache_media_youtube_status cache_menu cache_mollom cache_page cache_path cache_rules cache_update cache_views cache_views_data sessions search_dataset search_index search_node_links search_total watchdog];
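
At the mysqldump level, a skip list like @skip_tables amounts to a set of --ignore-table flags. A small sketch of generating those flags from such a list; the database name 'live' and the shortened table list are assumptions for illustration:

```shell
# Turn a space-separated skip list into mysqldump --ignore-table flags.
DB=live
SKIP="cache cache_block sessions watchdog"
FLAGS=""
for t in $SKIP; do
  FLAGS="$FLAGS --ignore-table=$DB.$t"
done
echo "mysqldump$FLAGS $DB > live.sql"
```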


Backupninja has been installed and set up. It is set to back up files to another server in the same rack, which then backs up the data to a server in another colo. The main configuration file is /etc/backupninja.conf and the files containing the list of things to be backed up are in /etc/backup.d/. 60 days worth of backups are saved. It is set to back up MySQL, omitting the following tables (see /etc/backup.d/20.mysql and ticket:370):

nodata = live.cache live.cache_block live.cache_content live.cache_emfield_xml live.cache_filter live.cache_form live.cache_hierarchical_select live.cache_location live.cache_media_youtube_status live.cache_menu live.cache_mollom live.cache_page live.cache_path live.cache_rules live.cache_update live.cache_views live.cache_views_data live.sessions live.search_dataset live.search_index live.search_node_links live.search_total live_sharingengine.cache live_sharingengine.cache_block live_sharingengine.cache_content live_sharingengine.cache_filter live_sharingengine.cache_form live_sharingengine.cache_location live_sharingengine.cache_menu live_sharingengine.cache_page live_sharingengine.cache_path live_sharingengine.cache_update live_sharingengine.cache_views live_sharingengine.cache_views_data live_sharingengine.search_dataset live_sharingengine.search_index live_sharingengine.search_node_links live_sharingengine.search_total live_sharingengine.sessions live_workspaces.cache live_workspaces.cache_block live_workspaces.cache_content live_workspaces.cache_filter live_workspaces.cache_form live_workspaces.cache_hierarchical_select live_workspaces.cache_location live_workspaces.cache_menu live_workspaces.cache_page live_workspaces.cache_views live_workspaces.cache_views_data live_workspaces.captcha_sessions live_workspaces.sessions live_workspaces.search_dataset live_workspaces.search_index live_workspaces.search_node_links live_workspaces.search_total

And to back up the following directories (see /etc/backup.d/90.rdiff):

include = /var/spool/cron/crontabs
include = /var/backups
include = /etc
include = /root
include = /home
include = /usr/local/*bin
include = /var/lib/dpkg/status*
include = /web
exclude = /home/*/.gnupg
exclude = /home/*/.local/share/Trash
exclude = /home/*/.Trash
exclude = /home/*/.thumbnails
exclude = /home/*/.beagle
exclude = /home/*/.aMule
exclude = /home/*/gtk-gnutella-downloads
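
The exclude lines use shell-style globs. A quick local illustration of what a pattern like /home/*/.Trash would match (the directories below are made up for the demo):

```shell
# Create a toy home layout, then show which paths the glob picks up.
mkdir -p /tmp/glob-demo/home/alice/.Trash /tmp/glob-demo/home/bob/docs
( cd /tmp/glob-demo && ls -d home/*/.Trash )
# prints: home/alice/.Trash
```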


See for the PHP info; the php.ini file is /etc/php5/apache2/php.ini

PECL Uploadprogress was installed as suggested here:

aptitude install php5-dev
pecl install uploadprogress

And this was added to the php.ini file:

The default php.ini file, which had these changes:

expose_php = Off
memory_limit = 256M

was moved to php.ini.dist.tweaked, then /usr/share/doc/php5-common/examples/php.ini-recommended was copied to /etc/php5/apache2/php.ini and a new /etc/php5/apache2/conf.d/uploadprogress.ini file was created with this in it:

And /etc/php5/apache2/php.ini was edited and these things were changed:

expose_php = Off
max_execution_time = 60     ; Maximum execution time of each script, in seconds
max_input_time = 120 ; Maximum amount of time each script may spend parsing request data
memory_limit = 256M      ; Maximum amount of memory a script may consume (128MB)
error_log = syslog
post_max_size = 40M
upload_max_filesize = 24M
default_charset = "utf-8"

The /etc/php5/cli/php.ini file had these values changed:

memory_limit = 512M


Due to errors like this being sent out by logwatch:

Nov 29 20:16:54 quince suhosin[26422]: ALERT - configured POST variable limit exceeded - dropped variable '4[edit field_event_type]' (attacker 'XXX.XXX.XXX.XXX', file '/web/')

Dec  3 15:03:17 quince suhosin[14383]: ALERT - configured request variable name length limit exceeded - dropped variable 'enabled_pattern*field_patterns_related_larger*pattern*field_patterns_related_smaller' (attacker 'XXX.XXX.XXX.XXX', file '/web/')

Dec  7 10:08:56 quince suhosin[7269]: ALERT - configured POST variable name length limit exceeded - dropped variable '{"$":{"memLimit":2000,."autoFlush":true,."crossDomain":true,."includeProtos":false,."includeFunctions":false}}' (attacker 'XXX.XXX.XXX.XXX', file '/web/')

These variables were changed in /etc/php5/conf.d/suhosin.ini as per this suggestion:

; = 200 = 10000

;suhosin.request.max_vars = 200
suhosin.request.max_vars = 10000

;suhosin.request.max_varname_length = 64
suhosin.request.max_varname_length = 256

; = 64 = 512

; = 256 = 2048

; = 65000 = 260000


This is available here:. It is protected using HTTP authentication because there are a lot of attacks launched against phpMyAdmin; ask chris@… for the username / password if you need it.


The memcache configuration file is /etc/memcached.conf; the settings which have been changed from the default are:

# Start with a cap of 64 megs of memory. It's reasonable, and the daemon default
# Note that the daemon will grow to this size, but does not start out holding this much
# memory
-m 256

Memcache was set to use 128M of RAM but this was increased to 256M after additional RAM was added to the machine, see ticket:397.
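
A quick way to confirm which cache size is actually configured is to parse the -m option back out of the file. A standalone sketch that works on a local stand-in copy so it is runnable anywhere (the real file is /etc/memcached.conf as above):

```shell
# Write a minimal stand-in for /etc/memcached.conf, then read -m back.
printf '%s\n' '# memcached.conf (demo copy)' '-m 256' '-p 11211' > /tmp/memcached.conf
awk '$1 == "-m" { print "cache size: " $2 " MB" }' /tmp/memcached.conf
# prints: cache size: 256 MB
```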

The use of memcache by Drupal is provided by the [|Memcache API and Integration module] and configured in /web/

# JK - issue #300 (memcache large dataset issues) trying memcache instead of cacherouter
$conf = array(
  'cache_inc' => './sites/all/modules/memcache/',
  'memcache_servers' => array('' => 'default'),
  'memcache_bins' => array(
    'cache' => 'default',
    'cache_content' => 'database',
    'cache_form' => 'database',
    'cache_views' => 'database',
  ),
);
Note that 'cache_content', 'cache_form' and 'cache_views' are now all sent to the database, as is the Drupal default. All other cache requests use memcache.

NB: It's not clear if there is any gain from using memcache with one server, see this thread: -- JK note: This all depends on how busy MySQL is; if we've got a busy server then memcache will help a lot, if not, well it'll help less!


In addition to the plugins available by default these were installed:


The server has vsftpd running for updating the site; email chris@… if you need the username and password for the account to upload content.

vsftpd is configured via the /etc/vsftpd.conf file.


Logcheck is installed; it emails root any syslog messages which are not matched by the filters in /etc/logcheck/ignore.d.server. Local rules in /etc/logcheck/ignore.d.server/local-rules have been added to reduce the quantity of emails generated by varnish and Drupal entries:

# varnish
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ varnishd\[[0-9]+\]: CLI telnet
# drupal
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ drupal\[[0-9]+\]:


Installed via ticket:396#comment:17, example usage:

jim@quince:/web/$ sudo -i
quince:~# cd /web/
quince:/web/ drush cc
Enter a number to choose which cache to clear.
  [0] : Cancel
  [1] : all
  [2] : theme
  [3] : menu
  [4] : css+js

We could install the munin drupal plugin that uses drush if we have a need for it.

Piwik is running at, see wiki:PiwikServer

Upgraded to 1.7.1 on ticket:393


This checks for updated packages and they are reported via the apt and apt_all munin plugins.