Ticket #896 (closed maintenance: fixed)

Opened 10 months ago

Last modified 9 months ago

Chive access to TN Drupal DB

Reported by: chris Owned by: chris
Priority: major Milestone: Maintenance
Component: Live server Keywords:
Cc: ade, paul Estimated Number of Hours: 0.0
Add Hours to Ticket: 0 Billable?: yes
Total Hours: 2.27

Description

Ade would like to give the developers of the new Transition Network WordPress site access to the live database via Chive.

Change History

comment:1 Changed 10 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.45
  • Total Hours changed from 0.0 to 0.45

Paul: do you know which database on PuffinServer is the live TN Drupal database?

To create a MySQL database user with read-only access, the first step is to track down the right settings.php file to find the database name; there are 150 settings.php files on the server:

locate settings.php | wc -l
150
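
(With hindsight, the TN site paths found below all sit under /data/disk/tn/, so a quick way to narrow the list, a sketch, would be to filter on that tree:)

locate settings.php | grep '/data/disk/tn/'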

The file looks to be /data/disk/tn/platforms/transitionnetwork.org/sites/default/default.settings.php, however that only contains an example:

$db_url = 'mysql://username:password@localhost/databasename';

Looking in /data/disk/tn/aegir/distro/019/sites/tn.puffin.webarch.net/settings.php we have:

if (isset($_SERVER['db_name'])) {
  /**
   * The database credentials are stored in the Apache or Nginx vhost config
   * of the associated site with SetEnv (fastcgi_param in Nginx) parameters.
   * They are called here with $_SERVER environment variables to
   * prevent sensitive data from leaking to site administrators
   * with PHP access, that potentially might be of other sites in
   * Drupal's multisite set-up.
   * This is a security measure implemented by the Aegir project.
   */
  $databases['default']['default'] = array(
    'driver' => $_SERVER['db_type'],
    'database' => $_SERVER['db_name'],
    'username' => $_SERVER['db_user'],
    'password' => $_SERVER['db_passwd'],
    'host' => $_SERVER['db_host'],
    /* Drupal interprets $databases['db_port'] as a string, whereas Drush sees
     * it as an integer. To maintain consistency, we cast it to a string. This
     * should probably be fixed in Drush.
     */
    'port' => (string) $_SERVER['db_port'],
  );
  $db_url['default'] = $_SERVER['db_type'] . '://' . $_SERVER['db_user'] . ':' . $_SERVER['db_passwd'] . '@' . $_SERVER['db_host'] . ':' . $_SERVER['db_port'] . '/' . $_SERVER['db_name'];
}

So if this is the right settings.php we need to track down the correct Nginx conf file (there are dozens... there is no end to the obfuscation, thanks to BOA...).

The main config file /etc/nginx/nginx.conf includes /etc/nginx/conf.d/*.conf, which in turn includes /var/aegir/config/server_master/nginx/vhost.d/*. The /var/aegir/config/server_master/nginx/vhost.d/chive.master.puffin.webarch.net file contains the Chive server name, chive.master.puffin.webarch.net, however https://chive.master.puffin.webarch.net/ currently returns a 403, so it seems some work would be needed to get Chive working on the server. Still no luck finding the Nginx config file for the live site...
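
(Since the credentials are injected via SetEnv / fastcgi_param directives, per the settings.php comment above, one way to hunt for the right vhost file, a sketch assuming the directives live under the vhost.d directory included above, would be:)

grep -rl 'db_name' /var/aegir/config/server_master/nginx/vhost.d/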

Looking at the databases on the server we have:

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bookerstage20150   |
| bookerstage201_2   |
| bookerstagesamtr   |
| masterpuffinwe_0   |
| mysql              |
| newstransitionne   |
| performance_schema |
| teststgtransit_0   |
| tnpuffinwebarchn   |
| transitionnetw_0   |
+--------------------+
11 rows in set (0.00 sec)

It might be easier and a lot quicker to ask Paul which is the live database, then dump it, copy it to ParrotServer and grant access to it there via phpMyAdmin; we could also drop the table/row with the passwords in it for added safety. Adding Paul to this ticket, and a bold line at the top of the ticket.

comment:2 Changed 10 months ago by chris

  • Cc paul added

Paul, which of these databases on PuffinServer is the https://www.transitionnetwork.org/ live database?

MariaDB [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| bookerstage20150   |
| bookerstage201_2   |
| bookerstagesamtr   |
| masterpuffinwe_0   |
| mysql              |
| newstransitionne   |
| performance_schema |
| teststgtransit_0   |
| tnpuffinwebarchn   |
| transitionnetw_0   |
+--------------------+
11 rows in set (0.00 sec)

comment:3 Changed 10 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.35
  • Status changed from new to closed
  • Resolution set to fixed
  • Total Hours changed from 0.45 to 0.8

Paul, ignore the above; I worked out a workaround.

Another way to work out which is the live database -- it is probably the biggest one, so looking at the dumps from last night:

cd /var/backups/mysql/sqldump/
ls -lah | grep "Jan 18"
-rw------- 1 root root 229M Jan 18 01:00 bookerstage20150.sql
-rw------- 1 root root 3.0M Jan 18 01:00 bookerstage201_2.sql
-rw------- 1 root root 224M Jan 18 01:01 bookerstagesamtr.sql
-rw------- 1 root root  58M Jan 18 01:00 information_schema.sql
-rw------- 1 root root 5.0M Jan 18 01:01 masterpuffinwe_0.sql
-rw------- 1 root root 513K Jan 18 01:01 mysql.sql
-rw------- 1 root root 4.2M Jan 18 01:01 newstransitionne.sql
-rw------- 1 root root  17K Jan 18 01:01 performance_schema.sql
-rw------- 1 root root 310K Jan 18 01:01 teststgtransit_0.sql
-rw------- 1 root root 5.0M Jan 18 01:01 tnpuffinwebarchn.sql
-rw------- 1 root root 447M Jan 18 01:02 transitionnetw_0.sql

It looks like transitionnetw_0.sql is the one, so:

cp  transitionnetw_0.sql /home/chris/
chown chris:chris /home/chris/transitionnetw_0.sql
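
(A quick sanity check, a sketch, would be to grep the copy for the live site's domain before shipping it over:)

grep -c 'transitionnetwork.org' /home/chris/transitionnetw_0.sql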

Then reconnect with ssh -A and:

scp transitionnetw_0.sql parrot.webarch.net:

Then on ParrotServer:

mysql> CREATE DATABASE tnro;
Query OK, 1 row affected (0.00 sec)

mysql> CREATE USER 'tnro'@'localhost' IDENTIFIED BY 'XXXX';
Query OK, 0 rows affected (0.07 sec)

mysql> GRANT SELECT ON tnro.* TO 'tnro'@'localhost';
Query OK, 0 rows affected (0.00 sec)
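
(Optionally, double-check that the new user really only has SELECT, e.g.:)

mysql -e "SHOW GRANTS FOR 'tnro'@'localhost';"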

Import the database:

cat /home/chris/transitionnetw_0.sql | mysql tnro

Drop the users table (couldn't think of a quick way to blank just the pass column in the users table):

mysql> DROP TABLE users;
Query OK, 0 rows affected (0.03 sec)
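
(For reference, an alternative that keeps the users table but blanks the password hashes, which is what the later refresh in comment:4 does, would be:)

echo 'UPDATE users SET pass = "";' | mysql tnro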

That should do the job; the database, minus the users data, can now be accessed read-only via https://parrot.transitionnetwork.org/phpmyadmin
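
(To confirm the access really is read-only, a quick test, a sketch using the Drupal node table as an example, would be to attempt a harmless write as the tnro user and check that it is refused:)

mysql -u tnro -p -e 'UPDATE node SET changed = changed WHERE nid = 0;' tnro
# this should be refused with an "UPDATE command denied to user 'tnro'@'localhost'" error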

comment:4 Changed 9 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.42
  • Total Hours changed from 0.8 to 1.22

Updating the copy of the database on ParrotServer. On PuffinServer:

ssh -A puffin
sudo -i
cp /var/backups/mysql/sqldump/transitionnetw_0.sql /home/chris/
chown chris:chris /home/chris/transitionnetw_0.sql
exit
scp transitionnetw_0.sql parrot:

On ParrotServer:

sudo -i
cat /home/chris/transitionnetw_0.sql | mysql tnro

Then remove the users' password hashes (just to be safe):

mysql tnro
mysql> UPDATE users set pass="";
Query OK, 21644 rows affected (1.70 sec)
Rows matched: 21896  Changed: 21644  Warnings: 0

comment:5 Changed 9 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.1
  • Total Hours changed from 1.22 to 1.32

One odd thing: 6 weeks ago the database dump was 447M (see trac:ticket/896#comment:3) but now it is 1.8G:

ls -lah /var/backups/mysql/sqldump/transitionnetw_0.sql
-rw------- 1 root root 1.8G Mar  2 01:23 /var/backups/mysql/sqldump/transitionnetw_0.sql

I have opened a new ticket for this: ticket:907.
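
(For the follow-up in ticket:907, one way to see which tables account for the growth, a sketch to be run on PuffinServer, would be to query information_schema:)

mysql -e "SELECT table_name, ROUND((data_length + index_length) / 1024 / 1024) AS size_mb FROM information_schema.tables WHERE table_schema = 'transitionnetw_0' ORDER BY (data_length + index_length) DESC LIMIT 10;"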

comment:6 Changed 9 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.15
  • Total Hours changed from 1.32 to 1.47

So the size of the database has caused a disk space issue on ParrotServer:

df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  9.2G  241M  98% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  9.2G  241M  98% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  5.5G  3.0G  66% /home

And this is without a dump of the tnro database; the size of the current MySQL backups:

ls -lah /var/backups/mysql/sqldump/
total 1.4G
drwx------ 2 root root 4.0K Jan 19 01:05 .
drwx------ 3 root root 4.0K Apr 30  2013 ..
-rw------- 1 root root 2.5M Mar  2 01:02 annesley.sql
-rw------- 1 root root 9.3M Mar  2 01:02 conference15.sql
-rw------- 1 root root 7.8M Mar  2 01:02 cop21.sql
-rw------- 1 root root 3.7M Mar  2 01:02 information_schema.sql
-rw------- 1 root root 542K Mar  2 01:02 mysql.sql
-rw------- 1 root root  17K Mar  2 01:02 performance_schema.sql
-rw------- 1 root root  11K Mar  2 01:02 phpmyadmin.sql
-rw------- 1 root root  63M Jun  2  2014 recon.sql
-rw------- 1 root root  57M Mar  2 01:02 reconomy.sql
-rw------- 1 root root  53M Mar  2 01:03 tc.sql
-rw------- 1 root root 1.3K Mar  2 01:03 test.sql
-rw------- 1 root root 424M Mar  2 01:06 tnro.sql
-rw------- 1 root root 1.9M Mar  2 01:06 ts.sql
-rw------- 1 root root 807M Mar  2 01:08 ttt.sql
-rw------- 1 root root 1.9M Mar  2 01:08 wpdev.sql

So moving this directory to /home:

mkdir /home/sqldump
chmod 700 /home/sqldump
rsync -av /var/backups/mysql/sqldump/ /home/sqldump/
rm -rf /var/backups/mysql/sqldump
cd /var/backups/mysql
ln -s /home/sqldump 
df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  7.8G  1.7G  83% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  7.8G  1.7G  83% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  6.9G  1.6G  82% /home

That should keep the server going for a while.

comment:7 Changed 9 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.5
  • Total Hours changed from 1.47 to 1.97

We still have a disk space issue, in part because I kept a copy of the tnro db from before it increased to 1.8G:

df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  8.1G  1.4G  86% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  8.1G  1.4G  86% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  8.5G     0 100% /home

The sqldump dir:

cd /home/sqldump/
ls -lah
total 3.1G
drwx------  2 root root 4.0K Mar  3 07:31 .
drwxr-xr-x 19 root root 4.0K Mar  2 10:30 ..
-rw-------  1 root root 2.5M Mar  3 01:04 annesley.sql
-rw-------  1 root root 9.3M Mar  3 01:04 conference15.sql
-rw-------  1 root root 7.9M Mar  3 01:04 cop21.sql
-rw-------  1 root root 3.7M Mar  3 01:04 information_schema.sql
-rw-------  1 root root 542K Mar  3 01:04 mysql.sql
-rw-------  1 root root  17K Mar  3 01:04 performance_schema.sql
-rw-------  1 root root  11K Mar  3 01:04 phpmyadmin.sql
-rw-------  1 root root  57M Mar  3 01:04 reconomy.sql
-rw-------  1 root root  53M Mar  3 01:04 tc.sql
-rw-------  1 root root 1.3K Mar  3 01:05 test.sql
-rw-------  1 root root 1.8G Mar  3 01:13 tnro.sql
-rw-------  1 root root 424M Mar  2 01:06 tnro.sql.old
-rw-------  1 root root 1.9M Mar  3 01:13 ts.sql
-rw-------  1 root root 806M Mar  3 01:15 ttt.sql
-rw-------  1 root root 1.9M Mar  3 01:15 wpdev.sql

So, moving some things back and symlinking some files:

cd /var/backups/mysql/
ls -lah
total 8.0K
drwx------ 2 root root 4.0K Mar  2 10:33 .
drwxr-xr-x 3 root root 4.0K Mar  2 06:26 ..
lrwxrwxrwx 1 root root   13 Mar  2 10:33 sqldump -> /home/sqldump
rm sqldump
mkdir sqldump
chmod 700 sqldump/
rsync -av --exclude="tnro*" /home/sqldump/ /var/backups/mysql/sqldump/
sending incremental file list
./
annesley.sql
conference15.sql
cop21.sql
information_schema.sql
mysql.sql
performance_schema.sql
phpmyadmin.sql
reconomy.sql
tc.sql
test.sql
ts.sql
ttt.sql
wpdev.sql

sent 988219936 bytes  received 262 bytes  19964044.40 bytes/sec
total size is 988098475  speedup is 1.00
cd sqldump/
ln -s /home/sqldump/tnro.sql
cd /home/sqldump/
rm annesley.sql conference15.sql cop21.sql information_schema.sql mysql.sql performance_schema.sql phpmyadmin.sql reconomy.sql tc.sql test.sql ts.sql ttt.sql wpdev.sql 
df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  9.0G  421M  96% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  9.0G  421M  96% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  7.6G  921M  90% /home

That should buy a little time. Creating a database for the old backup:

mysql> CREATE DATABASE tnro2; 
Query OK, 1 row affected (0.00 sec)

mysql> GRANT SELECT ON tnro2.* TO 'tnro'@'localhost';
Query OK, 0 rows affected (0.04 sec)

Import the data:

cat tnro.sql.old | mysql tnro2

Create another symlink:

cd /var/backups/mysql/sqldump/
ln -s /home/sqldump/tnro2.sql

Delete the old dump file:

rm /home/sqldump/tnro.sql.old

Diskspace situation:

df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  9.3G   46M 100% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  9.3G   46M 100% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  8.3G  156M  99% /home

So we need to move another file:

cd /var/backups/mysql/sqldump
mv ttt.sql /home/sqldump/
ln -s /home/sqldump/ttt.sql 

Disk space situation:

df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  8.7G  717M  93% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  8.7G  717M  93% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  7.9G  537M  94% /home

That should be OK for a little while, but later this month we will need to take one of these three options:

  1. Delete the tnro and tnro2 databases.
  2. Add additional disk space to ParrotServer.
  3. Build a bigger replacement for ParrotServer, running Debian Jessie.

I think option 3 makes the most sense, as it would also bring some other enhancements.

comment:8 Changed 9 months ago by chris

  • Add Hours to Ticket changed from 0.0 to 0.3
  • Total Hours changed from 1.97 to 2.27

The live database backup has been reduced from 1.8G to 379M, so copying this to ParrotServer will also solve the disk space issues there.

So, on PuffinServer:

ssh -A puffin
sudo -i
cp /var/backups/mysql/sqldump/transitionnetw_0.sql /home/chris/
chown chris:chris /home/chris/transitionnetw_0.sql
exit
scp transitionnetw_0.sql parrot:
sudo rm transitionnetw_0.sql

On ParrotServer:

sudo -i
cat /home/chris/transitionnetw_0.sql | mysql tnro
mysql
mysql> DROP DATABASE tnro2;
Query OK, 293 rows affected (3.14 sec)

Then remove the users' password hashes and the dump (just to be safe):

mysql tnro
mysql> UPDATE users set pass="";
Query OK, 21644 rows affected (1.70 sec)
Rows matched: 21896  Changed: 21644  Warnings: 0
rm /home/chris/transitionnetw_0.sql

And I have done a backup of the databases, and disk-space-wise everything looks OK:

df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs          9.9G  8.7G  744M  93% /
udev             10M     0   10M   0% /dev
tmpfs           307M  292K  307M   1% /run
/dev/xvda2      9.9G  8.7G  744M  93% /
tmpfs           5.0M     0  5.0M   0% /run/lock
tmpfs           818M     0  818M   0% /run/shm
/dev/xvda3      8.9G  6.5G  2.0G  78% /home