Migration

This document describes the migration process from the previous Mentat production release 0.4.20 to the 2.x series. For upgrading within the 2.x series, please see the Upgrading section.

Warning

A prerequisite for migration is a successful installation of all Mentat system packages.

The new version of the Mentat system comes with many new features and, most importantly, uses PostgreSQL as its database backend. Consider doing a clean installation on a different host to reduce the number of leftover deprecated files.

You have the following migration options:

  1. Migration to remote host
  2. Local migration

Remote migration is a superset of local migration: you first have to move all the necessary data to the remote host and then continue according to the local migration checklist.

Migration to remote host

In case you are migrating to another server, there are additional actions that need to be taken besides those for migrating to a new version on a single host. The most important part is synchronizing data and configuration. You also need to take into consideration possible host renaming and readdressing, so you might need to alter DNS records as well.

For the data transfer it is sufficient to use tools like scp or rsync and copy all relevant data to the target destination. Make sure to turn the whole Mentat system off on both servers first to ensure data integrity. To keep the downtime as short as possible you may consider running rsync multiple times: the first iteration performs a crude synchronization; then stop the Mentat system and perform the final data synchronization.
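The two-pass approach can be sketched as follows, using local scratch directories purely for illustration (in the real migration the destination would be root@target:/var, and the Mentat system would be stopped between the two passes):

```shell
# A minimal sketch of two-pass synchronization; paths are illustrative only.
SRC="/tmp/mentat-sync-demo/src"
DST="/tmp/mentat-sync-demo/dst"
mkdir -p "${SRC}" "${DST}"
echo "bulk data" > "${SRC}/events.dat"

# Pass 1: crude synchronization while the system is still running.
rsync --archive --update --delete "${SRC}/" "${DST}/"

# ... stop the Mentat system on the source host here ...
echo "last-minute change" > "${SRC}/events.dat"

# Pass 2: final synchronization after shutdown transfers only the delta.
rsync --archive --update --delete "${SRC}/" "${DST}/"
```

The second pass is fast because rsync only transfers files whose size or modification time changed since the first pass, which is what keeps the downtime short.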

As an example you may use the following checklist:

# Step 0: Update the TTL on relevant DNS records to a reasonably low value in
#         case you are also renaming the host. This should not be necessary
#         when you are only readdressing (swapping two servers altogether).

# Step 1: Make sure Mentat system is stopped on target host server (both real-time
#         and cronjob modules). If you are using Warden as IDEA message source,
#         stop that as well.
root@target$ service warden_filer_receiver stop
root@target$ mentat-controller.py --command stop
root@target$ mentat-controller.py --command disable

# Step 2: Pre-synchronize data directory. You may tweak and use example script
#         mentat-sync-files.sh described in the next section for more complex
#         file synchronization process.
root@source$ rsync --archive --update --delete --progress /var/mentat root@target:/var

# Step 3: Stop the Mentat system on the source host (both real-time and cronjob
#         modules). If you are using Warden as IDEA message source, stop that
#         as well. This is necessary so that you have a clean initial state
#         and to ensure data integrity.
root@source$ service warden_filer_receiver stop
root@source$ mentat-controller.py --command stop
root@source$ mentat-controller.py --command disable

# Step 4: Re-synchronize data directory. Again you may tweak and use example script
#         mentat-sync-files.sh described in the next section for more complex
#         file synchronization process.
root@source$ rsync --archive --update --delete --progress /var/mentat root@target:/var

# Step 5: Migrate the MongoDB database to the target host (sadly this step
#         requires that MongoDB is installed on the target host as well; you
#         may remove it after successful migration). You may tweak and use the
#         example scripts mentatdb-dump-all.sh and mentat-sync-mongodb.sh for
#         a more complex database synchronization process.
root@source$ mkdir -p /var/tmp/mentatdb_dump_all
root@source$ cd /var/tmp/mentatdb_dump_all
root@source$ mongodump --db mentat_stats --collection statistics
root@source$ mongodump --db mentat --collection users
root@source$ mongodump --db mentat --collection groups
root@source$ mongodump --db mentat --collection reports_ng
root@source$ cd /var/tmp
root@source$ tar -czvf mentatdb_dump_all.tar.gz mentatdb_dump_all
root@source$ rm -rf /var/tmp/mentatdb_dump_all
root@source$ scp /var/tmp/mentatdb_dump_all.tar.gz root@target:/var/tmp/
root@target$ cd /var/tmp
root@target$ tar -xzvf /var/tmp/mentatdb_dump_all.tar.gz
root@target$ cd /var/tmp/mentatdb_dump_all/dump
root@target$ mongorestore --drop --db mentat_stats mentat_stats/statistics.bson
root@target$ mongorestore --drop --db mentat mentat/users.bson
root@target$ mongorestore --drop --db mentat mentat/groups.bson
root@target$ mongorestore --drop --db mentat mentat/reports_ng.bson
root@target$ rm -f /var/tmp/mentatdb_dump_all.tar.gz
root@target$ rm -rf /var/tmp/mentatdb_dump_all

That is it; you now have all the data on the target host. Please continue to the section Local migration to finish the migration process.

Local migration

At this point you should have the new version of the Mentat system installed, and if you were migrating to a remote host, you should have completed all the steps from the section Migration to remote host.

So now you have all the data on the target host server and basically all that remains is to perform the database migration from MongoDB to PostgreSQL. Migration scripts are prepared to do just that, so the whole migration process is as simple as executing them.

# Step 0: Reconfigure the Mentat system by comparing old and new configuration
#         files. This should be done manually, because most of the modules have
#         new configuration options and you should consider tweaking some of
#         them. This step is really important; you may encounter weird errors
#         in case some outdated configuration stays active, especially in the
#         following files:
#               - mentat-backup.py.conf
#               - mentat-cleanup.py.conf
#               - mentat-controller.py.conf
#               - mentat-dbmngr.py.conf
#               - mentat-netmngr.py.conf
#               - mentat-reporter.py.conf
#               - mentat-statistician.py.conf

# Step 1: Migrate system metadata tables (users, groups, reports, statistics, etc.).
/etc/mentat/scripts/sqldb-migrate-data.py

# Step 2: Migrate IDEA events. This step is optional, depending on your setup
#         it might take A LOT of time and it might be better to just skip it
#         and start from scratch with empty event database, or migrate just
#         a chunk of the whole database.
/etc/mentat/scripts/sqldb-migrate-events.py

# Step 3: Remove possible local reports generated by local reporter.
rm -rf /var/mentat/reports/reporter

# Step 4: Install reports generated by the legacy reporter. The new reporter
#         uses a different naming scheme for report attachments (to be more
#         consistent), so it is necessary to rename the old ones.
mv /var/mentat/reports/reporter-ng /var/mentat/reports/reporter
find /var/mentat/reports/reporter -type f -exec rename 's/_/-/g' {} \;
/etc/mentat/scripts/mentat-reports-order.py

# Step 5: Get rid of old log and runlog files (optional, but recommended; you
#         may encounter weird errors when these are not cleaned up).
find /var/mentat/log -name '*.log*' -delete
find /var/mentat/run -name '*.runlog' -delete
find /var/mentat/run -name '*.pstate' -delete
find /var/mentat/run -name '*.state' -delete

After a successful local migration it is time to start everything up again and make sure everything is in order.

# Step 6: Since you are still technically in downtime, this might be a good
#         opportunity to reboot the host server and make sure everything boots
#         back up, install new server firmware, etc.

# Step 7: Start the Mentat system (both real-time and cronjob modules).
#         If you are using Warden as IDEA message source, start that as well.
mentat-controller.py --command enable
mentat-controller.py --command start
service warden_filer_receiver start

# Step 8: Make sure messages are passing through all real-time message
#         processing modules by inspecting the log files. Look for obvious
#         errors and warnings.
tail -f /var/mentat/log/mentat-inspector.py.log
tail -f /var/mentat/log/mentat-inspector-b.py.log
tail -f /var/mentat/log/mentat-enricher.py.log
tail -f /var/mentat/log/mentat-storage.py.log
grep ERROR /var/mentat/log/mentat-inspector.py.log
grep ERROR /var/mentat/log/mentat-inspector-b.py.log
grep ERROR /var/mentat/log/mentat-enricher.py.log
grep ERROR /var/mentat/log/mentat-storage.py.log

# Step 9: Access the web interface and check that everything is in order.

# Step 10: If you migrated to another host and modified DNS records, do not
#          forget to put things back as they were.
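The per-file log checks from Step 8 can be wrapped in a small helper that scans the whole log directory in one pass. This is a convenience sketch only; the ERROR/WARNING patterns and the default log directory are assumptions based on the checklist above, so adjust them to your installation:

```shell
# check_logs: scan every *.log file in a directory for errors and warnings.
check_logs() {
    local logdir="${1:-/var/mentat/log}"
    local logfile
    for logfile in "${logdir}"/*.log; do
        # Skip the unexpanded glob when the directory is empty or missing.
        [ -e "${logfile}" ] || continue
        echo "=== ${logfile} ==="
        grep -E 'ERROR|WARNING' "${logfile}" || echo "    no problems found"
    done
}

# Example: check the default Mentat log directory.
check_logs /var/mentat/log
```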

Useful scripts

This section contains useful migration scripts for your convenience. We used many scripts and cronjobs during the migration of our production Mentat installation; here are those that might be useful in general.

mentatdb-dump-all.sh

Install this script somewhere on the source Mentat host, for example as file /root/mentatdb-dump-all.sh. You can use it to perform a local dump of all Mentat MongoDB database collections that are relevant for the migration.

#!/bin/bash
# UTILITY SCRIPT FOR LOCAL DUMPS OF MONGODB DATABASE COLLECTIONS RELEVANT FOR
# MIGRATION TO MENTAT-NG 2.X
#
# Author: Jan Mach <jan.mach@cesnet.cz>
# License: MIT
#

# Adjust these settings according to your needs:
MENTATDUMPDIR="/var/tmp"
MENTATDUMPFILENAME="mentatdb_dump_all"

echo ""
echo "[SRC] Cleanup after possible previous execution."
rm -f "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}.tar.gz"
rm -rf "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}"

echo ""
echo "[SRC] Preparing work environment."
mkdir -p "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}"
cd "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}"

echo ""
echo "[SRC] Dumping relevant database collections."
mongodump --db mentat_stats --collection statistics
mongodump --db mentat --collection users
mongodump --db mentat --collection groups
mongodump --db mentat --collection reports_ng
cd "${MENTATDUMPDIR}"
tar -czvf "${MENTATDUMPFILENAME}.tar.gz" "${MENTATDUMPFILENAME}"

echo ""
echo "[SRC] Post-execution cleanup."
rm -rf "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}"

echo ""
echo "[SRC] Your database migration dump is ready: ${MENTATDUMPDIR}/${MENTATDUMPFILENAME}.tar.gz"

mentat-sync-mongodb.sh

Install this script somewhere on the target Mentat host, for example as file /root/mentat-sync-mongodb.sh. You can use it to restore all essential Mentat MongoDB database collections.

#!/bin/bash
# UTILITY SCRIPT FOR FETCHING LATEST MONGODB DATABASE DUMP FROM CURRENT PRODUCTION SERVER
# AND IMPORTING IT INTO LOCAL MONGODB DATABASE DURING MIGRATION TO MENTAT-NG 2.X
#
# Author: Jan Mach <jan.mach@cesnet.cz>
# License: MIT
#

# Adjust these settings according to your needs:
MENTATMONGODBDUMPSCRIPT="/root/mentatdb-dump-all.sh"
MENTATPRODSSHUSER="root"
MENTATPRODSSHSERVER="mentat.domain.org"
MENTATDUMPDIR="/var/tmp"
MENTATDUMPFILENAME="mentatdb_dump_all"

echo ""
echo "[TGT] Creating fresh dump on current production server."
ssh  "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}" "${MENTATMONGODBDUMPSCRIPT}"

echo ""
echo "[TGT] Cleanup after possible previous execution."
rm -f "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}.tar.gz"
rm -rf "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}"

echo ""
echo "[TGT] Fetching latest database dump from production server ${MENTATPRODSSHSERVER}."
scp "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:${MENTATDUMPDIR}/${MENTATDUMPFILENAME}.tar.gz" "${MENTATDUMPDIR}/"

echo ""
echo "[TGT] Extracting fetched database dump."
cd "${MENTATDUMPDIR}"
tar -xzvf "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}.tar.gz"

echo ""
echo "[TGT] Importing relevant database collections."
cd "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}/dump"
mongorestore --drop --db mentat_stats mentat_stats/statistics.bson
mongorestore --drop --db mentat mentat/users.bson
mongorestore --drop --db mentat mentat/groups.bson
mongorestore --drop --db mentat mentat/reports_ng.bson

echo ""
echo "[TGT] Post-execution cleanup."
rm -f "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}.tar.gz"
rm -rf "${MENTATDUMPDIR}/${MENTATDUMPFILENAME}"

Additionally you might want to install a cronjob to perform periodical dumps and then automatically pull and import those dumps on the target Mentat host. This can be useful during a trial period before the actual migration to test whether everything is working:

root@source:~# crontab -e

# And add something similar to this:
# Create, fetch and import latest MongoDB dump every four hours.
5 */4 * * * /root/mentat-sync-mongodb.sh

mentat-sync-files.sh

Install this script somewhere on the target Mentat host, for example as file /root/mentat-sync-files.sh. You can use it to perform filesystem data migration and installation tasks.

#!/bin/bash
# UTILITY SCRIPT FOR PERIODICAL MENTAT FILESYSTEM DATA SYNCHRONIZATION AND
# INSTALLATION DURING MIGRATION TO MENTAT-NG 2.X
#
# Author: Jan Mach <jan.mach@cesnet.cz>
# License: MIT
#

# Adjust these settings according to your needs:
MENTATPRODSSHUSER="root"
MENTATPRODSSHSERVER="mentat.domain.org"

if [ "x${1}x" = "x--skip-installx" ]; then
    SKIPINSTALL="yes"
else
    SKIPINSTALL="no"
fi

echo ""
echo "[TGT] Synchronizing Mentat data directories (except reports)."
rsync --progress --archive --update --delete --force --exclude=www --exclude=log --exclude=spool --exclude=reports --exclude=run --exclude=maintenance --exclude=cache "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/var/mentat/" /var/mentat/

echo ""
echo "[TGT] Synchronizing Mentat report directories."
rsync --progress --archive --update --delete --force --exclude=reporter "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/var/mentat/reports/" /var/mentat/reports

if [ "${SKIPINSTALL}" = "yes" ]; then
    echo "[TGT] Skipping legacy report installation."
else
    echo ""
    echo "[TGT] Installing reports generated by legacy reporter."
    rsync --progress --archive --update --delete --force /var/mentat/reports/reporter-ng/ /var/mentat/reports/reporter
    find /var/mentat/reports/reporter -type f -exec rename 's/_/-/g' {} \;
    /etc/mentat/scripts/mentat-reports-order.py

    echo ""
    echo "[TGT] Cleanup of local log, runlog, state and pstate files."
    find /var/mentat/log -name '*.log*' -delete
    find /var/mentat/run -name '*.runlog' -delete
    find /var/mentat/run -name '*.pstate' -delete
    find /var/mentat/run -name '*.state' -delete
fi

Additionally you might want to install a cronjob to execute this script periodically to shorten the actual data migration time:

root@source:~# crontab -e

# And add something similar to this:
# Synchronize Mentat filesystem data every four hours.
35 * * * * /root/mentat-sync-files.sh --skip-install

mentat-tweakdb.sql

Install this script somewhere on the target Mentat host, for example as file /root/mentat-tweakdb.sql. You can use it to perform additional tweaks of database contents, for example for changing the default reporting settings for all groups.

UPDATE settings_reporting SET timezone = 'Europe/Prague';
UPDATE settings_reporting SET locale = 'cs';

Then execute it like this:

psql -f mentat-tweakdb.sql mentat_main
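To verify that the tweaks took effect you can run a quick query against the same table. This is a sanity-check sketch; the settings_reporting table and its columns are taken from the tweak script above:

```sql
-- After the tweak, every group should report the same timezone and locale.
SELECT timezone, locale, count(*)
  FROM settings_reporting
 GROUP BY timezone, locale;
```

Run it the same way, for example with psql -c against the mentat_main database.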

mentat-sync-config.sh

Install this script somewhere on the target Mentat host, for example as file /root/mentat-sync-config.sh. You can use it as inspiration for copying various important configuration files between the old and new production servers. After executing this script you will find the most important configurations alongside the local ones for a convenient migration process.

#!/bin/bash
# UTILITY SCRIPT FOR FETCHING MOST IMPORTANT CONFIGURATION FROM CURRENT PRODUCTION
# SERVER TO LOCAL ONE (MIGRATION TARGET) DURING MIGRATION TO MENTAT-NG 2.0
#
# Author: Jan Mach <jan.mach@cesnet.cz>
# License: MIT
#

# Adjust these settings according to your needs:
MENTATPRODSSHUSER="root"
MENTATPRODSSHSERVER="mentat.domain.org"

echo ""
echo "[TGT] Switching Shibboleth configurations."
rsync --progress --archive --update --delete --force "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/etc/shibboleth/" /etc/shibboleth-new/
chown _shibd:_shibd /etc/shibboleth-new/sp-*.pem
cp -r /etc/shibboleth /etc/shibboleth-old
rsync --progress --archive --update --delete --force /etc/shibboleth/ "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/etc/shibboleth-new/"
ssh "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}" chown _shibd:_shibd /etc/shibboleth-new/sp-*.pem
ssh "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}" cp -r /etc/shibboleth /etc/shibboleth-old

echo ""
echo "[TGT] Switching server certificates."
rsync --progress --archive --update --delete --force "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/etc/ssl/servercert/" /etc/ssl/servercert-new/
cp -r /etc/ssl/servercert /etc/ssl/servercert-old
rsync --progress --archive --update --delete --force /etc/ssl/servercert/ "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/etc/ssl/servercert-new/"
ssh "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}" cp -r /etc/ssl/servercert /etc/ssl/servercert-old

echo ""
echo "[TGT] Copying Mentat configuration from current production server ${MENTATPRODSSHSERVER}."
rsync --progress --archive --update --delete --force "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/etc/mentat/" /etc/mentat-oldprod/

echo ""
echo "[TGT] Copying Warden receiver configuration from current production server ${MENTATPRODSSHSERVER}."
rsync --progress --archive --update --delete --force "${MENTATPRODSSHUSER}@${MENTATPRODSSHSERVER}:/etc/warden_filer.cfg" /etc/warden_filer.cfg-new

Cleanup

When you are comfortable enough after a successful migration, you may remove the old Perl-based Mentat packages:

aptitude purge libcesnet-toolkit-perl mentat-client mentat-common mentat-dev mentat-hawat mentat-server

Another component that is not required for the Mentat system 2.x series is MongoDB. Depending on the repository you installed it from, you might execute commands similar to these:

# Native Debian package:
aptitude purge mongodb

# MongoDB, inc. Debian package:
aptitude purge mongodb-org

Also make sure to remove the now unnecessary APT source files:

rm /etc/apt/sources.list.d/cesnet-certs.list
rm /etc/apt/sources.list.d/mentat.list
rm /etc/apt/sources.list.d/mongodb-org.list

The following directories are not used in Mentat version 2.0.0 and later and may be removed:

rm -rf /var/mentat/reports/briefer
rm -rf /var/mentat/reports/reporter-ng
rm -rf /var/mentat/www

What is next?

You have just successfully migrated the Mentat system to the latest version, so what is next?

  • If you want to take a quick tour of the Mentat system, you might wish to read and follow the Quickstart documentation page.