This document describes everything there is to know regarding Relax-and-Recover, an Open Source bare-metal disaster recovery and system migration solution designed for Linux.
1. Introduction
Relax-and-Recover is the leading Open Source bare metal disaster recovery solution. It is a modular framework with many ready-to-go workflows for common situations.
Relax-and-Recover produces a bootable image which can recreate the system’s original storage layout. Once that is done it initiates a restore from backup. Since the storage layout can be modified prior to recovery, and dissimilar hardware and virtualization are supported, Relax-and-Recover offers the flexibility needed for complex system migrations.
Currently Relax-and-Recover supports various boot media (incl. ISO, PXE, OBDR tape, USB or eSATA storage), a variety of network protocols (incl. sftp, ftp, http, nfs, cifs) as well as a multitude of backup strategies (incl. IBM TSM, HP Data Protector, Symantec NetBackup, EMC NetWorker [Legato], SEP Sesam, Galaxy [Simpana], Bacula, Bareos, RBME, rsync, duplicity, Borg).
Relax-and-Recover was designed to be easy to set up, requires no maintenance and is there to assist when disaster strikes. Its setup-and-forget nature removes any excuse for not having a disaster recovery solution implemented.
Recovering from disaster is made very straightforward by a 2-step recovery process, so that it can be executed by operational teams when required. When used interactively (e.g. when migrating systems), menus help make decisions to restore to a new (hardware) environment.
Extending and integrating Relax-and-Recover into complex environments is made possible by its modular framework. Consistent logging and optionally extended output help understand the concepts behind Relax-and-Recover, troubleshoot during initial configuration and help debug during integration.
Professional services and support are available.
1.1. Relax-and-Recover project
The support and development of the Relax-and-Recover project takes place on Github:
- Relax-and-Recover website
- Github project
In case you have questions, ideas or feedback about this document, you can contact the development team on the Relax-and-Recover mailing list at: rear-users@lists.relax-and-recover.org.
Note: You have to be subscribed to be able to send mail to the Relax-and-Recover mailing list. You can subscribe to the list at: http://lists.relax-and-recover.org/mailman/listinfo/rear-users
1.2. Design concepts
Based on experience from previous projects, a set of design principles was defined and improved over time:
- Focus on easy and automated disaster recovery
- Modular design, focused on system administrators
- For Linux (and possibly Unix operating systems)
- Few external dependencies (Bash and standard Unix tools)
- Easy to use and easy to extend
- Easy to integrate with real backup software
The goal is to make Relax-and-Recover as undemanding as possible: it requires only the applications necessary to fulfill the job Relax-and-Recover is configured for.
Furthermore, Relax-and-Recover should be platform independent and ideally install just as a set of scripts that utilizes everything that the Linux operating system provides.
1.3. Features and functionality
Relax-and-Recover has a wide range of features:
- Improvements to HP SmartArray and CCISS driver integration
- Improvements to software RAID integration
- Disk layout change detection for monitoring
- One-Button-Disaster-Recovery (OBDR) tape support
- DRBD filesystem support
- Bacula or Bareos tape support
- Multiple DR images per system on a single USB storage device
- USB ext3/ext4 support
- GRUB[2] bootloader re-implementation
- UEFI support
- ebiso support (needed by SLES UEFI ISO booting)
- Add Relax-and-Recover entry to local GRUB configuration (optional)
- Nagios and webmin integration
- Syslinux boot menu
- Storing rescue/backup logfile on rescue media
- Restoring to different hardware
- RHEL5, RHEL6 and RHEL7 support
- SLES 11 and SLES 12 support
- Debian and Ubuntu support
- Various usability improvements
- Serial console support auto-detected
- Lockless workflows
- USB udev integration to trigger mkrescue on inserting a USB device
- Beep/UID led/USB suspend integration
- Migrate UUIDs from disks and MAC addresses from network interfaces
- Integrates with Disaster Recovery Linux Manager (DRLM)
- Data deduplication with Borg as backend
- Block device level backup/restore
2. Getting started
2.1. Software requirements
Relax-and-Recover aims to have as few dependencies as possible. However, over time certain capabilities were added using newer utilities and specific features, causing older distributions to fall out of support. We try to avoid this where practically possible and are conservative about adding new dependencies.
The most basic requirement for Relax-and-Recover is having bash, and ubiquitous Linux tools like:
- dd (coreutils)
- ethtool
- file
- grep
- gzip
- ip (iproute[2])
- mount (util-linux-ng)
- ps (procps)
- sed
- ssh (openssh-clients)
- strings (binutils)
- tar
- …
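As a quick sanity check, a minimal sketch like the following (plain Bash, not part of ReaR itself; the tool list simply mirrors the one above) reports which of these base tools are missing:
for tool in dd ethtool file grep gzip ip mount ps sed ssh strings tar; do
    command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done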
Optionally, some use-cases require other tools:
- lsscsi and sg3_utils (for OBDR tape support)
- mkisofs or genisoimage (for ISO output support)
- syslinux (for ISO or USB output support)
- syslinux-extlinux (for USB support)
- ebiso (for SLES UEFI booting)
In some cases having newer versions of tools may provide better support:
- syslinux >= 4.00 (provides menu support)
- parted
When using BACKUP=NETFS with nfs or cifs, we might also need:
- nfs-client
- cifs-utils
2.2. Distribution support
As a project our aim is not to exclude any distribution from being supported; however (as already noted) some older distributions fell out of support over time and there is little interest from the project or the community in spending the effort to restore that support.
On the other hand there is a larger demand for a tool like Relax-and-Recover from the Enterprise Linux distributions, and as a result more people are testing and contributing to support those distributions.
Currently we aim to support the following distributions by testing them regularly:
- Red Hat Enterprise Linux and derivatives: RHEL5, RHEL6 and RHEL7
- SUSE Linux Enterprise Server 11 and 12
- Ubuntu LTS: 12, 13, 14 and 15
Distributions for which support has been dropped:
- Ubuntu LTS <12
- Fedora <21
- RHEL 3 and 4
- SLES 9 and 10
- openSUSE <11
- Debian <6
Distributions known to be unsupported are:
- Ubuntu LTS 8.04 (as it does not implement grep -P)
2.3. Known limitations
Relax-and-Recover offers a lot of flexibility in various use-cases, however it does have some limitations under certain circumstances:
- Relax-and-Recover depends on the software of the running system. When recovering a system to newer hardware, the drivers shipped with the original system may not support that newer hardware.
This problem has been seen when restoring an older RHEL4 system with an older HP ProLiant Support Pack (PSP) to more recent hardware: the PSP did not detect the newer HP SmartArray controller or its firmware.
- Relax-and-Recover supports recovering to different hardware, but it cannot always adapt to the new environment automatically. In such cases manual intervention is required, e.g. to:
  - modify disklayout.conf to reflect the number of controllers and disks, or other specific needs during restore
  - reduce partition/LV sizes when restoring to smaller storage
  - pull network media or configure the network interfaces manually
- Depending on your backup strategy you may have to perform actions such as:
  - insert the required tape(s)
  - run the commands to restore the backup
2.4. Installation
You can find the RPM and DEB packages on our web site at http://relax-and-recover.org/download/
On the latest stable versions of Fedora and SLES, Relax-and-Recover can also be installed directly via yum and zypper.
2.4.1. From RPM packages
Simply install (or update) the provided packages using the command:
rpm -Uhv rear-1.17-1.fc20.noarch.rpm
You can test your installation by running rear dump:
[root@system ~]# rear dump
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Dumping out configuration and system information
System definition:
ARCH = Linux-x86_64
OS = GNU/Linux
OS_VENDOR = RedHatEnterpriseServer
OS_VERSION = 5.6
...
2.4.2. From DEB packages
On a Debian (or Ubuntu) system you can download the DEB packages from our download page and install them with the command:
dpkg -i rear*.deb
On Debian (Ubuntu) use the following command to install missing dependencies:
apt-get -f install
2.4.3. From source
The latest and greatest sources are available on GitHub at https://github.com/rear/rear
To make a local copy of the GitHub repository, just type:
git clone git@github.com:rear/rear.git
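After cloning, a hedged sketch of running ReaR directly from the source tree (assuming the checkout keeps the main script under usr/sbin/, as recent versions do):
cd rear
# run straight from the checkout without installing, e.g. to dump the configuration
sudo usr/sbin/rear -v dump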
2.5. File locations
Remember the general configuration file is found at /usr/share/rear/conf/default.conf
. In that file you find all variables used by rear
which can be overruled by redefining these in the /etc/rear/site.conf
or /etc/rear/local.conf
files. Please do not modify the default.conf
file itself, but use the site.conf
or local.conf
for this purpose.
Note: Treat the configuration files as Bash scripts! ReaR sources these configuration files, so any syntax error against Bash scripting rules will break ReaR.
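Because the configuration files are sourced as Bash, ordinary Bash constructs work in them. A minimal sketch of such a file (the chosen variables and hostname pattern are examples, not requirements):
# /etc/rear/site.conf is sourced by ReaR as a Bash script, so normal Bash syntax applies
BACKUP_OPTIONS="nfsvers=3,nolock"   # quoting follows Bash rules
EXCLUDE_MOUNTPOINTS=( /data )       # arrays use Bash array syntax
# even conditionals work, e.g. a per-host override:
case "$( hostname -s )" in
    (db*) BACKUP_TYPE=incremental ;;
esac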
3. Configuration
The configuration is performed by changing /etc/rear/local.conf or /etc/rear/site.conf.
There are two important variables that influence Relax-and-Recover and the rescue image. Set OUTPUT to your preferred boot method and define BACKUP for your preferred backup strategy. In most cases only these two settings are required.
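For example, a minimal /etc/rear/local.conf could look like the following sketch (the NFS server name is an assumption):
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://nfs-server/export/rear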
3.1. Rescue media (OUTPUT)
The OUTPUT variable defines where the rescue image should be sent to. Possible OUTPUT settings are:
- OUTPUT=RAMDISK
  Copy the kernel and the initramfs containing the rescue system to a selected location.
- OUTPUT=ISO
  Create a bootable ISO9660 image on disk as rear-$(hostname).iso
- OUTPUT=PXE
  Create the required files (such as configuration file, kernel and initrd image) on a remote PXE/NFS server.
- OUTPUT=OBDR
  Create a bootable OBDR tape including the backup archive. Specify the OBDR tape device using TAPE_DEVICE.
- OUTPUT=USB
  Create a bootable USB disk (using extlinux). Specify the USB storage device using USB_DEVICE.
- OUTPUT=RAWDISK
  Create a bootable raw disk image as rear-$(hostname).raw.gz. Supports UEFI boot if syslinux/EFI or GRUB 2/EFI is installed, Legacy BIOS boot if syslinux is installed, and UEFI/Legacy BIOS dual boot if syslinux and one of the supported EFI bootloaders are installed.
3.1.1. Using OUTPUT_URL with ISO, RAMDISK or RAWDISK output methods
When using OUTPUT=ISO, OUTPUT=RAMDISK or OUTPUT=RAWDISK you should provide the output location through the OUTPUT_URL variable. Possible OUTPUT_URL settings are:
- OUTPUT_URL=file://
  Write the ISO image to disk. The default is /var/lib/rear/output/.
- OUTPUT_URL=fish://
  Write the ISO image using lftp and the FISH protocol.
- OUTPUT_URL=ftp://
  Write the ISO image using lftp and the FTP protocol.
- OUTPUT_URL=ftps://
  Write the ISO image using lftp and the FTPS protocol.
- OUTPUT_URL=hftp://
  Write the ISO image using lftp and the HFTP protocol.
- OUTPUT_URL=http://
  Write the ISO image using lftp and the HTTP (PUT) protocol.
- OUTPUT_URL=https://
  Write the ISO image using lftp and the HTTPS (PUT) protocol.
- OUTPUT_URL=nfs://
  Write the ISO image using nfs and the NFS protocol.
- OUTPUT_URL=sftp://
  Write the ISO image using lftp and the secure FTP (SFTP) protocol.
- OUTPUT_URL=rsync://
  Write the ISO image using rsync and the RSYNC protocol.
- OUTPUT_URL=sshfs://
  Write the image using sshfs and the SSH protocol.
- OUTPUT_URL=null
  Do not copy the ISO image from /var/lib/rear/output/ to a remote output location. OUTPUT_URL=null is useful when another program (e.g. an external backup program) is used to save the ISO image from the local system to a remote place, or with BACKUP_URL=iso:///backup when the backup is included in the ISO image, to avoid a (big) copy of the ISO image at a remote output location. In the latter case the ISO image must be saved manually from the local system to a remote place. OUTPUT_URL=null is only supported together with BACKUP=NETFS.
The default boot option of the created ISO is boothd ("boot from first hard disk"). If you want to change this, e.g. because you integrate ReaR into some automation process, you can change the default using ISO_DEFAULT={manual,automatic,boothd}
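Putting these together, a hedged sketch of an ISO configuration that uploads the image to an NFS server and boots unattended (the server name is an assumption) could be:
OUTPUT=ISO
OUTPUT_URL=nfs://nfs-server/export/rear-iso
ISO_DEFAULT=automatic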
3.2. Backup/Restore strategy (BACKUP)
The BACKUP setting defines our backup/restore strategy. The backup can be handled via an internal archive program (tar or rsync) or by an external backup program (commercial or open source). Possible BACKUP settings are:
- BACKUP=TSM
  Use IBM Tivoli Storage Manager programs
- BACKUP=DP
  Use HP Data Protector programs
- BACKUP=FDRUPSTREAM
  Use FDR/Upstream
- BACKUP=NBU
  Use Symantec NetBackup programs
- BACKUP=NSR
  Use EMC NetWorker (Legato)
- BACKUP=BACULA
  Use Bacula programs
- BACKUP=BAREOS
  Use the Bareos fork of Bacula. Only if you have more than one fileset defined for your client's backup jobs do you need to specify which one to use for restore, e.g. BAREOS_FILESET=Full
- BACKUP=GALAXY
  Use CommVault Galaxy (5, probably 6)
- BACKUP=GALAXY7
  Use CommVault Galaxy (7 and probably newer)
- BACKUP=GALAXY10
  Use CommVault Galaxy 10 (or Simpana 10)
- BACKUP=BORG
  Use BorgBackup (Borg for short), a deduplicating backup program, to restore the data.
- BACKUP=NETFS
  Use the Relax-and-Recover internal backup with tar or rsync (or similar). When using BACKUP=NETFS and BACKUP_PROG=tar there is an option to select BACKUP_TYPE=incremental or BACKUP_TYPE=differential to let rear make incremental or differential backups until the next full backup day (e.g. FULLBACKUPDAY="Mon") is reached or the last full backup is older than FULLBACKUP_OUTDATED_DAYS. Incremental or differential backup is currently only known to work with BACKUP_URL=nfs. Other BACKUP_URL schemes may work, but at least BACKUP_URL=usb requires USB_SUFFIX to be set to work with incremental or differential backup. (See the configuration sketch after this list.)
- BACKUP=REQUESTRESTORE
  No backup, just ask the user to somehow restore the filesystems.
- BACKUP=EXTERNAL
  Use a custom strategy by providing backup and restore commands.
- BACKUP=DUPLICITY
  Use duplicity to manage the backup (see http://duplicity.nongnu.org). Additionally, if duply (see http://duply.net) is also installed while generating the rescue image, it becomes part of the image.
- BACKUP=RBME
  Use Rsync Backup Made Easy (rbme) to restore the data.
- BACKUP=RSYNC
  Use rsync to back up and restore your system disks.
- BACKUP=BLOCKCLONE
  Backup block devices using dd or ntfsclone
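As referenced in the BACKUP=NETFS entry above, a sketch of an incremental tar backup to NFS (the weekday and server name are assumptions) could look like:
BACKUP=NETFS
BACKUP_PROG=tar
BACKUP_TYPE=incremental
FULLBACKUPDAY="Mon"                      # full backup on Mondays, incremental in between
BACKUP_URL=nfs://nfs-server/export/rear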
3.3. Using NETFS as backup strategy (internal archive method)
When using BACKUP=NETFS you should provide the backup target location through the BACKUP_URL variable. Possible BACKUP_URL settings are:
- BACKUP_URL=file://
  To back up to local disk, use BACKUP_URL=file:///directory/path/
- BACKUP_URL=nfs://
  To back up to an NFS share, use BACKUP_URL=nfs://nfs-server-name/share/path
- BACKUP_URL=tape://
  To back up to a tape device, use BACKUP_URL=tape:///dev/nst0 or alternatively simply define TAPE_DEVICE=/dev/nst0
- BACKUP_URL=cifs://
  To back up to a Samba share (CIFS), use BACKUP_URL=cifs://cifs-server-name/share/path. To provide credentials for CIFS mounting, use a /etc/rear/cifs credentials file, define BACKUP_OPTIONS="cred=/etc/rear/cifs" and pass along: username=_username_ password=_secret password_ domain=_domain_ (see the sketch after this list).
- BACKUP_URL=sshfs://
  To back up over the network with the help of sshfs. You need the fuse-sshfs package before you can use the FUSE filesystem to access remote filesystems via SSH. An example BACKUP_URL could be: BACKUP_URL=sshfs://root@server/export/archives
- BACKUP_URL=usb://
  To back up to a USB storage device, use BACKUP_URL=usb:///dev/disk/by-label/REAR-000 or use a real device node or a specific filesystem label. Alternatively, you can specify the device using USB_DEVICE=/dev/disk/by-label/REAR-000.
  If you combine this with OUTPUT=USB you will end up with a bootable USB device.
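For the CIFS case above, a hedged sketch of the two pieces involved (share name and credentials are placeholders) could be:
# /etc/rear/local.conf
BACKUP=NETFS
BACKUP_URL=cifs://cifs-server-name/share/path
BACKUP_OPTIONS="cred=/etc/rear/cifs"

# /etc/rear/cifs (the credentials file referenced above)
username=backupuser
password=secret
domain=EXAMPLE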
Optional settings:
- BACKUP_PROG=rsync
  If you want to use rsync instead of tar (only for BACKUP=NETFS). Do not confuse this with the BACKUP=RSYNC backup mechanism.
- NETFS_KEEP_OLD_BACKUP_COPY=y
  If you want to keep the previous backup archive. Incremental or differential backup and NETFS_KEEP_OLD_BACKUP_COPY contradict each other, so NETFS_KEEP_OLD_BACKUP_COPY must not be true in case of incremental or differential backup.
- TMPDIR=/bigdisk
  Define this variable in /etc/rear/local.conf if the directory /tmp is too small to contain the ISO image, e.g. when using OUTPUT=ISO BACKUP=NETFS BACKUP_URL=iso://backup ISO_MAX_SIZE=4500 OUTPUT_URL=nfs://lnx01/vol/lnx01/linux_images_dr
TMPDIR is picked up by the mktemp command to create the BUILD_DIR under /bigdisk/tmp/rear.XXXX. Please be aware that the directory /bigdisk must exist, otherwise rear will bail out when executing the mktemp command. The default value of TMPDIR is an empty string, therefore by default BUILD_DIR is /tmp/rear.XXXX
Another point of interest is the ISO_DIR variable, which selects another location for the ISO image instead of the default location (/var/lib/rear/output).
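A short sketch combining both settings (the /bigdisk path follows the example above and must exist beforehand):
# /etc/rear/local.conf
TMPDIR=/bigdisk          # BUILD_DIR then becomes /bigdisk/tmp/rear.XXXX
ISO_DIR=/bigdisk/iso     # store the ISO image here instead of /var/lib/rear/output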
Note: With USB we refer to all kinds of external storage devices, like USB keys, USB disks, eSATA disks, ZIP drives, etc…
3.4. Using RSYNC as backup mechanism
When using BACKUP=RSYNC you should provide the backup target location through the BACKUP_URL variable. Possible BACKUP_URL settings are:
BACKUP_URL=rsync://root@server/export/archives
BACKUP_URL=rsync://root@server::/export/archives
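A minimal sketch of a corresponding local.conf (the server and path follow the examples above):
OUTPUT=ISO
BACKUP=RSYNC
BACKUP_URL=rsync://root@server/export/archives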
4. Scenarios
4.1. Bootable ISO
If you simply want a bootable ISO on a central server, you would do:
OUTPUT=ISO
OUTPUT_URL=http://server/path-to-push/
4.2. Bootable ISO with an external (commercial) backup software
If you rely on your backup software to do the full restore of a system then you could define:
OUTPUT=ISO
BACKUP=[TSM|NSR|DP|NBU|GALAXY10|SEP|DUPLICITY|BACULA|BAREOS|RBME|FDRUPSTREAM]
When using one of the above backup solutions (commercial or open source) there is no need to use rear mkbackup, as the backup workflow would be empty. Just use rear mkrescue. ReaR will incorporate the needed executables and libraries of your chosen backup solution into the rescue image of ReaR.
4.3. Bootable ISO and storing archive on NFS/NAS
To create an ISO rescue image and use a central NFS/NAS server to store the archive, you could define:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_URL=nfs://remote-nfs-server/exports
#BACKUP_PROG_CRYPT_ENABLED=1
#BACKUP_PROG_CRYPT_KEY="my_Secret_pw"
The above example shows that it is even possible to protect the archives with a password. Be aware that the password entry BACKUP_PROG_CRYPT_KEY="my_Secret_pw" will be deleted from the /etc/rear/local.conf file within the rescue image. So it is important to remember the password: using the rescue image months later without it would be a problem.
4.4. Bootable USB device with backup to USB
If you want a bootable USB device with a (tar) backup to USB as well, you would use:
BACKUP=NETFS
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
4.5. Bootable tape drive (OBDR) with backup to tape
If you want an OBDR image and backup on tape, and use GNU tar for backup/restore, you would use:
BACKUP=NETFS
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
4.6. Bootable tape drive (OBDR) and Bacula restore
If you want an OBDR image on tape, and the Bacula tools to recover your backup, use:
BACKUP=BACULA
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
4.7. ReaR with Borg back end
- Install Borg backup (https://borgbackup.readthedocs.io/en/stable/installation.html).
Important: We strongly recommend using the Borg standalone binary (https://github.com/borgbackup/borg/releases), as it includes everything needed for Borg operations. If you decide to use a different type of Borg installation, make sure you include all files needed for the Borg runtime in the ReaR rescue/recovery system, e.g. by using COPY_AS_IS_BORG=( '/usr/lib64/python3.4*' '/usr/bin/python3*' '/usr/bin/pyvenv*' '/usr/lib/python3.4*' '/usr/lib64/libpython3*' )
4.7.1. Borg → SSH
- Set up SSH key infrastructure for the user that will run the backup. The following command must work without any password prompt or remote host identity confirmation (a key-setup sketch follows the example configuration below):
ssh <BORGBACKUP_USERNAME>@<BORGBACKUP_HOST>
- Example local.conf:
OUTPUT=ISO
OUTPUT_URL=nfs://foo.bar.xy/mnt/backup/iso
BACKUP=BORG
BORGBACKUP_HOST="foo.bar.xy"
BORGBACKUP_USERNAME="borg_user"
BORGBACKUP_REPO="/mnt/backup/client"
BORGBACKUP_REMOTE_PATH="/usr/local/bin/borg"
# Automatic archive pruning
# (https://borgbackup.readthedocs.io/en/stable/usage.html#borg-prune)
BORGBACKUP_PRUNE_WEEKLY=2
# Archive compression
# (https://borgbackup.readthedocs.io/en/stable/usage.html#borg-create)
BORGBACKUP_COMPRESSION="lzma,9"  # Slowest backup, best compression
# Repository encryption
# (https://borgbackup.readthedocs.io/en/stable/usage.html#borg-init)
BORGBACKUP_ENC_TYPE="keyfile"
export BORG_PASSPHRASE="S3cr37_P455w0rD"
COPY_AS_IS_BORG=( "/root/.config/borg/keys/" )
# Borg environment variables
# (https://borgbackup.readthedocs.io/en/stable/usage.html#environment-variables)
export BORG_RELOCATED_REPO_ACCESS_IS_OK="yes"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK="yes"
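For the SSH key setup mentioned above, a minimal sketch (user and host names follow the example configuration and are placeholders) could be:
ssh-keygen -t ed25519 -N '' -f ~/.ssh/id_ed25519
ssh-copy-id -i ~/.ssh/id_ed25519.pub borg_user@foo.bar.xy
ssh borg_user@foo.bar.xy true    # must succeed without a password prompt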
4.7.2. Borg → USB
- Example local.conf:
OUTPUT=USB
BACKUP=BORG
USB_DEVICE=/dev/disk/by-label/REAR-000
BORGBACKUP_REPO="/my_borg_backup"
BORGBACKUP_UMASK="0002"
BORGBACKUP_PRUNE_WEEKLY=2
BORGBACKUP_ENC_TYPE="keyfile"
export BORG_PASSPHRASE="S3cr37_P455w0rD"
export BORG_RELOCATED_REPO_ACCESS_IS_OK="yes"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK="yes"
COPY_AS_IS_EXCLUDE=( "${COPY_AS_IS_EXCLUDE[@]}" )
COPY_AS_IS_BORG=( '/root/.config/borg/keys/' )
SSH_UNPROTECTED_PRIVATE_KEYS="yes"
SSH_FILES="yes"
Important: If using BORGBACKUP_ENC_TYPE="keyfile", don't forget to make your encryption key available for restore (using COPY_AS_IS_BORG=( "/root/.config/borg/keys/" ) is an option to consider). Be sure to read https://borgbackup.readthedocs.io/en/stable/usage.html#borg-init and make yourself familiar with how encryption in Borg works.
- Executing rear mkbackup will create the Relax-and-Recover rescue/recovery system and start the Borg backup process. Once the backup finishes, it will also prune old archives from the repository if at least one of the BORGBACKUP_PRUNE_* variables is set.
- To recover your system, boot the Relax-and-Recover rescue/recovery system and trigger rear recover. Once ReaR has finished with the layout configuration, you will be prompted for the archive to recover from the Borg repository:
...
Disk layout created.
Starting Borg restore

=== Borg archives list ===

Host: foo.bar.xy
Repository: /mnt/backup/client

[1] rear_1   Sun, 2016-10-16 14:08:16
[2] rear_2   Sun, 2016-10-16 14:32:11

[3] Exit

Choose archive to recover from:
4.8. Backup/restore alien file system using BLOCKCLONE and dd
4.8.1. Configuration
- First we need to set some global options in local.conf
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://beta.virtual.sk/mnt/rear
- Now we can define variables that will apply only to the targeted block device
# cat alien.conf
BACKUP=BLOCKCLONE                           # Define BLOCKCLONE as backup method
BACKUP_PROG_ARCHIVE="alien"                 # Name of image file
BACKUP_PROG_SUFFIX=".dd.img"                # Suffix of image file
BACKUP_PROG_COMPRESS_SUFFIX=""              # Clear additional suffixes

BLOCKCLONE_PROG=dd                          # Use dd for image creation
BLOCKCLONE_PROG_OPTS="bs=4k"                # Additional options that will be passed to dd
BLOCKCLONE_SOURCE_DEV="/dev/sdc1"           # Device that should be backed up

BLOCKCLONE_SAVE_MBR_DEV="/dev/sdc"          # Device where partitioning information is stored (optional)
BLOCKCLONE_MBR_FILE="alien_boot_strap.img"  # Output filename for boot strap code
BLOCKCLONE_PARTITIONS_CONF_FILE="alien_partitions.conf"  # Output filename for partition configuration
BLOCKCLONE_ALLOW_MOUNTED="yes"              # Device can be mounted during backup (default NO)
4.8.2. Running backup
- Save the partition configuration and boot strap code, and create the actual backup of /dev/sdc1
# rear -C alien mkbackuponly
- Running restore from the ReaR restore/recovery system
# rear -C alien restoreonly
Restore alien.dd.img to device: [/dev/sdc1]             # User is always prompted for the restore destination
Device /dev/sdc1 was not found.                         # If the destination does not exist, ReaR will try to create it
                                                        # (or fail if BLOCKCLONE_SAVE_MBR_DEV was not set during backup)
Restore partition layout to (^c to abort): [/dev/sdc]   # Prompt for the device where partition configuration should be restored
Checking that no-one is using this disk right now … OK

Disk /dev/sdc: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x10efb7a9.
Created a new partition 1 of type HPFS/NTFS/exFAT and of size 120 MiB.
/dev/sdc2:
New situation:

Device     Boot Start    End Sectors  Size Id Type
/dev/sdc1        4096 249855  245760  120M  7 HPFS/NTFS/exFAT

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
4.9. Using Relax-and-Recover with USB storage devices
Using USB devices with Relax-and-Recover can be appealing for several reasons:
- If you only need a bootable rescue environment, a USB device is a cheap medium to boot from, as it only has to store 25 to 60MB
- You can leave the USB device inserted in the system and boot from it only when disaster strikes (although we do recommend storing rescue environments off-site)
- You can store multiple systems and multiple snapshots on a single device
- In case you have plenty of space, it might be a simple solution to store complete disaster recovery images (rescue + backup) for a set of systems on a single device
- For migrating a bunch of servers, having a single device to boot from can be very appealing
- We have implemented a specific workflow: inserting a REAR-000 labeled USB stick will invoke rear udev and add a rescue environment to the USB stick (updating the bootloader if needed)
However, USB devices may be slow for backup purposes, especially on older systems or with unreliable/cheap devices.
4.9.1. Configuring Relax-and-Recover for USB storage devices
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with USB storage.
BACKUP=BACULA
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
Important: On RHEL4 or older there are no /dev/disk/by-label/ udev aliases, which means we cannot address the device by label. It is possible to use by-path references instead, but this makes the setup very specific to the USB port used. We opted to use the complete device name, which can be dangerous if you have other /dev/sdX devices (luckily we have CCISS block devices in /dev/cciss/).
4.9.2. Preparing your USB storage device
To prepare your USB device for use with Relax-and-Recover, run:
rear format /dev/sdX
This will create a single partition, make it bootable, format it with ext3, label it REAR-000 and disable filesystem-check warnings for the device.
4.9.3. USB storage as rescue media
Configuring Relax-and-Recover to have Bacula tools
If the rescue environment needs additional tools and workflow, this can be specified by using BACKUP=BACULA in the configuration file /etc/rear/local.conf:
BACKUP=BACULA
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
Making the rescue USB storage device
To create a rescue USB device, run rear -v mkrescue as shown below after you have inserted a REAR-000 labeled USB device. Make sure the device name for the USB device is what is configured for USB_DEVICE.
[root@system ~]# rear -v mkrescue
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Creating disk layout.
Creating root filesystem layout
Copying files and directories
Copying program files and libraries
Copying kernel modules
Creating initramfs
Finished in 72 seconds.
Warning: Doing the above may replace the existing MBR of the USB device. However, any other content on the device is retained.
Booting from USB storage device
Before you can recover your DR backup, it is important to configure the BIOS to boot from the USB device. In some cases it is required to go into the BIOS setup (F9 during boot) to change the boot order of devices. (In the BIOS setup select Standard Boot Order (IPL).)
Once booted from the USB device, select the system you like to recover from the list. If you don’t press a key within 30 seconds, the system will try to boot from the local disk.
+---------------------------------------------+
| "Relax-and-Recover v1.12.0svn497" |
+---------------------------------------------+
| "Recovery images" |
| "system.localdomain" > |
| "other.localdomain" > |
|---------------------------------------------|
| "Other actions" |
| "Help for Relax-and-Recover" |
| "Boot Local disk (hd1)" |
| "Boot BIOS disk (0x81)" |
| "Boot Next BIOS device" |
| "Hardware Detection tool" |
| "Memory test" |
| "Reboot system" |
| "Power off system" |
+---------------------------------------------+
"Press [Tab] to edit options or [F1] for help"
"Automatic boot in 30 seconds..."
Warning: Booting from a local disk may fail when booting from a USB device. This is caused by the fact that the GRUB bootloader on the local disk is configured as if it were the first drive (hd0), while it is in fact the second disk (hd1). If you find menu entries not working from GRUB, please remove the root (hd0,0) line from the entry.
Then select the image you would like to recover.
+---------------------------------------------+
| "system.localdomain" |
+---------------------------------------------+
| "2011-03-26 02:16 backup" |
| "2011-03-25 18:39 backup" |
| "2011-03-05 16:12 rescue image" |
|---------------------------------------------|
| "Back" |
| |
| |
| |
| |
| |
| |
| |
| |
+---------------------------------------------+
"Press [Tab] to edit options or [F1] for help"
"Backup using kernel 2.6.32-122.el6.x86_64"
"BACKUP=NETFS OUTPUT=USB OUTPUT_URL=usb:///dev/disk/by-label/REAR-000"
Tip: When browsing through the images you get more information about the image at the bottom of the screen.
Restoring from USB rescue media
Then wait for the system to boot until you get the prompt.
On the shell prompt, type rear recover.
You may need to answer a few questions depending on your hardware configuration and whether you are restoring to a (slightly) different system.
RESCUE SYSTEM:/ # rear recover
Relax-and-Recover 1.12.0svn497 / 2011-07-11
NOTICE: Will do driver migration
To recreate HP SmartArray controller 3, type exactly YES: YES
To recreate HP SmartArray controller 0, type exactly YES: YES
Clearing HP SmartArray controller 3
Clearing HP SmartArray controller 0
Recreating HP SmartArray controller 3|A
Configuration restored successfully, reloading CCISS driver... OK
Recreating HP SmartArray controller 0|A
Configuration restored successfully, reloading CCISS driver... OK
Comparing disks.
Disk configuration is identical, proceeding with restore.
Type "Yes" if you want DRBD resource rBCK to become primary: Yes
Type "Yes" if you want DRBD resource rOPS to become primary: Yes
Start system layout restoration.
Creating partitions for disk /dev/cciss/c0d0 (msdos)
Creating partitions for disk /dev/cciss/c2d0 (msdos)
Creating software RAID /dev/md2
Creating software RAID /dev/md6
Creating software RAID /dev/md3
Creating software RAID /dev/md4
Creating software RAID /dev/md5
Creating software RAID /dev/md1
Creating software RAID /dev/md0
Creating LVM PV /dev/md6
Creating LVM PV /dev/md5
Creating LVM PV /dev/md2
Creating LVM VG vgrem
Creating LVM VG vgqry
Creating LVM VG vg00
Creating LVM volume vg00/lv00
Creating LVM volume vg00/lvdstpol
Creating LVM volume vg00/lvsys
Creating LVM volume vg00/lvusr
Creating LVM volume vg00/lvtmp
Creating LVM volume vg00/lvvar
Creating LVM volume vg00/lvopt
Creating ext3-filesystem / on /dev/mapper/vg00-lv00
Mounting filesystem /
Creating ext3-filesystem /dstpol on /dev/mapper/vg00-lvdstpol
Mounting filesystem /dstpol
Creating ext3-filesystem /dstpol/sys on /dev/mapper/vg00-lvsys
Mounting filesystem /dstpol/sys
Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr
Mounting filesystem /usr
Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp
Mounting filesystem /tmp
Creating ext3-filesystem /boot on /dev/md0
Mounting filesystem /boot
Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar
Mounting filesystem /var
Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt
Mounting filesystem /opt
Creating swap on /dev/md1
Creating DRBD resource rBCK
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
Creating LVM PV /dev/drbd2
Creating LVM VG vgbck
Creating LVM volume vgbck/lvetc
Creating LVM volume vgbck/lvvar
Creating LVM volume vgbck/lvmysql
Creating ext3-filesystem /etc/bacula/cluster on /dev/mapper/vgbck-lvetc
Mounting filesystem /etc/bacula/cluster
Creating ext3-filesystem /var/bacula on /dev/mapper/vgbck-lvvar
Mounting filesystem /var/bacula
Creating ext3-filesystem /var/lib/mysql/bacula on /dev/mapper/vgbck-lvmysql
Mounting filesystem /var/lib/mysql/bacula
Creating DRBD resource rOPS
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
Creating LVM PV /dev/drbd1
Creating LVM VG vgops
Creating LVM volume vgops/lvcachemgr
Creating LVM volume vgops/lvbackup
Creating LVM volume vgops/lvdata
Creating LVM volume vgops/lvdb
Creating LVM volume vgops/lvswl
Creating LVM volume vgops/lvcluster
Creating ext3-filesystem /opt/cache on /dev/mapper/vgops-lvcachemgr
Mounting filesystem /opt/cache
Creating ext3-filesystem /dstpol/backup on /dev/mapper/vgops-lvbackup
Mounting filesystem /dstpol/backup
Creating ext3-filesystem /dstpol/data on /dev/mapper/vgops-lvdata
Mounting filesystem /dstpol/data
Creating ext3-filesystem /dstpol/databases on /dev/mapper/vgops-lvdb
Mounting filesystem /dstpol/databases
Creating ext3-filesystem /dstpol/swl on /dev/mapper/vgops-lvswl
Mounting filesystem /dstpol/swl
Creating ext3-filesystem /dstpol/sys/cluster on /dev/mapper/vgops-lvcluster
Mounting filesystem /dstpol/sys/cluster
Disk layout created.
The system is now ready to restore from Bacula. You can use the 'bls' command
to get information from your Volume, and 'bextract' to restore jobs from your
Volume. It is assumed that you know what is necessary to restore - typically
it will be a full backup.
You can find useful Bacula commands in the shell history. When finished, type
'exit' in the shell to continue recovery.
WARNING: The new root is mounted under '/mnt/local'.
rear>
Restoring from Bacula tape
Now you need to continue with restoring the actual Bacula backup. For this you have multiple options, of which bextract is the easiest and most straightforward, but also the slowest and least safe.
If you know the JobIds of the latest successful full backup and subsequent differential backups, the most efficient way to restore is to create a bootstrap file with this information and use it to restore from tape.
A bootstrap file looks like this:
Volume = VOL-1234
JobId = 914
Job = Bkp_Daily
or
Volume = VOL-1234
VolSessionId = 1
VolSessionTime = 108927638
Using a bootstrap file with bextract is easy, simply do:
bextract -b bootstrap.txt Ultrium-1 /mnt/local
Tip: It helps to know exactly how many files you need to restore; using the FileIndex and Count keywords means bextract does not need to read the whole tape. Use the commands in your shell history to access an example Bacula bootstrap file.
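For illustration, a hedged sketch of a bootstrap file that limits the restore to a known range of files (the values are placeholders; consult the Bacula bootstrap documentation for the exact semantics of FileIndex and Count):
Volume = VOL-1234
JobId = 914
FileIndex = 1-165719
Count = 165719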
To use bextract to restore everything from a single tape, you can do:
bextract -V VOLUME-NAME Ultrium-1 /mnt/local
rear> bextract -V VOL-1234 Ultrium-1 /mnt/local
bextract: match.c:249-0 add_fname_to_include prefix=0 gzip=0 fname=/
bextract: butil.c:282 Using device: "Ultrium-1" for reading.
30-Mar 16:00 bextract JobId 0: Ready to read from volume "VOL-1234" on device "Ultrium-1" (/dev/st0).
bextract JobId 0: -rw-r----- 1 252 bacula 3623795 2011-03-30 11:02:18 /mnt/local/var/lib/bacula/bacula.sql
bextract JobId 0: drwxr-xr-x 2 root root 4096 2011-02-02 11:48:28 *none*
bextract JobId 0: drwxr-xr-x 4 root root 1024 2011-02-23 13:09:53 *none*
bextract JobId 0: drwxr-xr-x 12 root root 4096 2011-02-02 11:50:00 *none*
bextract JobId 0: -rwx------ 1 root root 0 2011-02-02 11:48:24 /mnt/local/.hpshm_keyfile
bextract JobId 0: -rw-r--r-- 1 root root 0 2011-02-22 12:38:03 /mnt/local/.autofsck
...
30-Mar 16:06 bextract JobId 0: End of Volume at file 7 on device "Ultrium-1" (/dev/st0), Volume "VOL-1234"
30-Mar 16:06 bextract JobId 0: End of all volumes.
30-Mar 16:07 bextract JobId 0: Alert: smartctl version 5.38 [x86_64-redhat-linux-gnu] Copyright (C) 2002-8 Bruce Allen
30-Mar 16:07 bextract JobId 0: Alert: Home page is http://smartmontools.sourceforge.net/
30-Mar 16:07 bextract JobId 0: Alert:
30-Mar 16:07 bextract JobId 0: Alert: TapeAlert: OK
30-Mar 16:07 bextract JobId 0: Alert:
30-Mar 16:07 bextract JobId 0: Alert: Error counter log:
30-Mar 16:07 bextract JobId 0: Alert: Errors Corrected by Total Correction Gigabytes Total
30-Mar 16:07 bextract JobId 0: Alert: ECC rereads/ errors algorithm processed uncorrected
30-Mar 16:07 bextract JobId 0: Alert: fast | delayed rewrites corrected invocations [10^9 bytes] errors
30-Mar 16:07 bextract JobId 0: Alert: read: 1546 0 0 0 1546 0.000 0
30-Mar 16:07 bextract JobId 0: Alert: write: 0 0 0 0 0 0.000 0
165719 files restored.
Warning: In this case bextract will restore all the Bacula jobs on the provided tapes, starting from the oldest down to the latest. As a consequence, deleted files may re-appear and the process may take a very long time.
Finish recovery process
Once finished, continue Relax-and-Recover by typing exit.
rear> exit
Did you restore the backup to /mnt/local ? Ready to continue ? y
Installing GRUB boot loader
Finished recovering your system. You can explore it under '/mnt/local'.
Finished in 4424 seconds.
Important: If you neglect to perform this last crucial step, your new system will not boot and you will have to install a boot loader manually or re-execute this procedure.
4.9.4. USB storage as backup media
Configuring Relax-and-Recover for backup to USB storage device
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with USB storage.
BACKUP=NETFS
OUTPUT=USB
USB_DEVICE=/dev/disk/by-label/REAR-000
### Exclude certain items
ONLY_INCLUDE_VG=( vg00 )
EXCLUDE_MOUNTPOINTS=( /data )
Making the DR backup to USB storage device
To create a combined rescue device that integrates the backup on USB, it is sufficient to run rear -v mkbackup as shown below after you have inserted the USB device. Make sure the device name for the USB device is what is configured.
[root@system ~]# rear -v mkbackup
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Creating disk layout.
Creating root filesystem layout
Copying files and directories
Copying program files and libraries
Copying kernel modules
Creating initramfs
Creating archive 'usb:///dev/sda1/system.localdomain/20110326.0216/backup.tar.gz'
Total bytes written: 3644416000 (3.4GiB, 5.5MiB/s) in 637 seconds.
Writing MBR to /dev/sda
Modifying local GRUB configuration
Copying resulting files to usb location
Finished in 747 seconds.
Important: It is advised to go into single user mode (init 1) before creating a backup, to ensure all active data is consistent on disk (and no important processes are active in memory).
Booting from USB storage device
See the section Booting from USB storage device for more information about how to enable your BIOS to boot from a USB storage device.
Restoring a backup from USB storage device
Then wait for the system to boot until you get the prompt.
On the shell prompt, type rear recover.
You may need to answer a few questions depending on your hardware configuration and whether you are restoring to a (slightly) different system.
RESCUE SYSTEM:/ # rear recover
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Backup archive size is 1.2G (compressed)
To recreate HP SmartArray controller 1, type exactly YES: YES
To recreate HP SmartArray controller 7, type exactly YES: YES
Clearing HP SmartArray controller 1
Clearing HP SmartArray controller 7
Recreating HP SmartArray controller 1|A
Configuration restored successfully, reloading CCISS driver... OK
Recreating HP SmartArray controller 7|A
Configuration restored successfully, reloading CCISS driver... OK
Comparing disks.
Disk configuration is identical, proceeding with restore.
Start system layout restoration.
Creating partitions for disk /dev/cciss/c0d0 (msdos)
Creating partitions for disk /dev/cciss/c1d0 (msdos)
Creating software RAID /dev/md126
Creating software RAID /dev/md127
Creating LVM PV /dev/md127
Restoring LVM VG vg00
Creating ext3-filesystem / on /dev/mapper/vg00-lv00
Mounting filesystem /
Creating ext3-filesystem /boot on /dev/md126
Mounting filesystem /boot
Creating ext3-filesystem /data on /dev/mapper/vg00-lvdata
Mounting filesystem /data
Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt
Mounting filesystem /opt
Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp
Mounting filesystem /tmp
Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr
Mounting filesystem /usr
Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar
Mounting filesystem /var
Creating swap on /dev/mapper/vg00-lvswap
Disk layout created.
Restoring from 'usb:///dev/sda1/system.localdomain/20110326.0216/backup.tar.gz'
Restored 3478 MiB in 134 seconds [avg 26584 KiB/sec]
Installing GRUB boot loader
Finished recovering your system. You can explore it under '/mnt/local'.
Finished in 278 seconds.
If all is well, you can now remove the USB device, restore the BIOS boot order and reboot the system into the recovered OS.
4.10. Using Relax-and-Recover with OBDR tapes
Using One-Button-Disaster-Recovery (OBDR) tapes has a few benefits.
- Within large organisations tape media is already part of a workflow for offsite storage and is a known and trusted technology
- Tapes can store large amounts of data reliably, and restoring large amounts of data is predictable in time and effort
- OBDR offers booting from tapes, which is very convenient
- A single tape can hold both the rescue image as well as a complete snapshot of the system (up to 1.6TB with LTO4)
However, you need one tape per system as an OBDR tape can only store one single rescue environment.
4.10.1. Configuring Relax-and-Recover for OBDR rescue tapes
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with a tape drive. This example shows how to use the tape only for storing the rescue image, the backup is expected to be handled by Bacula and so the Bacula tools are included in the rescue environment to enable a Bacula restore.
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
4.10.2. Preparing your OBDR rescue tape
To protect normal backup tapes (in case tape drives are also used by another backup solution), Relax-and-Recover expects the tape it uses to be labeled REAR-000. To achieve this, insert a blank tape to use for Relax-and-Recover and run the rear format /dev/stX command.
4.10.3. OBDR tapes as rescue media
Configuring Relax-and-Recover to have Bacula tools
If the rescue environment needs additional tools and workflow, this can be specified by using BACKUP=BACULA in the configuration file /etc/rear/local.conf:
BACKUP=BACULA
OUTPUT=OBDR
BEXTRACT_DEVICE=Ultrium-1
BEXTRACT_VOLUME=VOL-*
Using BEXTRACT_DEVICE allows you to use the tape device that is referenced from the Bacula configuration. This helps in those cases where the discovery of the various tape drives has already been done and configured in Bacula.
The BEXTRACT_VOLUME variable is optional and is only displayed in the restore instructions on screen as an aid during recovery.
Making the OBDR rescue tape
To create a rescue environment that can boot from an OBDR tape, simply run rear -v mkrescue with a REAR-000-labeled tape inserted.
[root@system ~]# rear -v mkrescue
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Rewinding tape
Writing OBDR header to tape in drive '/dev/nst0'
Creating disk layout.
Creating root filesystem layout
Copying files and directories
Copying program files and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-dag-ops.iso (48M)
Writing ISO image to tape
Modifying local GRUB configuration
Finished in 119 seconds.
Warning: The message above about /dev/cciss/c1d0 not being used makes sense, as this is not a real disk but simply an entry for manipulating the controller. This is specific to CCISS controllers with only a tape device attached.
Booting from OBDR rescue tape
The One Button Disaster Recovery (OBDR) functionality in HP LTO Ultrium drives enables them to emulate CD-ROM devices in specific circumstances (also known as being in 'Disaster Recovery' mode). The drive can then act as a boot device for PCs that support booting off CD-ROM.
Tip: An OBDR capable drive can be switched into CD-ROM mode by powering on with the eject button held down. Make sure you keep it pressed when the tape drive regains power, and then release the button. If the drive is in OBDR mode, the light will blink regularly. This might be easier in some cases than the procedure below, hence the name One Button Disaster Recovery!
To boot from OBDR, boot your system with the Relax-and-Recover tape inserted. During the boot sequence, interrupt the HP Smart Array controller with the tape attached by pressing F8 (or Escape-8 on serial console).
iLO 2 v1.78 Jun 10 2009 10.5.20.171
Slot 0 HP Smart Array P410i Controller (512MB, v2.00) 1 Logical Drive
Slot 3 HP Smart Array P401 Controller (512MB, v2.00) 1 Logical Drive
Slot 4 HP Smart Array P212 Controller (0MB, v2.00) 0 Logical Drives
Tape or CD-ROM Drive(s) Detected:
Port 1I: Box 0: Bay 4
1785-Slot 4 Drive Array Not Configured
No Drives Detected
Press <F8> to run the Option ROM Configuration for Arrays Utility
Press <ESC> to skip configuration and continue
Then select Configure OBDR in the menu and select the Tape drive by marking it with X (default is on) and press ENTER and F8 to activate this change so it displays 'Configuration saved'.
Then press ENTER and Escape to leave the Smart Array controller BIOS.
**** System will boot from Tape/CD/OBDR device attached to Smart Array.
To boot from OBDR when using an LSI controller, boot your system with the Relax-and-Recover tape inserted. During the boot sequence, interrupt the LSI controller BIOS that has the tape attached by pressing F8 (or Escape-8 on serial console).
LSI Logic Corp. MPT BIOS
Copyright 1995-2006 LSI Logic Corp.
MPTBIOS-5.05.21.00
HP Build
<<<Press F8 for configuration options>>>
Then select the option 1. Tape-based One Button Disaster Recovery (OBDR):
Select a configuration option:
1. Tape-based One Button Disaster Recovery (OBDR).
2. Multi Initiator Configuration. <F9 = Setup>
3. Exit.
And then select the correct tape drive to boot from:
compatible tape drives found ->
NUM HBA SCSI ID Drive information
0 0 A - HP Ultrium 2-SCSI
Please choose the NUM of the tape drive to place into OBDR mode.
If all goes well, the system will reboot with OBDR-mode enabled:
The PC will now reboot to begin Tape Recovery....
During the next boot, OBDR-mode will be indicated by:
*** Bootable media located, Using non-Emulation mode ***
Once booted from the OBDR tape, select the Relax-and-Recover menu entry from the menu. If you don’t press a key within 30 seconds, the system will try to boot from the local disk.
+---------------------------------------------+
| "Relax-and-Recover v1.12.0svn497" |
+---------------------------------------------+
| "Relax-and-Recover" |
|---------------------------------------------|
| "Other actions" |
| "Help for Relax-and-Recover" |
| "Boot Local disk (hd1)" |
| "Boot BIOS disk (0x81)" |
| "Boot Next BIOS device" |
| "Hardware Detection tool" |
| "Memory test" |
| "Reboot system" |
| "Power off system" |
| |
| |
+---------------------------------------------+
"Press [Tab] to edit options or [F1] for help"
"Automatic boot in 30 seconds..."
Restoring the OBDR rescue tape
Then wait for the system to boot until you get the prompt.
On the shell prompt, type rear recover.
You may need to answer a few questions depending on your hardware configuration and whether you are restoring to a (slightly) different system.
RESCUE SYSTEM:/ # rear recover
Relax-and-Recover 1.12.0svn497 / 2011-07-11
NOTICE: Will do driver migration
Rewinding tape
To recreate HP SmartArray controller 3, type exactly YES: YES
To recreate HP SmartArray controller 0, type exactly YES: YES
Clearing HP SmartArray controller 3
Clearing HP SmartArray controller 0
Recreating HP SmartArray controller 3|A
Configuration restored successfully, reloading CCISS driver... OK
Recreating HP SmartArray controller 0|A
Configuration restored successfully, reloading CCISS driver... OK
Comparing disks.
Disk configuration is identical, proceeding with restore.
Type "Yes" if you want DRBD resource rBCK to become primary: Yes
Type "Yes" if you want DRBD resource rOPS to become primary: Yes
Start system layout restoration.
Creating partitions for disk /dev/cciss/c0d0 (msdos)
Creating partitions for disk /dev/cciss/c2d0 (msdos)
Creating software RAID /dev/md2
Creating software RAID /dev/md6
Creating software RAID /dev/md3
Creating software RAID /dev/md4
Creating software RAID /dev/md5
Creating software RAID /dev/md1
Creating software RAID /dev/md0
Creating LVM PV /dev/md6
Creating LVM PV /dev/md5
Creating LVM PV /dev/md2
Creating LVM VG vgrem
Creating LVM VG vgqry
Creating LVM VG vg00
Creating LVM volume vg00/lv00
Creating LVM volume vg00/lvdstpol
Creating LVM volume vg00/lvsys
Creating LVM volume vg00/lvusr
Creating LVM volume vg00/lvtmp
Creating LVM volume vg00/lvvar
Creating LVM volume vg00/lvopt
Creating ext3-filesystem / on /dev/mapper/vg00-lv00
Mounting filesystem /
Creating ext3-filesystem /dstpol on /dev/mapper/vg00-lvdstpol
Mounting filesystem /dstpol
Creating ext3-filesystem /dstpol/sys on /dev/mapper/vg00-lvsys
Mounting filesystem /dstpol/sys
Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr
Mounting filesystem /usr
Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp
Mounting filesystem /tmp
Creating ext3-filesystem /boot on /dev/md0
Mounting filesystem /boot
Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar
Mounting filesystem /var
Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt
Mounting filesystem /opt
Creating swap on /dev/md1
Creating DRBD resource rBCK
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
Creating LVM PV /dev/drbd2
Creating LVM VG vgbck
Creating LVM volume vgbck/lvetc
Creating LVM volume vgbck/lvvar
Creating LVM volume vgbck/lvmysql
Creating ext3-filesystem /etc/bacula/cluster on /dev/mapper/vgbck-lvetc
Mounting filesystem /etc/bacula/cluster
Creating ext3-filesystem /var/bacula on /dev/mapper/vgbck-lvvar
Mounting filesystem /var/bacula
Creating ext3-filesystem /var/lib/mysql/bacula on /dev/mapper/vgbck-lvmysql
Mounting filesystem /var/lib/mysql/bacula
Creating DRBD resource rOPS
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
Creating LVM PV /dev/drbd1
Creating LVM VG vgops
Creating LVM volume vgops/lvcachemgr
Creating LVM volume vgops/lvbackup
Creating LVM volume vgops/lvdata
Creating LVM volume vgops/lvdb
Creating LVM volume vgops/lvswl
Creating LVM volume vgops/lvcluster
Creating ext3-filesystem /opt/cache on /dev/mapper/vgops-lvcachemgr
Mounting filesystem /opt/cache
Creating ext3-filesystem /dstpol/backup on /dev/mapper/vgops-lvbackup
Mounting filesystem /dstpol/backup
Creating ext3-filesystem /dstpol/data on /dev/mapper/vgops-lvdata
Mounting filesystem /dstpol/data
Creating ext3-filesystem /dstpol/databases on /dev/mapper/vgops-lvdb
Mounting filesystem /dstpol/databases
Creating ext3-filesystem /dstpol/swl on /dev/mapper/vgops-lvswl
Mounting filesystem /dstpol/swl
Creating ext3-filesystem /dstpol/sys/cluster on /dev/mapper/vgops-lvcluster
Mounting filesystem /dstpol/sys/cluster
Disk layout created.
The system is now ready to restore from Bacula. You can use the 'bls' command
to get information from your Volume, and 'bextract' to restore jobs from your
Volume. It is assumed that you know what is necessary to restore - typically
it will be a full backup.
You can find useful Bacula commands in the shell history. When finished, type
'exit' in the shell to continue recovery.
WARNING: The new root is mounted under '/mnt/local'.
rear>
Restoring from Bacula tape
See the section Restoring from Bacula tape for more information about how to restore a Bacula tape.
4.10.4. OBDR tapes as backup media
An OBDR backup tape is similar to an OBDR rescue tape, but next to the rescue environment it also contains a complete backup of the system. This is very convenient in that a single tape can be used for disaster recovery, and recovery is much simpler and completely automated.
Caution: Please make sure that the system fits onto a single tape uncompressed. For an LTO4 Ultrium that would mean no more than 1.6TB.
Configuring Relax-and-Recover for OBDR backup tapes
The below configuration (/etc/rear/local.conf) gives a list of possible options when you want to run Relax-and-Recover with a tape drive. This example shows how to use the tape for storing both the rescue image and the backup.
BACKUP=NETFS
OUTPUT=OBDR
TAPE_DEVICE=/dev/nst0
Making the OBDR backup tape
To create a bootable backup tape that can boot from OBDR, simply run rear -v mkbackup with a REAR-000-labeled tape inserted.
[root@system ~]# rear -v mkbackup
Relax-and-Recover 1.12.0svn497 / 2011-07-11
Rewinding tape
Writing OBDR header to tape in drive '/dev/nst0'
Creating disk layout
Creating root filesystem layout
Copying files and directories
Copying program files and libraries
Copying kernel modules
Creating initramfs
Making ISO image
Wrote ISO image: /var/lib/rear/output/rear-system.iso (45M)
Writing ISO image to tape
Creating archive '/dev/nst0'
Total bytes written: 7834132480 (7.3GiB, 24MiB/s) in 317 seconds.
Rewinding tape
Modifying local GRUB configuration
Finished in 389 seconds.
Important: It is advised to go into single user mode (init 1) before creating a backup, to ensure all active data is consistent on disk (and no important processes are active in memory).
Booting from OBDR backup tape
See the section Booting from OBDR rescue tape for more information about how to enable OBDR and boot from OBDR tapes.
Restoring from OBDR backup tape
RESCUE SYSTEM:~ # rear recover
Relax-and-Recover 1.12.0svn497 / 2011-07-11
NOTICE: Will do driver migration
Rewinding tape
To recreate HP SmartArray controller 3, type exactly YES: YES
To recreate HP SmartArray controller 0, type exactly YES: YES
Clearing HP SmartArray controller 3
Clearing HP SmartArray controller 0
Recreating HP SmartArray controller 3|A
Configuration restored successfully, reloading CCISS driver... OK
Recreating HP SmartArray controller 0|A
Configuration restored successfully, reloading CCISS driver... OK
Comparing disks.
Disk configuration is identical, proceeding with restore.
Type "Yes" if you want DRBD resource rBCK to become primary: Yes
Type "Yes" if you want DRBD resource rOPS to become primary: Yes
Start system layout restoration.
Creating partitions for disk /dev/cciss/c0d0 (msdos)
Creating partitions for disk /dev/cciss/c2d0 (msdos)
Creating software RAID /dev/md2
Creating software RAID /dev/md6
Creating software RAID /dev/md3
Creating software RAID /dev/md4
Creating software RAID /dev/md5
Creating software RAID /dev/md1
Creating software RAID /dev/md0
Creating LVM PV /dev/md6
Creating LVM PV /dev/md5
Creating LVM PV /dev/md2
Restoring LVM VG vgrem
Restoring LVM VG vgqry
Restoring LVM VG vg00
Creating ext3-filesystem / on /dev/mapper/vg00-lv00
Mounting filesystem /
Creating ext3-filesystem /dstpol on /dev/mapper/vg00-lvdstpol
Mounting filesystem /dstpol
Creating ext3-filesystem /dstpol/sys on /dev/mapper/vg00-lvsys
Mounting filesystem /dstpol/sys
Creating ext3-filesystem /usr on /dev/mapper/vg00-lvusr
Mounting filesystem /usr
Creating ext2-filesystem /tmp on /dev/mapper/vg00-lvtmp
Mounting filesystem /tmp
Creating ext3-filesystem /boot on /dev/md0
Mounting filesystem /boot
Creating ext3-filesystem /var on /dev/mapper/vg00-lvvar
Mounting filesystem /var
Creating ext3-filesystem /opt on /dev/mapper/vg00-lvopt
Mounting filesystem /opt
Creating swap on /dev/md1
Creating DRBD resource rBCK
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
Creating LVM PV /dev/drbd2
Restoring LVM VG vgbck
Creating ext3-filesystem /etc/bacula/cluster on /dev/mapper/vgbck-lvetc
Mounting filesystem /etc/bacula/cluster
Creating ext3-filesystem /var/bacula on /dev/mapper/vgbck-lvvar
Mounting filesystem /var/bacula
Creating ext3-filesystem /var/lib/mysql/bacula on /dev/mapper/vgbck-lvmysql
Mounting filesystem /var/lib/mysql/bacula
Creating DRBD resource rOPS
Writing meta data...
initializing activity log
New drbd meta data block successfully created.
Creating LVM PV /dev/drbd1
Restoring LVM VG vgops
Creating ext3-filesystem /opt/cache on /dev/mapper/vgops-lvcachemgr
Mounting filesystem /opt/cache
Creating ext3-filesystem /dstpol/backup on /dev/mapper/vgops-lvbackup
Mounting filesystem /dstpol/backup
Creating ext3-filesystem /dstpol/data on /dev/mapper/vgops-lvdata
Mounting filesystem /dstpol/data
Creating ext3-filesystem /dstpol/databases on /dev/mapper/vgops-lvdb
Mounting filesystem /dstpol/databases
Creating ext3-filesystem /dstpol/swl on /dev/mapper/vgops-lvswl
Mounting filesystem /dstpol/swl
Creating ext3-filesystem /dstpol/sys/cluster on /dev/mapper/vgops-lvcluster
Mounting filesystem /dstpol/sys/cluster
Disk layout created.
Restoring from 'tape:///dev/nst0/system/backup.tar'
Restored 7460 MiB in 180 seconds [avg 42444 KiB/sec]
Installing GRUB boot loader
Finished recovering your system. You can explore it under '/mnt/local'.
Finished in 361 seconds.
5. Integration
5.1. Monitoring your system with Relax-and-Recover
If Relax-and-Recover is not in charge of the backup, but only of creating a rescue environment, it can be useful to know when a change to the system invalidates your existing/stored rescue environment, so that you can update it.
For this, Relax-and-Recover has two different targets: one to create a new baseline (which is done automatically whenever a new rescue environment is created successfully), and one to compare the (old) baseline with the current situation.
With this, you can monitor the system and generate new rescue environments only when it is really needed.
5.1.1. Creating a baseline
Relax-and-Recover automatically creates a new baseline as soon as it has successfully created a new rescue environment. However, if for some reason you want to recreate the baseline manually, use rear savelayout.
5.1.2. Detecting changes to the baseline
When you want to know whether the latest rescue environment is still valid, use the rear checklayout command.
[root@system ~]# rear checklayout
[root@system ~]# echo $?
0
If the layout has changed, this is indicated by a non-zero return code.
[root@system ~]# rear checklayout
[root@system ~]# echo $?
1
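If you prefer plain cron over a monitoring system, a minimal sketch along these lines (the schedule is an arbitrary example, adjust to taste) rebuilds the rescue environment only when the layout has changed:
# /etc/cron.d/rear (sketch): refresh the rescue image only when the layout changed
30 1 * * * root /usr/sbin/rear checklayout || /usr/sbin/rear mkrescue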
5.1.3. Integration with Nagios and Opsview
If having current DR rescue images is important to your organization, but creating them cannot be fully automated (e.g. a tape or USB device needs to be inserted), we provide a Nagios plugin that sends out a notification whenever there is a critical change to the system that requires updating your rescue environment.
Changes to the system requiring an update are:
-
Changes to hardware RAID
-
Changes to software RAID
-
Changes to partitioning
-
Changes to DRBD configuration
-
Changes to LVM
-
Changes to filesystems
The integration is done using our own check_rear plugin for Nagios.
#!/bin/bash
#
# Purpose: Checks if disaster recovery usb stick is up to date

# Check if ReaR is installed
if [[ ! -x /usr/sbin/rear ]]; then
    echo "REAR IS NOT INSTALLED"
    exit 2
fi

# ReaR disk layout status can be identical or changed
# returncode: 0 = ok
if ! /usr/sbin/rear checklayout; then
    echo "Disk layout has changed. Please insert Disaster Recovery USB stick into system !"
    exit 2
fi
We also monitor the /var/log/rear/rear-system.log file for ERROR: and BUG BUG BUG strings, so that in case of problems the operator is notified immediately.
6. Layout configuration
Jeroen Hoekx <jeroen.hoekx@hamok.be> 2011-09-10
6.1. General overview
The disk layout generation code in Relax-and-Recover is responsible for the faithful recreation of the disk layout of the original system. It gathers information about any component in the system layout. Components supported in Relax-and-Recover include:
-
Partitions
-
Logical volume management (LVM)
-
Software RAID (MD)
-
Encrypted volumes (LUKS)
-
Multipath disks
-
Swap
-
Filesystems
-
Btrfs Volumes
-
DRBD
-
HP SmartArray controllers
Relax-and-Recover detects dependencies between these components.
During the rescue media creation phase, Relax-and-Recover centralizes all information in one file. During recovery, that file is used to generate the actual commands to recreate the components. Relax-and-Recover allows customizations and manual editing in all these phases.
6.2. Layout information gathered during rescue image creation
Layout information is stored in /var/lib/rear/layout/disklayout.conf. The term layout file in this document refers to this particular file.
Consider the information from the following system as an example:
disk /dev/sda 160041885696 msdos
# disk /dev/sdb 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sda 209682432 32768 primary boot /dev/sda1
part /dev/sda 128639303680 209719296 primary lvm /dev/sda2
part /dev/sda 31192862720 128849022976 primary none /dev/sda3
# part /dev/sdb 162144912384 32256 primary none /dev/sdb1
# part /dev/sdb 152556666880 162144944640 primary none /dev/sdb2
# part /dev/sdb 5371321856 314701611520 primary boot /dev/sdb3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sda1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sda2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
This document will continue to use this example to explore the various options available in Relax-and-Recover. The exact syntax of the layout file is described in a later section. It is already clear that this file is human readable and thus human editable. It is also machine readable and all information necessary to restore a system is listed.
It’s easy to see that there are 3 disks attached to the system. /dev/sda is the internal disk of the system; its filesystems are mounted during normal operation. The other devices are external disks. One of them has just normal partitions. The other one has a physical volume on one of the partitions.
6.3. Including/Excluding components
6.3.1. Autoexcludes
Relax-and-Recover has reasonable defaults when creating the recovery information. It has commented out the two external disks and any components that are part of them. The reason is that no mounted filesystem uses these two disks. After all, you don’t want to recreate your backup disk when you’re recovering your system.
If we mount the filesystem on /dev/mapper/backup-backup on /media/backup, Relax-and-Recover will think that it’s necessary to recreate that filesystem:
disk /dev/sda 160041885696 msdos
# disk /dev/sdb 320072933376 msdos
disk /dev/sdc 1999696297984 msdos
part /dev/sda 209682432 32768 primary boot /dev/sda1
part /dev/sda 128639303680 209719296 primary lvm /dev/sda2
part /dev/sda 31192862720 128849022976 primary none /dev/sda3
# part /dev/sdb 162144912384 32256 primary none /dev/sdb1
# part /dev/sdb 152556666880 162144944640 primary none /dev/sdb2
# part /dev/sdb 5371321856 314701611520 primary boot /dev/sdb3
part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
lvmvol /dev/backup backup 12800 104857600
lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sda1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sda2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
This behavior is controlled by the AUTOEXCLUDE_DISKS=y parameter in default.conf. If we unset it in the local configuration, Relax-and-Recover will no longer exclude such disks automatically.
A similar mechanism exists for multipath disks. The AUTOEXCLUDE_MULTIPATH=y variable in default.conf prevents Relax-and-Recover from overwriting multipath disks. Typically, they are part of the SAN disaster recovery strategy. However, there can be cases where you want to recover them. The information is retained in disklayout.conf.
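If you do want such disks to be recreated, a minimal sketch of the corresponding local configuration could look like this (only unset the variables you really need):
# /etc/rear/local.conf (sketch): do not auto-exclude unmounted or multipath disks
AUTOEXCLUDE_DISKS=
AUTOEXCLUDE_MULTIPATH=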
6.3.2. Manual excludes
It seems prudent to prevent the external drives from ever being backed up or overwritten. The default configuration contains these lines:
# Exclude components from being backed up, recreation information is active
EXCLUDE_BACKUP=()
# Exclude components during component recreation
# will be added to EXCLUDE_BACKUP (it is not backed up)
# recreation information gathered, but commented out
EXCLUDE_RECREATE=()
# Exclude components during the backup restore phase
# only used to exclude files from the restore.
EXCLUDE_RESTORE=()
To prevent an inadvertently mounted backup filesystem from being recreated during recovery, the easiest way is to add the filesystem to the EXCLUDE_RECREATE array.
EXCLUDE_RECREATE=( "${EXCLUDE_RECREATE[@]}" "fs:/media/backup" )
The layout file is as expected:
disk /dev/sda 160041885696 msdos
# disk /dev/sdb 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sda 209682432 32768 primary boot /dev/sda1
part /dev/sda 128639303680 209719296 primary lvm /dev/sda2
part /dev/sda 31192862720 128849022976 primary none /dev/sda3
# part /dev/sdb 162144912384 32256 primary none /dev/sdb1
# part /dev/sdb 152556666880 162144944640 primary none /dev/sdb2
# part /dev/sdb 5371321856 314701611520 primary boot /dev/sdb3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sda1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
# fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sda2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
Another approach would be to exclude the backup volume group. This is achieved by adding this line to the local configuration:
EXCLUDE_RECREATE=( "${EXCLUDE_RECREATE[@]}" "/dev/backup" )
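After changing the excludes, you can regenerate the layout file and check that the backup components are indeed commented out, for example (a quick sketch, not a required step):
rear savelayout
grep backup /var/lib/rear/layout/disklayout.conf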
6.4. Restore to the same hardware
Restoring the system to the same hardware is simple. Type rear recover at the rescue system prompt. Relax-and-Recover will detect that it’s restoring to the same system and will make sure things like UUIDs match. It also asks for your LUKS encryption password.
Once the restore of the backup has completed, Relax-and-Recover will install the bootloader and the system is back in working order.
RESCUE firefly:~ # rear recover
Relax-and-Recover 0.0.0 / $Date$
NOTICE: Will do driver migration
Comparing disks.
Disk configuration is identical, proceeding with restore.
Start system layout restoration.
Creating partitions for disk /dev/sda (msdos)
Please enter the password for disk(/dev/sda2):
Enter LUKS passphrase:
Please re-enter the password for disk(/dev/sda2):
Enter passphrase for /dev/sda2:
Creating LVM PV /dev/mapper/disk
Restoring LVM VG system
Creating ext4-filesystem / on /dev/mapper/system-root
Mounting filesystem /
Creating ext4-filesystem /home on /dev/mapper/system-home
Mounting filesystem /home
Creating ext4-filesystem /var on /dev/mapper/system-var
Mounting filesystem /var
Creating xfs-filesystem /vmware on /dev/mapper/system-vmxfs
meta-data=/dev/mapper/system-vmxfs isize=256 agcount=4, agsize=1966080 blks
= sectsz=512 attr=2, projid32bit=0
data = bsize=4096 blocks=7864320, imaxpct=25
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0
log =internal log bsize=4096 blocks=3840, version=2
= sectsz=512 sunit=0 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Mounting filesystem /vmware
Creating ext4-filesystem /kvm on /dev/mapper/system-kvm
Mounting filesystem /kvm
Creating ext3-filesystem /boot on /dev/sda1
Mounting filesystem /boot
Creating swap on /dev/mapper/system-swap
Disk layout created.
Please start the restore process on your backup host.
Make sure that you restore the data into '/mnt/local' instead of '/' because the
hard disks of the recovered system are mounted there.
Please restore your backup in the provided shell and, when finished, type exit
in the shell to continue recovery.
Welcome to Relax-and-Recover. Run "rear recover" to restore your system !
rear>
6.5. Restore to different hardware
There are two ways to deal with different hardware. One is being lazy and dealing with problems when you encounter them. The second option is to plan in advance. Both are valid approaches. The lazy approach works fine when you are in control of the restore and you have good knowledge of the components in your system. The second approach is preferable in disaster recovery situations or migrations where you know the target hardware in advance and the actual restore can be carried out by less knowledgeable people.
6.5.1. The Ad-Hoc Way
Relax-and-Recover will assist you somewhat in case it notices different disk sizes. It will ask you to map each differently sized disk to a disk in the target system. Partitions will be resized. Relax-and-Recover is careful not to resize your boot partition, since this is often the one with the most stringent sizing constraints. In fact, it only resizes LVM and RAID partitions.
Let’s try to restore our system to a different system. Instead of one 160G disk, there are now one 5G and one 10G disk. That’s not enough space to restore the complete system, but for the purposes of this demonstration we do not care about that. We’re also not going to use the first disk; we just want to show that Relax-and-Recover handles the renaming automatically.
RESCUE firefly:~ # rear recover
Relax-and-Recover 0.0.0 / $Date$
NOTICE: Will do driver migration
Comparing disks.
Device sda has size 5242880000, 160041885696 expected
Switching to manual disk layout configuration.
Disk sda does not exist in the target system. Please choose the appropriate replacement.
1) sda
2) sdb
3) Do not map disk.
#? 2
2011-09-10 16:17:10 Disk sdb chosen as replacement for sda.
Disk sdb chosen as replacement for sda.
This is the disk mapping table:
/dev/sda /dev/sdb
Please confirm that '/var/lib/rear/layout/disklayout.conf' is as you expect.
1) View disk layout (disklayout.conf) 4) Go to Relax-and-Recover shell
2) Edit disk layout (disklayout.conf) 5) Continue recovery
3) View original disk space usage 6) Abort Relax-and-Recover
Ok, mapping the disks was not that hard. If Relax-and-Recover insists on us checking the disklayout file, we’d better do that.
#? 1
disk /dev/sdb 160041885696 msdos
# disk _REAR1_ 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sdb 209682432 32768 primary boot /dev/sdb1
part /dev/sdb -20916822016 209719296 primary lvm /dev/sdb2
part /dev/sdb 31192862720 128849022976 primary none /dev/sdb3
# part _REAR1_ 162144912384 32256 primary none _REAR1_1
# part _REAR1_ 152556666880 162144944640 primary none _REAR1_2
# part _REAR1_ 5371321856 314701611520 primary boot _REAR1_3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
lvmvol /dev/system vmxfs 7680 62914560
lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sdb1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
# fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sdb2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
1) View disk layout (disklayout.conf)
2) Edit disk layout (disklayout.conf)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#?
The renaming operation was successful.
On the other hand, we can already see quite a few problems. A partition with a negative size. I do not think any tool would like to create that. Still, we don’t care at this moment. Do you like entering partition sizes in bytes? Neither do I. There has to be a better way to handle it. We will show it during the next step.
The /kvm and /vmware filesystems are quite big. We don’t care about them, so we simply comment them out, together with their logical volumes.
The resulting layout file looks like this:
disk /dev/sdb 160041885696 msdos
# disk _REAR1_ 320072933376 msdos
# disk /dev/sdc 1999696297984 msdos
part /dev/sdb 209682432 32768 primary boot /dev/sdb1
part /dev/sdb -20916822016 209719296 primary lvm /dev/sdb2
part /dev/sdb 31192862720 128849022976 primary none /dev/sdb3
# part _REAR1_ 162144912384 32256 primary none _REAR1_1
# part _REAR1_ 152556666880 162144944640 primary none _REAR1_2
# part _REAR1_ 5371321856 314701611520 primary boot _REAR1_3
# part /dev/sdc 1073741824000 1048576 primary boot /dev/sdc1
# part /dev/sdc 925953425408 1073742872576 primary lvm /dev/sdc2
# lvmdev /dev/backup /dev/sdc2 cJp4Mt-Vkgv-hVlr-wTMb-0qeA-FX7j-3C60p5 1808502784
lvmdev /dev/system /dev/mapper/disk N4Hpdc-DkBP-Hdm6-Z6FH-VixZ-7tTb-LiRt0w 251244544
# lvmgrp /dev/backup 4096 220764 904249344
lvmgrp /dev/system 4096 30669 125620224
# lvmvol /dev/backup backup 12800 104857600
# lvmvol /dev/backup externaltemp 38400 314572800
lvmvol /dev/system root 2560 20971520
lvmvol /dev/system home 5120 41943040
lvmvol /dev/system var 2560 20971520
lvmvol /dev/system swap 512 4194304
#lvmvol /dev/system vmxfs 7680 62914560
#lvmvol /dev/system kvm 5000 40960000
fs /dev/mapper/system-root / ext4 uuid=dbb0c0d4-7b9a-40e2-be83-daafa14eff6b label= blocksize=4096 reserved_blocks=131072 max_mounts=21 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-home /home ext4 uuid=e9310015-6043-48cd-a37d-78dbfdba1e3b label= blocksize=4096 reserved_blocks=262144 max_mounts=38 check_interval=180d options=rw,commit=0
fs /dev/mapper/system-var /var ext4 uuid=a12bb95f-99f2-42c6-854f-1cb3f144d662 label= blocksize=4096 reserved_blocks=131072 max_mounts=23 check_interval=180d options=rw,commit=0
#fs /dev/mapper/system-vmxfs /vmware xfs uuid=7457d2ab-8252-4f41-bab6-607316259975 label= options=rw,noatime
#fs /dev/mapper/system-kvm /kvm ext4 uuid=173ab1f7-8450-4176-8cf7-c09b47f5e3cc label= blocksize=4096 reserved_blocks=256000 max_mounts=21 check_interval=180d options=rw,noatime,commit=0
fs /dev/sdb1 /boot ext3 uuid=f6b08566-ea5e-46f9-8f73-5e8ffdaa7be6 label= blocksize=1024 reserved_blocks=10238 max_mounts=35 check_interval=180d options=rw,commit=0
# fs /dev/mapper/backup-backup /media/backup ext4 uuid=da20354a-dc4c-4bef-817c-1c92894bb002 label= blocksize=4096 reserved_blocks=655360 max_mounts=24 check_interval=180d options=rw
swap /dev/mapper/system-swap uuid=9f347fc7-1605-4788-98fd-fca828beedf1 label=
crypt /dev/mapper/disk /dev/sdb2 cipher=aes-xts-plain hash=sha1 uuid=beafe67c-d9a4-4992-80f1-e87791a543bb
Let’s continue recovery.
1) View disk layout (disklayout.conf)
2) Edit disk layout (disklayout.conf)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#? 5
Partition /dev/sdb3 size reduced to fit on disk.
Please confirm that '/var/lib/rear/layout/diskrestore.sh' is as you expect.
1) View restore script (diskrestore.sh)
2) Edit restore script (diskrestore.sh)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#?
Now, this is where human-friendly resizes are possible. Edit the file and find the partition creation code.
if create_component "/dev/sdb" "disk" ; then
# Create /dev/sdb (disk)
LogPrint "Creating partitions for disk /dev/sdb (msdos)"
parted -s /dev/sdb mklabel msdos >&2
parted -s /dev/sdb mkpart primary 32768B 209715199B >&2
parted -s /dev/sdb set 1 boot on >&2
parted -s /dev/sdb mkpart primary 209719296B -20707102721B >&2
parted -s /dev/sdb set 2 lvm on >&2
parted -s /dev/sdb mkpart primary 18446744053002452992B 10485759999B >&2
# Wait some time before advancing
sleep 10
It’s simple bash code. Change it to use better values. Parted is happy to accept partition sizes in megabytes.
if create_component "/dev/sdb" "disk" ; then
# Create /dev/sdb (disk)
LogPrint "Creating partitions for disk /dev/sdb (msdos)"
parted -s /dev/sdb mklabel msdos >&2
parted -s /dev/sdb mkpart primary 1M 200M >&2
parted -s /dev/sdb set 1 boot on >&2
parted -s /dev/sdb mkpart primary 200M 10485759999B >&2
parted -s /dev/sdb set 2 lvm on >&2
# Wait some time before advancing
sleep 10
The same action should be done for the remaining logical volumes. We would like them to fit on the disk.
if create_component "/dev/mapper/system-root" "lvmvol" ; then
# Create /dev/mapper/system-root (lvmvol)
LogPrint "Creating LVM volume system/root"
lvm lvcreate -l 2560 -n root system >&2
component_created "/dev/mapper/system-root" "lvmvol"
else
LogPrint "Skipping /dev/mapper/system-root (lvmvol) as it has already been created."
fi
No-one but a computer likes to think in extents, so we size it to a comfortable 5G.
if create_component "/dev/mapper/system-root" "lvmvol" ; then
# Create /dev/mapper/system-root (lvmvol)
LogPrint "Creating LVM volume system/root"
lvm lvcreate -L 5G -n root system >&2
component_created "/dev/mapper/system-root" "lvmvol"
else
LogPrint "Skipping /dev/mapper/system-root (lvmvol) as it has already been created."
fi
Do the same thing for the other logical volumes and choose option 5 to continue.
1) View restore script (diskrestore.sh)
2) Edit restore script (diskrestore.sh)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#? 5
Start system layout restoration.
Creating partitions for disk /dev/sdb (msdos)
Please enter the password for disk(/dev/sdb2):
Enter LUKS passphrase:
Please re-enter the password for disk(/dev/sdb2):
Enter passphrase for /dev/sdb2:
Creating LVM PV /dev/mapper/disk
Creating LVM VG system
Creating LVM volume system/root
Creating LVM volume system/home
Creating LVM volume system/var
Creating LVM volume system/swap
Creating ext4-filesystem / on /dev/mapper/system-root
Mounting filesystem /
Creating ext4-filesystem /home on /dev/mapper/system-home
An error occurred during layout recreation.
1) View Relax-and-Recover log
2) View original disk space usage
3) Go to Relax-and-Recover shell
4) Edit restore script (diskrestore.sh)
5) Continue restore script
6) Abort Relax-and-Recover
#?
An error… Did you expect it? I didn’t.
Relax-and-Recover produces exceptionally good logs. Let’s check them.
+++ tune2fs -r 262144 -c 38 -i 180d /dev/mapper/system-home
tune2fs: reserved blocks count is too big (262144)
tune2fs 1.41.14 (22-Dec-2010)
Setting maximal mount count to 38
Setting interval between checks to 15552000 seconds
2011-09-10 16:27:35 An error occurred during layout recreation.
Yes, we resized the home volume from 20G to 2G in the previous step, so the reserved block count for the root user is now larger than the total number of available blocks.
Fixing it is simple. Edit the restore script, option 4. Find the code responsible for filesystem creation.
if create_component "fs:/home" "fs" ; then
# Create fs:/home (fs)
LogPrint "Creating ext4-filesystem /home on /dev/mapper/system-home"
mkfs -t ext4 -b 4096 /dev/mapper/system-home >&2
tune2fs -U e9310015-6043-48cd-a37d-78dbfdba1e3b /dev/mapper/system-home >&2
tune2fs -r 262144 -c 38 -i 180d /dev/mapper/system-home >&2
LogPrint "Mounting filesystem /home"
mkdir -p /mnt/local/home
mount /dev/mapper/system-home /mnt/local/home
component_created "fs:/home" "fs"
else
LogPrint "Skipping fs:/home (fs) as it has already been created."
fi
The -r parameter is causing the error. We just remove it and do the same for the other filesystems.
if create_component "fs:/home" "fs" ; then
# Create fs:/home (fs)
LogPrint "Creating ext4-filesystem /home on /dev/mapper/system-home"
mkfs -t ext4 -b 4096 /dev/mapper/system-home >&2
tune2fs -U e9310015-6043-48cd-a37d-78dbfdba1e3b /dev/mapper/system-home >&2
tune2fs -c 38 -i 180d /dev/mapper/system-home >&2
LogPrint "Mounting filesystem /home"
mkdir -p /mnt/local/home
mount /dev/mapper/system-home /mnt/local/home
component_created "fs:/home" "fs"
else
LogPrint "Skipping fs:/home (fs) as it has already been created."
fi
Continue the restore script.
1) View Relax-and-Recover log
2) View original disk space usage
3) Go to Relax-and-Recover shell
4) Edit restore script (diskrestore.sh)
5) Continue restore script
6) Abort Relax-and-Recover
#? 5
Start system layout restoration.
Skipping /dev/sdb (disk) as it has already been created.
Skipping /dev/sdb1 (part) as it has already been created.
Skipping /dev/sdb2 (part) as it has already been created.
Skipping /dev/sdb3 (part) as it has already been created.
Skipping /dev/mapper/disk (crypt) as it has already been created.
Skipping pv:/dev/mapper/disk (lvmdev) as it has already been created.
Skipping /dev/system (lvmgrp) as it has already been created.
Skipping /dev/mapper/system-root (lvmvol) as it has already been created.
Skipping /dev/mapper/system-home (lvmvol) as it has already been created.
Skipping /dev/mapper/system-var (lvmvol) as it has already been created.
Skipping /dev/mapper/system-swap (lvmvol) as it has already been created.
Skipping fs:/ (fs) as it has already been created.
Creating ext4-filesystem /home on /dev/mapper/system-home
Mounting filesystem /home
Creating ext4-filesystem /var on /dev/mapper/system-var
Mounting filesystem /var
Creating ext3-filesystem /boot on /dev/sdb1
Mounting filesystem /boot
Creating swap on /dev/mapper/system-swap
Disk layout created.
That looks the way we want it. Notice how Relax-and-Recover detected that it had already created quite a few components and did not try to recreate them anymore.
6.5.2. Planning In Advance
Relax-and-Recover makes it possible to define the layout on the target system
even before the backup is taken. All one has to do is to move the
/var/lib/rear/layout/disklayout.conf
file to /etc/rear/disklayout.conf
and
edit it. This won’t be overwritten on future backup runs. During recovery,
Relax-and-Recover will use that file instead of the snapshot of the original
system.
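A minimal sketch of that procedure, run on the original system before taking the backup:
rear savelayout
mv /var/lib/rear/layout/disklayout.conf /etc/rear/disklayout.conf
vi /etc/rear/disklayout.conf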
6.6. Disk layout file syntax
This section describes the syntax of all components in the Relax-and-Recover layout file at /var/lib/rear/layout/disklayout.conf. The syntax used to describe it is straightforward. Normal text has to be present verbatim in the file. Angle brackets "<" and ">" delimit a value that can be edited. Quotes (") inside the angle brackets indicate a verbatim option, often used together with a / to indicate multiple options. Parentheses "(" and ")" contain the expected unit; no unit suffix should be present unless specifically indicated. Square brackets "[" and "]" indicate an optional parameter, which can be omitted when hand-crafting a layout file line.
6.6.1. Disks
disk <name> <size(B)> <partition label>
6.6.2. Partitions
part <disk name> <size(B)> <start(B)> <partition name/type> <flags/"none"> <partition name>
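For example, the boot partition from the sample layout earlier in this chapter decomposes as follows: it is a 209682432-byte primary partition on /dev/sda, starting at byte 32768, with the boot flag, named /dev/sda1.
part /dev/sda 209682432 32768 primary boot /dev/sda1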
6.6.3. Software RAID
raid /dev/<name> level=<RAID level> raid-devices=<nr of devices> [uuid=<uuid>] [spare-devices=<nr of spares>] [layout=<RAID layout>] [chunk=<chunk size>] devices=<device1,device2,...>
6.6.4. Physical Volumes
lvmdev /dev/<volume group name> <device> <UUID> [<size(K)>]
6.6.5. Volume Groups
lvmgrp /dev/<volume group name> <extent size(B)> [<number of extents>] [<size(K)>]
6.6.6. Logical Volumes
lvmvol /dev/<volume group name> <logical volume name> <number of extents> [<size(K)>]
6.6.7. LUKS Devices
crypt /dev/mapper/<name> <device> [cipher=<cipher>] [key_size=<key size>] [hash=<hash function>] [uuid=<uuid>] [keyfile=<keyfile>] [password=<password>]
6.6.8. DRBD
drbd /dev/drbd<nr> <drbd resource name> <device>
6.6.9. Filesystems
fs <device> <mountpoint> <filesystem type> [uuid=<uuid>] [label=<label>] [blocksize=<block size(B)>] [reserved_blocks=<nr of reserved blocks>] [max_mounts=<nr>] [check_interval=<number of days>d] [options=<filesystem options>]
6.6.10. Btrfs Default SubVolumes
btrfsdefaultsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
6.6.11. Btrfs Normal SubVolumes
btrfsnormalsubvol <device> <mountpoint> <btrfs_subvolume_ID> <btrfs_subvolume_path>
6.6.12. Btrfs Mounted SubVolumes
btrfsmountedsubvol <device> <subvolume_mountpoint> <mount_options> <btrfs_subvolume_path>
6.6.13. Swap
swap <device> [uuid=<uuid>] [label=<label>]
6.6.14. HP SmartArray Controllers
smartarray <slot number>
6.6.15. HP SmartArray Logical Drives
logicaldrive <device> <slot nr>|<array name>|<logical drive name> raid=<raid level> drives=<drive1,drive2> [spares=<spare1,spare2>] [sectors=<sectors>] [stripesize=<stripe size>]
6.6.16. TCG Opal 2-compliant Self-Encrypting Disks
opaldisk <device> [boot=<[yn]>] [password=<password>]
6.7. Disk Restore Script (recover mode)
The /var/lib/rear/layout/disklayout.conf file is used as input during rear recover to create, on the fly, a script called /var/lib/rear/layout/diskrestore.sh.
When something goes wrong during the recreation of partitions, volume groups and so on, you are dropped into edit mode and can make the modifications needed. However, it is desirable to have a preview mode so that you can review the diskrestore.sh script before doing any recovery. It is better to find mistakes, obsolete arguments and so on sooner rather than later, right?
Gratien wrote a script to accomplish this (the script is not part of ReaR) which is meant for debugging purposes only. For more details see http://www.it3.be/2016/06/08/rear-diskrestore/
7. Tips and Tricks using Relax-and-Recover
Recovering a system should not be a struggle against time with poor tools working against you. Relax-and-Recover emphasizes a relaxing recovery, and to that end it follows three distinct rules:
-
Do what is generally expected, if possible
-
In doubt, ask the operator or allow intervention
-
Provide an environment that is as convenient as possible
This results in the following useful tips and tricks:
Tip
|
|
8. Troubleshooting Relax-and-Recover
If you encounter a problem, you may find more information in the log file, which is located at /var/log/rear/rear-system.log. During recovery the backup log file is also available from /var/log/rear/. For your convenience, the shell history in rescue mode comes with useful commands for debugging; use the up arrow key in the shell to find those commands.
There are a few options in Relax-and-Recover to help you debug the situation:
-
use the -v option to show progress output during execution
-
use the -d option to have debug information in your log file
-
use the -s option to see what scripts Relax-and-Recover would be using; this is useful to understand how Relax-and-Recover is working internally
-
use the -S option to step through each individual script when troubleshooting
-
use the -D option to dump every function call to the log file; this is very convenient during development or when troubleshooting Relax-and-Recover
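For example, to preview which scripts a given workflow would run and then run it with progress output and debug information in the log file:
rear -s mkrescue
rear -d -v mkrescue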
8.1. During backup
During backup Relax-and-Recover creates a description of your system layout in one file (disklayout.conf) and stores this as part of its rescue image. This file describes the configuration of SmartArray RAID, partitions, software RAID, DRBD, logical volumes, filesystems, and possibly more.
Here is a list of known issues during backup:
-
One or more HP SmartArray controllers have errors
Relax-and-Recover has detected that one of your HP SmartArray controllers is in ERROR state and as a result it cannot trust the information returned from that controller. This can be dangerous because we cannot guarantee that the disk layout is valid when recovering the system.
We discovered that this problem can be caused by a controller that still has information in its cache that has not been flushed. The only way to solve it was to reboot the system and press F2 during the controller initialization when it reports this problem.
-
USB sticks disappear and re-appear with a different device name
We have had issues before with a specific batch of JetFlash USB sticks which, during write operations, reset the USB controller because of a bug in the Linux kernel. The behavior is that the device disappears (during write operations!) and reappears with a different device name. The result is that the filesystem becomes corrupt and the stick cannot be used.
To verify whether a USB stick has any issues like this, we recommend using the f3 tool on Linux or the h2testw tool on Windows. If this tool succeeds in a write-and-verify test, the USB stick is reliable.
8.2. During recovery
During restore Relax-and-Recover uses the saved system layout as the basis for recreating a workable layout on your new system. If your new hardware is very different, it’s advised to copy the layout file /var/lib/rear/layout/disklayout.conf to /etc/rear and modify it according to what is required.
cp /var/lib/rear/layout/disklayout.conf /etc/rear/
vi /etc/rear/disklayout.conf
Then restart the recovery process: rear recover
During the recovery process, Relax-and-Recover translates this layout file into a shell procedure (/var/lib/rear/layout/diskrestore.sh) that contains all the needed instructions for recreating your desired layout.
If Relax-and-Recover comes across irreconcilable differences, it provides you with a small menu of options. In any case you can abort from the menu, clean up everything Relax-and-Recover may already have done (e.g. with mdadm --stop --scan or vgchange -a n), and retry.
In any case, you will have to look into the issue, see what goes wrong and either fix the layout file (disklayout.conf) and restart the recovery process (rear recover), or fix the shell procedure (diskrestore.sh) and choose Retry.
Warning
|
Customizations to the shell procedure (diskrestore.sh) get
lost when restarting rear recover . |
Here is a list of known issues during recovery:
-
Failed to clear HP SmartArray controller 1
This error may be caused by trying to clear an HP SmartArray controller that does not have a configuration or does not exist. Since we have no means to know whether this is a fatal condition or not we simply try to recreate the logical drive(s) and see what happens.
This message is harmless, but may help troubleshoot the subsequent error message.
-
An error has been detected during restore
The (generated) layout restore script /var/lib/rear/layout/diskrestore.sh was not able to perform all necessary steps without error. The system will provide you with a menu allowing you to fix the diskrestore.sh script manually and continue from where it left off.
Cannot create array.
Cannot add physical drive 2I:1:5
Could not configure the HP SmartArray controllers
When the number of physical or logical disks is different, or when other important system characteristics that matter to recovery are incompatible, this will be indicated by a multitude of possible error messages. Relax-and-Recover makes it possible to recover by hand in these cases as well.
You can find more information about your HP SmartArray setup by running one of the following commands:
# hpacucli ctrl all show detail
# hpacucli ctrl all show config
# hpacucli ctrl all show config detail
Tip
|
You can find these commands as part of the history of the Relax-and-Recover shell. |
9. Design concepts
Schlomo Schapiro, Gratien D’haese, Dag Wieers
9.1. The Workflow System
Relax-and-Recover is built as a modular framework. A call of rear <command>
will invoke the following general workflow:
-
Configuration: Collect system information to assemble a correct configuration (default, arch, OS, OS_ARCH, OS_VER, site, local). See the output of rear dump for an example.
Read config files for the combination of system attributes. default.conf is always read first; site.conf and local.conf are read last.
-
Create work area in /tmp/rear.$$/ and start logging to /var/log/rear/rear-hostname.log
-
Run the workflow script for the specified command: /usr/share/rear/lib/<command>-workflow.sh
-
Cleanup work area
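As a rough illustration of the configuration step (a simplified sketch, not the actual ReaR code, and omitting the arch- and OS-specific files), the effect is similar to sourcing the matching configuration files in a fixed order:
# Simplified sketch of the configuration cascade: default.conf always comes
# first, site.conf and local.conf always come last
for conf in /usr/share/rear/conf/default.conf /etc/rear/site.conf /etc/rear/local.conf ; do
    test -r "$conf" && source "$conf"
done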
9.2. Workflow - Make Rescue Media
The application will have the following general workflow which is represented by appropriately named scripts in various subdirectories:
-
Prep: Prepare the build area by copying a skeleton filesystem layout. This can also come from various sources (FS layout for arch, OS, OS_VER, Backup-SW, Output, …)
-
Analyse disklayout: Analyse the system disklayout to create the /var/lib/rear/layout/ data
-
Analyse (Rescue): Analyse the system to create the rescue system (network, binary dependencies, …)
-
Build: Build the rescue image by copying together everything required
-
Pack: Package the kernel and initrd image together
-
Backup: (Optionally) run the backup software to create a current backup
-
Output: Copy / Install the rescue system (kernel+initrd+(optionally) backups) into the target environment (e.g. PXE boot, write on tape, write on CD/DVD)
-
Cleanup: Cleanup the build area from temporary files
The configuration must define the BACKUP and OUTPUT methods. Valid choices are:
NAME        | TYPE   | Description                              | Implement in Phase
NETFS       | BACKUP | Copy files to NFS/CIFS share             | done
TAPE        | BACKUP | Copy files to tape(s)                    | done
DUPLICITY   | BACKUP | Copy files to the Cloud                  | done
NSR         | BACKUP | Use Legato Networker                     | done
TSM         | BACKUP | Use Tivoli Storage Manager               | done
DP          | BACKUP | Use HP DataProtector                     | done
NBU         | BACKUP | Use Symantec NetBackup                   | done
BACULA      | BACKUP | Use Bacula                               | done
BAREOS      | BACKUP | Use fork of Bacula                       | done
RSYNC       | BACKUP | Use rsync to remote location             | done
RBME        | BACKUP | Use Rsync Backup Made Easy               | done
FDRUPSTREAM | BACKUP | Use FDR/Upstream                         | done
BORG        | BACKUP | Use Borg                                 | done
ISO         | OUTPUT | Write result to ISO9660 image            | done
OBDR        | OUTPUT | Create OBDR Tape                         | done
PXE         | OUTPUT | Create PXE bootable files on TFTP server | done
USB         | OUTPUT | Create bootable USB device               | done
9.3. Workflow - Recovery
The result of the analysis is written into configuration files under /etc/rear/recovery/. This directory is copied together with the other Relax-and-Recover directories onto the rescue system where the same framework runs a different workflow - the recovery workflow.
The recovery workflow consists of these parts (identically named modules are indeed the same):
-
Config: By utilizing the same configuration module, the same configuration variables are available for the recovery, too. This makes writing pairs of backup/restore modules much easier.
-
Verify: Verify the integrity and sanity of the recovery data and check the hardware found to determine whether a recovery is likely to succeed. If not, we abort the workflow so as not to touch the hard disks if we don’t believe that we would manage to successfully recover the system on this hardware.
-
Recreate: Recreate the FS layout (partitioning, LVM, raid, filesystems, …) and mount it under /mnt/local
-
Restore: Restore files and directories from the backup to /mnt/local/. This module is the counterpart of the Backup module
-
Finalize: Install boot loader, finalize system, dump recovery log onto /var/log/rear/ in the recovered system.
9.4. FS layout
Relax-and-Recover tries to be as LSB-compliant as possible. Therefore ReaR is installed into the usual locations:
- /etc/rear/
-
Configurations
- /usr/sbin/rear
-
Main program
- /usr/share/rear/
-
Internal scripts
- /tmp/rear.$$/
-
Build area
9.4.1. Layout of /etc/rear
- default.conf
-
Default configuration - will define EVERY variable with a sane default setting. Serves also as a reference for the available variables
- site.conf
-
site wide configuration (optional)
- local.conf
-
local machine configuration (optional)
- $(uname -s)-$(uname -i).conf
-
architecture specific configuration (optional)
- $(uname -o).conf
-
OS system (e.g. GNU/Linux.conf) (optional)
- $OS/$OS_VER.conf
-
OS and OS Version specific configuration (optional)
- templates/
-
Directory to keep user-changeable templates for various files used or generated
- templates/PXE_per_node_config
-
template for pxelinux.cfg per-node configurations
- templates/CDROM_isolinux.cfg
-
isolinux.cfg template
- templates/…
-
other templates as the need arises
- recovery/…
-
Recovery information
9.4.2. Layout of /usr/share/rear
- skel/default/
-
default rescue FS skeleton
- skel/$(uname -i)/
-
arch specific rescue FS skeleton (optional)
- skel/$OS_$OS_VER/
-
OS-specific rescue FS skeleton (optional)
- skel/$BACKUP/
-
Backup-SW specific rescue FS skeleton (optional)
- skel/$OUTPUT/
-
Output-Method specific rescue FS skeleton (optional)
- lib/*.sh
-
function definitions, split into files by their topic
- prep/default/*.sh
- prep/$(uname -i)/*.sh
- prep/$OS_$OS_VER/*.sh
- prep/$BACKUP/*.sh
- prep/$OUTPUT/*.sh
-
Prep scripts. The scripts get merged from the applicable directories and executed in their alphabetical order. Naming conventions are: <nn>_name.sh where 00 < nn < 99
- layout/compare/default/
- layout/compare/$OS_$OS_VER/
-
Scripts to compare the saved layout (under /var/lib/rear/layout/) with the actual situation. This is used by workflow rear checklayout and may trigger a new run of rear mkrescue or rear mkbackup
- layout/precompare/default/
- layout/precompare/$OS_$OS_VER/
- layout/prepare/default/
- layout/prepare/$OS_$OS_VER/
- layout/recreate/default/
- layout/recreate/$OS_$OS_VER/
- layout/save/default/
- layout/save/$OS_$OS_VER/
-
Scripts to capture the disk layout and write it into /var/lib/rear/layout/ directory
- rescue/…
-
Analyse-Rescue scripts: …
- build/…
-
Build scripts: …
- pack/…
-
Pack scripts: …
- backup/$BACKUP/*.sh
-
Backup scripts: …
- output/$OUTPUT/*.sh
-
Output scripts: …
- verify/…
-
Verify the recovery data against the hardware found, whether we can successfully recover the system
- recreate/…
-
Recreate file systems and their dependencies
- restore/$BACKUP/…
-
Restore data from backup media
- finalize/…
-
Finalization scripts
9.5. Inter-module communication
The various stages and modules communicate via standardized environment variables:
NAME           | TYPE        | Description                         | Example
CONFIG_DIR     | STRING (RO) | Configuration dir                   | /etc/rear/
SHARE_DIR      | STRING (RO) | Shared data dir                     | /usr/share/rear/
BUILD_DIR      | STRING (RO) | Build directory                     | /tmp/rear.$$/
ROOTFS_DIR     | STRING (RO) | Root FS directory for rescue system | /tmp/rear.$$/initrd/
TARGET_FS_ROOT | STRING (RO) | Directory for restore               | /mnt/local
PROGS          | LIST        | Program files to copy               |
MODULES        | LIST        | Modules to copy                     |
COPY_AS_IS     | LIST        | Files (with path) to copy as-is     | /etc/localtime …
RO means that the framework manages this variable and modules and methods shouldn’t change it.
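The LIST variables, on the other hand, are meant to be extended, typically from a configuration file. A minimal sketch (the extra program and file below are arbitrary examples, not defaults):
# e.g. in /etc/rear/local.conf: add an extra program and an extra file to the rescue system
PROGS=( "${PROGS[@]}" vim )
COPY_AS_IS=( "${COPY_AS_IS[@]}" /etc/motd )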
9.6. Major changes compared with mkCDrec
-
No Makefiles
-
Major script called xxx that arranges all
-
Simplify the testing and configuration
-
Being less verbose
-
Better control over echo to screen, log file or debugging
-
Less color
-
Easier integration with third party software (GPL or commercial)
-
Modular and plug-ins should be easy for end-users
-
Better documentation for developers
-
Cut the overhead - less is better
-
Less choices (⇒ less errors)
-
mkCDrec project is obsolete
10. Integrating external backup programs into ReaR
Relax-and-Recover can also be used to restore only the disk layout and boot loader of your system. In that case you are responsible for taking backups and, more importantly, for restoring them before you reboot the recovered system.
However, we have already successfully integrated external backup programs with ReaR, such as NetBackup, EMC NetWorker, Tivoli Storage Manager and Data Protector, to name a few commercial backup programs. Furthermore, open source backup programs that also work with ReaR are Bacula, Bareos, Duplicity and Borg, to name the best-known ones.
Ah, but your backup program, which is the best of course, is not yet integrated with ReaR. How should we proceed to make your backup program work with ReaR? Here is a step-by-step approach.
The mkrescue workflow is the only one needed, because mkbackup would not create any backup anyway - the backup is made outside ReaR. Very important to know.
10.1. Think before coding
Well, what does this mean? Is my backup program capable of making full backups of my root disks, including ACLs? And, as usual, did we already test a restore of a complete system? Can we do a restore via the command line, or do we need a graphical user interface to make this happen? If the CLI approach works, then this is the preferred manner for ReaR. If, on the other hand, only the GUI approach is possible, can you initiate a push from the media server instead of the pull method (which we could program within ReaR)?
So, the most important things to remember here are:
-
CLI - preferred method (and far the easiest one to integrate within ReaR) - pull method
-
GUI - as ReaR has no X Windows available (only the command line) we cannot use the GUI within ReaR; however, the GUI can still be used from another system (media server or backup server) to push the restore out to the recovered system. This method is similar to the REQUESTRESTORE BACKUP method.
What does ReaR need to have on board before we can initiate a restore from your backup program?
-
the executables (and libraries) from your backup program (only client related)
-
configuration files required by above executables?
-
most likely you will need to consult the manuals of your backup program to gather some background information about its minimum requirements
10.2. Steal code from previous backup integrations
Do not make your life too difficult by re-inventing the wheel. Have a look at existing integrations. How?
Start with the default configuration file of ReaR:
$ cd /usr/share/rear/conf
$ grep -r NBU *
default.conf:# BACKUP=NBU stuff (Symantec/Veritas NetBackup)
default.conf:COPY_AS_IS_NBU=( /usr/openv/bin/vnetd /usr/openv/bin/vopied /usr/openv/lib /usr/openv/netbackup /usr/openv/var/auth/[mn]*.txt )
default.conf:COPY_AS_IS_EXCLUDE_NBU=( "/usr/openv/netbackup/logs/*" "/usr/openv/netbackup/bin/bpjava*" "/usr/openv/netbackup/bin/xbp" )
default.conf:PROGS_NBU=( )
What does this teach you?
-
you need to define a backup method name, e.g.
BACKUP=NBU
(must be unique within ReaR!) -
define some new variables to automatically copy executables into the ReaR rescue image, and one to exclude stuff which is not required by the recovery (this means you have to play with it and fine-tune it)
-
finally, define a placeholder array for your backup programs (it is empty to start with).
Now, you have defined a new BACKUP scheme name, right? As an example take the name BURP (http://burp.grke.org/).
Define in /usr/share/rear/conf/default.conf:
# BACKUP=BURP section (Burp program stuff)
COPY_AS_IS_BURP=( )
COPY_AS_IS_EXCLUDE_BURP=( )
PROGS_BURP=( )
Of course, the tricky part is what the above arrays should contain. That you should already know, as it was part of the first task (Think before coding).
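Purely as a hypothetical illustration - the actual paths depend on how the burp client is installed on your system - the arrays might end up looking something like this:
# Hypothetical values - verify against your own burp client installation
COPY_AS_IS_BURP=( /usr/sbin/burp /etc/burp )
COPY_AS_IS_EXCLUDE_BURP=( )
PROGS_BURP=( burp )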
This is only the start of learning what others have done before:
$ cd /usr/share/rear
$ find . -name NBU
./finalize/NBU
./prep/NBU
./rescue/NBU
./restore/NBU
./skel/NBU
./verify/NBU
What does this mean? Well, these are directories created for Netbackup and beneath these directories are scripts that will be included during the mkrescue and recover work-flows.
Again, think BURP: you probably also need to create these directories:
$ mkdir --mode=755 /usr/share/rear/{finalize,prep,rescue,restore,verify}/BURP
Another easy trick is to look at the existing scripts of NBU (as a starter):
$ sudo rear -s mkrescue | grep NBU
Source prep/NBU/default/400_prep_nbu.sh
Source prep/NBU/default/450_check_nbu_client_configured.sh
Source rescue/NBU/default/450_prepare_netbackup.sh
Source rescue/NBU/default/450_prepare_xinetd.sh
$ sudo rear -s recover | grep NBU
Source verify/NBU/default/380_request_client_destination.sh
Source verify/NBU/default/390_request_point_in_time_restore_parameters.sh
Source verify/NBU/default/400_verify_nbu.sh
Source restore/NBU/default/300_create_nbu_restore_fs_list.sh
Source restore/NBU/default/400_restore_with_nbu.sh
Source finalize/NBU/default/990_copy_bplogrestorelog.sh
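To give an idea of what such a script could look like, here is a purely hypothetical skeleton for a restore script, following the NBU naming pattern shown above. The restore command itself is a placeholder; replace it with your backup client's real CLI invocation, restoring into $TARGET_FS_ROOT (normally /mnt/local):
# Hypothetical sketch: /usr/share/rear/restore/BURP/default/400_restore_with_burp.sh
LogPrint "Restoring files with BURP into $TARGET_FS_ROOT"
# Placeholder - replace with the actual restore command of your backup client,
# making sure it restores into $TARGET_FS_ROOT and not into /
my_backup_restore_command --destination "$TARGET_FS_ROOT" \
    || Error "BURP restore into $TARGET_FS_ROOT failed"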
11. Using Multiple Backups for Relax-and-Recover
11.1. Basics
Currently multiple backups are only supported for:
-
the internal BACKUP=NETFS method with BACKUP_TYPE=""
-
the internal BACKUP=BLOCKCLONE method
-
the external BACKUP=BORG method
In general multiple backups are not supported for BACKUP_TYPE=incremental or BACKUP_TYPE=differential because those require special backup archive file names.
11.1.1. The basic idea behind
A "rear mkbackup" run can be split into a "rear mkrescue" run plus a "rear mkbackuponly" run and the result is still the same.
Accordingly "rear mkbackup" can be split into a single "rear mkrescue" plus multiple "rear mkbackuponly" where each particular "rear mkbackuponly" backups only a particular part of the files of the system, for example:
-
a backup of the files of the basic system
-
a backup of the files in the /home directories
-
a backup of the files in the /opt directory
Multiple "rear mkbackuponly" require that each particular "rear mkbackuponly" uses a specific ReaR configuration file that specifies how that particular "rear mkbackuponly" must be done.
Therefore the -C command line parameter is needed where an additional ReaR configuration file can be specified.
11.1.2. The basic way how to create multiple backups
Have common settings in /etc/rear/local.conf
For each particular backup, specify its parameters in a separate additional configuration file like
/etc/rear/basic_system.conf
/etc/rear/home_backup.conf
/etc/rear/opt_backup.conf
First create the ReaR recovery/rescue system ISO image together with a backup of the files of the basic system:
rear -C basic_system mkbackup
Then backup the files in the /home directories:
rear -C home_backup mkbackuponly
Afterwards backup the files in the /opt directory:
rear -C opt_backup mkbackuponly
11.1.3. The basic way how to recover with multiple backups
The basic idea how to recover with multiple backups is to split the "rear recover" into an initial recovery of the basic system followed by several backup restore operations as follows:
Boot the ReaR recovery/rescue system.
In the ReaR recovery/rescue system do the following:
First recover the basic system:
rear -C basic_system recover
Then restore the files in the /home directories:
rear -C home_backup restoreonly
Afterwards restore the files in the /opt directory:
rear -C opt_backup restoreonly
Finally reboot the recreated system.
For more internal details and some background information see https://github.com/rear/rear/issues/1088
11.2. Relax-and-Recover Setup for Multiple Backups
Assume, for example, that multiple backups should be done using the NETFS backup method with tar as backup program to get separate backups for:
-
the files of the basic system
-
the files in the /home directories
-
the files in the /opt directory
Those four configuration files could be used:
/etc/rear/local.conf:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup
/etc/rear/basic_system.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/home/*' '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
/etc/rear/home_backup.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/home/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
/etc/rear/opt_backup.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
The BACKUP_ONLY_INCLUDE setting is described in conf/default.conf.
With those config files, creating the ReaR recovery/rescue system ISO image and subsequently backing up the files of the system could be done like this:
rear mkrescue
rear -C basic_system mkbackuponly
rear -C home_backup mkbackuponly
rear -C opt_backup mkbackuponly
Recovery of that system could be done by calling in the ReaR recovery/rescue system:
rear -C basic_system recover
rear -C home_backup restoreonly
rear -C opt_backup restoreonly
Note that system recovery with multiple backups requires that first and foremost the basic system is recovered, i.e. all files must be restored that are needed to install the bootloader and to boot the basic system into a normal usable state.
Nowadays systemd usually needs files in the /usr directory, so in practice all files in the /usr directory must be restored during the initial basic system recovery, plus whatever else is needed to boot and run the basic system.
Multiple backups cannot be used to split the files of the basic system into several backups. The files of the basic system must be in one single backup, and that backup must be restored during the initial recovery of the basic system.
11.3. Relax-and-Recover Setup for Different Backup Methods
Because multiple backups are used via separate additional configuration files, different backup methods can be used.
Assume, for example, that multiple backups should be used to get a separate backup of the files of the basic system using the NETFS backup method with tar as backup program, and a separate backup of the files in the /home directory using the BORG backup method.
The configuration files could be like the following:
/etc/rear/local.conf:
OUTPUT=ISO
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" borg locale )
COPY_AS_IS=( "${COPY_AS_IS[@]}" "/borg/keys" )
/etc/rear/basic_system.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/home/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup
/etc/rear/home_backup.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP=BORG
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/home/*' )
BORGBACKUP_ARCHIVE_PREFIX="rear"
BORGBACKUP_HOST="borg.server.name"
BORGBACKUP_USERNAME="borg_server_username"
BORGBACKUP_REPO="/path/to/borg/repository/on/borg/server"
BORGBACKUP_PRUNE_HOURLY=5
BORGBACKUP_PRUNE_WEEKLY=2
BORGBACKUP_COMPRESSION="zlib,9"
BORGBACKUP_ENC_TYPE="keyfile"
export BORG_KEYS_DIR="/borg/keys"
export BORG_CACHE_DIR="/borg/cache"
export BORG_PASSPHRASE="a1b2c3_d4e5f6"
export BORG_RELOCATED_REPO_ACCESS_IS_OK="yes"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK="yes"
export BORG_REMOTE_PATH="/usr/local/bin/borg"
Using different backup methods requires getting all the binaries and all other needed files of all used backup methods into the ReaR recovery/rescue system during "rear mkbackup/mkrescue".
Those binaries and other needed files must be manually specified via REQUIRED_PROGS and COPY_AS_IS in /etc/rear/local.conf (regarding REQUIRED_PROGS and COPY_AS_IS see conf/default.conf).
With those config files, creating the ReaR recovery/rescue system ISO image together with a tar backup of the files of the basic system and a separate Borg backup of the files in /home could be done like this:
rear -C home_backup mkbackuponly
rear -C basic_system mkbackup
In contrast to the other examples above, the Borg backup is run first because Borg creates encryption keys during repository initialization. This ensures the right /borg/keys is created before it is copied into the ReaR recovery/rescue system by the subsequent "rear mkbackup/mkrescue". Alternatively, the ReaR recovery/rescue system could be created again after the Borg backup is done, like this:
rear -C basic_system mkbackup
rear -C home_backup mkbackuponly
rear -C basic_system mkrescue
Recovery of that system could be done by calling in the ReaR recovery/rescue system:
rear -C basic_system recover
rear -C home_backup restoreonly
11.4. Running Multiple Backups and Restores in Parallel
When the files in multiple backups are separated from each other it should work to run multiple backups or multiple restores in parallel.
Whether or not that actually works in your particular case depends on how you made the backups in your particular case.
For sufficiently well separated backups it should work to run multiple different
rear -C backup_config mkbackuponly
or multiple different
rear -C backup_config restoreonly
in parallel.
Running in parallel is only supported for mkbackuponly and restoreonly.
For example:
rear -C backup1 mkbackuponly & rear -C backup2 mkbackuponly & wait
or
rear -C backup1 restoreonly & rear -C backup2 restoreonly & wait
ReaR’s default logging is not prepared for multiple simultaneous runs, and neither is ReaR’s current progress subsystem. On the terminal the messages from different simultaneous runs are indistinguishable, and the current progress subsystem additionally outputs subsequent messages on one and the same line, which results in illegible and meaningless output on the terminal.
Therefore additional parameters must be set to make ReaR’s messages and the progress subsystem output appropriate for parallel runs.
Simultaneously running ReaR workflows require unique messages and unique logfile names.
Therefore the PID ($$) is specified to be used as message prefix for all ReaR messages and it is also added to the LOGFILE value.
The parameters MESSAGE_PREFIX, PROGRESS_MODE, and PROGRESS_WAIT_SECONDS are described in conf/default.conf.
For example a setup for parallel runs of mkbackuponly and restoreonly could look like the following:
/etc/rear/local.conf:
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://your.NFS.server.IP/path/to/your/rear/backup
MESSAGE_PREFIX="$$: "
PROGRESS_MODE="plain"
PROGRESS_WAIT_SECONDS="3"
/etc/rear/basic_system.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}-$$.log"
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/home/*' '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
/etc/rear/home_backup.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}-$$.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/home/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
/etc/rear/opt_backup.conf:
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}-$$.log"
BACKUP_ONLY_INCLUDE="yes"
BACKUP_PROG_INCLUDE=( '/opt/*' )
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
With those config files, creating the ReaR recovery/rescue system ISO image together with a backup of the files of the basic system, and then backing up the files in /home and /opt in parallel, could be done like this:
rear -C basic_system mkbackup
rear -C home_backup mkbackuponly & rear -C opt_backup mkbackuponly & wait
Recovery of that system could be done by calling in the ReaR recovery/rescue system:
rear -C basic_system recover
rear -C home_backup restoreonly & rear -C opt_backup restoreonly & wait
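If you want to verify that each parallel run actually succeeded, a small wrapper along the following lines could be used (a minimal sketch using plain Bash job control; the configuration names home_backup and opt_backup are the ones from this example):
```
#!/bin/bash
# Start both restores in parallel and remember their PIDs.
rear -C home_backup restoreonly & pid_home=$!
rear -C opt_backup restoreonly & pid_opt=$!
# Wait for each one separately and report its exit code.
wait "$pid_home" ; rc_home=$?
wait "$pid_opt"  ; rc_opt=$?
echo "home_backup restoreonly exited with $rc_home"
echo "opt_backup restoreonly exited with $rc_opt"
```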
Even on a relatively small system with a single CPU, running multiple backups and restores in parallel can be somewhat faster compared to sequential processing.
On powerful systems with multiple CPUs, much main memory, fast storage access, and fast access to the backups, it is in practice mandatory to split a single huge backup of the whole system into separate parts and to run at least the restores in parallel, in order to utilize the powerful hardware and be as fast as possible in case of emergency and time pressure during a real disaster recovery.
Remember that system recovery with multiple backups requires that first and foremost the basic system is recovered, where all files must be restored that are needed to install the bootloader and to boot the basic system into a normal usable state. Therefore "rear recover" cannot run in parallel with "rear restoreonly".
12. BLOCKCLONE as backup method
The BLOCKCLONE backup method is a somewhat different type of backup which works directly with block devices. It allows backing up any kind of block device that can be read and written by Linux drivers and saving the images to e.g. an NFS share or a USB drive for later restore. It currently integrates Disk Dump (dd) and ntfsclone (from the ntfs-3g package). With BLOCKCLONE, the user is also able to make a full backup and restore of dual boot (Linux / Windows) environments.
12.1. Limitations
-
Works only directly with disk partitions
-
GPT not supported (work in progress)
-
No UEFI support (work in progress)
-
Linux family boot loader must be used as primary (Windows bootloader was not tested)
-
Restore should be done to same sized or larger disks
-
Tests were made with Windows 7/10 with NFS and USB as destinations. Other ReaR backup destinations like SMB or FTP might however work as well.
12.2. Warning!
ReaR with BLOCKCLONE is capable of backing up Linux/Windows dual boot
environments. However, some basic knowledge of how the source OS is set up
is needed. Things like the boot loader device location, the Linux/Windows
partitioning and the file system layout are essential for the backup setup.
Always test before you rely!
12.3. Examples
12.3.1. 1. Backup/restore of arbitrary block device with BLOCKCLONE and dd on NFS server
Configuration
This is the most basic and simple scenario: we will back up a single
partition (/dev/sdc1) located on a separate disk (/dev/sdc).
First we need to set some global options in local.conf,
like the target for backups.
In our small example, backups will be stored in the /mnt/rear directory
on the BACKUP_URL NFS server.
```
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://<hostname>/mnt/rear
```
Now we will define the variables that apply only to the targeted block device:
```
# cat alien.conf
BACKUP=BLOCKCLONE                  # Define BLOCKCLONE as backup method
BACKUP_PROG_ARCHIVE="alien"        # Name of image file
BACKUP_PROG_SUFFIX=".dd.img"       # Suffix of image file
BACKUP_PROG_COMPRESS_SUFFIX=""     # Don't use additional suffixes

BLOCKCLONE_PROG=dd                 # Use dd for image creation
BLOCKCLONE_PROG_OPTS="bs=4k"       # Additional options that will be passed to dd
BLOCKCLONE_SOURCE_DEV="/dev/sdc1"  # Device that should be backed up

BLOCKCLONE_SAVE_MBR_DEV="/dev/sdc"                        # Device where partitioning information is stored (optional)
BLOCKCLONE_MBR_FILE="alien_boot_strap.img"                # Output filename for boot strap code
BLOCKCLONE_PARTITIONS_CONF_FILE="alien_partitions.conf"   # Output filename for partition configuration

BLOCKCLONE_ALLOW_MOUNTED="yes"     # Device can be mounted during backup (default NO)
```
Running backup
Save the partition configuration and boot strap code, and create the actual backup of /dev/sdc1:
```
# rear -C alien mkbackuponly
```
Running restore from ReaR restore/recovery system
```
# rear -C alien restoreonly
Restore alien.dd.img to device: [/dev/sdc1]            # User is always prompted for restore destination
Device /dev/sdc1 was not found.                        # If destination does not exist ReaR will try to create it
                                                       # (or fail if BLOCKCLONE_SAVE_MBR_DEV was not set during backup)
Restore partition layout to (^c to abort): [/dev/sdc]  # Prompt user for device where partition configuration should be restored
Checking that no-one is using this disk right now … OK

Disk /dev/sdc: 5 GiB, 5368709120 bytes, 10485760 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Script header accepted.
>>> Created a new DOS disklabel with disk identifier 0x10efb7a9.
Created a new partition 1 of type HPFS/NTFS/exFAT and of size 120 MiB.
/dev/sdc2:
New situation:

Device     Boot Start    End Sectors  Size Id Type
/dev/sdc1        4096 249855  245760  120M  7 HPFS/NTFS/exFAT

The partition table has been altered.
Calling ioctl() to re-read partition table.
Syncing disks.
```
Summary
In the first example we ran a backup of the /dev/sdc1 partition and stored it on an NFS server. The saved image was later restored from the ReaR rescue/recovery system. ReaR's BLOCKCLONE will always ask the user for the restore destination, so it is the user's responsibility to identify the right target disk/partition prior to restore. Unlike the NETFS backup method, no guesses about target devices will be made!
Tip
|
One of the easiest ways to identify the right disk is by its size. Running fdisk -l <device_file> can be helpful. |
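For example, for the 5 GiB disk used in this scenario the size check could look like this (output shortened):
```
# fdisk -l /dev/sdc
Disk /dev/sdc: 5 GiB, 5368709120 bytes, 10485760 sectors
...
```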
During the restore phase ReaR recognized that the target partition did not exist and asked whether it should be created. If the restore destination does not exist and BLOCKCLONE_SAVE_MBR_DEV was set during backup, ReaR will try to deploy the partition setup from the saved configuration files (BLOCKCLONE_MBR_FILE and BLOCKCLONE_PARTITIONS_CONF_FILE) and continue with the restore.
12.3.2. 2. Backup/restore of Linux / Windows 10 dual boot setup with each OS on separate disk
Configuration
In the next example we will do a backup/restore of Linux (installed on /dev/sda)
and Windows 10 (installed on /dev/sdb) using BLOCKCLONE and ntfsclone.
Tip
|
You can locate the right disk devices using df and os-prober:
```
# df -h /boot
Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        10G  4.9G  5.2G  49% /            # Linux is most probably installed on /dev/sda

# os-prober
/dev/sdb1:Windows 10 (loader):Windows:chain        # Windows 10 is most probably installed on /dev/sdb
```
|
First we will configure some global ReaR backup options (similar to the first example, we will do the backup/restore with the help of an NFS server).
```
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://<hostname>/mnt/rear
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" ntfsclone )
```
Now we will define backup parameters for Linux.
```
# cat base_os.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/media/*' )
```
Our Windows 10 is by default installed on two separate partitions (partition 1 for boot data and partition 2 for disk C:), so we will create a separate configuration file for each partition.
Windows boot partition:
```
# cat windows_boot.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_boot"
BACKUP_PROG_SUFFIX=".img"
BACKUP_PROG_COMPRESS_SUFFIX=""

BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_SOURCE_DEV="/dev/sdb1"
BLOCKCLONE_PROG_OPTS="--quiet"

BLOCKCLONE_SAVE_MBR_DEV="/dev/sdb"
BLOCKCLONE_MBR_FILE="windows_boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="windows_partitions.conf"
```
Windows data partition (disk C:\):
```
# cat windows_data.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_data"
BACKUP_PROG_SUFFIX=".img"
BACKUP_PROG_COMPRESS_SUFFIX=""

BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_SOURCE_DEV="/dev/sdb2"
BLOCKCLONE_PROG_OPTS="--quiet"

BLOCKCLONE_SAVE_MBR_DEV="/dev/sdb"
BLOCKCLONE_MBR_FILE="windows_boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="windows_partitions.conf"
```
Running backup
First we will create the backup of Linux. The mkbackup command will create a
bootable ISO image with the ReaR rescue/recovery system that will later be
used for booting the broken system and the subsequent recovery.
```
# rear -C base_os mkbackup
```
Now we create the backup of the Windows 10 boot partition. The mkbackuponly
command ensures that only the partition data and the partition layout will be saved
(the ReaR rescue/recovery system will not be created, which is exactly what we want).
```
# rear -C windows_boot mkbackuponly
```
Similarly, we create the backup of the Windows 10 data partition (disk C:\):
```
# rear -C windows_data mkbackuponly
```
Running restore from ReaR restore/recovery system
As a first step after the ReaR rescue/recovery system has booted, we will recover Linux. This step will recover all Linux file systems, OS data and the bootloader. The Windows disk will remain untouched.
```
# rear -C base_os recover
```
In the second step we will recover the Windows 10 boot partition. During this step ReaR
will detect that the destination partition is not present and ask us for the device
file where the partition(s) should be created. It doesn't really matter whether
we decide to recover the Windows 10 boot or data partition first.
The restoreonly command ensures that the previously restored Linux data and
partition configuration (currently mounted under /mnt/local) will
remain untouched. Before starting the Windows 10 recovery we should identify
the right disk for recovery; as mentioned earlier, the disk size can be a good start.
```
# fdisk -l /dev/sdb
Disk /dev/sdb: 50 GiB, 53687091200 bytes, 104857600 sectors
```
/dev/sdb looks to be the right destination, so we can proceed with the restore.
```
# rear -C windows_boot restoreonly
Restore windows_boot.img to device: [/dev/sdb1]
Device /dev/sdb1 was not found.
Restore partition layout to (^c to abort): [/dev/sdb]
Checking that no-one is using this disk right now … OK
…
```
The last step is to recover the Windows 10 OS data (C:\). The partitions on /dev/sdb were already created in the previous step, hence ReaR will skip the prompt for restoring the partition layout.
```
# rear -C windows_data restoreonly
Restore windows_data.img to device: [/dev/sdb2]
Ntfsclone image version: 10.1
Cluster size       : 4096 bytes
Image volume size  : 33833349120 bytes (33834 MB)
Image device size  : 33833353216 bytes
Space in use       : 9396 MB (27.8%)
Offset to image data : 56 (0x38) bytes
Restoring NTFS from image …
…
```
At this stage Linux, together with Windows 10, is successfully restored.
Tip
|
As the Linux part is still mounted under /mnt/local, you can make some final configuration changes, e.g. adapt the GRUB configuration, /etc/fstab, reinstall the boot loader … |
Tip
|
By default, ReaR will not include tools for mounting NTFS file systems. You
can add them manually by putting
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" ntfsclone mount.ntfs-3g )
into your local.conf |
12.3.3. 3. Backup/restore of Linux / Windows 10 dual boot setup sharing same disk
Configuration
In this example we will do a backup/restore of Linux and Windows 10
installed on the same disk (/dev/sda) using BLOCKCLONE and ntfsclone.
Linux is installed on partition /dev/sda3. Windows 10 is again divided into
a boot partition located on /dev/sda1 and OS data (C:) located on /dev/sda2.
Backups will be stored on an NFS server.
First we set the global ReaR options:
```
# cat local.conf
OUTPUT=ISO
BACKUP=NETFS
BACKUP_OPTIONS="nfsvers=3,nolock"
BACKUP_URL=nfs://<hostname>/mnt/rear
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" ntfsclone )

BLOCKCLONE_STRICT_PARTITIONING="yes"
BLOCKCLONE_SAVE_MBR_DEV="/dev/sda"

BLOCKCLONE_MBR_FILE="boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="partitions.conf"
```
Important
|
BLOCKCLONE_STRICT_PARTITIONING is mandatory when backing up Linux / Windows sharing one disk. Not using this option might result in an unbootable Windows 10 installation. |
Linux configuration:
```
# cat base_os.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/media/*' )
```
Windows 10 boot partition configuration:
```
# cat windows_boot.conf
BACKUP=BLOCKCLONE

BACKUP_PROG_ARCHIVE="windows_boot"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""

BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"

BLOCKCLONE_SOURCE_DEV="/dev/sda1"
```
Windows 10 data partition configuration:
```
# cat windows_data.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_data"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""

BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"

BLOCKCLONE_SOURCE_DEV="/dev/sda2"
```
Running backup
Backup of Linux:
```
# rear -C base_os mkbackup
```
Backup of Windows 10 boot partition:
```
# rear -C windows_boot mkbackuponly
```
Backup of Windows 10 data partition:
```
# rear -C windows_data mkbackuponly
```
Running restore from ReaR restore/recovery system
Restore Linux:
```
# rear -C base_os recover
```
During this step ReaR will also create both Windows 10 partitions.
Restore Windows 10 data partition:
```
# rear -C windows_data restoreonly
```
Restore Windows 10 boot partition:
```
# rear -C windows_boot restoreonly
```
12.3.4. 4. Backup/restore of Linux / Windows 10 dual boot setup sharing same disk with USB as destination
Configuration
In this example we will do a backup/restore of Linux and Windows 10
installed on the same disk (/dev/sda) using BLOCKCLONE and ntfsclone.
Linux is installed on partition /dev/sda3. Windows 10 is again divided into
a boot partition located on /dev/sda1 and OS data (C:) located on /dev/sda2.
Backups will be stored on a USB disk drive (/dev/sdb in this example).
Global options:
```
# cat local.conf
OUTPUT=USB
BACKUP=NETFS

USB_DEVICE=/dev/disk/by-label/REAR-000
BACKUP_URL=usb:///dev/disk/by-label/REAR-000

USB_SUFFIX="USB_backups"

GRUB_RESCUE=n
REQUIRED_PROGS=( "${REQUIRED_PROGS[@]}" ntfsclone )

BLOCKCLONE_STRICT_PARTITIONING="yes"
BLOCKCLONE_SAVE_MBR_DEV="/dev/sda"

BLOCKCLONE_MBR_FILE="boot_strap.img"
BLOCKCLONE_PARTITIONS_CONF_FILE="partitions.conf"
```
Options used during Linux backup/restore:
```
# cat base_os.conf
this_file_name=$( basename ${BASH_SOURCE[0]} )
LOGFILE="$LOG_DIR/rear-$HOSTNAME-$WORKFLOW-${this_file_name%.*}.log"
BACKUP_PROG_ARCHIVE="backup-${this_file_name%.*}"
BACKUP_PROG_EXCLUDE=( "${BACKUP_PROG_EXCLUDE[@]}" '/media/*' )
```
Important
|
The USB_SUFFIX option is mandatory as it prevents ReaR from keeping every backup in a separate directory; this behavior is essential for the BLOCKCLONE backup method to work correctly. |
Windows boot partition options:
```
# cat windows_boot.conf
BACKUP=BLOCKCLONE

BACKUP_PROG_ARCHIVE="windows_boot"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""

BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"

BLOCKCLONE_SOURCE_DEV="/dev/sda1"
```
Windows data partition options:
```
# cat windows_data.conf
BACKUP=BLOCKCLONE
BACKUP_PROG_ARCHIVE="windows_data"
BACKUP_PROG_SUFFIX=".nc.img"
BACKUP_PROG_COMPRESS_SUFFIX=""

BLOCKCLONE_PROG=ntfsclone
BLOCKCLONE_PROG_OPTS="--quiet"

BLOCKCLONE_SOURCE_DEV="/dev/sda2"
```
Running backup
First we need to format the target USB device with the rear format command:
```
# rear -v format /dev/sdb
Relax-and-Recover 2.00 / Git
Using log file: /var/log/rear/rear-centosd.log
USB device /dev/sdb is not formatted with ext2/3/4 or btrfs filesystem
Type exactly Yes to format /dev/sdb with ext3 filesystem: Yes
Repartitioning /dev/sdb
Creating partition table of type msdos on /dev/sdb
Creating ReaR data partition up to 100% of /dev/sdb
Setting boot flag on /dev/sdb
Creating ext3 filesystem with label REAR-000 on /dev/sdb1
Adjusting filesystem parameters on /dev/sdb1
```
Backup of Linux:
```
# rear -C base_os mkbackup
```
Backup of Windows 10 boot partition:
```
# rear -C windows_boot mkbackuponly
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 524283904 bytes (525 MB)
Current device size: 524288000 bytes (525 MB)
Scanning volume …
Accounting clusters …
Space in use       : 338 MB (64.4%)
Saving NTFS to image …
Syncing …
```
Backup of Windows 10 data partition:
```
# rear -C windows_data mkbackuponly
NTFS volume version: 3.1
Cluster size       : 4096 bytes
Current volume size: 18104709120 bytes (18105 MB)
Current device size: 18104713216 bytes (18105 MB)
Scanning volume …
Accounting clusters …
Space in use       : 9833 MB (54.3%)
Saving NTFS to image …
Syncing …
```
Running restore from ReaR restore/recovery system
For the sake of this demonstration I've purposely used ReaR's rescue/recovery medium
(the USB disk that holds our backed-up Linux and Windows 10) as /dev/sda and the
disk that will be used as the restore destination as /dev/sdb. This demonstrates
ReaR's ability to recover a backup to an arbitrary disk.
As a first step Linux will be restored; this will create all the partitions
needed, even those used by Windows 10.
```
RESCUE centosd:~ # rear -C base_os recover
Relax-and-Recover 2.00 / Git
Using log file: /var/log/rear/rear-centosd.log
Sourcing additional configuration file /etc/rear/base_os.conf
Running workflow recover within the ReaR rescue/recovery system
Starting required daemons for NFS: RPC portmapper (portmap or rpcbind) and rpc.statd if available.
Started RPC portmapper rpcbind.
RPC portmapper rpcbind available.
Started rpc.statd.
RPC status rpc.statd available.
Using backup archive /tmp/rear.70zIHqCYsIbtlr6/outputfs/centosd/backup-base_os.tar.gz
Calculating backup archive size
Backup archive size is 1001M /tmp/rear.70zIHqCYsIbtlr6/outputfs/centosd/backup-base_os.tar.gz (compressed)
Comparing disks.
Device sda has size 15733161984, 37580963840 expected
Switching to manual disk layout configuration.
Original disk /dev/sda does not exist in the target system. Please choose an appropriate replacement.
1) /dev/sda
2) /dev/sdb
3) Do not map disk.
#?
```
Now the ReaR recover command stops as it detected that the disk layout is not identical. As our desired restore target is /dev/sdb, we choose the right disk and continue the recovery. ReaR will ask to check the created restore scripts, but this is not needed in our scenario.
```
#? 2
2017-01-25 20:54:01 Disk /dev/sdb chosen as replacement for /dev/sda.
Disk /dev/sdb chosen as replacement for /dev/sda.
This is the disk mapping table:
    /dev/sda /dev/sdb
Please confirm that /var/lib/rear/layout/disklayout.conf is as you expect.

1) View disk layout (disklayout.conf)    4) Go to Relax-and-Recover shell
2) Edit disk layout (disklayout.conf)    5) Continue recovery
3) View original disk space usage        6) Abort Relax-and-Recover
#? 5
Partition primary on /dev/sdb: size reduced to fit on disk.
Please confirm that /var/lib/rear/layout/diskrestore.sh is as you expect.

1) View restore script (diskrestore.sh)
2) Edit restore script (diskrestore.sh)
3) View original disk space usage
4) Go to Relax-and-Recover shell
5) Continue recovery
6) Abort Relax-and-Recover
#? 5
Start system layout restoration.
Creating partitions for disk /dev/sdb (msdos)
Disk /dev/sdb: 6527 cylinders, 255 heads, 63 sectors/track
Old situation:
Units: cylinders of 8225280 bytes, blocks of 1024 bytes, counting from 0
Device Boot Start End #cyls #blocks Id System
/dev/sdb1 * 0+ 91- 92- 731449 83 Linux
/dev/sdb2 91+ 3235- 3145- 25258396+ 83 Linux
/dev/sdb3 3235+ 6527- 3292- 26436900 83 Linux
/dev/sdb4 0 - 0 0 0 Empty
New situation:
Units: sectors of 512 bytes, counting from 0
Device Boot Start End #sectors Id System
/dev/sdb1 * 2048 1026047 1024000 7 HPFS/NTFS/exFAT
/dev/sdb2 1026048 36386815 35360768 7 HPFS/NTFS/exFAT
/dev/sdb3 36386816 73400319 37013504 83 Linux
/dev/sdb4 0 - 0 0 Empty
Successfully wrote the new partition table
Re-reading the partition table …
Creating filesystem of type xfs with mount point / on /dev/sdb3.
Mounting filesystem /
Disk layout created.
Restoring from /tmp/rear.70zIHqCYsIbtlr6/outputfs/centosd/backup-base_os.tar.gz…
Restoring usr/lib/modules/3.10.0-514.2.2.el7.x86_64/kernel/drivers/net/wireless/realtek/rtlwifi/rtl8723be/rtl8723be.ko
Restoring var/log/rear/rear-centosd.log
OK
Restored 2110 MiB in 103 seconds [avg. 20977 KiB/sec]
Restoring finished.
Restore the Mountpoints (with permissions) from /var/lib/rear/recovery/mountpoint_permissions
Patching /etc/default/grub instead of etc/sysconfig/grub
Patching /proc/1909/mounts instead of etc/mtab
Skip installing GRUB Legacy boot loader because GRUB 2 is installed (grub-probe or grub2-probe exist).
Installing GRUB2 boot loader
Finished recovering your system. You can explore it under /mnt/local.
Saving /var/log/rear/rear-centosd.log as /var/log/rear/rear-centosd-recover-base_os.log
```
Now we have the Linux part restored, GRUB installed and the partitions created, hence we can continue with the Windows 10 boot partition recovery.
```
RESCUE centosd:~ # rear -C windows_boot restoreonly
Restore windows_boot.nc.img to device: [/dev/sda1] /dev/sdb1
Ntfsclone image version: 10.1
Cluster size       : 4096 bytes
Image volume size  : 524283904 bytes (525 MB)
Image device size  : 524288000 bytes
Space in use       : 338 MB (64.4%)
Offset to image data : 56 (0x38) bytes
Restoring NTFS from image …
Syncing …
```
Similarly to the Linux restore, we are prompted for the restore destination, which
is /dev/sdb1 in our case.
As the last step we will recover the Windows 10 data partition:
```
RESCUE centosd:~ # rear -C windows_data restoreonly
Restore windows_data.nc.img to device: [/dev/sda2] /dev/sdb2
Ntfsclone image version: 10.1
Cluster size : 4096 bytes
Image volume size : 18104709120 bytes (18105 MB)
Image device size : 18104713216 bytes
Space in use : 9867 MB (54.5%)
Offset to image data : 56 (0x38) bytes
Restoring NTFS from image …
Syncing …
```
Again, after the restoreonly command is launched, ReaR prompts for the restore
destination.
Now both operating systems are restored and we can reboot.
13. Support for TCG Opal 2-compliant Self-Encrypting Disks
Beginning with version 2.4, Relax-and-Recover supports self-encrypting disks (SEDs) compliant with the TCG Opal 2 specification.
Self-encrypting disk support includes
-
recovery (saving and restoring the system’s SED configuration),
-
setting up SEDs, including assigning a disk password,
-
providing a pre-boot authentication (PBA) system to unlock SEDs at boot time.
13.1. Prerequisites
To enable Relax-and-Recover’s TCG Opal 2 support, install the sedutil-cli
(version 1.15.1) executable into a directory within root’s search
path. sedutil-cli
is available for
download from Drive Trust Alliance
(check version compatibility), or see
[How to Build sedutil-cli Version 1.15.1].
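Once sedutil-cli is installed, you can optionally check which disks report Opal compliance, for example (the exact output format depends on the sedutil-cli version):
```
# Scan all disks for TCG Opal support; a "2" next to a device indicates Opal 2 compliance
sudo sedutil-cli --scan
```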
13.2. TCG Opal 2-compliant Self-Encrypting Disks
Note
|
This is a simplified explanation to help understand self-encrypting disks in the context of Relax-and-Recover support. |
An Opal 2-compliant self-encrypting disk (SED) encrypts disk contents in hardware. The SED can be configured to store a user-assigned password and to lock itself when powered off. Unlocking the disk after powering up requires the user to supply the password.
13.2.1. Booting From a Self-Encrypting Disk
How can a system boot from a disk which is locked? The Opal solution is metamorphosis. An Opal disk hides or presents different contents depending on whether it is locked or not:
-
In addition to its regular contents, an Opal disk contains a special area for additional boot code, the (unfortunately named) shadow MBR. It is small (the spec guarantees just 128 MB), write-protected, and normally hidden.
-
When unlocked, an Opal disk shows its regular contents like any other disk. In this state, the system firmware would boot the regular operating system.
-
When locked, an Opal boot disk exposes its shadow MBR at the start, followed by zeroed blocks. In this state, the system firmware would boot the code residing in the shadow MBR.
The shadow MBR, when enabled, can be prepared with a pre-boot authentication (PBA) system. The PBA system is a purpose-built operating system which
-
is booted by the firmware like any other operating system,
-
asks the user for the disk password,
-
unlocks the boot disk (and possibly other Opal 2-compliant SEDs as well), and
-
continues to boot the regular operating system.
13.3. Administering Self-Encrypting Disks
13.3.1. Creating a Pre-Boot Authentication (PBA) System
Note
|
This is only required if an SED is to be used as boot disk. |
To create a pre-boot authentication (PBA) system image:
-
Run sudo rear -v mkopalpba
-
The PBA image will appear below the OPAL_PBA_OUTPUT_URL directory (see default.conf) as $HOSTNAME/TCG-Opal-PBA-$HOSTNAME.raw.
-
If you want to test the PBA system image,
-
copy it onto a disk boot medium (a USB stick will do) with dd if="$image_file" bs=1MB of="$usb_device" (use the entire disk device, not a partition),
-
boot from the medium just created.
To create a rescue system with an integrated PBA system image:
-
Verify that the OPAL_PBA_OUTPUT_URL configuration variable points to a local directory (which is the default), or set OPAL_PBA_IMAGE_FILE to the image file's full path.
-
Run sudo rear -v mkrescue
13.3.2. Setting Up Self-Encrypting Disks
Warning
|
Setting up an SED normally ERASES ALL DATA ON THE DISK, as a new data
encryption key (DEK) will be generated. While rear opaladmin includes safety
measures to avoid accidentally erasing a partitioned disk, do not rely on this
solely. Always back up your data and have a current rescue system available. |
To set up SEDs:
-
Boot the Relax-and-Recover rescue system.
-
If SED boot support is required, ensure that the rescue system was built with an integrated PBA system image.
-
-
Run
rear opaladmin setupERASE DEVICE ...
-
DEVICE is the disk device path like
/dev/sda
, or ALL
for all available devices -
This will set up Opal 2-compliant disks specified by the DEVICE arguments.
-
You will be asked for a new disk password. The same password will be used for all disks being set up.
-
If a PBA is available on the rescue system, you will be asked for each disk whether it should act as a boot device for disk unlocking (in which case the PBA will be installed).
-
DISK CONTENTS WILL BE ERASED, with the following exceptions:
-
If the disk has mounted partitions, the disk’s contents will be left untouched.
-
If unmounted disk partitions are detected, you will be asked whether the disk’s contents shall be erased.
-
-
-
On UEFI systems, see [Setting up UEFI Firmware to Boot From a Self-Encrypting Disk].
13.3.3. Verifying Disk Setup
If you want to ensure that disks have been set up correctly:
-
Power off, then power on the system.
-
Boot directly into the Relax-and-Recover rescue system.
-
Run
rear opaladmin info
and verify that output looks like this:
DEVICE     MODEL                      I/F  FIRMWARE  SETUP  ENCRYPTED  LOCKED  SHADOW MBR
/dev/sda   Samsung SSD 850 PRO 256GB  ATA  EXM04B6Q  y      y          y       visible
The device should appear with SETUP=
y
, ENCRYPTED=y
and LOCKED=y
, SHADOW MBR on boot disks should bevisible
, otherwisedisabled
. -
Run
rear opaladmin unlock
, supplying the correct disk password. -
Run
rear opaladmin info
and verify that output looks like this:
DEVICE     MODEL                      I/F  FIRMWARE  SETUP  ENCRYPTED  LOCKED  SHADOW MBR
/dev/sda   Samsung SSD 850 PRO 256GB  ATA  EXM04B6Q  y      y          n       hidden
The device should appear with SETUP=
y
, ENCRYPTED=y
and LOCKED=n
, SHADOW MBR on boot disks should behidden
, otherwisedisabled
.
13.3.4. Routine Administrative Tasks
The following tasks can be safely performed on the original system (with sudo) or on the rescue system.
-
Display disk information:
rear opaladmin info
-
Change the disk password:
rear opaladmin changePW
-
Upload the PBA onto the boot disk(s):
rear opaladmin uploadPBA
-
Unlock disk(s):
rear opaladmin unlock
-
For help:
rear opaladmin help
13.3.5. Erasing a Self-Encrypting Disk
To ERASE ALL DATA ON THE DISK but retain the setup:
-
Boot the Relax-and-Recover rescue system.
-
Run
rear opaladmin resetDEK DEVICE ...
-
DEVICE is the disk device path like
/dev/sda
, or ALL
for all available devices -
If mounted disk partitions are detected, the disk’s contents will not be erased.
-
If unmounted disk partitions are detected, you will be asked whether the disk’s contents shall be erased.
-
To ERASE ALL DATA ON THE DISK and reset the disk to factory settings:
-
Boot the Relax-and-Recover rescue system.
-
Run
rear opaladmin factoryRESET DEVICE ...
-
DEVICE is the disk device path like
/dev/sda
, or ALL
for all available devices -
If mounted disk partitions are detected, the disk’s contents will not be erased.
-
If unmounted disk partitions are detected, you will be asked whether the disk’s contents shall be erased.
-
13.4. Details
13.4.1. How to Build sedutil-cli Version 1.15.1
-
Download Drive-Trust-Alliance/sedutil version 1.15.1 source code.
-
Extract the archive, creating a directory sedutil-1.15.1:
tar xof sedutil-1.15.1.tar.gz
-
Configure the build system:
cd sedutil-1.15.1
aclocal
autoconf
./configure
Note: Ignore the following error: configure: error: cannot find install-sh, install.sh, or shtool in "." "./.." "./../.."
Note: If there are any other error messages, you may have to install required packages like build-essential, then re-run ./configure
. -
Compile the executable (on the x86_64 architecture in this example):
cd linux/CLI
make CONF=Release_x86_64
-
Install the executable into a directory in root's search path (/usr/local/bin in this example):
cp dist/Release_x86_64/GNU-Linux/sedutil-cli /usr/local/bin
13.4.2. Setting up UEFI Firmware to Boot From a Self-Encrypting Disk
Note
|
UEFI support currently requires that Secure Boot be turned off. |
If the UEFI firmware is configured to boot from the disk device (instead of some specific operating system entry), no further configuration is necessary.
Otherwise the UEFI firmware (formerly BIOS setup) must be configured to boot two different targets:
-
The PBA system (which is only accessible while the disk is locked).
-
The regular operating system (which is only accessible while the disk is unlocked).
This can be configured as follows:
-
Ensure that the PBA system has been correctly installed to the boot drive.
-
Power off, then power on the system.
-
Enter the firmware setup.
-
Configure the firmware to boot from the (only) EFI entry of the boot drive.
-
Once a regular operating system has been installed:
-
Unlock the disk.
-
Reboot without powering off.
-
Enter the firmware setup.
-
Configure the firmware to boot from the EFI entry of your regular operating system. Do not delete the previously configured boot entry for the PBA system.
-