
3-2-1 Backup plan with Fedora ARM server

Fedora Server Edition works on Single Board Computers (SBCs) like the Raspberry Pi. This article is aimed at users who want to back up and restore personal data while taking advantage of a solid server system and built-in tools like Cockpit. It describes three levels of backup.

Pre-requisites

To use this guide, all you need is a working Fedora Linux workstation and the following items.

  • You should read, understand, and practice the requirements as documented in the Fedora Docs for server installation and administration
  • An SBC (Single Board Computer), tested for Fedora Linux. Check hardware status here.
  • Fedora ARM server raw image & ARM image installer
  • A choice of microSD Card (64 GB / Class 10) and SSD device
  • Ethernet cable / DHCP reserved IP or static IP
  • A Linux client workstation with ssh keys prepared
  • A choice of cloud storage service
  • An additional Linux workstation

With this setup, I opted for the Raspberry Pi 3B+/4B (one kept as a hot-swap spare) because of price and availability at the time of writing. Since the Pi server is administered remotely through Cockpit, you can position it near the router for a neat setup.

Harden server security

After following through with server installation and administration on the SBC, it is a good practice to harden the server security with firewalld.

Configure the firewall as soon as the server is online, and before connecting the storage device to the server. Firewalld is a zone-based firewall; after you follow the installation and administration guide in the Fedora Docs, one pre-defined zone, ‘FedoraServer’, is active.

Rich rules in firewalld

Rich rules are used to block or allow a particular IP address or address range. The following rule accepts SSH connections only from the host with the registered IP address (that of the client workstation) and drops other connections. Run the commands in the Cockpit terminal, or in a terminal on the client workstation connected to the server via ssh.

firewall-cmd --add-rich-rule='rule family=ipv4 source address=<registered_ip_address>/24 service name=ssh log prefix="SSH Logs" level="notice" accept'

Reject ping requests from all hosts

Use this command to reject ICMP traffic and disallow ping requests:

firewall-cmd --add-rich-rule='rule protocol value=icmp reject'
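Note that firewall-cmd applied this way changes only the runtime configuration, which is lost on reload or reboot. If you want to keep the rich rules above, one option is to copy the runtime rules into the permanent configuration:

firewall-cmd --runtime-to-permanent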

To carry out additional firewall controls, such as managing ports and zones, please refer to the links below. Be aware that misconfiguring the firewall may leave the server vulnerable to security breaches.

Managing firewall in Cockpit
firewalld rules

Configure storage for file server

The next step is to connect a storage device to the SBC and partition a newly attached storage device using Cockpit. With Cockpit’s graphical server management interface, managing a home lab (whether a single server or several servers) is much simpler than before. Fedora Linux server offers Cockpit as standard.

In this setup, an SSD device, powered by the USB port of the SBC, is placed in service without the need for an additional power supply.

  • Connect the storage device to a USB port of the SBC
  • After Cockpit is running (as set up in the pre-requisites), visit ip-address-of-machine:9090 in the web browser of your client workstation
  • After logging into Cockpit, click ‘Turn on administrative access’ at the top of the Cockpit page
  • Click the “Storage” on the left pane
  • Select the device under “Drives” section to format and partition a blank storage device
Cockpit Storage management
  • On the screen for the selected storage device, create a new partition table or format and create new partitions. When prompted to initialize the disk, select GPT as the “Partitioning” type
  • For the file system type, choose ext4 from the drop-down list (XFS or ext4). This is suitable for an SBC with limited I/O capability (like a USB 2.0 port) and limited bandwidth (less than 200 MB/s)
Create a partition in Cockpit
  • To create a single partition taking up all the storage space on the device, specify its mount point, such as “/media” and click “Ok”
  • Click “Create partition”, which creates a new partition mounted at “/media”.
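If you prefer to verify or script the same result from a terminal, a roughly equivalent sequence is sketched below. The device name /dev/sda is an assumption; substitute the device name shown for your SSD.

sudo parted /dev/sda mklabel gpt
sudo parted /dev/sda mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sda1
sudo mount /dev/sda1 /media
lsblk -f

The last command confirms the new partition and its mount point.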

Create backups and restore from backups

Backups are rarely one-size-fits-all. There are a few choices to make, such as where the data is backed up, the steps you take to back up the data, what to automate, and how to restore the backed-up data.

Backup workflow – version 1.0

Backup 1. rsync from client to file server (Raspberry Pi)

The command used for this transfer was:

rsync -azP ~/source syncuser@host1:/destination
Options:
-a, --archive
-z, --compress
-P, --progress

To run rsync with additional options, set the following flags:

Update destination files in-place

--inplace

Append data onto shorter files

--append
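For instance, a transfer that updates large destination files in place, rather than writing temporary copies first, could look like the following; the source and destination are the same placeholders used above.

rsync -azP --inplace ~/source syncuser@host1:/destination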

Source-side deduplication combined with compression is the most effective way to reduce the size of data to be backed up before it goes to backup storage.

I run this manually at the end of the day. Automation scripts became advantageous once I settled into the cloud backup workflow.
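If you do automate, a simple cron entry is one option; the script path and schedule below are only placeholders for whatever wrapper script you write around the rsync command above.

# crontab -e: run the backup script every day at 23:30
30 23 * * * /home/user/bin/backup-to-fileserver.sh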

For details on rsync, please visit the Fedora magazine article here.

Backup 2. rsync from file server to primary cloud storage

Factors to consider when selecting cloud storage are:

  • Cost: Upload, storage, and download fee
  • Support for rsync and sftp
  • Data redundancy (RAID 10 or data center redundancy plan in place)
  • Snapshots

One cloud storage service fitting these criteria is Hetzner’s hosted Nextcloud – Storage Box. You are not tied to a supplier and are free to switch without an exit penalty.

Generate SSH keys and create authorized key files in the file server

Use ssh-keygen to generate a new pair of SSH keys for the file server and cloud storage.

ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key . . . 

Insert the required public SSH keys into a new local authorized_keys file.

cat .ssh/id_rsa.pub >> storagebox_authorized_keys

Transfer keys to cloud storage

The next step is to upload the generated authorized_keys file to the Storage Box. To do this, create the directory .ssh with permission 700 and create the file authorized_keys with the public SSH keys and permission 600. Run the following command.

echo -e "mkdir .ssh \n chmod 700 .ssh \n put storagebox_authorized_keys .ssh/authorized_keys \n chmod 600 .ssh/authorized_keys" | sftp <username>@<username>.your-storagebox.de

Use rsync over ssh

Use rsync to synchronize the current state of your file directories to Storage Box.

rsync --progress -e 'ssh -p23' --recursive <local_directory> <username>@<username>.your-storagebox.de:<target_directory>

This process is called a push operation because it “pushes” a directory from the local system to a remote system.

Restore a directory from cloud storage

To restore a directory from the Storage Box, swap the directories:

rsync --progress -e 'ssh -p23' --recursive <username>@<username>.your-storagebox.de:<remote_directory> <local_directory>

Backup 3. Client backup to secondary cloud storage

Deja Dup is in the Fedora software repo, making it a quick backup solution for Fedora Workstation. It handles the GPG encryption, scheduling, and file inclusion (which directories to back up).
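If Deja Dup is not already installed on your workstation, it can be pulled in from the Fedora repositories; the package name below is the one Fedora uses for it.

sudo dnf install deja-dup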

Backing up to the secondary cloud
Restoring files from cloud storage

Archive personal data

Not all data needs a 3-2-1 backup strategy; a personal data share is one example. I repurposed a hand-me-down laptop with a 1 TB HDD as an archive of personal data (family photos).

Go to “Sharing” in Settings (in my case, GNOME Settings) and toggle the slider to enable sharing.

Turn on “File Sharing”, “Networks”, and “Require Password”. This allows you to share your Public folder with other workstations on your local network using WebDAV.
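From another workstation on the same network, the shared folder can then typically be opened from the file manager’s “Other Locations” view with a WebDAV address along these lines; the host name is an assumption, so substitute your own.

dav://archive-laptop.local/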

Prepare fallback options

Untested backups are no better than no backups at all. I take the ‘hot swap’ approach in a home lab environment, where disruptions like frequent power outages or liquid damage do happen. However, my recommendations fall far short of the disaster recovery plans or automatic failover found in corporate IT.

  • Dry run restoration of files on a regular basis
  • Backup ssh/GPG keys onto an external storage device
  • Copy a raw image of the Fedora ARM server onto an SD card (see the example after this list)
  • Keep snapshots of full backups at primary cloud storage
  • Automate backup process to minimize human error or oversight
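For the raw image item above, the arm-image-installer tool listed in the pre-requisites can write the server image to a spare SD card. A sketch of the command is shown below; the image file name, target board, and media device are placeholders to adjust for your setup.

sudo arm-image-installer --image=Fedora-Server-<version>.raw.xz --target=rpi4 --media=/dev/sdX --resizefs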

Track activity and troubleshoot with Cockpit

As your project grows, so does the number of servers you manage. Activity and alert tracking in Cockpit eases your administrative burden. You can achieve this in three ways using Cockpit’s graphical interface.

SELinux menu

How to diagnose network issues, find logs and troubleshoot in Cockpit

  • Go to SELinux to check logs
  • Check “solution details”
  • Select “Apply this solution” when necessary
  • View automation script and run it if necessary
SELinux logs

Network or storage logs

Server logs track detailed metrics that correlate CPU load, memory usage, network activity, and storage performance with the system’s journal. Logs are organized under the network or storage dashboard.

Storage logs in Cockpit

Software updates

Cockpit can apply security updates at a preset time and frequency. You can also run all updates whenever you need them.

Software updates

Congratulations on setting up a file/backup server with the Fedora ARM server edition.


Recover your files from Btrfs snapshots

As you have seen in a previous article, Btrfs snapshots are a convenient and fast way to make backups. Please note that these articles do not suggest that you avoid backup software or well-tested backup plans. Their goals are to show a great feature of this file system, snapshots, and to inspire curiosity and invite you to explore, experiment and deepen the subject. Read on for more about how to recover your files from Btrfs snapshots.

A subvolume for your project

Let’s assume that you want to save the documents related to a project inside the directory $HOME/Documents/myproject.

As you have seen, a Btrfs subvolume, as well as a snapshot, looks like a normal directory. Why not use a Btrfs subvolume for your project, in order to take advantage of snapshots? To create the subvolume, use this command:

btrfs subvolume create $HOME/Documents/myproject

You can create a hidden directory in which to arrange your snapshots:

mkdir $HOME/.snapshots

As you can see, in this case there’s no need to use sudo. However, sudo is still needed to list the subvolumes, and to use the send and receive commands.
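For example, listing the subvolumes of your home file system requires root privileges:

sudo btrfs subvolume list /home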

Now you can start writing your documents. Each day (or each hour, or even minute) you can take a snapshot just before you start to work:

btrfs subvolume snapshot -r $HOME/Documents/myproject $HOME/.snapshots/myproject-day1

For better security and consistency, make the snapshot read only by using the -r flag; a read only snapshot is also required if you need to send it to an external drive, as shown in the previous article.

Note that in this case, a snapshot of the /home subvolume will not snapshot the $HOME/Documents/myproject subvolume.

How to recover a file or a directory

In this example let’s assume a classic error: you deleted a file by mistake. You can recover it from the most recent snapshot, or recover an older version of the file from an older snapshot. Do you remember that a snapshot appears like a regular directory? You can simply use the cp command to restore the deleted file:

cp $HOME/.snapshots/myproject-day1/filename.odt $HOME/Documents/myproject

Or restore an entire directory:

cp -r $HOME/.snapshots/myproject-day1/directory $HOME/Documents/myproject

What if you delete the entire $HOME/Documents/myproject directory (actually, the subvolume)? You can recreate the subvolume as seen before, and again, you can simply use the cp command to restore the entire content from the snapshot:

btrfs subvolume create $HOME/Documents/myproject
cp -rT $HOME/.snapshots/myproject-day1 $HOME/Documents/myproject

Or you could restore the subvolume by using the btrfs snapshot command (yes, a snapshot of a snapshot):

btrfs subvolume snapshot $HOME/.snapshots/myproject-day1 $HOME/Documents/myproject

How to recover btrfs snapshots from an external drive

You can use the cp command even if the snapshot resides on an external drive. For instance:

cp /run/media/user/mydisk/bk/myproject-day1/filename.odt $HOME/Documents/myproject

You can restore an entire snapshot as well. In this case, since you will use the send and receive commands, you must use sudo. In addition, consider that the restored subvolume will be created as read only. Therefore you need to also set the read only property to false:

sudo btrfs send /run/media/user/mydisk/bk/myproject-day1 | sudo btrfs receive $HOME/Documents/
mv Documents/myproject-day1 Documents/myproject
btrfs property set Documents/myproject ro false

Here’s an extra explanation. The btrfs subvolume snapshot command creates an exact copy of a subvolume, but the destination must reside on the same Btrfs device; you can’t use another device as the destination of a snapshot. To copy a subvolume to another device, take a read only snapshot and use the send and receive commands.
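For reference, sending a snapshot to the external drive in the first place is the reverse of the restore shown earlier, assuming the drive is Btrfs-formatted and mounted at the same location:

sudo btrfs send $HOME/.snapshots/myproject-day1 | sudo btrfs receive /run/media/user/mydisk/bk/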

For more information, refer to some of the online documentation:

man btrfs-subvolume
man btrfs-send
man btrfs-receive

Connect your Google Drive to Fedora Workstation

There are plenty of cloud services available where you can store important documents. Google Drive is undoubtedly one of the most popular. It offers a matching set of applications like Docs, Sheets, and Slides to create content. But you can also store arbitrary content in your Google Drive. This article shows you how to connect it to your Fedora Workstation.

Adding an account

Fedora Workstation lets you add an account either after installation during first startup, or at any time afterward. To add your account during first startup, follow the prompts. Among them is a choice of accounts you can add:

Online account listing

Select Google, and a login prompt appears; sign in with your Google account information.

Online account login dialog

Be aware this information is only transmitted to Google, not to the GNOME project. The next screen asks you to grant access, which is required so your system’s desktop can interact with Google. Scroll down to review the access requests, and choose Allow to proceed.

You can expect to receive notifications on mobile devices and Gmail that a new device — your system — accessed your Google account. This is normal and expected.

Online account access request dialog

If you didn’t do this at first startup, or you need to re-add your account, open the Settings tool and select Online Accounts to add the account. The Settings tool is available through the dropdown at the right side of the Top Bar (the “gear” icon), or by opening the Overview and typing settings. Then proceed as described above.

Using the Files app with Google Drive

Open the Files app (formerly known as nautilus). Locations the Files app can access appear on the left side. Locate your Google account in the list.

When you select this account, the Files app shows the contents of your Google drive. Some files can be opened using your Fedora Workstation local apps, such as sound files or LibreOffice-compatible files (including Microsoft Office docs). Other files, such as Google app files like Docs, Sheets, and Slides, open using your web browser and the corresponding app.

Remember that if the file is large, it will take some time to download over the network before you can open it.

You can also copy and paste files between your Google Drive storage and other storage connected to your Fedora Workstation. You can also use the built-in functions to rename files, create folders, and organize them.

Be aware that the Files app does not refresh contents in real time. If you add or remove files from other Google connected devices like your mobile phone or tablet, you may need to hit Ctrl+R to refresh the Files app view.


Photo by Beatriz Pérez Moya on Unsplash.


Make free encrypted backups to the cloud on Fedora

Most free cloud storage is limited to 5GB or less. Even Google Drive is limited to 15GB. While not heavily advertised, IBM offers free accounts with a whopping 25GB of cloud storage. This is not a limited-time offer, and you don’t have to provide a credit card. It’s absolutely free! Better yet, since it’s S3 compatible, most of the S3 tools available for backups should work fine.

This article will show you how to use restic for encrypted backups onto this free storage. Please also refer to this previous Magazine article about installing and configuring restic. Let’s get started!

Creating your free IBM account and storage

Head over to the IBM cloud services site and follow the steps to sign up for a free account here: https://cloud.ibm.com/registration. You’ll need to verify your account from the email confirmation that IBM sends to you.

Then log in to your account to bring up your dashboard, at https://cloud.ibm.com/.

Click on the Create resource button.

Click on Storage and then Object Storage.

Next click on the Create Bucket button.

This brings up the Configure your resource section.

Next, click on the Create button to use the default settings.

Under Predefined buckets click on the Standard box:

A unique bucket name is automatically created, but it’s suggested that you change this.

In this example, the bucket name is changed to freecloudstorage.

Click on the Next button after choosing a bucket name:

Continue to click on the Next button until you get to the Summary page:

Scroll down to the Endpoints section.

The information in the Public section is the location of your bucket. This is what you need to specify in restic when you create your backups. In this example, the location is s3.us-south.cloud-object-storage.appdomain.cloud.

Making your credentials

The last thing that you need to do is create an access ID and secret key. To start, click on Service credentials.

Click on the New credential button.

Choose a name for your credential, make sure you check the Include HMAC Credential box and then click on the Add button. In this example I’m using the name resticbackup.

Click on View credentials.

The access_key_id and secret_access_key are what you are looking for. (For obvious reasons, the author’s details here are obscured.)

You will need to export these as environment variables in the shell, or put them into a backup script.

Preparing a new repository

Restic refers to your backup as a repository, and can make backups to any bucket on your IBM Cloud account. First, set up the following environment variables using the access_key_id and secret_access_key that you retrieved from your IBM Cloud bucket. These can also be set in any backup script you may create.

$ export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
$ export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>

Even though you are using IBM Cloud and not AWS, as previously mentioned, IBM Cloud storage is S3 compatible, and restic uses its internal AWS (S3) support for any S3-compatible storage. So these AWS keys really refer to the keys from your IBM bucket.

Create the repository by initializing it. A prompt appears for you to type a password for the repository. Do not lose this password because your data is irrecoverable without it!

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET init

The PUBLIC_ENDPOINT_LOCATION was specified in the Endpoint section of your Bucket summary.

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage init
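For unattended use, for instance in a backup script, restic can also read the repository password from the RESTIC_PASSWORD environment variable or from a file passed with --password-file, so it does not prompt interactively. For example:

$ export RESTIC_PASSWORD=<MY_REPOSITORY_PASSWORD>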

Creating backups

Now it’s time to backup some data. Backups are called snapshots. Run the following command and enter the repository password when prompted.

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET backup files_to_backup

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage backup Documents/
Enter password for repository:
repository 106a2eb4 opened successfully, password is correct

Files:          51 new,     0 changed,     0 unmodified
Dirs:            0 new,     0 changed,     0 unmodified
Added to the repo: 11.451 MiB

processed 51 files, 11.451 MiB in 0:06
snapshot 611e9577 saved
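Putting the pieces together, a minimal backup script might look like the sketch below. It reuses the example endpoint and bucket from above; the key and password values are placeholders.

#!/bin/bash
# Credentials for the S3-compatible IBM Cloud bucket
export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>
# Repository password so restic does not prompt
export RESTIC_PASSWORD=<MY_REPOSITORY_PASSWORD>

# Back up the Documents directory to the freecloudstorage bucket
restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage backup ~/Documents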

Restoring from backups

Now that you’ve backed up some files, it’s time to make sure you know how to restore them. To get a list of all of your backup snapshots, use this command:

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET snapshots

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage snapshots
Enter password for repository:
ID        Date                 Host    Tags  Directory
-------------------------------------------------------------------
106a2eb4  2020-01-15 15:20:42  client        /home/curt/Documents

To restore an entire snapshot, run a command like this:

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore snapshotID --target restoreDirectory

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target ~
Enter password for repository:
repository 106a2eb4 opened successfully, password is correct
restoring <Snapshot 106a2eb4 of [/home/curt/Documents]

If the directory still exists on your system, be sure to specify a different location for the restoreDirectory. For example:

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target /tmp

To restore an individual file, run a command like this:

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET restore snapshotID --target restoreDirectory --include filename

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target /tmp --include file1.txt
Enter password for repository:
restoring <Snapshot 106a2eb4 of [/home/curt/Documents] at 2020-01-16 15:20:42.833131988 -0400 EDT by curt@client> to /tmp

Photo by Alex Machado on Unsplash.

[EDITORS NOTE: The Fedora Project is sponsored by Red Hat, which is owned by IBM.]

[EDITORS NOTE: Updated at 1647 UTC on 24 February 2020 to correct a broken link.]