
Make free encrypted backups to the cloud on Fedora

Most free cloud storage is limited to 5GB or less; even Google Drive is limited to 15GB. While not heavily advertised, IBM offers accounts with a whopping 25GB of cloud storage, absolutely free. This is not a limited time offer, and you don’t have to provide a credit card. Better yet, since the storage is S3 compatible, most of the S3 tools available for backups should work fine.

This article will show you how to use restic for encrypted backups onto this free storage. Please also refer to this previous Magazine article about installing and configuring restic. Let’s get started!

Creating your free IBM account and storage

Head over to the IBM cloud services site and follow the steps to sign up for a free account here: https://cloud.ibm.com/registration. You’ll need to verify your account from the email confirmation that IBM sends to you.

Then log in to your account to bring up your dashboard, at https://cloud.ibm.com/.

Click on the Create resource button.

Click on Storage and then Object Storage.

Next click on the Create Bucket button.

This brings up the Configure your resource section.

Next, click on the Create button to use the default settings.

Under Predefined buckets click on the Standard box:

A unique bucket name is automatically created, but it’s suggested that you change this.

In this example, the bucket name is changed to freecloudstorage.

Click on the Next button after choosing a bucket name:

Continue to click on the Next button until you get to the Summary page:

Scroll down to the Endpoints section.

The information in the Public section is the location of your bucket. This is what you need to specify in restic when you create your backups. In this example, the location is s3.us-south.cloud-object-storage.appdomain.cloud.

Making your credentials

The last thing that you need to do is create an access ID and secret key. To start, click on Service credentials.

Click on the New credential button.

Choose a name for your credential, make sure you check the Include HMAC Credential box and then click on the Add button. In this example I’m using the name resticbackup.

Click on View credentials.

The access_key_id and secret_access_key are what you are looking for. (For obvious reasons, the author’s details here are obscured.)

You will need to export these with the shell’s export builtin, or put them into a backup script.

Preparing a new repository

Restic refers to your backup location as a repository, and can make backups to any bucket on your IBM cloud account. First, set up the following environment variables using the access_key_id and secret_access_key that you retrieved from your IBM cloud bucket. These can also be set in any backup script you may create.

$ export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
$ export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>

Even though you are using IBM Cloud and not AWS, IBM Cloud storage is S3 compatible, as previously mentioned, and restic uses its internal AWS commands for any S3 compatible storage. So these AWS keys really refer to the keys from your IBM bucket.

Create the repository by initializing it. A prompt appears for you to type a password for the repository. Do not lose this password because your data is irrecoverable without it!

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET init

The PUBLIC_ENDPOINT_LOCATION was specified in the Endpoints section of your bucket summary.

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage init

Creating backups

Now it’s time to backup some data. Backups are called snapshots. Run the following command and enter the repository password when prompted.

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET backup files_to_backup

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage backup Documents/
Enter password for repository:
repository 106a2eb4 opened successfully, password is correct

Files:  51 new, 0 changed, 0 unmodified
Dirs:    0 new, 0 changed, 0 unmodified
Added to the repo: 11.451 MiB

processed 51 files, 11.451 MiB in 0:06
snapshot 611e9577 saved
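
Once the repository works interactively, you can wrap the whole job in a small script and run it from cron or a systemd timer. Here is a minimal sketch; the endpoint, bucket name and path are the examples used above, and you would substitute your own keys and repository password:

#!/bin/bash
# backup.sh -- minimal restic backup script (example values)

# Credentials for the IBM Cloud bucket (S3 compatible)
export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>

# Supplying the repository password here avoids the interactive prompt
export RESTIC_PASSWORD=<MY_REPOSITORY_PASSWORD>

REPO=s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage

# Back up the Documents directory
restic -r "$REPO" backup ~/Documents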

Restoring from backups

Now that you’ve backed up some files, it’s time to make sure you know how to restore them. To get a list of all of your backup snapshots, use this command:

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET snapshots

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage snapshots
Enter password for repository:
ID        Date                 Host    Tags  Directory
-------------------------------------------------------------------
106a2eb4  2020-01-15 15:20:42  client        /home/curt/Documents

To restore an entire snapshot, run a command like this:

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore snapshotID --target restoreDirectory

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target ~
Enter password for repository:
repository 106a2eb4 opened successfully, password is correct
restoring <Snapshot 106a2eb4 of [/home/curt/Documents]

If the directory still exists on your system, be sure to specify a different location for the restoreDirectory. For example:

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target /tmp

To restore an individual file, run a command like this:

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET restore snapshotID --target restoreDirectory --include filename

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target /tmp --include file1.txt
Enter password for repository:
restoring <Snapshot 106a2eb4 of [/home/curt/Documents] at 2020-01-16 15:20:42.833131988 -0400 EDT by curt@client> to /tmp

Photo by Alex Machado on Unsplash.

[EDITORS NOTE: The Fedora Project is sponsored by Red Hat, which is owned by IBM.]

[EDITORS NOTE: Updated at 1647 UTC on 24 February 2020 to correct a broken link.]


Demonstrating PERL with Tic-Tac-Toe, Part 1

Larry Wall’s Practical Extraction and Reporting Language (PERL) was originally developed in 1987 as a general-purpose Unix scripting language that borrowed features from C, sh, awk, sed, BASIC, and LISP. In the late 1990s, before PHP became more popular, PERL was commonly used for CGI scripting. PERL is still the go-to tool for many sysadmins who need something more powerful than sed or awk when writing complex parsing and automation scripts. It has a somewhat high learning curve due to its dense notation. But a recent survey indicates that PERL developers earn 54 per cent more than the average developer. So it may still be a worthwhile language to learn.

PERL is far too complex to cover in any significant detail in this magazine. But this short series of articles will attempt to demonstrate a few of the most basic features of the language so that you can get a sense of what the language is like and the kind of things it can do.

An example PERL program

PERL was originally a language optimized for scanning arbitrary text files, extracting information from those text files, and printing reports based on that information. To demonstrate how this core feature of PERL works, a very simple Tic-Tac-Toe game is provided below. The below program scans a textual representation of a Tic-Tac-Toe board, extracts and manipulates the numbers on the board, and prints the modified result to the console.

00 #!/usr/bin/perl
01
02 use feature 'state';
03
04 use constant MARKS=>[ 'X', 'O' ];
05 use constant BOARD=>'
06 ┌───┬───┬───┐
07 │ 1 │ 2 │ 3 │
08 ├───┼───┼───┤
09 │ 4 │ 5 │ 6 │
10 ├───┼───┼───┤
11 │ 7 │ 8 │ 9 │
12 └───┴───┴───┘
13 ';
14
15 sub get_mark {
16    my $game = shift;
17    my @nums = $game =~ /[1-9]/g;
18    my $indx = (@nums+1) % 2;
19
20    return MARKS->[$indx];
21 }
22
23 sub put_mark {
24    my $game = shift;
25    my $mark = shift;
26    my $move = shift;
27
28    $game =~ s/$move/$mark/;
29
30    return $game;
31 }
32
33 sub get_move {
34    return (<> =~ /^[1-9]$/) ? $& : '0';
35 }
36
37 PROMPT: {
38    state $game = BOARD;
39
40    my $mark;
41    my $move;
42
43    print $game;
44
45    last PROMPT if ($game !~ /[1-9]/);
46
47    $mark = get_mark $game;
48    print "$mark\'s move?: ";
49
50    $move = get_move;
51    $game = put_mark $game, $mark, $move;
52
53    redo PROMPT;
54 }

To try out the above program on your PC, copy and paste the above text into a plain text file, save it, and run it. The line numbers will have to be removed before the program will work. Of course, the command that one uses to perform that sort of textual extraction and reporting is perl.

Assuming that you have saved the above text to a file named game.txt, the following command can be used to strip the leading numbers from all the lines and write the modified version to a new file named game:

$ cat game.txt | perl -npe 's/...//' > game

The above command is a very small PERL script and it is an example of what is called a one-liner.

Now that the line numbers have been removed, the program can be run by entering the following command:

$ perl game

How it works

PERL is a procedural programming language. A program written in PERL consists of a series of commands that are executed sequentially. With few exceptions, most commands alter the state of the computer’s memory in some way.

Line 00 in the Tic-Tac-Toe program isn’t technically part of the PERL program and it can be omitted. It is called a shebang (the letter e is pronounced soft as it is in the word shell). The purpose of the shebang line is to tell the operating system what interpreter the remaining text should be processed with if one isn’t specified on the command line.

Line 02 isn’t strictly necessary for this program either. It makes available an advanced command named state. The state command creates a variable that can retain its value after it has gone out of scope. I’m using it here as a way to avoid declaring a global variable. It is considered good practice in computer programming to avoid using global variables where possible because they allow for action at a distance. If you didn’t follow all of that, don’t worry about it. It’s not important at this point.
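
If you want to see what state does, here is a small sketch you can run at the command line (it is separate from the game). The counter keeps its value across calls; had it been declared with my, each call would start over at zero:

$ perl -E 'sub counter { state $n = 0; say ++$n } counter() for 1..3'
1
2
3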

PERL scopes, blocks and subroutines

Scope is a very important concept that one needs to be familiar with when reading and writing procedural programs. In PERL, scope is often delineated by a pair of curly brackets. Within the global scope, the above Tic-Tac-Toe program defines four sub-scopes on lines 15-21, 23-31, 33-35 and 37-54. The first three scopes are prefixed with subroutine declarations and the last scope is prefixed with the label PROMPT.

Scopes serve multiple purposes in programming languages. One purpose of a scope is to group a set of commands together as a unit so that they can be called repeatedly with a single command rather than having to repeat several lines of code each time in the program. Another purpose is to enhance the readability of the program by denoting a restricted area where the value of a variable can be updated.

Within the scope that is labeled PROMPT and defined on lines 37-54 of the above Tic-Tac-Toe program, a variable named mark is created using the my keyword (line 40). After it is created, it is assigned a value by calling the get_mark subroutine (line 47). Later, the put_mark subroutine is called (line 51) to change the value in the square that was chosen by the get_move subroutine on line 50.

Hopefully it is obvious that the mark that put_mark is setting is meant to be the same mark that get_mark retrieved earlier. As a programmer though, how do I know that the value of mark wasn’t changed when the get_move subroutine was called? This example program is small enough that every line can be examined to make that determination. But most programs are much larger than this example and having to know exactly what is going on at all points in the program’s execution can be overwhelming and error-prone. Because mark was created with the my keyword, its value can only be accessed and modified within the scope that it was created (or a sub-scope). It doesn’t matter what subroutines at parallel or higher scopes do; even if they change variables with the same name in their own scopes. This property of scopes — restricting the range of lines on which the value of a variable can be updated — improves the readability of the code by allowing the programmer to focus on a smaller section of the program without having to be concerned about what is happening elsewhere in the program.

Lines 04 and 05 define the MARKS and BOARD variables, respectively. Because they are not within any curly bracket pairing, they exist in the global scope. It is permissible to create constant variables in the global scope because they are read-only and therefore not subject to the action at a distance concern. In PERL, it is traditional to name constants in all upper case letters.

Notice that scopes can be nested such that variables defined in outer scopes can be accessed and modified from within inner scopes. This is why the MARKS and BOARD variables can be accessed within the get_mark subroutine and PROMPT block respectively — they are sub-scopes of the global scope.

The statements in the program are executed in order from top to bottom and left to right. Each statement is terminated with a semi-colon (;). The semi-colon can be omitted from the last statement in any scope, and after the closing bracket of the blocks that define the flow of the program, such as those introduced by sub, if and while.

In PERL nomenclature, scopes are called blocks. Scope is the more general term that is typically used in online references like Wikipedia, but the remainder of this article will use the more perlish term blocks.

The statements within the first three blocks are not immediately executed as the program is evaluated from top to bottom. Rather, they are associated with the subroutine name preceding the block. This is the function of the sub keyword — it associates a subroutine name with a block of statements so that they can be called as a unit elsewhere in the program. The three subroutines get_mark (lines 15-21), put_mark (lines 23-31), and get_move (lines 33-35) are called on lines 47, 51 and 50 respectively.

The PROMPT block is not associated with a subroutine definition or other flow-control statement, so the statements within it are immediately executed in sequence when the program is run.

PERL regular expressions

If there is one feature that is more central to PERL than any other it is regular expressions. Notice that in the example Tic-Tac-Toe program every block contains a =~ (or !~) operator followed by some text surrounded with forward slashes (/). The text within the forward slashes is called a regular expression and the operator binds the regular expression to a variable or data stream.

It is important to note that there are different regular expression syntaxes. Some editors and command-line tools (for example, grep) allow the user to select which regular expression syntax they prefer to use. PERL-Compatible Regular Expressions (PCRE) are by far the most powerful.

Regular expressions used in matching operations

The result of applying the regular expression to a variable or data stream is usually a value that, when used in a flow-control statement such as if or while, will evaluate to true or false depending on whether or not the match succeeded. There are modifiers that can be appended to the closing slash of a regular expression to change its return value.

Line 45 of the Tic-Tac-Toe program provides a typical example of how a regular expression is used in a PERL program. The regular expression [1-9] is being applied to the variable game which holds the in-memory representation of the Tic-Tac-Toe game board. The expression is a character class that matches any character in the range from 1 to 9 (inclusive). The result of the regular expression will be true only if a character from 1 to 9 is present in what is being evaluated. On line 45, the !~ operator applies the regular expression to the game variable and negates its sense such that the result will be true only if none of the characters from 1 to 9 are present. Because the regular expression is embedded within the conditional clause of the if statement modifier, the statement last PROMPT is only executed if there are no characters in the range from 1 to 9 left on the game board.
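
You can try the same test outside the program with a quick one-liner. The first board below still has an open square, so the negated match is false and nothing prints; the second board is full, so the message (standing in for last PROMPT) is printed:

$ perl -E 'say "board full" if "XOX OXO X9X" !~ /[1-9]/'
$ perl -E 'say "board full" if "XOX OXO XOX" !~ /[1-9]/'
board full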

The last statement is one of a few flow-control statements in PERL that allow the program execution sequence to jump from the current line to another line somewhere else in the program. Other flow-control statements that work in a similar fashion include next, continue, redo and goto (the goto statement should be avoided whenever possible because it allows for spaghetti code).

In the example Tic-Tac-Toe program, the last PROMPT statement on line 45 causes program execution to resume just after the PROMPT block. Because there are no more statements in the program, the program will terminate.

The label PROMPT was chosen arbitrarily. Any label (or none at all) could have been used.

The redo PROMPT statement at the end of the PROMPT block causes program execution to jump back to the beginning of the PROMPT block.

Notice that the state keyword, like the my keyword, creates a variable that can only be accessed or modified within the block in which it was created (or a nested sub-block if any exist). Unlike the my keyword, variables created with the state keyword keep their former value when the blocks they are in are called repeatedly. This is the behavior that is needed for the game variable because it is being updated incrementally each time the PROMPT block is run. The mark and move variables are meant to be different on each iteration of the PROMPT block, so they do not need to be created with the state keyword.

Regular expressions used for input validation

Another common use of regular expressions is for input validation. Line 34 of the example Tic-Tac-Toe program provides an example of a regular expression being used for input validation. The expression on line 34 is similar to the one on line 45. It is also checking for characters from 1 to 9. However, it is performing the check against the null filehandle (<>); it is using the =~ operator; and it is prefixed and suffixed with the zero-width assertions ^ and $ respectively.

The null filehandle, when accessed as it is on line 34, will cause the program to pause until one line of input is provided. The regular expression will evaluate to true only if the line contains one character in the range from 1 to 9. The assertions ^ and $ do not match any characters. Rather, they match the beginning and end positions, respectively, on the line. The regular expression effectively reads: “Begin (^) with one character in the range from 1 to 9 ([1-9]) and end ($)”.

Because it is embedded in the conditional clause of the ternary operator, line 34 will return either what was matched ($&) if the match succeeded or the character zero (0) if it failed. If the input were not validated in this way, then the user could submit their opponent’s mark rather than a number on the board.
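
Here is a sketch of line 34’s validation logic at the command line, feeding it one good and one bad input:

$ echo 5 | perl -E 'say((<> =~ /^[1-9]$/) ? $& : "0")'
5
$ echo 42 | perl -E 'say((<> =~ /^[1-9]$/) ? $& : "0")'
0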

Regular expressions used for filtering data

Line 17 demonstrates using the global modifier (g) on a regular expression. With the global modifier, the regular expression finds every match in the string rather than stopping at the first one. In list context, it returns a list of all the matched substrings.

Line 17 uses a regular expression to copy all the numbers in the range from 1 to 9 from the game variable into the array named nums. Line 18 then uses the modulo operator with the integer 2 as its second argument to determine whether the length of the nums array is even or odd. The formula on line 18 will result in 0 if the length of nums is odd and 1 if the length of nums is even. Finally, the computed index (indx) is used to access an element of the MARKS array and return it. Using this formula, the get_mark function will alternately return X or O depending on whether there are an odd or even number of positions left on the board.
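
Here is the same counting logic isolated as a one-liner. The sample board below has three open squares (an odd count), so the formula yields index 0, and MARKS->[0] — the X player — would move next:

$ perl -E 'my @nums = "XO3X5O7XO" =~ /[1-9]/g; say scalar @nums; say((@nums + 1) % 2)'
3
0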

Regular expressions used for substituting data

Line 28 demonstrates yet another common use of regular expressions in PERL. Rather than being used in a match operator (m), the regular expression on line 28 is being used in a substitution operator (s). If the value in the move variable is found in the game variable, it will be substituted with the value of the mark variable.
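
The substitution can likewise be tried in isolation. Here move 5 is replaced with the mark X:

$ perl -E 'my $game = "4 5 6"; $game =~ s/5/X/; say $game'
4 X 6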

PERL sigils and data types

The last things of note that are used in the example Tic-Tac-Toe program are the sigils ($ and @) that are placed before the variable names. When creating a variable, the sigil indicates the type of variable being created. It is important to note that a different sigil can be prefixed to the variable name when it is accessed to indicate whether one or many items should be returned from the variable.

There are three built-in data types in PERL: scalars ($), arrays (@) and associative arrays (%). Scalars hold a single data item such as a number, character or string of characters. Arrays are numerically indexed sets of scalars. Associative arrays are arrays that are indexed by scalars rather than numbers.
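
This short sketch shows an array and an associative array, and how the sigil changes to $ when a single element is accessed from either, because a single scalar is what comes back:

$ perl -E 'my @marks = ("X", "O"); my %score = (X => 1); say $marks[1]; say $score{X}'
O
1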


Enable remote collaboration with tmate.io on Fedora

Being able to collaborate on tasks remotely is an increasing need in today’s world. Contributing to an open source project? Working remotely? tmate is a tmux fork that makes it easy to share a terminal session with others. It can save you hours of lonely debugging or programming.

tmate, being a tmux fork, supports all of tmux’s features and configuration. tmux and tmate can also co-exist on the same system. To learn more about tmux, you can read the following article.

Installing tmate on Fedora

tmate is available in the Fedora repository, making it really easy to install.

$ sudo dnf install tmate
$ tmate
Connecting to ssh.tmate.io…

Note: clear your terminal before sharing readonly access
web session read only: https://tmate.io/t/ro-F2aK7T
ssh session read only: ssh [email protected]
web session: https://tmate.io/t/H5rPw
ssh session: ssh [email protected]

After starting tmate, different ways to share your session will be available. You have the choice between ssh (read-only, read-write) or web (read-only, read-write).

The web client is known to have a few issues and is still work in progress, for example the tmux key bindings are not yet supported.

On the host running tmate, you start a new pane by hitting “Ctrl+b, c”. The new pane will then be available to anyone connected to your session.

You can easily keep track of how many clients are connected to your session using the tmate control pane. To access it, hit “Ctrl+b, 0 (zero)”. You will then see something like this:

A mate has joined (109.95.145.251) -- 1 client currently connected
A mate has left (109.95.145.251) -- 0 client currently connected
A mate has joined (109.95.145.251) -- 1 client currently connected

To close a session, you can simply close tmate with “Ctrl+c, Ctrl+d”.

Running your own server

By default, tmate uses a remote server hosted on tmate.io. If you prefer, you can run your own server. For convenience, a container image is provided and instructions are available on tmate.io.
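
Once your own server is up, you point tmate at it from ~/.tmate.conf. The option names below come from the tmate documentation; the host, port and fingerprint values are placeholders for your own server’s details:

# ~/.tmate.conf -- use a self-hosted tmate server (example values)
set -g tmate-server-host "tmate.example.com"
set -g tmate-server-port 22
set -g tmate-server-rsa-fingerprint "<SERVER_RSA_FINGERPRINT>"
set -g tmate-server-ed25519-fingerprint "<SERVER_ED25519_FINGERPRINT>"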

It is important to remember that sharing your terminal session in read-write mode gives the connected client full access to your system. So make sure you trust the people you share your session with, or use the read-only mode.


Learning about Partitions and How to Create Them for Fedora

Operating system distributions try to craft a one-size-fits-all partition layout for their file systems. Distributions cannot know the details about how your hardware is configured or how you use your system, though. Do you have more than one storage drive? If so, you might be able to get a performance benefit by putting the write-heavy partitions (var and swap for example) on a separate drive from the others that tend to be more read-intensive, since most drives cannot read and write at the same time. Or maybe you are running a database and have a small solid-state drive that would improve the database’s performance if its files are stored on the SSD.

The following sections attempt to describe in brief some of the historical reasons for separating some parts of the file system out into separate partitions so that you can make a more informed decision when you install your Linux operating system.

If you know more (or contradictory) historical details about the partitioning decisions that shaped the Linux operating systems used today, contribute what you know below in the comments section!

Common partitions and why or why not to create them

The boot partition

One of the reasons for putting the /boot directory on a separate partition was to ensure that the boot loader and kernel were located within the first 1024 cylinders of the disk. Most modern computers do not have the 1024 cylinder restriction. So for most people, this concern is no longer relevant. However, modern UEFI-based computers have a different restriction that makes it necessary to have a separate partition for the boot loader. UEFI-based computers require that the boot loader (which can be the Linux kernel directly) be on a FAT-formatted file system. The Linux operating system, however, requires a POSIX-compliant file system that can designate access permissions to individual files. Since FAT file systems do not support access permissions, the boot loader must be on a separate file system from the rest of the operating system on modern UEFI-based computers. A single partition cannot be formatted with more than one type of file system.

The var partition

One of the historical reasons for putting the /var directory on a separate partition was to prevent files that were frequently written to (/var/log/* for example) from filling up the entire drive. Since modern drives tend to be much larger and since other means like log rotation and disk quotas are available to manage storage utilization, putting /var on a separate partition may not be necessary. It is much easier to change a disk quota than it is to re-partition a drive.

Another reason for isolating /var was that file system corruption was much more common in the original version of the Linux Extended File System (EXT). The file systems that had more write activity were much more likely to be irreversibly corrupted by a power outage than those that did not. By partitioning the disk into separate file systems, one could limit the scope of the damage in the event of file system corruption. This concern is no longer as significant because modern file systems support journaling.

The home partition

Having /home on a separate partition makes it possible to re-format the other partitions without overwriting your home directories. However, because modern Linux distributions are much better at doing in-place operating system upgrades, re-formatting shouldn’t be needed as frequently as it might have been in the past.

It can still be useful to have /home on a separate partition if you have a dual-boot setup and want both operating systems to share the same home directories. Or if your operating system is installed on a file system that supports snapshots and rollbacks and you want to be able to roll back your operating system to an older snapshot without reverting the content in your user profiles. Even then, some file systems allow their descendant file systems to be rolled back independently, so it still may not be necessary to have a separate partition for /home. On ZFS, for example, one pool/partition can have multiple descendant file systems.

The swap partition

The swap partition reserves space for the contents of RAM to be written to permanent storage. There are pros and cons to having a swap partition. A pro of having swap memory is that it theoretically gives you time to gracefully shutdown unneeded applications before the OOM killer takes matters into its own hands. This might be important if the system is running mission-critical software that you don’t want abruptly terminated. A con might be that your system runs so slow when it starts swapping memory to disk that you’d rather the OOM killer take care of the problem for you.

Another use for swap memory is hibernation mode. This might be where the rule that the swap partition should be twice the size of your computer’s RAM originated. Ideally, you should be able to put a system into hibernation even if nearly all of its RAM is in use. Beware that Linux’s support for hibernation is not perfect. It is not uncommon that after a Linux system is resumed from hibernation some hardware devices are left in an inoperable state (for example, no video from the video card or no internet from the WiFi card).

In any case, having a swap partition is more a matter of taste. It is not required.
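
If you do create one, turning a partition into active swap takes only two commands; the device name below is an example:

$ sudo mkswap /dev/sda3
$ sudo swapon /dev/sda3

To make it permanent, you would also add a line such as /dev/sda3 none swap defaults 0 0 to /etc/fstab.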

The root partition

The root partition (/) is the catch-all for all directories that have not been assigned to a separate partition. There is always at least one root partition. BIOS-based systems that are new enough to not have the 1024 cylinder limit can be configured with only a root partition and no others so that there is never a need to resize a partition or file system if space requirements change.

The EFI system partition

The EFI System Partition (ESP) serves the same purpose on UEFI-based computers as the boot partition did on the older BIOS-based computers. It contains the boot loader and kernel. Because the files on the ESP need to be accessible by the computer’s firmware, the ESP has a few restrictions that the older boot partition did not have. The restrictions are:

  1. The ESP must be formatted with a FAT file system (vfat in Anaconda)
  2. The ESP must have a special type-code (EF00 when using gdisk)

Because the older boot partition did not have file system or type-code restrictions, it is permissible to apply the above properties to the boot partition and use it as your ESP. Note, however, that the GRUB boot loader does not support combining the boot and ESP partitions. If you use GRUB, you will have to create a separate partition and mount it beneath the /boot directory.
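
For example, here is a sketch of creating and formatting an ESP with sgdisk (the scriptable counterpart to the gdisk tool mentioned above); the disk name, partition number and size are examples:

$ sudo sgdisk --new=1:0:+512M --typecode=1:EF00 /dev/sda
$ sudo mkfs.vfat /dev/sda1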

The Boot Loader Specification (BLS) lists several reasons why it is ideal to use the legacy boot partition as your ESP. The reasons include:

  1. The UEFI firmware should be able to load the kernel directly. Having a separate, non-ESP compliant boot partition for the kernel prevents the UEFI firmware from being able to directly load the kernel.
  2. Nesting the ESP mount point three mount levels deep increases the likelihood that an intermediate mount could fail or otherwise be unavailable when needed. That is, requiring root (/), then boot (/boot), then efi (/efi) to be consecutively mounted is unnecessarily complex and prone to error.
  3. Requiring the boot loader to be able to read other partitions/disks which may be formatted with arbitrary file systems is non-trivial. Even when the boot loader does contain such code, the code that works at installation time can become outdated and fail to access the kernel/initrd after a file system update. This is currently true of GRUB’s ZFS file system driver, for example. You must be careful not to update your ZFS file system if you use the GRUB boot loader or else your system may not come back up the next time you reboot.

Besides the concerns listed above, it is a good idea to have your startup environment — up to and including your initramfs — on a single self-contained file system for recovery purposes. Suppose, for example, that you need to roll back your root file system because it has become corrupted or it has become infected with malware. If your kernel and initramfs are on the root file system, you may be unable to perform the recovery. By having the boot loader, kernel, and initramfs all on a single file system that is rarely accessed or updated, you can increase your chances of being able to recover the rest of your system.

In summary, there are many ways that you can lay out your partitions, and the type of hardware (BIOS or UEFI) and the brand of boot loader (GRUB, Syslinux or systemd-boot) are among the factors that will influence which layouts will work.

Other considerations

MBR vs. GPT

GUID Partition Table (GPT) is the newer partition format that supports larger disks. GPT was designed to work with the newer UEFI firmware. It is backward-compatible with the older Master Boot Record (MBR) partition format but not all boot loaders support the MBR boot method. GRUB and Syslinux support both MBR and UEFI, but systemd-boot only supports the newer UEFI boot method.

By using GPT now, you can increase the likelihood that your storage device, or an image of it, can be transferred over to a newer computer in the future should you wish to do so. If you have an older computer that natively supports only MBR-partitioned drives, you may need to add the inst.gpt parameter to Anaconda when starting the installer to get it to use the newer format. How to add the inst.gpt parameter is shown in the below video titled “Partitioning a BIOS Computer”.

If you use the GPT partition format on a BIOS-based computer, and you use the GRUB boot loader, you must additionally create a one megabyte biosboot partition at the start of your storage device. The biosboot partition is not needed by any other brand of boot loader. How to create the biosboot partition is demonstrated in the below video titled “Partitioning a BIOS Computer”.
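
If you prefer the command line to the graphical installer, a sketch of the same step with sgdisk might look like this (EF02 is gdisk’s type-code for a biosboot partition; the disk name is an example):

$ sudo sgdisk --new=1:0:+1M --typecode=1:EF02 /dev/sda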

LVM

One last thing to consider when manually partitioning your Linux system is whether to use standard partitions or logical volumes. Logical volumes are managed by the Logical Volume Manager (LVM). You can set up LVM volumes directly on your disk without first creating standard partitions to hold them. However, most computers still require that the boot partition be a standard partition and not an LVM volume. Consequently, having LVM volumes only increases the complexity of the system because the LVM volumes must be created within standard partitions.

The main features of LVM — online storage resizing and clustering — are not really applicable to the typical end user. Most laptops do not have hot-swappable drive bays for adding or reconfiguring storage while the system is running. And not many laptop or desktop users have clvmd configured so they can access a centralized storage device concurrently from multiple client computers.

LVM is great for servers and clusters. But it adds extra complexity for the typical end user. Go with standard partitions unless you are a server admin who needs the more advanced features.

Video demonstrations

Now that you know which partitions you need, you can watch the short video demonstrations below to see how to manually partition a Fedora Linux computer from the Anaconda installer.

These videos demonstrate creating only the minimally required partitions. You can add more if you choose.

Because the GRUB boot loader requires a more complex partition layout on UEFI systems, the below video titled “Partitioning a UEFI Computer” additionally demonstrates how to install the systemd-boot boot loader. By using the systemd-boot boot loader, you can reduce the number of needed partitions to just two — boot and root. How to use a boot loader other than the default (GRUB) with Fedora’s Anaconda installer is officially documented here.

Partitioning a UEFI Computer
Partitioning a BIOS Computer

Best of 2019: Fedora for system administrators

The end of the year is a perfect time to look back on some of the Magazine’s most popular articles of 2019. One of the Fedora operating system’s many strong points is its wide array of tools for system administrators. As your skills progress, you’ll find that the Fedora OS has even more to offer. And because Linux is the sysadmin’s best friend, you’ll always be in good company. In 2019, there were quite a few articles about sysadmin tools our readers enjoyed. Here’s a sampling.

Introducing Fedora CoreOS

If you follow modern IT topics, you know that containers are a hot topic — and containers mean Linux. This summer brought the first preview release of Fedora CoreOS. This new edition of Fedora can run containerized workloads. You can use it to deploy apps and services in a modern way.

InitRAMFS, dracut and the dracut emergency shell

To be a good sysadmin, you need to understand system startup and the boot process. From time to time, you’ll encounter software errors, configuration problems, or other issues that keep your system from starting normally. With the information in the article below, you can do some life-saving surgery on your system, and restore it to working order.

How to reset your root password

Although this article was published a few years ago, it continues to be one of the most popular. Apparently, we’re not the only people who sometimes get locked out of our own system! If this happens to you, and you need to reset the root password, the article below should do the trick.

Systemd: unit dependencies and order

This article is part of an entire series on systemd, the modern system and process manager in Fedora and other distributions. As you may know, systemd has sophisticated but easy to use methods to start up or shut down services in the right order. This article shows you how they work. That way you can apply the right options to unit files you create for systemd.

Setting kernel command line arguments

Fedora 30 introduced new ways to change the boot options for your kernel. This article from Laura Abbott on the Fedora kernel team explains the new Bootloader Spec (BLS). It also tells you how to use it to set options on your kernel for boot time.

Stay tuned to the Magazine for other upcoming “Best of 2019” categories. All of us at the Magazine hope you have a great end of year and holiday season.


Using Ansible to organize your SSH keys in AWS

If you’ve worked with instances in Amazon Web Services (AWS) for a long time, you may run into this common issue. It’s not technical; it has more to do with the human nature of getting too comfortable. When you launch a new instance in a region you haven’t used recently, you may end up creating a new SSH key pair. This leads to having too many keys, which can become complicated and disordered.

This article shows you a way to have your public key in all regions. A recent Fedora Magazine article includes one solution. But the solution in this article is automated even further, and in a more concise and scalable way.

Say you have a Fedora 30 or 31 desktop system where your key is stored, and Ansible is installed as well. These two things together provide the solution to this problem and many more.

With Ansible’s ec2_key module, you can create a simple playbook that will maintain your SSH key pair in all regions. If you need to add or remove keys, it’s as simple as adding and removing lines from a file.

Setting up and running the playbook

To use the playbook, first install necessary dependencies for the ec2_key module:

$ sudo dnf install python3-boto python3-boto3

The playbook is simple: you need only to change your key and its name as in the example below. After that, run the playbook and it iterates over all the public AWS regions listed. The example also includes the restricted regions in case you have access. To include them, uncomment each line as needed, save the file, and then run the playbook again.

---
- name: Maintain an ssh key pair in ec2
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    ansible_python_interpreter: python
  tasks:
  - name: Make available your ssh public key in ec2 for new instances
    ec2_key:
      name: "YOUR KEY NAME GOES HERE"
      key_material: 'YOUR KEY GOES HERE'
      state: present
      region: "{{ item }}"
    with_items:
      - us-east-2      #US East (Ohio)
      - us-east-1      #US East (N. Virginia)
      - us-west-1      #US West (N. California)
      - us-west-2      #US West (Oregon)
      - ap-east-1      #Asia Pacific (Hong Kong)
      - ap-south-1     #Asia Pacific (Mumbai)
      - ap-northeast-2 #Asia Pacific (Seoul)
      - ap-southeast-1 #Asia Pacific (Singapore)
      - ap-southeast-2 #Asia Pacific (Sydney)
      - ap-northeast-1 #Asia Pacific (Tokyo)
      - ca-central-1   #Canada (Central)
      - eu-central-1   #EU (Frankfurt)
      - eu-west-1      #EU (Ireland)
      - eu-west-2      #EU (London)
      - eu-west-3      #EU (Paris)
      - eu-north-1     #EU (Stockholm)
      - me-south-1     #Middle East (Bahrain)
      - sa-east-1      #South America (Sao Paulo)
#     - us-gov-east-1  #AWS GovCloud (US-East)
#     - us-gov-west-1  #AWS GovCloud (US-West)
#     - ap-northeast-3 #Asia Pacific (Osaka-Local)
#     - cn-north-1     #China (Beijing)
#     - cn-northwest-1 #China (Ningxia)

This playbook requires AWS access via API, as well. To do this, use environment variables as follows:

$ AWS_ACCESS_KEY="aws-access-key-id" AWS_SECRET_KEY="aws-secret-key-id" ansible-playbook ec2-playbook.yml

Another option is to install the aws cli tools and add the credentials as explained in a previous Fedora Magazine article. It is not recommended to insert these values in the playbook if you store it anywhere online! You can find this playbook code on GitHub.
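
If you take the aws cli route, the credentials are stored via a short interactive prompt and end up in ~/.aws/credentials, where the boto libraries used by the playbook should pick them up automatically. The values below are placeholders:

$ aws configure
AWS Access Key ID [None]: <aws-access-key-id>
AWS Secret Access Key [None]: <aws-secret-key-id>
Default region name [None]: us-east-1
Default output format [None]: json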

After the playbook finishes, confirm that your key is available on the AWS console. To do that:

  1. Log into your AWS console
  2. Go to EC2 > Key Pairs
  3. You should see your key listed. The only limitation is that you have to check region-by-region with this method.

Another way is to use a quick command in a shell to do this check for you.

First create a variable with all the regions from the playbook:

AWS_REGION="us-east-1 us-west-1 us-west-2 ap-east-1 ap-south-1 ap-northeast-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 ca-central-1 eu-central-1 eu-west-1 eu-west-2 eu-west-3 eu-north-1 me-south-1 sa-east-1"

Then run a for loop, and you will get the results from the AWS API:

for each in ${AWS_REGION} ; do aws ec2 describe-key-pairs --key-name <YOUR KEY GOES HERE> --region ${each} ; done

Keep in mind that to do the above you need to have the aws cli installed.


A quick introduction to Toolbox on Fedora

Toolbox allows you to sort and manage your development environments in containers without requiring root privileges or manually attaching volumes. It creates a container where you can install your own CLI tools, without installing them on the base system itself. You can also utilize it when you do not have root access or cannot install programs directly. This article gives you an introduction to toolbox and what it does.

Installing Toolbox

Silverblue includes Toolbox by default. For the Workstation and Server editions, you can grab it from the default repositories using dnf install toolbox.

Creating Toolboxes

Open your terminal and run toolbox enter. The utility will automatically request permission to download the latest image, create your first container, and place your shell inside this container.

$ toolbox enter
No toolbox containers found. Create now? [y/N] y
Image required to create toolbox container.
Download registry.fedoraproject.org/f30/fedora-toolbox:30 (500MB)? [y/N]: y

Currently there is no difference between the toolbox and your base system. Your filesystems and packages appear unchanged. Here is an example using a repository that contains documentation source for a resume under a ~/src/resume folder. The resume is built using the pandoc tool.

$ pwd
/home/rwaltr
$ cd src/resume/
$ head -n 5 Makefile
all: pdf html rtf text docx

pdf: init
	pandoc -s -o BUILDS/resume.pdf markdown/*
$ make pdf
bash: make: command not found
$ pandoc -v
bash: pandoc: command not found

This toolbox does not have the programs required to build the resume. You can remedy this by installing the tools with dnf. You will not be prompted for the root password, because you are running in a container.

$ sudo dnf groupinstall "Authoring and Publishing" -y && sudo dnf install pandoc make -y
...
$ make all #Successful builds
mkdir -p BUILDS
pandoc -s -o BUILDS/resume.pdf markdown/*
pandoc -s -o BUILDS/resume.html markdown/*
pandoc -s -o BUILDS/resume.rtf markdown/*
pandoc -s -o BUILDS/resume.txt markdown/*
pandoc -s -o BUILDS/resume.docx markdown/*
$ ls BUILDS/
resume.docx resume.html resume.pdf resume.rtf resume.txt

Run exit at any time to exit the toolbox.

$ cd BUILDS/
$ pandoc --version || ls
pandoc 2.2.1
Compiled with pandoc-types 1.17.5.4, texmath 0.11.1.2, skylighting 0.7.5
...
for a particular purpose.
resume.docx resume.html resume.pdf resume.rtf resume.txt
$ exit
logout
$ pandoc --version || ls
bash: pandoc: command not found...
resume.docx resume.html resume.pdf resume.rtf resume.txt

You retain the files created by your toolbox in your home directory. None of the programs installed in your toolbox will be available outside of it.

Tips and tricks

This introduction to toolbox only scratches the surface. Here are some additional tips, but you can also check out the official documentation.

  • toolbox --help will show you the man page for Toolbox
  • You can have multiple toolboxes at once. Use toolbox create -c Toolboxname and toolbox enter -c Toolboxname
  • Toolbox uses Podman to do the heavy lifting. Use toolbox list to find the IDs of the containers Toolbox creates. Podman can use these IDs to perform actions such as rm and stop, as shown in the example below. (You can also read more about Podman in this Magazine article.)
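
For instance, here is a sketch of stopping and removing a toolbox container with Podman; the container ID is a placeholder for one reported by toolbox list:

$ toolbox list
$ podman stop <CONTAINER_ID>
$ podman rm <CONTAINER_ID>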

Photo courtesy of Florian Richter from Flickr.


Create virtual machines with Cockpit in Fedora

This article shows you how to install the software you need to use Cockpit to create and manage virtual machines on Fedora 31. Cockpit is an interactive admin interface that lets you access and manage systems from any supported web browser. With virt-manager being deprecated, users are encouraged to use Cockpit, which is meant to replace it.

Cockpit is an actively developed project, with many plugins available that extend how it works. For example, one such plugin is “Machines,” which interacts with libvirtd and lets users create and manage virtual machines.

Installing software

The required software prerequisites are libvirt, cockpit and cockpit-machines. To install them on Fedora 31, run the following command from a terminal using sudo:

$ sudo dnf install libvirt cockpit cockpit-machines

Cockpit is also included as part of the “Headless Management” package group. This group is useful for a Fedora based server that you only access through a network. In that case, to install it, use this command:

$ sudo dnf groupinstall "Headless Management"

Setting up Cockpit services

After installing the necessary packages it’s time to enable the services. The libvirtd service runs the virtual machines, while Cockpit has a socket activated service to let you access the Web GUI:

$ sudo systemctl enable libvirtd --now
$ sudo systemctl enable cockpit.socket --now

This should be enough to run virtual machines and manage them through Cockpit. Optionally, if you want to access and manage your machine from another device on your network, you need to expose the service to the network. To do this, add a new rule in your firewall configuration:

$ sudo firewall-cmd --zone=public --add-service=cockpit --permanent
$ sudo firewall-cmd --reload
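
You can verify that the rule took effect by listing the services allowed in the zone; cockpit should appear in the output:

$ sudo firewall-cmd --zone=public --list-services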

To confirm the services are running and no issues occurred, check the status of the services:

$ sudo systemctl status libvirtd
$ sudo systemctl status cockpit.socket

At this point everything should be working. The Cockpit web GUI should be available at https://localhost:9090 or https://127.0.0.1:9090. Or, enter the local network IP in a web browser on any other device connected to the same network. (Without SSL certificates setup, you may need to allow a connection from your browser.)

Creating and installing a machine

Log into the interface using the user name and password for that system. You can also choose whether to allow your password to be used for administrative tasks in this session.

Select Virtual Machines and then select Create VM to build a new box. The console gives you several options:

  • Download an OS using Cockpit’s built in library
  • Use install media already downloaded on the system you’re managing
  • Point to a URL for an OS installation tree
  • Boot media over the network via the PXE protocol

Enter all the necessary parameters. Then select Create to power up the new virtual machine.

At this point, a graphical console appears. Most modern web browsers let you use your keyboard and mouse to interact with the VM console. Now you can complete your installation and use your new VM, just as you would via virt-manager in the past.


Photo by Miguel Teixeira on Flickr (CC BY-SA 2.0).


Build a virtual private network with Wireguard

Wireguard is a new VPN designed as a replacement for IPSec and OpenVPN. Its design goal is to be simple and secure, and it takes advantage of recent technologies such as the Noise Protocol Framework. Some consider Wireguard’s ease of configuration akin to OpenSSH. This article shows you how to deploy and use it.

It is currently in active development, so it might not be the best for production machines. However, Wireguard is under consideration for inclusion in the Linux kernel. The design has been formally verified,* and proven to be secure against a number of threats.

When deploying Wireguard, keep your Fedora Linux system updated to the most recent version, since Wireguard does not have a stable release cadence.

Set the timezone

To check and set your timezone, first display current time information:

timedatectl

Then if needed, set the correct timezone, for example to Europe/London.

timedatectl set-timezone Europe/London

Note that your system’s real time clock (RTC) may continue to be set to UTC or another timezone.

Install Wireguard

To install, enable the COPR repository for the project and then install with dnf, using sudo:

$ sudo dnf copr enable jdoss/wireguard
$ sudo dnf install wireguard-dkms wireguard-tools

Once installed, two new commands become available, along with support for systemd:

  • wg: Configuration of wireguard interfaces
  • wg-quick: Bringing up the VPN tunnels

Create the configuration directory for Wireguard, and apply a umask of 077. A umask of 077 allows read, write, and execute permission for the file’s owner (root), but prohibits read, write, and execute permission for everyone else.

mkdir /etc/wireguard
cd /etc/wireguard
umask 077

Generate Key Pairs

Generate the private key, then derive the public key from it.

$ wg genkey > /etc/wireguard/privkey
$ wg pubkey < /etc/wireguard/privkey > /etc/wireguard/publickey

Alternatively, this can be done in one go:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

There is a vanity address generator, which might be of interest to some. You can also generate a pre-shared key to provide a level of quantum protection:

wg genpsk > psk

This will be the same value for both the server and client, so you only need to run the command once.

Configure Wireguard server and client

Both the client and server have an [Interface] option to specify the IP address assigned to the interface, along with the private keys.

Each peer (server and client) has a [Peer] section containing its respective PublicKey, along with the PresharedKey. Additionally, this block can list allowed IP addresses which can use the tunnel.

Server

A firewall rule is added when the interface is brought up, along with enabling masquerading. Make sure to note the /24 IPv4 address range within Interface, which differs from the client. Edit the /etc/wireguard/wg0.conf file as follows, using the IP address for your server for Address, and the client IP address in AllowedIPs.

[Interface]
Address = 192.168.2.1/24, fd00:7::1/48
PrivateKey = <SERVER_PRIVATE_KEY>
PostUp = firewall-cmd --zone=public --add-port 51820/udp && firewall-cmd --zone=public --add-masquerade
PostDown = firewall-cmd --zone=public --remove-port 51820/udp && firewall-cmd --zone=public --remove-masquerade
ListenPort = 51820

[Peer]
PublicKey = <CLIENT_PUBLIC_KEY>
PresharedKey = LpI+UivLx1ZqbzjyRaWR2rWN20tbBsOroNdNnjKLMQ=
AllowedIPs = 192.168.2.2/32, fd00:7::2/48

Allow forwarding of IP packets by adding the following to /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Load the new settings:

$ sysctl -p

Forwarding will be preserved after a reboot.

Client

The client is very similar to the server config, but has an optional additional entry of PersistentKeepalive set to 30 seconds. This is to prevent NAT from causing issues, and depending on your setup might not be needed. Setting AllowedIPs to 0.0.0.0/0 will forward all traffic over the tunnel. Edit the client’s /etc/wireguard/wg0.conf file as follows, using your client’s IP address for Address and the server IP address at the Endpoint.

[Interface]
Address = 192.168.2.2/32, fd00:7::2/48
PrivateKey = <CLIENT_PRIVATE_KEY>

[Peer]
PublicKey = <SERVER_PUBLIC_KEY>
PresharedKey = LpI+UivLx1ZqbzjyRaWR2rWN20tbBsOroNdNnjKLMQ=
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <SERVER_IP>:51820
PersistentKeepalive = 30

Test Wireguard

Start and check the status of the tunnel on both the server and client:

$ systemctl start wg-quick@wg0
$ systemctl status wg-quick@wg0
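
To have the tunnel come up automatically at boot, also enable the unit on both machines:

$ systemctl enable wg-quick@wg0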

To test the connections, try the following:

ping google.com
ping6 ipv6.google.com

Then check external IP addresses:

dig +short myip.opendns.com @resolver1.opendns.com
dig +short -6 myip.opendns.com aaaa @resolver1.ipv6-sandbox.opendns.com
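
You can also inspect the tunnel itself. The wg show command reports each peer, along with the latest handshake time and transfer counters:

$ wg show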

* “Formally verified,” in this sense, means that the design has been proved to have mathematically correct messages and key secrecy, forward secrecy, mutual authentication, session uniqueness, channel binding, and resistance against replay, key compromise impersonation, and denial of service attacks.


Photo by Black Zheng on Unsplash.