
Learning about Partitions and How to Create Them for Fedora

Operating system distributions try to craft a one-size-fits-all partition layout for their file systems. However, distributions cannot know the details of how your hardware is configured or how you use your system. Do you have more than one storage drive? If so, you might get a performance benefit by putting the write-heavy partitions (var and swap, for example) on a separate drive from the others, which tend to be more read-intensive, since most drives cannot read and write at the same time. Or maybe you are running a database and have a small solid-state drive that would improve the database’s performance if its files are stored on the SSD.

The following sections attempt to describe in brief some of the historical reasons for separating some parts of the file system out into separate partitions so that you can make a more informed decision when you install your Linux operating system.

If you know more (or contradictory) historical details about the partitioning decisions that shaped the Linux operating systems used today, contribute what you know below in the comments section!

Common partitions and why or why not to create them

The boot partition

One of the reasons for putting the /boot directory on a separate partition was to ensure that the boot loader and kernel were located within the first 1024 cylinders of the disk. Most modern computers do not have the 1024 cylinder restriction. So for most people, this concern is no longer relevant. However, modern UEFI-based computers have a different restriction that makes it necessary to have a separate partition for the boot loader. UEFI-based computers require that the boot loader (which can be the Linux kernel directly) be on a FAT-formatted file system. The Linux operating system, however, requires a POSIX-compliant file system that can designate access permissions to individual files. Since FAT file systems do not support access permissions, the boot loader must be on a separate file system from the rest of the operating system on modern UEFI-based computers. A single partition cannot be formatted with more than one type of file system.

The var partition

One of the historical reasons for putting the /var directory on a separate partition was to prevent files that were frequently written to (/var/log/* for example) from filling up the entire drive. Since modern drives tend to be much larger and since other means like log rotation and disk quotas are available to manage storage utilization, putting /var on a separate partition may not be necessary. It is much easier to change a disk quota than it is to re-partition a drive.
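
As a small illustration (the application name and log path here are hypothetical), a logrotate drop-in file is usually all it takes to keep a chatty service from filling the disk:

# /etc/logrotate.d/myapp -- hypothetical drop-in for a service logging to /var/log/myapp.log
/var/log/myapp.log {
    weekly          # rotate once a week
    rotate 4        # keep the last four rotated logs
    compress        # gzip the rotated logs
    missingok       # do not complain if the log file is absent
    notifempty      # skip rotation when the log is empty
}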

Another reason for isolating /var was that file system corruption was much more common in the original version of the Linux Extended File System (EXT). The file systems that had more write activity were much more likely to be irreversibly corrupted by a power outage than those that did not. By partitioning the disk into separate file systems, one could limit the scope of the damage in the event of file system corruption. This concern is no longer as significant because modern file systems support journaling.

The home partition

Having /home on a separate partition makes it possible to re-format the other partitions without overwriting your home directories. However, because modern Linux distributions are much better at doing in-place operating system upgrades, re-formatting shouldn’t be needed as frequently as it might have been in the past.

It can still be useful to have /home on a separate partition if you have a dual-boot setup and want both operating systems to share the same home directories. Or if your operating system is installed on a file system that supports snapshots and rollbacks and you want to be able to rollback your operating system to an older snapshot without reverting the content in your user profiles. Even then, some file systems allow their descendant file systems to be rolled back independently, so it still may not be necessary to have a separate partition for /home. On ZFS, for example, one pool/partition can have multiple descendant file systems.
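
As a rough sketch (the pool and dataset names below are only examples), rolling back the root dataset on ZFS leaves a sibling home dataset in the same pool untouched:

$ sudo zfs list -r rpool                        # one pool with descendant file systems, e.g. rpool/root and rpool/home
$ sudo zfs snapshot rpool/root@pre-upgrade      # snapshot only the root dataset
$ sudo zfs rollback rpool/root@pre-upgrade      # revert root; rpool/home (your home directories) is unaffected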

The swap partition

The swap partition reserves space for the contents of RAM to be written to permanent storage. There are pros and cons to having a swap partition. A pro of having swap memory is that it theoretically gives you time to gracefully shutdown unneeded applications before the OOM killer takes matters into its own hands. This might be important if the system is running mission-critical software that you don’t want abruptly terminated. A con might be that your system runs so slow when it starts swapping memory to disk that you’d rather the OOM killer take care of the problem for you.

Another use for swap memory is hibernation mode. This might be where the rule that the swap partition should be twice the size of your computer’s RAM originated. Ideally, you should be able to put a system into hibernation even if nearly all of its RAM is in use. Beware that Linux’s support for hibernation is not perfect. It is not uncommon that after a Linux system is resumed from hibernation some hardware devices are left in an inoperable state (for example, no video from the video card or no internet from the WiFi card).

In any case, having a swap partition is more a matter of taste. It is not required.
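
If you later change your mind, adding swap afterwards is straightforward. A minimal sketch, assuming a spare partition at the placeholder device /dev/sdb2:

$ sudo mkswap /dev/sdb2                                              # write a swap signature to the partition
$ sudo swapon /dev/sdb2                                              # start using it right away
$ echo '/dev/sdb2 none swap defaults 0 0' | sudo tee -a /etc/fstab   # keep it across reboots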

The root partition

The root partition (/) is the catch-all for all directories that have not been assigned to a separate partition. There is always at least one root partition. BIOS-based systems that are new enough to not have the 1024 cylinder limit can be configured with only a root partition and no others so that there is never a need to resize a partition or file system if space requirements change.

The EFI system partition

The EFI System Partition (ESP) serves the same purpose on UEFI-based computers as the boot partition did on the older BIOS-based computers. It contains the boot loader and kernel. Because the files on the ESP need to be accessible by the computer’s firmware, the ESP has a few restrictions that the older boot partition did not have. The restrictions are:

  1. The ESP must be formatted with a FAT file system (vfat in Anaconda)
  2. The ESP must have a special type-code (EF00 when using gdisk)

Because the older boot partition did not have file system or type-code restrictions, it is permissible to apply the above properties to the boot partition and use it as your ESP. Note, however, that the GRUB boot loader does not support combining the boot and ESP partitions. If you use GRUB, you will have to create a separate partition and mount it beneath the /boot directory.
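
For example, here is a sketch of creating a partition that satisfies both restrictions using sgdisk, the scriptable counterpart to gdisk (the device name, partition number, and size are placeholders):

$ sudo sgdisk --new=1:0:+512M --typecode=1:EF00 /dev/sda   # new 512 MiB partition with the EF00 type-code
$ sudo mkfs.fat -F 32 /dev/sda1                            # format it with a FAT (vfat) file system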

The Boot Loader Specification (BLS) lists several reasons why it is ideal to use the legacy boot partition as your ESP. The reasons include:

  1. The UEFI firmware should be able to load the kernel directly. Having a separate, non-ESP compliant boot partition for the kernel prevents the UEFI firmware from being able to directly load the kernel.
  2. Nesting the ESP mount point three mount levels deep increases the likelihood that an intermediate mount could fail or otherwise be unavailable when needed. That is, requiring root (/), then boot (/boot), then efi (/efi) to be consecutively mounted is unnecessarily complex and prone to error.
  3. Requiring the boot loader to be able to read other partitions/disks which may be formatted with arbitrary file systems is non-trivial. Even when the boot loader does contain such code, the code that works at installation time can become outdated and fail to access the kernel/initrd after a file system update. This is currently true of GRUB’s ZFS file system driver, for example. You must be careful not to update your ZFS file system if you use the GRUB boot loader or else your system may not come back up the next time you reboot.

Besides the concerns listed above, it is a good idea to have your startup environment — up to and including your initramfs — on a single self-contained file system for recovery purposes. Suppose, for example, that you need to rollback your root file system because it has become corrupted or it has become infected with malware. If your kernel and initramfs are on the root file system, you may be unable to perform the recovery. By having the boot loader, kernel, and initramfs all on a single file system that is rarely accessed or updated, you can increase your chances of being able to recover the rest of your system.

In summary, there are many ways that you can lay out your partitions, and the type of hardware (BIOS or UEFI) and the brand of boot loader (GRUB, Syslinux or systemd-boot) are among the factors that will influence which layouts will work.

Other considerations

MBR vs. GPT

GUID Partition Table (GPT) is the newer partition format that supports larger disks. GPT was designed to work with the newer UEFI firmware. It is backward-compatible with the older Master Boot Record (MBR) partition format but not all boot loaders support the MBR boot method. GRUB and Syslinux support both MBR and UEFI, but systemd-boot only supports the newer UEFI boot method.

By using GPT now, you can increase the likelihood that your storage device, or an image of it, can be transferred over to a newer computer in the future should you wish to do so. If you have an older computer that natively supports only MBR-partitioned drives, you may need to add the inst.gpt parameter to Anaconda when starting the installer to get it to use the newer format. How to add the inst.gpt parameter is shown in the below video titled “Partitioning a BIOS Computer”.

If you use the GPT partition format on a BIOS-based computer, and you use the GRUB boot loader, you must additionally create a one megabyte biosboot partition at the start of your storage device. The biosboot partition is not needed by any other brand of boot loader. How to create the biosboot partition is demonstrated in the below video titled “Partitioning a BIOS Computer”.
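
If you are partitioning from the command line rather than from Anaconda, a sketch of the equivalent step with sgdisk (the device name is a placeholder; EF02 is gdisk's type code for a BIOS boot partition):

$ sudo sgdisk --new=1:0:+1M --typecode=1:EF02 /dev/sda   # one megabyte biosboot partition at the start of the disk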

LVM

One last thing to consider when manually partitioning your Linux system is whether to use standard partitions or logical volumes. Logical volumes are managed by the Logical Volume Manager (LVM). You can set up LVM volumes directly on your disk without first creating standard partitions to hold them. However, most computers still require that the boot partition be a standard partition and not an LVM volume. Consequently, having LVM volumes only increases the complexity of the system because the LVM volumes must be created within standard partitions.
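
To make the extra layer concrete, here is a minimal sketch of how LVM volumes sit inside a standard partition (the device, group, and volume names are placeholders):

$ sudo pvcreate /dev/sda2                    # turn the standard partition into an LVM physical volume
$ sudo vgcreate fedora_vg /dev/sda2          # group one or more physical volumes into a volume group
$ sudo lvcreate -n root -L 30G fedora_vg     # carve a logical volume out of the group
$ sudo mkfs.ext4 /dev/fedora_vg/root         # format it like any other block device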

The main features of LVM — online storage resizing and clustering — are not really applicable to the typical end user. Most laptops do not have hot-swappable drive bays for adding or reconfiguring storage while the system is running. And not many laptop or desktop users have clvmd configured so they can access a centralized storage device concurrently from multiple client computers.

LVM is great for servers and clusters. But it adds extra complexity for the typical end user. Go with standard partitions unless you are a server admin who needs the more advanced features.

Video demonstrations

Now that you know which partitions you need, you can watch the short video demonstrations below to see how to manually partition a Fedora Linux computer from the Anaconda installer.

These videos demonstrate creating only the minimally required partitions. You can add more if you choose.

Because the GRUB boot loader requires a more complex partition layout on UEFI systems, the below video titled “Partitioning a UEFI Computer” additionally demonstrates how to install the systemd-boot boot loader. By using the systemd-boot boot loader, you can reduce the number of needed partitions to just two — boot and root. How to use a boot loader other than the default (GRUB) with Fedora’s Anaconda installer is officially documented here.

Partitioning a UEFI Computer
Partitioning a BIOS Computer

Best of 2019: Fedora for system administrators

The end of the year is a perfect time to look back on some of the Magazine’s most popular articles of 2019. One of the Fedora operating system’s many strong points is its wide array of tools for system administrators. As your skills progress, you’ll find that the Fedora OS has even more to offer. And because Linux is the sysadmin’s best friend, you’ll always be in good company. In 2019, there were quite a few articles about sysadmin tools our readers enjoyed. Here’s a sampling.

Introducing Fedora CoreOS

If you follow modern IT topics, you know that containers are a hot topic — and containers mean Linux. This summer brought the first preview release of Fedora CoreOS. This new edition of Fedora can run containerized workloads. You can use it to deploy apps and services in a modern way.

InitRAMFS, dracut and the dracut emergency shell

To be a good sysadmin, you need to understand system startup and the boot process. From time to time, you’ll encounter software errors, configuration problems, or other issues that keep your system from starting normally. With the information in the article below, you can do some life-saving surgery on your system, and restore it to working order.

How to reset your root password

Although this article was published a few years ago, it continues to be one of the most popular. Apparently, we’re not the only people who sometimes get locked out of our own system! If this happens to you, and you need to reset the root password, the article below should do the trick.

Systemd: unit dependencies and order

This article is part of an entire series on systemd, the modern system and process manager in Fedora and other distributions. As you may know, systemd has sophisticated but easy-to-use methods to start up or shut down services in the right order. This article shows you how they work. That way you can apply the right options to unit files you create for systemd.

Setting kernel command line arguments

Fedora 30 introduced new ways to change the boot options for your kernel. This article from Laura Abbott on the Fedora kernel team explains the new Bootloader Spec (BLS). It also tells you how to use it to set options on your kernel for boot time.

Stay tuned to the Magazine for other upcoming “Best of 2019” categories. All of us at the Magazine hope you have a great end of year and holiday season.


Using Ansible to organize your SSH keys in AWS

If you’ve worked with instances in Amazon Web Services (AWS) for a long time, you may run into this common issue. It’s not technical, but more to do with the human nature of getting too comfortable. When you launch a new instance in a region you haven’t used recently, you may end up creating a new SSH key pair. This leads to having too many keys, which can become complicated and disordered.

This article shows you a way to have your public key in all regions. A recent Fedora Magazine article includes one solution. But the solution in this article is automated even further, and in a more concise and scalable way.

Say you have a Fedora 30 or 31 desktop system where your key is stored, and Ansible is installed as well. These two things together provide the solution to this problem and many more.

With Ansible’s ec2_key module, you can create a simple playbook that will maintain your SSH key pair in all regions. If you need to add or remove keys, it’s as simple as adding and removing lines from a file.

Setting up and running the playbook

To use the playbook, first install necessary dependencies for the ec2_key module:

$ sudo dnf install python3-boto python3-boto3

The playbook is simple: you need only to change your key and its name as in the example below. After that, run the playbook and it iterates over all the public AWS regions listed. The example also includes the restricted regions in case you have access. To include them, uncomment each line as needed, save the file, and then run the playbook again.

---
- name: Maintain an ssh key pair in ec2
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    ansible_python_interpreter: python
  tasks:
    - name: Make available your ssh public key in ec2 for new instances
      ec2_key:
        name: "YOUR KEY NAME GOES HERE"
        key_material: 'YOUR KEY GOES HERE'
        state: present
        region: "{{ item }}"
      with_items:
        - us-east-2        #US East (Ohio)
        - us-east-1        #US East (N. Virginia)
        - us-west-1        #US West (N. California)
        - us-west-2        #US West (Oregon)
        - ap-east-1        #Asia Pacific (Hong Kong)
        - ap-south-1       #Asia Pacific (Mumbai)
        - ap-northeast-2   #Asia Pacific (Seoul)
        - ap-southeast-1   #Asia Pacific (Singapore)
        - ap-southeast-2   #Asia Pacific (Sydney)
        - ap-northeast-1   #Asia Pacific (Tokyo)
        - ca-central-1     #Canada (Central)
        - eu-central-1     #EU (Frankfurt)
        - eu-west-1        #EU (Ireland)
        - eu-west-2        #EU (London)
        - eu-west-3        #EU (Paris)
        - eu-north-1       #EU (Stockholm)
        - me-south-1       #Middle East (Bahrain)
        - sa-east-1        #South America (Sao Paulo)
        # - us-gov-east-1    #AWS GovCloud (US-East)
        # - us-gov-west-1    #AWS GovCloud (US-West)
        # - ap-northeast-3   #Asia Pacific (Osaka-Local)
        # - cn-north-1       #China (Beijing)
        # - cn-northwest-1   #China (Ningxia)

This playbook requires AWS access via API, as well. To do this, use environment variables as follows:

$ AWS_ACCESS_KEY="aws-access-key-id" AWS_SECRET_KEY="aws-secret-key-id" ansible-playbook ec2-playbook.yml

Another option is to install the aws cli tools and add the credentials as explained in a previous Fedora Magazine article. It is not recommended to insert these values in the playbook if you store it anywhere online! You can find this playbook code on GitHub.

After the playbook finishes, confirm that your key is available on the AWS console. To do that:

  1. Log into your AWS console
  2. Go to EC2 > Key Pairs
  3. You should see your key listed. The only limitation is that you have to check region-by-region with this method.

Another way is to use a quick command in a shell to do this check for you.

First, create a variable containing all of the regions listed in the playbook:

AWS_REGION="us-east-1 us-west-1 us-west-2 ap-east-1 ap-south-1 ap-northeast-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 ca-central-1 eu-central-1 eu-west-1 eu-west-2 eu-west-3 eu-north-1 me-south-1 sa-east-1"

Then run a for loop over those regions to query the AWS API:

for each in ${AWS_REGION} ; do aws ec2 describe-key-pairs --key-name <YOUR KEY GOES HERE> --region ${each} ; done

Keep in mind that to do the above you need to have the aws cli installed.


A quick introduction to Toolbox on Fedora

Toolbox allows you to sort and manage your development environments in containers without requiring root privileges or manually attaching volumes. It creates a container where you can install your own CLI tools, without installing them on the base system itself. You can also utilize it when you do not have root access or cannot install programs directly. This article gives you an introduction to toolbox and what it does.

Installing Toolbox

Silverblue includes Toolbox by default. For the Workstation and Server editions, you can grab it from the default repositories using dnf install toolbox.

Creating Toolboxes

Open your terminal and run toolbox enter. The utility will automatically request permission to download the latest image, create your first container, and place your shell inside this container.

$ toolbox enter
No toolbox containers found. Create now? [y/N] y
Image required to create toolbox container.
Download registry.fedoraproject.org/f30/fedora-toolbox:30 (500MB)? [y/N]: y

Currently there is no difference between the toolbox and your base system. Your filesystems and packages appear unchanged. Here is an example using a repository that contains documentation source for a resume under a ~/src/resume folder. The resume is built using the pandoc tool.

$ pwd
/home/rwaltr
$ cd src/resume/
$ head -n 5 Makefile
all: pdf html rtf text docx

pdf: init
	pandoc -s -o BUILDS/resume.pdf markdown/*
$ make pdf
bash: make: command not found
$ pandoc -v
bash: pandoc: command not found

This toolbox does not have the programs required to build the resume. You can remedy this by installing the tools with dnf. You will not be prompted for the root password, because you are running in a container.

$ sudo dnf groupinstall "Authoring and Publishing" -y && sudo dnf install pandoc make -y
...
$ make all #Successful builds
mkdir -p BUILDS
pandoc -s -o BUILDS/resume.pdf markdown/*
pandoc -s -o BUILDS/resume.html markdown/*
pandoc -s -o BUILDS/resume.rtf markdown/*
pandoc -s -o BUILDS/resume.txt markdown/*
pandoc -s -o BUILDS/resume.docx markdown/*
$ ls BUILDS/
resume.docx resume.html resume.pdf resume.rtf resume.txt

Run exit at any time to exit the toolbox.

$ cd BUILDS/
$ pandoc --version || ls
pandoc 2.2.1
Compiled with pandoc-types 1.17.5.4, texmath 0.11.1.2, skylighting 0.7.5
...
for a particular purpose.
resume.docx resume.html resume.pdf resume.rtf resume.txt
$ exit
logout
$ pandoc --version || ls
bash: pandoc: command not found...
resume.docx resume.html resume.pdf resume.rtf resume.txt

You retain the files created by your toolbox in your home directory. None of the programs installed in your toolbox will be available outside of it.

Tips and tricks

This introduction to toolbox only scratches the surface. Here are some additional tips, but you can also check out the official documentation.

  • toolbox --help will show you the man page for Toolbox
  • You can have multiple toolboxes at once. Use toolbox create -c Toolboxname and toolbox enter -c Toolboxname
  • Toolbox uses Podman to do the heavy lifting. Use toolbox list to find the IDs of the containers Toolbox creates. Podman can use these IDs to perform actions such as rm and stop; see the example after this list. (You can also read more about Podman in this Magazine article.)
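
A quick sketch of those commands together (the toolbox name and container ID are placeholders):

$ toolbox create -c my-second-toolbox    # create an additional, named toolbox
$ toolbox enter -c my-second-toolbox     # open a shell inside it
$ toolbox list                           # list toolbox images and containers, including their IDs
$ podman stop <CONTAINER ID>             # stop a toolbox container by its ID
$ podman rm <CONTAINER ID>               # remove it entirely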

Photo courtesy of Florian Richter from Flickr.


Create virtual machines with Cockpit in Fedora

This article shows you how to install the software you need to use Cockpit to create and manage virtual machines on Fedora 31. Cockpit is an interactive admin interface that lets you access and manage systems from any supported web browser. With virt-manager being deprecated, users are encouraged to use Cockpit, which is meant to replace it.

Cockpit is an actively developed project, with many plugins available that extend how it works. For example, one such plugin is “Machines,” which interacts with libvirtd and lets users create and manage virtual machines.

Installing software

The required software prerequisites are libvirt, cockpit and cockpit-machines. To install them on Fedora 31, run the following command from a terminal using sudo:

$ sudo dnf install libvirt cockpit cockpit-machines

Cockpit is also included as part of the “Headless Management” package group. This group is useful for a Fedora-based server that you only access through a network. In that case, to install it, use this command:

$ sudo dnf groupinstall "Headless Management"

Setting up Cockpit services

After installing the necessary packages it’s time to enable the services. The libvirtd service runs the virtual machines, while Cockpit has a socket activated service to let you access the Web GUI:

$ sudo systemctl enable libvirtd --now
$ sudo systemctl enable cockpit.socket --now

This should be enough to run virtual machines and manage them through Cockpit. Optionally, if you want to access and manage your machine from another device on your network, you need to expose the service to the network. To do this, add a new rule in your firewall configuration:

$ sudo firewall-cmd --zone=public --add-service=cockpit --permanent
$ sudo firewall-cmd --reload

To confirm the services are running and no issues occurred, check the status of the services:

$ sudo systemctl status libvirtd
$ sudo systemctl status cockpit.socket

At this point everything should be working. The Cockpit web GUI should be available at https://localhost:9090 or https://127.0.0.1:9090. Or, enter the local network IP in a web browser on any other device connected to the same network. (Without SSL certificates set up, you may need to allow a connection from your browser.)

Creating and installing a machine

Log into the interface using the user name and password for that system. You can also choose whether to allow your password to be used for administrative tasks in this session.

Select Virtual Machines and then select Create VM to build a new box. The console gives you several options:

  • Download an OS using Cockpit’s built in library
  • Use install media already downloaded on the system you’re managing
  • Point to a URL for an OS installation tree
  • Boot media over the network via the PXE protocol

Enter all the necessary parameters. Then select Create to power up the new virtual machine.

At this point, a graphical console appears. Most modern web browsers let you use your keyboard and mouse to interact with the VM console. Now you can complete your installation and use your new VM, just as you would via virt-manager in the past.


Photo by Miguel Teixeira on Flickr (CC BY-SA 2.0).


Build a virtual private network with Wireguard

Wireguard is a new VPN designed as a replacement for IPSec and OpenVPN. Its design goal is to be simple and secure, and it takes advantage of recent technologies such as the Noise Protocol Framework. Some consider Wireguard’s ease of configuration akin to OpenSSH. This article shows you how to deploy and use it.

It is currently in active development, so it might not be the best for production machines. However, Wireguard is under consideration to be included into the Linux kernel. The design has been formally verified,* and proven to be secure against a number of threats.

When deploying Wireguard, keep your Fedora Linux system updated to the most recent version, since Wireguard does not have a stable release cadence.

Set the timezone

To check and set your timezone, first display current time information:

timedatectl

Then if needed, set the correct timezone, for example to Europe/London.

timedatectl set-timezone Europe/London

Note that your system’s real time clock (RTC) may continue to be set to UTC or another timezone.

Install Wireguard

To install, enable the COPR repository for the project and then install with dnf, using sudo:

$ sudo dnf copr enable jdoss/wireguard
$ sudo dnf install wireguard-dkms wireguard-tools

Once installed, two new commands become available, along with support for systemd:

  • wg: Configuration of wireguard interfaces
  • wg-quick: Bringing up the VPN tunnels

Create the configuration directory for Wireguard, and apply a umask of 077. A umask of 077 allows read, write, and execute permission for the file’s owner (root), but prohibits read, write, and execute permission for everyone else.

mkdir /etc/wireguard
cd /etc/wireguard
umask 077

Generate Key Pairs

Generate the private key, then derive the public key from it.

$ wg genkey > /etc/wireguard/privkey
$ wg pubkey < /etc/wireguard/privkey > /etc/wireguard/publickey

Alternatively, this can be done in one go:

wg genkey | tee /etc/wireguard/privatekey | wg pubkey > /etc/wireguard/publickey

There is a vanity address generator, which might be of interest to some. You can also generate a pre-shared key to provide a level of quantum protection:

wg genpsk > psk

This will be the same value for both the server and client, so you only need to run the command once.

Configure Wireguard server and client

Both the client and server have an [Interface] option to specify the IP address assigned to the interface, along with the private keys.

Each peer (server and client) has a [Peer] section containing its respective PublicKey, along with the PresharedKey. Additionally, this block can list allowed IP addresses which can use the tunnel.

Server

A firewall rule is added when the interface is brought up, along with enabling masquerading. Make sure to note the /24 IPv4 address range within Interface, which differs from the client. Edit the /etc/wireguard/wg0.conf file as follows, using the IP address for your server for Address, and the client IP address in AllowedIPs.

[Interface]
Address = 192.168.2.1/24, fd00:7::1/48
PrivateKey = <SERVER_PRIVATE_KEY>
PostUp = firewall-cmd --zone=public --add-port 51820/udp && firewall-cmd --zone=public --add-masquerade
PostDown = firewall-cmd --zone=public --remove-port 51820/udp && firewall-cmd --zone=public --remove-masquerade
ListenPort = 51820

[Peer]
PublicKey = <CLIENT_PUBLIC_KEY>
PresharedKey = LpI+UivLx1ZqbzjyRaWR2rWN20tbBsOroNdNnjKLMQ=
AllowedIPs = 192.168.2.2/32, fd00:7::2/48

Allow forwarding of IP packets by adding the following to /etc/sysctl.conf:

net.ipv4.ip_forward=1
net.ipv6.conf.all.forwarding=1

Load the new settings:

$ sysctl -p

Forwarding will be preserved after a reboot.

Client

The client is very similar to the server config, but has an optional additional entry of PersistentKeepalive set to 30 seconds. This is to prevent NAT from causing issues, and depending on your setup might not be needed. Setting AllowedIPs to 0.0.0.0/0 will forward all traffic over the tunnel. Edit the client’s /etc/wireguard/wg0.conf file as follows, using your client’s IP address for Address and the server IP address at the Endpoint.

[Interface]
Address = 192.168.2.2/32, fd00:7::2/48
PrivateKey = <CLIENT_PRIVATE_KEY>

[Peer]
PublicKey = <SERVER_PUBLIC_KEY>
PresharedKey = LpI+UivLx1ZqbzjyRaWR2rWN20tbBsOroNdNnjKLMQ=
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = <SERVER_IP>:51820
PersistentKeepalive = 30

Test Wireguard

Start and check the status of the tunnel on both the server and client:

$ systemctl start wg-quick@wg0
$ systemctl status wg-quick@wg0

To test the connections, try the following:

ping google.com
ping6 ipv6.google.com

Then check external IP addresses:

dig +short myip.opendns.com @resolver1.opendns.com
dig +short -6 myip.opendns.com aaaa @resolver1.ipv6-sandbox.opendns.com

* “Formally verified,” in this sense, means that the design has been proved to have mathematically correct messages and key secrecy, forward secrecy, mutual authentication, session uniqueness, channel binding, and resistance against replay, key compromise impersonation, and denial of service attacks.


Photo by Black Zheng on Unsplash.


Using SSH port forwarding on Fedora

You may already be familiar with using the ssh command to access a remote system. The protocol behind ssh allows terminal input and output to flow through a secure channel. But did you know that you can also use ssh to send and receive other data securely as well? One way is to use port forwarding, which allows you to connect network ports securely while conducting your ssh session. This article shows you how it works.

About ports

A standard Linux system has a set of network ports already assigned, from 0-65535. Your system reserves ports up to 1023 for system use. On most systems you can’t bind to one of these low-numbered ports unless you have root privileges. Quite a few ports are commonly expected to run specific services. You can find these defined in your system’s /etc/services file.
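
For example, on a typical system you can look up the conventional web server ports there:

$ grep -wE 'http|https' /etc/services    # shows, among others, 80/tcp and 443/tcp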

You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as httpd). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also “wiring” your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.

So why use port forwarding when you have so many ports available? Here are a couple common cases in the life of a web developer.

Local port forwarding

Imagine that you are doing web development on a remote system called remote.example.com. You usually reach this system via ssh but it’s behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it’s helpful to be able to use your web browser to point to the remote system. But you can’t reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.

Local forwarding allows you to tunnel a port available via the remote system through your ssh connection. The port appears as a local port on your system (thus “local forwarding.”)

Let’s say your web app is running on port 8000 on the remote.example.com box. To locally forward that system’s port 8000 to your system’s port 8000, use the -L option with ssh when you start your session:

$ ssh -L 8000:localhost:8000 remote.example.com

Wait, why did we use localhost as the target for forwarding? It’s because from the perspective of remote.example.com, you’re asking the host to use its own port 8000. (Recall that any host usually can refer to itself as localhost to connect to itself via a network connection.) That port now connects to your system’s port 8000. Once the ssh session is ready, keep it open, and you can type http://localhost:8000 in your browser to see your web app. The traffic between systems now travels securely over an ssh tunnel!

If you have a sharp eye, you may have noticed something. What if we used a different hostname than localhost for the remote.example.com to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the db.example.com box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can’t ssh to the actual db.example.com host:

$ ssh -L 3306:db.example.com:3306 remote.example.com

Now you can run MariaDB commands against your localhost and you’re actually using the db.example.com box.
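
For example (a sketch; the user and database names are made up), point the client at the forwarded local port:

$ mysql -h 127.0.0.1 -P 3306 -u appuser -p mydb   # 127.0.0.1 forces a TCP connection to the forwarded port; the traffic ends up at db.example.com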

Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you’re designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you’re working in a coffee shop, and because of the network setup, they can’t reach your laptop via a network connection. However, you both use the remote.example.com system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your ssh connection, and make it available on the remote system. Just use the -R option when you start your ssh session:

$ ssh -R 6000:localhost:5000 remote.example.com

Now when your friend inside the corporate firewall runs their browser, they can point it at http://remote.example.com:6000 and see your work. And as in the local port forwarding example, the communications travel securely over your ssh session.

By default the sshd daemon running on a host is set so that only that host can connect to its remote forwarded ports. Let’s say your friend wanted to be able to let people on other example.com corporate hosts see your work, and they weren’t on remote.example.com itself. You’d need the owner of the remote.example.com host to add one of these options to /etc/ssh/sshd_config on that box:

GatewayPorts yes # OR
GatewayPorts clientspecified

The first option means remote forwarded ports are available on all the network interfaces on remote.example.com. The second means that the client who sets up the tunnel gets to choose the address. This option is set to no by default.

With this option, you as the ssh client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:

$ ssh -R *:6000:localhost:5000 remote.example.com # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000 remote.example.com # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000 remote.example.com # single network
$ ssh -R remote.example.com:6000:localhost:5000 remote.example.com # single network

Other notes

Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system port in a default setup.

In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The PermitOpen option for the sshd daemon controls whether, and which, ports are available for TCP forwarding. The default setting is any, which allows all the examples above to work. To disallow any port forwarding, choose none, or choose only a specific host:port setting to permit. For more information, search for PermitOpen in the manual page for sshd daemon configuration:

$ man sshd_config

Finally, remember port forwarding only happens as long as the controlling ssh session is open. If you need to keep the forwarding active for a long period, try running the session in the background with the -f option, adding -N so that no remote command is run and the session does nothing but forward ports. Make sure your console is locked to prevent tampering while you’re away from it.
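
For instance, reusing the local forwarding example from earlier, a long-running background-only tunnel looks like this:

$ ssh -f -N -L 8000:localhost:8000 remote.example.com   # no remote shell; ssh backgrounds itself and just forwards the port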


How to set up an anonymous FTP download server

Sometimes you may not need to set up a full FTP server with authenticated users with upload and download privileges. If you are simply looking for a quick way to allow users to grab a few files, an anonymous FTP server can fit the bill. This article shows you how to set it up.

This example uses the vsftp server.

Installing and configuring the anonymous FTP server

Install the vsftp server using sudo:

$ sudo dnf install vsftpd

Enable the vsftp server.

$ sudo systemctl enable vsftpd

Next, edit your /etc/vsftpd/vsftpd.conf file to allow anonymous downloads. Make sure you have the following entries.

anonymous_enable=YES

This option controls whether anonymous logins are permitted or not. If enabled, both the usernames ftp and anonymous are recognized as anonymous logins.

local_enable=NO

This option controls whether local logins are permitted.

write_enable=NO

This option controls whether any FTP commands which change the filesystem are allowed.

no_anon_password=YES

When enabled, this option prevents vsftpd from asking for an anonymous password. With this setting, the anonymous user will log straight in without one.

hide_ids=YES

Enable this option to display all user and group information in directory listings as ftp.

pasv_min_port=40000
pasv_max_port=40001

Finally, these options set the minimum and maximum port to allocate for PASV style data connections. Use them to specify a narrow port range to assist firewalling. You should choose a range for ports that aren’t currently in use. This example uses ports 40000-40001 to limit the range to just two ports.
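
For convenience, here are all of the entries discussed above collected in one place, as they would appear in /etc/vsftpd/vsftpd.conf:

anonymous_enable=YES
local_enable=NO
write_enable=NO
no_anon_password=YES
hide_ids=YES
pasv_min_port=40000
pasv_max_port=40001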

Final steps

Now that you’ve set the options, add the appropriate firewall rules to allow vsftp connections along with the passive port range you specified.

$ sudo firewall-cmd --add-service=ftp --permanent
$ sudo firewall-cmd --add-port=40000-40001/tcp --permanent
$ sudo firewall-cmd --reload

Next, configure SELinux to allow passive FTP:

$ sudo setsebool -P ftpd_use_passive_mode on

And finally, start the vsftp server:

$ sudo systemctl start vsftpd

At this point you have a working FTP server. Place the content you want to offer in /var/ftp. (Typically, system administrators put publicly downloadable content under /var/ftp/pub.) Now you can connect to your server using an FTP client on another system.
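
As a quick sanity check (the host name and file are placeholders), curl can download from the server anonymously:

$ curl -O ftp://ftp.example.com/pub/somefile.txt   # curl logs in as the anonymous ftp user and saves the file locally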


Image courtesy of Tom Woodward on Flickr, CC-BY-SA 2.0.


Fedora and CentOS Stream

From the desk of the Fedora Project Leader:

Hi everyone! You may have seen the announcement about changes over at the CentOS Project. (If not, please go ahead and take a few minutes and read it — I’ll wait!) And now you may be wondering: if CentOS is now upstream of RHEL, what happens to Fedora? Isn’t that Fedora’s role in the Red Hat ecosystem?

First, don’t worry. There are changes to the big picture, but they’re all for the better.

If you’ve been following the conference talks from Red Hat Enterprise Linux leadership about the relationship between Fedora, CentOS, and RHEL, you have heard about “the Penrose Triangle”. That’s a shape like something from an M. C. Escher drawing: it’s impossible in real life!

We’ve been thinking for a while that maybe impossible geometry is not actually the best model. 

For one thing, the imagined flow where contributions at the end would flow back into Fedora and grow in a “virtuous cycle” never actually worked that way. That’s a shame, because there’s a huge, awesome CentOS community and many great people working on it — and there’s a lot of overlap with the Fedora community too. We’re missing out.

But that gap isn’t the only one: there’s not really been a consistent flow between the projects and product at all. So far, the process has gone like this: 

  1. Some time after the previous RHEL release, Red Hat would suddenly turn more attention to Fedora than usual.
  2. A few months later, Red Hat would split off a new RHEL version, developed internally.
  3. After some months, that’d be put into the world, including all of the source — from which CentOS is built. 
  4. Source drops continue for updates, and sometimes those updates include patches that were in Fedora — but there’s no visible connection.

Each step here has its problems: intermittent attention, closed-door development, blind drops, and little ongoing transparency. But now Red Hat and CentOS Project are fixing that, and that’s good news for Fedora, too.

Fedora will remain the first upstream of RHEL. It’s where every RHEL came from, and is where RHEL 9 will come from, too. But after RHEL branches off, CentOS will be upstream for ongoing work on those RHEL versions. I like to call it “the midstream”, but the marketing folks somehow don’t, so that’s going to be called “CentOS Stream”.

We — Fedora, CentOS, and Red Hat — still need to work out all of the technical details, but the idea is that these branches will live in the same package source repository. (The current plan is to make a “src.centos.org” with a parallel view of the same data as src.fedoraproject.org). This change gives public visibility into ongoing work on released RHEL, and a place for developers and Red Hat’s partners to collaborate at that level.

CentOS SIGs — the special interest groups for virtualization, storage, config management and so on — will do their work in shared space right next to Fedora branches. This will allow much easier collaboration and sharing between the projects, and I’m hoping we’ll even be able to merge some of our similar SIGs to work together directly. Fixes from Fedora packages can be cherry-picked into the CentOS “midstream” ones — and where useful, vice versa.

Ultimately, Fedora, CentOS, and RHEL are part of the same big project family. This new, more natural flow opens possibilities for collaboration which were locked behind artificial (and extra-dimensional!) barriers. I’m very excited for what we can now do together!

— Matthew Miller, Fedora Project Leader