
Getting started with Fedora CoreOS

This has been called the age of DevOps, and operating systems seem to be getting a bit less attention than the tools around them. However, this doesn’t mean that there has been no innovation in operating systems. [Edit: The diversity of offerings from the plethora of distributions based on the Linux kernel is a fine example of this.] Fedora CoreOS has a specific philosophy of what an operating system should be in this age of DevOps.

Fedora CoreOS’ philosophy

Fedora CoreOS (FCOS) came from the merging of CoreOS Container Linux and Fedora Atomic Host. It is a minimal and monolithic OS focused on running containerized applications. With security as a first-class citizen, FCOS provides automatic updates and comes with SELinux hardening.

For automatic updates to work well, they need to be very robust; the goal is that servers running FCOS won’t break after an update. This is achieved by using different release streams (stable, testing and next). Each stream is released every two weeks and content is promoted from one stream to the other (next -> testing -> stable). That way, updates landing in the stable stream have had the opportunity to be tested over a longer period of time.

Getting Started

For this example let’s use the stable stream and a QEMU base image that we can run as a virtual machine. You can use coreos-installer to download that image.

From your (Workstation) terminal, run the following commands after updating the link to the image. [Edit: On Silverblue the container based coreos tools are the simplest method to try. Instructions can be found at https://docs.fedoraproject.org/en-US/fedora-coreos/tutorial-setup/ , in particular “Setup with Podman or Docker”.]

$ sudo dnf install coreos-installer
$ coreos-installer download --image-url https://builds.coreos.fedoraproject.org/prod/streams/stable/builds/32.20200907.3.0/x86_64/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ xz -d fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2.xz
$ ls
fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2
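
The commands above pin a specific stable build. If you would rather fetch the latest image of a given stream in one step, coreos-installer can download and decompress it for you. A minimal sketch, assuming current coreos-installer options:

$ coreos-installer download -s testing -p qemu -f qcow2.xz --decompress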

Create a configuration

To customize a FCOS system, you need to provide a configuration file that will be used by Ignition to provision the system. You may use this file to configure things like creating a user, adding a trusted SSH key, enabling systemd services, and more.

The following configuration creates a ‘core’ user and adds an SSH key to the authorized_keys file. It also creates a systemd service that uses podman to run a simple hello world container.

version: "1.0.0"
variant: fcos
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 my_public_ssh_key_hash fcos_key
systemd:
  units:
    - contents: |
        [Unit]
        Description=Run a hello world web service
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStart=/bin/podman run --pull=always --name=hello --net=host -p 8080:8080 quay.io/cverna/hello
        ExecStop=/bin/podman rm -f hello

        [Install]
        WantedBy=multi-user.target
      enabled: true
      name: hello.service

After adding your SSH key to the configuration, save it as config.yaml. Next, use the Fedora CoreOS Config Transpiler (fcct) tool to convert this YAML configuration into a valid Ignition configuration (JSON format).

Install fcct directly from Fedora’s repositories or get the binary from GitHub.

$ sudo dnf install fcct
$ fcct -output config.ign config.yaml
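
If you have the ignition-validate tool available (it ships with the Ignition project; exact packaging varies), it is worth a quick sanity check of the generated file before booting anything with it. A minimal sketch:

$ ignition-validate config.ign && echo "config.ign is valid"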

Install and run Fedora CoreOS

To run the image, you can use the libvirt stack. To install it on a Fedora system using the dnf package manager:

$ sudo dnf install @virtualization

Now let’s create and run a Fedora CoreOS virtual machine:

$ chcon --verbose unconfined_u:object_r:svirt_home_t:s0 config.ign
$ virt-install --name=fcos \
--vcpus=2 \
--ram=2048 \
--import \
--network=bridge=virbr0 \
--graphics=none \
--qemu-commandline="-fw_cfg name=opt/com.coreos/config,file=${PWD}/config.ign" \
--disk=size=20,backing_store=${PWD}/fedora-coreos-32.20200907.3.0-qemu.x86_64.qcow2

Once the installation is successful, some information is displayed and a login prompt is provided.

Fedora CoreOS 32.20200907.3.0
Kernel 5.8.10-200.fc32.x86_64 on an x86_64 (ttyS0)
SSH host key: SHA256:BJYN7AQZrwKZ7ZF8fWSI9YRhI++KMyeJeDVOE6rQ27U (ED25519)
SSH host key: SHA256:W3wfZp7EGkLuM3z4cy1ZJSMFLntYyW1kqAqKkxyuZrE (ECDSA)
SSH host key: SHA256:gb7/4Qo5aYhEjgoDZbrm8t1D0msgGYsQ0xhW5BAuZz0 (RSA)
ens2: 192.168.122.237 fe80::5054:ff:fef7:1a73
Ignition: user provided config was applied
Ignition: wrote ssh authorized keys file for user: core

The Ignition configuration file did not provide any password for the core user, therefore it is not possible to login directly via the console. (Though, it is possible to configure a password for users via Ignition configuration.)

Use the Ctrl + ] key combination to exit the virtual machine’s console. Then check if the hello.service is running.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

Using the preconfigured SSH key, you can also access the VM and inspect the services running on it.

$ ssh [email protected]
$ systemctl status hello
● hello.service - Run a hello world web service
Loaded: loaded (/etc/systemd/system/hello.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 10:10:26 UTC; 42s ago

zincati, rpm-ostree and automatic updates

The zincati service drives rpm-ostreed to apply automatic updates. Check which version of Fedora CoreOS is currently running on the VM, and check whether zincati has found an update.

$ ssh [email protected]
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
$ systemctl status zincati
● zincati.service - Zincati Update Agent
Loaded: loaded (/usr/lib/systemd/system/zincati.service; enabled; vendor preset: enabled)
Active: active (running) since Wed 2020-10-28 13:36:23 UTC; 7s ago
…
Oct 28 13:36:24 cosa-devsh zincati[1013]: [INFO ] initialization complete, auto-updates logic enabled
Oct 28 13:36:25 cosa-devsh zincati[1013]: [INFO ] target release '32.20201004.3.0' selected, proceeding to stage it ... zincati reboot ...
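
By default Zincati finishes staging an update and reboots immediately. If you prefer reboots to happen only inside a maintenance window, its strategy can be changed with a drop-in file. A sketch based on Zincati’s periodic strategy; the window values are only an illustration:

# /etc/zincati/config.d/55-updates-strategy.toml
[updates]
strategy = "periodic"

[[updates.periodic.window]]
days = [ "Sat", "Sun" ]
start_time = "23:30"
length_minutes = 60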

After the restart, let’s log in remotely once more to check the new version of Fedora CoreOS.

$ ssh [email protected]
$ rpm-ostree status
State: idle
Deployments:
● ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20201004.3.0 (2020-10-19T17:12:33Z)
Commit: 64bb377ae7e6949c26cfe819f3f0bd517596d461e437f2f6e9f1f3c24376fd30
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0
ostree://fedora:fedora/x86_64/coreos/stable
Version: 32.20200907.3.0 (2020-09-23T08:16:31Z)
Commit: b53de8b03134c5e6b683b5ea471888e9e1b193781794f01b9ed5865b57f35d57
GPGSignature: Valid signature by 97A1AE57C3A2372CCA3A4ABA6C13026D12C944D0

rpm-ostree status now shows two versions of Fedora CoreOS: the one that came in the QEMU image, and the latest one received from the update. With both versions available, it is possible to roll back to the previous version using the rpm-ostree rollback command.
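
For example, if the new release misbehaved, the following would switch back to the previous deployment and reboot into it (a sketch):

$ sudo rpm-ostree rollback --reboot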

Finally, you can make sure that the hello service is still running and serving content.

$ curl http://192.168.122.237:8080
Hello from Fedora CoreOS!

More information: Fedora CoreOS updates

Deleting the Virtual Machine

To clean up afterwards, the following commands will delete the VM and associated storage.

$ virsh destroy fcos
$ virsh undefine --remove-all-storage fcos

Conclusion

Fedora CoreOS provides a solid and secure operating system tailored to run applications in containers. It excels in a DevOps environment where hosts are provisioned using declarative configuration files. Automatic updates and the ability to roll back to a previous version of the OS bring peace of mind during the operation of a service.

Learn more about Fedora CoreOS by following the tutorials available in the project’s documentation.


Podman with capabilities on Fedora

Containerization is a booming technology. As many as seventy-five percent of global organizations could be running some type of containerization technology in the near future. Since widely used technologies are more likely to be targeted by hackers, securing containers is especially important. This article will demonstrate how POSIX capabilities are used to secure Podman containers. Podman is the default container management tool in RHEL8.

Determine the Podman container’s privilege mode

Containers run in either privileged or unprivileged mode. In privileged mode, the container uid 0 is mapped to the host’s uid 0. For some use cases, unprivileged containers lack sufficient access to the resources of the host machine. Technologies and techniques including Mandatory Access Control (apparmor, SELinux), seccomp filters, dropping of capabilities, and namespaces help to secure containers regardless of their mode of operation.

To determine the privilege mode from outside the container:

$ podman inspect --format="{{.HostConfig.Privileged}}" <container id>

If the above command returns true then the container is running in privileged mode. If it returns false then the container is running in unprivileged mode.

To determine the privilege mode from inside the container:

$ ip link add dummy0 type dummy

If this command allows you to create an interface then you are running a privileged container. Otherwise you are running an unprivileged container.

Capabilities

Namespaces isolate a container’s processes from arbitrary access to the resources of its host and from access to the resources of other containers running on the same host. Processes within privileged containers, however, might still be able to do things like alter the IP routing table, trace arbitrary processes, and load kernel modules. Capabilities allow one to apply finer-grained restrictions on what resources the processes within a container can access or alter, even when the container is running in privileged mode. Capabilities also allow one to assign privileges to an unprivileged container that it would not otherwise have.

For example, to add the NET_ADMIN capability to an unprivileged container so that a network interface can be created inside of the container, you would run podman with parameters similar to the following:

[root@vm1 ~]# podman run -it --cap-add=NET_ADMIN centos
[root@b27fea33ccf1 /]# ip link add dummy0 type dummy
[root@b27fea33ccf1 /]# ip link

The above commands demonstrate a dummy0 interface being created in an unprivileged container that was granted the NET_ADMIN capability. Without the NET_ADMIN capability, an unprivileged container would not be able to create an interface.
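
You can also check from the host which capabilities a running container actually holds. A sketch, assuming the EffectiveCaps field present in current podman inspect output:

$ podman inspect --format "{{.EffectiveCaps}}" <container id>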

Currently, there are about 39 capabilities that can be granted or denied. Privileged containers are granted many capabilities by default. It is advisable to drop unneeded capabilities from privileged containers to make them more secure.

To drop all capabilities from a container:

$ podman run -it -d --name mycontainer --cap-drop=all centos

To list a container’s capabilities:

$ podman exec -it 48f11d9fa512 capsh --print

The above command should show that no capabilities are granted to the container.

Refer to the capabilities man page for a complete list of capabilities:

$ man capabilities

Use the capsh command to list the capabilities you currently possess:

$ capsh --print

As another example, the below command demonstrates dropping the NET_RAW capability from a container. Without the NET_RAW capability, servers on the internet cannot be pinged from within the container.

$ podman run -it --name mycontainer1 --cap-drop=net_raw centos
>>> ping google.com (will output error, operation not permitted)

As a final example, if your container were to only need the SETUID and SETGID capabilities, you could achieve such a permission set by dropping all capabilities and then re-adding only those two.

$ podman run -d --cap-drop=all --cap-add=setuid --cap-add=setgid fedora sleep 5 > /dev/null; pscap | grep sleep

The pscap command shown above should show the capabilities that have been granted to the container.
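
If the pscap command is not available, it is provided by the libcap-ng-utils package (an assumption based on current Fedora packaging):

$ sudo dnf install libcap-ng-utils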

I hope you enjoyed this brief exploration of how capabilities are used to secure Podman containers.

Thank You!


Getting started with Stratis encryption

Stratis is described on its official website as an “easy to use local storage management for Linux.” See this short video for a quick demonstration of the basics. The video was recorded on a Red Hat Enterprise Linux 8 system. The concepts shown in the video also apply to Stratis in Fedora.

Stratis version 2.1 introduces support for encryption. Continue reading to learn how to get started with encryption in Stratis.

Prerequisites

Encryption requires Stratis version 2.1 or greater. The examples in this post use a pre-release of Fedora 33. Stratis 2.1 will be available in the final release of Fedora 33.

You’ll also need at least one available block device to create an encrypted pool. The examples shown below were done on a KVM virtual machine with a 5 GB virtual disk drive (/dev/vdb).

Create a key in the kernel keyring

The Linux kernel keyring is used to store the encryption key. For more information on the kernel keyring, refer to the keyrings manual page (man keyrings).  

Use the stratis key set command to set up the key within the kernel keyring. You must specify where the key should be read from. To read the key from standard input, use the --capture-key option. To retrieve the key from a file, use the --keyfile-path <file> option. The last parameter is a key description. It will be used later when you create the encrypted Stratis pool.

For example, to create a key with the description pool1key, and to read the key from standard input, you would enter:

# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:

The command prompts us to type the key data / passphrase, and the key is then created within the kernel keyring.  

To verify that the key was created, run stratis key list:

# stratis key list
Key Description
pool1key

This verifies that the pool1key was created. Note that these keys are not persistent. If the host is rebooted, the key will need to be provided again before the encrypted Stratis pool can be accessed (this process is covered later).

If you have multiple encrypted pools, they can each have a separate key, or they can share the same key.

The keys can also be viewed using the following keyctl commands:

# keyctl get_persistent @s
318044983
# keyctl show
Session Keyring
 701701270 --alswrv      0     0  keyring: _ses
 649111286 --alswrv      0 65534   \_ keyring: _uid.0
 318044983 ---lswrv      0 65534   \_ keyring: _persistent.0
1051260141 --alswrv      0     0       \_ user: stratis-1-key-pool1key

Create the encrypted Stratis pool

Now that a key has been created for Stratis, the next step is to create the encrypted Stratis pool. Encrypting a pool can only be done at pool creation. It isn’t currently possible to encrypt an existing pool.

Use the stratis pool create command to create a pool. Add --key-desc and the key description that you provided in the previous step (pool1key). This will signal to Stratis that the pool should be encrypted using the provided key. The below example creates the Stratis pool on /dev/vdb, and names it pool1. Be sure to specify an empty/available device on your system.

# stratis pool create --key-desc pool1key pool1 /dev/vdb

You can verify that the pool has been created with the stratis pool list command:

# stratis pool list 
Name                     Total Physical   Properties
pool1   4.98 GiB / 37.63 MiB / 4.95 GiB      ~Ca, Cr

In the sample output shown above, ~Ca indicates that caching is disabled (the tilde negates the property). Cr indicates that encryption is enabled.  Note that caching and encryption are mutually exclusive. Both features cannot be simultaneously enabled.

Next, create a filesystem. The below example demonstrates creating a filesystem named filesystem1, mounting it at the /filesystem1 mountpoint, and creating a test file in the new filesystem:

# stratis filesystem create pool1 filesystem1
# mkdir /filesystem1
# mount /stratis/pool1/filesystem1 /filesystem1
# cd /filesystem1
# echo "this is a test file" > testfile

Access the encrypted pool after a reboot

When you reboot you’ll notice that Stratis no longer shows your encrypted pool or its block device:

# stratis pool list
Name   Total Physical   Properties
# stratis blockdev list
Pool Name   Device Node   Physical Size   Tier

To access the encrypted pool, first re-create the key with the same key description and key data / passphrase that you used previously:

# stratis key set --capture-key pool1key
Enter desired key data followed by the return key:

Next, run the stratis pool unlock command, and verify that you can now see the pool and its block device:

# stratis pool unlock
# stratis pool list
Name                      Total Physical   Properties
pool1   4.98 GiB / 583.65 MiB / 4.41 GiB      ~Ca, Cr
# stratis blockdev list
Pool Name   Device Node   Physical Size   Tier
pool1       /dev/dm-2          4.98 GiB   Data

Next, mount the filesystem and verify that you can access the test file you created previously:

# mount /stratis/pool1/filesystem1 /filesystem1/
# cat /filesystem1/testfile 
this is a test file

Use a systemd unit file to automatically unlock a Stratis pool at boot

It is possible to automatically unlock your Stratis pool at boot without manual intervention. However, a file containing the key must be available. Storing the key in a file might be a security concern in some environments.

The systemd unit file shown below provides a simple method to unlock a Stratis pool at boot and mount the filesystem. Feedback on better/alternative methods is welcome. You can provide suggestions in the comment section at the end of this article.

Start by creating your key file with the following command. Be sure to substitute passphrase with the same key data / passphrase you entered previously.

# echo -n passphrase > /root/pool1key

Make sure that the file is only readable by root:

# chmod 400 /root/pool1key
# chown root:root /root/pool1key

Create a systemd unit file at /etc/systemd/system/stratis-filesystem1.service with the following content:

[Unit]
Description = stratis mount pool1 filesystem1 file system
After = stratisd.service

[Service]
ExecStartPre=sleep 2
ExecStartPre=stratis key set --keyfile-path /root/pool1key pool1key
ExecStartPre=stratis pool unlock
ExecStartPre=sleep 3
ExecStart=mount /stratis/pool1/filesystem1 /filesystem1
RemainAfterExit=yes

[Install]
WantedBy = multi-user.target

Next, enable the service so that it will run at boot:

# systemctl enable stratis-filesystem1.service

Now reboot and verify that the Stratis pool has been automatically unlocked and that its filesystem is mounted.
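
A quick way to verify after the reboot, sketched with the names used in this example:

# systemctl status stratis-filesystem1.service
# stratis pool list
# cat /filesystem1/testfile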

Summary and conclusion

In today’s environment, encryption is a must for many people and organizations. This post demonstrated how to enable encryption in Stratis 2.1.


Reclaim hard-drive space with LVM

LVM is a tool for logical volume management which includes allocating disks, striping, mirroring and resizing logical volumes. It is commonly used on Fedora installations (prior to BTRFS becoming the default, it was LVM+Ext4). But have you ever started up your system to find a message like the image above, after you logged in? Uh oh, Gnome just said the home volume is almost out of space! Luckily, there is likely some space sitting around in another volume, unused and ready to re-allocate. Here’s how to reclaim hard-drive space with LVM.

The key to easily re-allocate space between volumes is the Logical Volume Manager (LVM). Fedora 32 and before use LVM to divide disk space by default. This technology is similar to standard hard-drive partitions, but LVM is a lot more flexible. LVM enables not only flexible volume size management, but also advanced capabilities such as read-write snapshots, striping or mirroring data across multiple drives, using a high-speed drive as a cache for a slower drive, and much more. All of these advanced options can get a bit overwhelming, but resizing a volume is straightforward.

LVM basics

The volume group serves as the main container in the LVM system. By default Fedora only defines a single volume group, but there can be as many as needed. Actual hard-drives and hard-drive partitions are added to the volume group as physical volumes. Physical volumes add available free space to the volume group. A typical Fedora install has one formatted boot partition, and the rest of the drive is a partition configured as an LVM physical volume.

Out of this pool of available space, the volume group allocates one or more logical volumes. These volumes are similar to hard-drive partitions, but without the limitation of contiguous space on the disk. LVM logical volumes can even span multiple devices! Just like hard-drive partitions, logical volumes have a defined size and can contain any filesystem which can then be mounted to specific directories.
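
If you prefer the terminal to a graphical tool, the standard LVM reporting commands show the same hierarchy of physical volumes, volume groups, and logical volumes (a quick sketch; the names will differ on your system):

$ sudo pvs
$ sudo vgs
$ sudo lvs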

What’s needed

Confirm the system uses LVM with the gnome-disks application, and make sure there is free space available in some other volume. Without space to reclaim from another volume, this guide isn’t useful. A Fedora live CD/USB is also needed. Any file system that needs to shrink must be unmounted. Running from a live image allows all the volumes on the hard-disk to remain unmounted, even important directories like / and /home.

Use gnome-disks to verify free space.

A word of warning

No data should be lost by following this guide, but it does muck around with some very low-level and powerful commands. One mistake could destroy all data on the hard-drive. So backup all the data on the disk first!

Resizing LVM volumes

To begin, boot the Fedora live image and select Try Fedora at the dialog. Next, use the Run Command to launch the blivet-gui application (accessible by pressing Alt-F2, typing blivet-gui, then pressing enter). Select the volume group on the left under LVM. The logical volumes are on the right.

Explore logical volumes in blivet-gui.

The logical volume labels consist of both the volume group name and the logical volume name. In the example, the volume group is “fedora_localhost-live” and there are “home”, “root”, and “swap” logical volumes allocated. To find the full volume, select each one, click on the gear icon, and choose resize. The slider in the resize dialog indicates the allowable sizes for the volume. The minimum value on the left is the space already in use within the file system, so this is the minimum possible volume size (without deleting data). The maximum value on the right is the greatest size the volume can have based on available free space in the volume group.

Use the resize dialog in blivet-gui to set the new volume sizes.

A grayed out resize option means the volume is full and there is no free space in the volume group. It’s time to change that! Look through all of the volumes to find one with plenty of extra space, like in the screenshot above. Move the slider to the left to set the new size. Free up enough space to be useful for the full volume, but still leave plenty of space for future data growth. Otherwise, this volume will be the next to fill up.

Click resize and note that a new item appears in the volume listing: free space. Now select the full volume that started this whole endeavor, and move the slider all the way to the right. Press resize and marvel at the new improved volume layout. However, nothing has changed on the hard drive yet. Click on the check-mark to commit the changes to disk.

Review the changes in blivet-gui and then accept to reclaim hard-drive space.

Review the summary of the changes, and if everything looks right, click Ok to proceed. Wait for blivet-gui to finish. Now reboot back into the main Fedora install and enjoy all the new space in the previously full volume.

Planning for the future

It is challenging to know how much space any particular volume will need in the future. Instead of immediately allocating all available free space, consider leaving it free in the volume group. In fact, Fedora Server reserves space in the volume group by default. Extending a volume is possible while it is online and in use. No live image or reboot needed. When a volume is almost full, easily extend the volume using part of the available free space and keep working. Unfortunately the default disk manager, gnome-disks, does not support LVM volume resizing, so install blivet-gui for a graphical management tool. Alternatively, there is a simple terminal command to extend a volume:

lvresize -r -L +1G /dev/fedora_localhost-live/root 
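
Here the -r flag resizes the filesystem along with the volume and -L +1G grows the volume by one gigabyte. As a variation, the new size can also be expressed relative to the remaining free space in the volume group (a sketch using the same example volume):

lvresize -r -l +50%FREE /dev/fedora_localhost-live/root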

Wrap-up

Reclaiming hard-drive space with LVM just scratches the surface of LVM capabilities. Most people, especially on the desktop, probably don’t need the more advanced features. However, LVM is there when the need arises, though it can get a bit complex to implement. BTRFS is the default filesystem, without LVM, starting with Fedora 33. BTRFS can be easier to manage while still flexible enough for most common usages. Check out the recent Fedora Magazine articles on BTRFS to learn more.


What’s new in Fedora 33 Workstation

Fedora 33 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website right now. There are several new and noteworthy changes in Fedora 33 Workstation. Read more details below.

GNOME 3.38

Fedora 33 Workstation includes the latest release of GNOME Desktop Environment for users of all types. GNOME 3.38 in Fedora 33 Workstation includes many updates and improvements, including:

A new GNOME Tour app

New users are now greeted by “a new Tour application, highlighting the main functionality of the desktop and providing first time users a nice welcome to GNOME.”

The new GNOME Tour application in Fedora 33

Drag to reorder apps

GNOME 3.38 replaces the previously split Frequent and All apps views with a single customizable and consistent view that allows you to reorder apps and organize them into custom folders. Simply click and drag to move apps around.

GNOME 3.38 Drag to Reorder

Improved screen recording

The screen recording infrastructure in GNOME Shell has been improved to take advantage of PipeWire and kernel APIs. This will help reduce resource consumption and improve responsiveness.

GNOME 3.38 also provides many additional features and enhancements. Check out the GNOME 3.38 Release Notes for further information.


B-tree file system

As announced previously, new installations of Fedora 33 will default to using Btrfs. Features and enhancements are added to Btrfs with each new kernel release. The change log has a complete summary of the features that each new kernel version brings to Btrfs.
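
To confirm that a fresh install is indeed using Btrfs, you can check the filesystem type of the root mount (a small sketch):

$ findmnt -no FSTYPE /
btrfs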


Swap on ZRAM

Anaconda and Fedora IoT have been using swap-on-zram by default for years. With Fedora 33, swap-on-zram will be enabled by default instead of a swap partition. Check out the Fedora wiki page for more details about swap-on-zram.
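
On a running Fedora 33 system you can see the zram-backed swap device with either of the following commands (a sketch; sizes will vary):

$ swapon --show
$ zramctl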


Nano by default

Fresh Fedora 33 installations will set the EDITOR environment variable to nano by default. This change affects several command line tools that spawn a text editor when they require user input. With earlier releases, this environment variable default was unspecified, leaving it up to the individual application to pick a default editor. Typically, applications would use vi as their default editor due to it being a small application that is traditionally available on the base installation of most Unix/Linux operating systems. Since Fedora 33 includes nano in its base installation, and since nano is more intuitive for a beginning user to use, Fedora 33 will use nano by default. Users who want vi can, of course, override the value of the EDITOR variable in their own environment. See the Fedora change request for more details.
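
For example, to keep vi as your editor, you could export the variable from your shell profile (a sketch for bash):

$ echo 'export EDITOR=vi' >> ~/.bashrc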


Secure NTP with NTS

Many computers use the Network Time Protocol (NTP) to synchronize their system clocks over the internet. NTP is one of the few unsecured internet protocols still in common use. An attacker that can observe network traffic between a client and server can feed the client with bogus data and, depending on the client’s implementation and configuration, force it to set its system clock to any time and date. Some programs and services might not work if the client’s system clock is not accurate. For example, a web browser will not work correctly if the web servers’ certificates appear to be expired according to the client’s system clock. Use Network Time Security (NTS) to secure NTP.

Fedora 33[1] is the first Fedora release to support NTS. NTS is a new authentication mechanism for NTP. It enables clients to verify that the packets they receive from the server have not been modified while in transit. The only thing an attacker can do when NTS is enabled is drop or delay packets. See RFC8915 for further details about NTS.

NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients.

NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks.

The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default.

Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod. The Cloudflare servers are in various places around the world. They use anycast addresses that should allow most clients to reach a close server. The Netnod servers are located in Sweden. In the future we will probably see more public NTP servers with NTS support.

A general recommendation for configuring NTP clients for best reliability is to have at least three working servers. For best accuracy, it is recommended to select close servers to minimize network latency and asymmetry caused by asymmetric network routing. If you are not concerned about fine-grained accuracy, you can ignore this recommendation and use any NTS servers you trust, no matter where they are located.

If you do want high accuracy, but you don’t have a close NTS server, you can mix distant NTS servers with closer non-NTS servers. However, such a configuration is less secure than a configuration using NTS servers only. The attackers still cannot force the client to accept arbitrary time, but they do have a greater control over the client’s clock and its estimate of accuracy, which may be unacceptable in some environments.

Enable client NTS in the installer

When installing Fedora 33, you can enable NTS in the Time & Date dialog in the Network Time configuration. Enter the name of the server and check the NTS support before clicking the + (Add) button. You can add one or more servers or pools with NTS. To remove the default pool of servers (2.fedora.pool.ntp.org), uncheck the corresponding mark in the Use column.

Network Time configuration in the Fedora installer

Enable client NTS in the configuration file

If you upgraded from a previous Fedora release, or you didn’t enable NTS in the installer, you can enable NTS directly in /etc/chrony.conf. Specify the server with the nts option in addition to the recommended iburst option. For example:

server time.cloudflare.com iburst nts
server nts.sth1.ntp.se iburst nts
server nts.sth2.ntp.se iburst nts

You should also allow the client to save the NTS keys and cookies to disk, so it doesn’t have to repeat the NTS-KE session on each start. Add the following line to chrony.conf, if it is not already present:

ntsdumpdir /var/lib/chrony

If you don’t want NTP servers provided by DHCP to be mixed with the servers you have specified, remove or comment out the following line in chrony.conf:

sourcedir /run/chrony-dhcp

After you have finished editing chrony.conf, save your changes and restart the chronyd service:

systemctl restart chronyd

Check client status

Run the following command under the root user to check whether the NTS key establishment was successful:

# chronyc -N authdata
Name/IP address Mode KeyID Type KLen Last Atmp NAK Cook CLen
=========================================================================
time.cloudflare.com NTS 1 15 256 33m 0 0 8 100
nts.sth1.ntp.se NTS 1 15 256 33m 0 0 8 100
nts.sth2.ntp.se NTS 1 15 256 33m 0 0 8 100

The KeyID, Type, and KLen columns should have non-zero values. If they are zero, check the system log for error messages from chronyd. One possible cause of failure is a firewall blocking the client’s connection to the server’s TCP port (port 4460).

Another possible cause of failure is a certificate that is failing to verify because the client’s clock is wrong. This is a chicken-or-the-egg type problem with NTS. You may need to manually correct the date or temporarily disable NTS in order to get NTS working. If your computer has a real-time clock, as almost all computers do, and it’s backed up by a good battery, this operation should be needed only once.
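
If you do need to step the clock by hand before NTS can verify certificates, one option is to set it directly (a sketch; substitute the current date and time):

# date --set "2020-10-28 13:00:00"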

If the computer doesn’t have a real-time clock or battery, as is common with some small ARM computers like the Raspberry Pi, you can add the -s option to /etc/sysconfig/chronyd to restore time saved on the last shutdown or reboot. The clock will be behind the true time, but if the computer wasn’t shut down for too long and the server’s certificates were not renewed too close to their expiration, it should be sufficient for the time checks to succeed. As a last resort, you can disable the time checks with the nocerttimecheck directive. See the chrony.conf(5) man page for details.

Run the following command to confirm that the client is making NTP measurements:

# chronyc -N sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* time.cloudflare.com 3 6 377 45 +355us[ +375us] +/- 11ms
^+ nts.sth1.ntp.se 1 6 377 44 +237us[ +237us] +/- 23ms
^+ nts.sth2.ntp.se 1 6 377 44 -170us[ -170us] +/- 22ms

The Reach column should have a non-zero value; ideally 377. The value 377 shown above is an octal number. It indicates that the last eight requests all had a valid response. The validation check will include NTS authentication if enabled. If the value only rarely or never gets to 377, it indicates that NTP requests or responses are getting lost in the network. Some major network operators are known to have middleboxes that block or limit rate of large NTP packets as a mitigation for amplification attacks that exploit the monitoring protocol of ntpd. Unfortunately, this impacts NTS-protected NTP packets, even though they don’t cause any amplification. The NTP working group is considering an alternative port for NTP as a workaround for this issue.

Enable NTS on the server

If you have your own NTP server running chronyd, you can enable server NTS support to allow its clients to be synchronized securely. If the server is a client of other servers, it should use NTS or a symmetric key for its own synchronization. The clients assume the synchronization chain is secured between all servers up to the primary time servers.

Enabling server NTS is similar to enabling HTTPS on a web server. You just need a private key and certificate. The certificate could be signed by the Let’s Encrypt authority using the certbot tool, for example. When you have the key and certificate file (including intermediate certificates), specify them in chrony.conf with the following directives:

ntsserverkey /etc/pki/tls/private/foo.example.net.key
ntsservercert /etc/pki/tls/certs/foo.example.net.crt
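
If you don’t already have a certificate, one way to obtain one is certbot in standalone mode; note that certbot stores the key and certificate under /etc/letsencrypt, so the paths above would need to be adjusted accordingly. A sketch reusing the placeholder host name:

# certbot certonly --standalone -d foo.example.net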

Make sure the ntsdumpdir directive mentioned previously in the client configuration is present in chrony.conf. It allows the server to save its keys to disk, so the clients of the server don’t have to get new keys and cookies when the server is restarted.

Restart the chronyd service:

systemctl restart chronyd

If there are no error messages in the system log from chronyd, it should be accepting client connections. If the server has a firewall, it needs to allow both the UDP 123 and TCP 4460 ports for NTP and NTS-KE respectively.
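
With firewalld, opening those ports could look like this (a sketch):

# firewall-cmd --permanent --add-service=ntp
# firewall-cmd --permanent --add-port=4460/tcp
# firewall-cmd --reload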

You can perform a quick test from a client machine with the following command:

$ chronyd -Q -t 3 'server foo.example.net iburst nts maxsamples 1'
2020-10-13T12:00:52Z chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
2020-10-13T12:00:52Z Disabled control of system clock
2020-10-13T12:00:55Z System clock wrong by -0.001032 seconds (ignored)
2020-10-13T12:00:55Z chronyd exiting

If you see a System clock wrong message, it’s working correctly.

On the server, you can use the following command to check how many NTS-KE connections and authenticated NTP packets it has handled:

# chronyc serverstats
NTP packets received : 2143106240
NTP packets dropped : 117180834
Command packets received : 16819527
Command packets dropped : 0
Client log records dropped : 574257223
NTS-KE connections accepted: 104
NTS-KE connections dropped : 0
Authenticated NTP packets : 52139

If you see non-zero NTS-KE connections accepted and Authenticated NTP packets, it means at least some clients were able to connect to the NTS-KE port and send an authenticated NTP request.

— Cover photo by Louis. K on Unsplash —


1. The Fedora 33 Beta installer contains an older chrony prerelease which doesn’t work with current NTS servers because the NTS-KE port has changed. Consequently, in the Network Time configuration in the installer, the servers will always appear as not working. After installation, the chrony package needs to be updated before it will work with current servers.


Incremental backup with Butterfly Backup

Introduction

This article explains how to make incremental or differential backups, with a catalog available to restore (or export) at the point you want, with Butterfly Backup.

Requirements

Butterfly Backup is a simple wrapper around rsync written in Python; the first requirement is Python 3.3 or higher (plus the cryptography module for the init action). Other requirements are openssh and rsync (version 2.5 or higher). Ok, let’s go!

[Editor’s note: rsync version 3.2.3 is already installed on Fedora 33 systems]

$ sudo dnf install python3 openssh rsync git
$ sudo pip3 install cryptography

Installation

After that, installing Butterfly Backup is very simple. Use the following commands to clone the repository locally and set it up for use:

$ git clone https://github.com/MatteoGuadrini/Butterfly-Backup.git
$ cd Butterfly-Backup
$ sudo python3 setup.py
$ bb --help
$ man bb

To upgrade, use the same commands.

Example

Butterfly Backup is a server-to-client tool and is installed on a server (or workstation). The restore process restores the files onto the specified client. This process shares some of the options available to the backup process.

Backups are organized according to a precise catalog; this is an example:

$ tree destination/of/backup
.
├── destination
│   ├── hostname or ip of the PC under backup
│   │   ├── timestamp folder
│   │   │   ├── backup folders
│   │   │   ├── backup.log
│   │   │   └── restore.log
│   │   ├── general.log
│   │   └── symlink of last backup
│
├── export.log
├── backup.list
└── .catalog.cfg

Butterfly Backup has six main operations, referred to as actions; you can get information about them with the --help option.

$ bb --help
usage: bb [-h] [--verbose] [--log] [--dry-run] [--version]
          {config,backup,restore,archive,list,export} ...

Butterfly Backup

optional arguments:
  -h, --help            show this help message and exit
  --verbose, -v         Enable verbosity
  --log, -l             Create a log
  --dry-run, -N         Dry run mode
  --version, -V         Print version

action:
  Valid action

  {config,backup,restore,archive,list,export}
                        Available actions
    config              Configuration options
    backup              Backup options
    restore             Restore options
    archive             Archive options
    list                List options
    export              Export options

Configuration

Configuration mode is straightforward; if you’re already familiar with exchanging keys and OpenSSH, you probably won’t need it. First, you must create a configuration (RSA keys), for instance:

$ bb config --new
SUCCESS: New configuration successfully created!

After creating the configuration, the keys will be installed (copied) on the hosts you want to back up:

$ bb config --deploy host1
Copying configuration to host1; write the password:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/arthur/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
arthur@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'arthur@host1'"
and check to make sure that only the key(s) you wanted were added.

SUCCESS: Configuration copied successfully on host1!

Backup

There are two backup modes: single and bulk. The most relevant features of the two backup modes are the parallelism and retention of old backups. See the two parameters --parallel and --retention in the documentation.

Single backup

The backup of a single machine consists of taking the files and folders indicated on the command line and putting them into the cataloging structure indicated above. In other words, it copies all the files and folders of a machine into a path.

This is an example:

$ bb backup --computer host1 --destination /mnt/backup --data User Config --type Unix
Start backup on host1
SUCCESS: Command rsync -ah --no-links arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_28

Bulk backup

Bulk mode backups share the same options as single mode, with the difference that they accept a file containing a list of hostnames or IPs. In this mode backups will be performed in parallel (by default 5 machines at a time). If you want to run fewer or more machines in parallel, specify the --parallel parameter.

For instance, an incremental backup based on the previous one:

$ cat /home/arthur/pclist.txt
host1
host2
host3
$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type Unix
ERROR: The port 22 on host2 is closed!
ERROR: The port 22 on host3 is closed!
Start backup on host1
SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2020_09_19__10_28 arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_50

There are four backup modes, which you specify with the --mode flag: Full (back up all files), Mirror (back up all files in mirror mode), Differential (based on the latest Full backup) and Incremental (based on the latest backup). The default mode is Incremental; Full mode is set by default when the flag is not specified.
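
For example, forcing a Full backup instead of the default would look something like the following sketch, built from the options shown above (I have not verified the exact value casing):

$ bb backup --computer host1 --destination /mnt/backup --data User Config --type Unix --mode Full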

Listing catalog

The first time you run backup commands, the catalog is created. The catalog is used for future backups and all the restores that are made through Butterfly Backup. To query this catalog, use the list command. First, let’s query the catalog in our example:

$ bb list --catalog /mnt/backup

BUTTERFLY BACKUP CATALOG

Backup id: aba860b0-9944-11e8-a93f-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:28:12

Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:50:59

Press q to exit and select a backup-id:

$ bb list --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0
Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2020-09-19 10:50:59
Start: 2020-09-19 10:50:59
Finish: 2020-09-19 11:43:51
OS: Unix
ExitCode: 0
Path: /mnt/backup/host1/2020_09_19__10_50
List: backup.log
etc
home

To export the catalog list for use with an external tool like cat, include the --log flag:

$ bb list --catalog /mnt/backup --log
$ cat /mnt/backup/backup.list

Restore

The restore process is the exact opposite of the backup process. It takes the files from a specific backup and pushes them to the destination computer. This command performs a restore on the same machine as the backup, for instance:

$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host1 --log
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/etc? To continue [Y/N]? y
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/home? To continue [Y/N]? y
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/etc arthur@host1:/restore_2020_09_19__10_50
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/home/* arthur@host1:/home

If you do not specify the “type” flag that indicates the operating system onto which the data are being restored, Butterfly Backup will select it directly from the catalog via the backup-id.
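
Because restore shares the backup options, pointing --computer at a different host would restore the same backup onto another machine (a sketch using the backup-id from above):

$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host2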

Archive old backup

Archive operations are used to archive old backups, saving disk space.

$ bb archive --catalog /mnt/backup/ --days 1 --destination /mnt/archive/ --verbose --log
INFO: Check archive this backup f65e5afe-9734-11e8-b0bb-005056a664e0. Folder /mnt/backup/host1/2020_09_18__17_50
INFO: Check archive this backup 4f2b5f6e-9939-11e8-9ab6-005056a664e0. Folder /mnt/backup/host1/2020_09_15__07_26
SUCCESS: Delete /mnt/backup/host1/2020_09_15__07_26 successfully.
SUCCESS: Archive /mnt/backup/host1/2020_09_15__07_26 successfully.
$ ls /mnt/archive
host1
$ ls /mnt/archive/host1
2020_09_15__07_26.zip

After that, look in the catalog and see that the backup was actually archived:

$ bb list --catalog /mnt/backup/ -i 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Backup id: 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2020-09-15 07:26:46
Start: 2020-09-15 07:26:46
Finish: 2020-09-15 08:43:45
OS: Unix
ExitCode: 0
Path: /mnt/backup/host1/2020_09_15__07_26
Archived: True

Conclusion

Butterfly Backup was born from a very complex need; this tool gives rsync superpowers and automates the backup and restore process. In addition, the catalog allows you to have a system similar to a “time machine”.

In conclusion, Butterfly Backup is a lightweight, versatile, simple and scriptable backup tool.

One more thing; Easter egg: bb -Vv

Thank you for reading my post.

Full documentation: https://butterfly-backup.readthedocs.io/
Github: https://github.com/MatteoGuadrini/Butterfly-Backup


Photo by Manu M on Unsplash.


systemd-resolved: introduction to split DNS

Fedora 33 switches the default DNS resolver to systemd-resolved. In simple terms, this means that systemd-resolved will run as a daemon. All programs wanting to translate domain names to network addresses will talk to it. This replaces the current default lookup mechanism where each program individually talks to remote servers and there is no shared cache.

If necessary, systemd-resolved will contact remote DNS servers. systemd-resolved is a “stub resolver”—it doesn’t resolve all names itself (by starting at the root of the DNS hierarchy and going down label by label), but forwards the queries to a remote server.

A single daemon handling name lookups provides significant benefits. The daemon caches answers, which speeds answers for frequently used names. The daemon remembers which servers are non-responsive, while previously each program would have to figure this out on its own after a timeout. Individual programs only talk to the daemon over a local transport and are more isolated from the network. The daemon supports fancy rules which specify which name servers should be used for which domain names—in fact, the rest of this article is about those rules.

Split DNS

Consider the scenario of a machine that is connected to two semi-trusted networks (wifi and ethernet), and also has a VPN connection to your employer. Each of those three connections has its own network interface in the kernel. And there are multiple name servers: one from a DHCP lease from the wifi hotspot, two specified by the VPN and controlled by your employer, plus some additional manually-configured name servers. Routing is the process of deciding which servers to ask for a given domain name. Do not confuse this with the process of deciding where to send network packets, which is also called routing.

The network interface is king in systemd-resolved. systemd-resolved first picks one or more interfaces which are appropriate for a given name, and then queries one of the name servers attached to that interface. This is known as “split DNS”.

There are two flavors of domains attached to a network interface: routing domains and search domains. They both specify that the given domain and any subdomains are appropriate for that interface. Search domains have the additional function that single-label names are suffixed with that search domain before being resolved. For example, a lookup for “server” is treated as a lookup for “server.example.com” if the search domain is “example.com.” In systemd-resolved config files, routing domains are prefixed with the tilde (~) character.

Specific example

Now consider a specific example: your VPN interface tun0 has a search domain private.company.com and a routing domain ~company.com. If you ask for mail.private.company.com, it is matched by both domains, so this name would be routed to tun0.

A request for www.company.com is matched by the second domain and would also go to tun0. If you ask for www, (in other words, if you specify a single-label name without any dots), the difference between routing and search domains comes into play. systemd-resolved attempts to combine the single-label name with the search domain and tries to resolve www.private.company.com on tun0.

If you have multiple interfaces with search domains, single-label names are suffixed with all search domains and resolved in parallel. For multi-label names, no suffixing is done; search and routing domains are used to route the name to the appropriate interface. The longest match wins. When there are multiple matches of the same length on different interfaces, they are resolved in parallel.

A special case is when an interface has a routing domain ~. (a tilde for a routing domain and a dot for the root DNS label). Such an interface always matches any names, but with the shortest possible length. Any interface with a matching search or routing domain has higher priority, but the interface with ~. is used for all other names. Finally, if no routing or search domains matched, the name is routed to all interfaces that have at least one name server attached.

Lookup routing in systemd-resolved

Domain routing

This seems fairly complex, partially because of the historic names which are confusing. In actual practice it’s not as complicated as it seems.

To introspect a running system, use the resolvectl domain command. For example:

$ resolvectl domain
Global:
Link 4 (wlp4s0): ~.
Link 18 (hub0):
Link 26 (tun0): redhat.com

You can see that www would resolve as www.redhat.com. over tun0. Anything ending with redhat.com resolves over tun0. Everything else would resolve over wlp4s0 (the wireless interface). In particular, a multi-label name like www.foobar would resolve over wlp4s0, and most likely fail because there is no foobar top-level domain (yet).
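
You can watch the routing decision directly with resolvectl query, which reports which link supplied the answer (a sketch; the address shown is only illustrative and the real output includes additional detail):

$ resolvectl query www.redhat.com
www.redhat.com: 10.45.54.35    -- link: tun0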

Server routing

Now that you know which interface or interfaces should be queried, the server or servers to query are easy to determine. Each interface has one or more name servers configured. systemd-resolved will send queries to the first of those. If the server is offline and the request times out or if the server sends a syntactically-invalid answer (which shouldn’t happen with “normal” queries, but often becomes an issue when DNSSEC is enabled), systemd-resolved switches to the next server on the list. It will use that second server as long as it keeps responding. All servers are used in a round-robin rotation.

To introspect a running system, use the resolvectl dns command:

$ resolvectl dns
Global:
Link 4 (wlp4s0): 192.168.1.1 8.8.4.4 8.8.8.8
Link 18 (hub0):
Link 26 (tun0): 10.45.248.15 10.38.5.26

When combined with the previous listing, you know that for www.redhat.com, systemd-resolved will query 10.45.248.15, and—if it doesn’t respond—10.38.5.26. For www.google.com, systemd-resolved will query 192.168.1.1 or the two Google servers 8.8.4.4 and 8.8.8.8.

Differences from nss-dns

Before going further detail, you may ask how this differs from the previous default implementation (nss-dns). With nss-dns there is just one global list of up to three name servers and a global list of search domains (specified as nameserver and search in /etc/resolv.conf).

Each name to query is sent to the first name server. If it doesn’t respond, the same query is sent to the second name server, and so on. systemd-resolved implements split-DNS and remembers which servers are currently considered active.

For single-label names, the query is performed with each of the search domains suffixed. This is the same with systemd-resolved. For multi-label names, a query for the unsuffixed name is performed first, and if that fails, a query for the name suffixed by each of the search domains in turn is performed. systemd-resolved doesn’t do that last step; it only suffixes single-label names.

A second difference is that with nss-dns, this module is loaded into each process. The process itself communicates with remote servers and implements the full DNS stack internally. With systemd-resolved, the nss-resolve module is loaded into the process, but it only forwards the query to systemd-resolved over a local transport (D-Bus) and doesn’t do any work itself. The systemd-resolved process is heavily sandboxed using systemd service features.

The third difference is that with systemd-resolved all state is dynamic and can be queried and updated using D-Bus calls. This allows very strong integration with other daemons or graphical interfaces.

Configuring systemd-resolved

So far, this article talked about servers and the routing of domains without explaining how to configure them. systemd-resolved has a configuration file (/etc/systemd/resolved.conf) where you specify name servers with DNS= and routing or search domains with Domains= (routing domains with ~, search domains without). This corresponds to the Global: lists in the two listings above.
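
For instance, if you did want a global fallback server and routing domain, the file uses the usual systemd syntax (a sketch with illustrative values):

[Resolve]
DNS=192.168.1.1
Domains=~.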

In this article’s examples, both lists are empty. Most of the time configuration is attached to specific interfaces, and “global” configuration is not very useful. Interfaces come and go and it isn’t terribly smart to contact servers on an interface which is down. As soon as you create a VPN connection, you want to use the servers configured for that connection to resolve names, and as soon as the connection goes down, you want to stop.

How then does systemd-resolved acquire the configuration for each interface? This happens dynamically, with the network management service pushing this configuration over D-Bus into systemd-resolved. The default in Fedora is NetworkManager and it has very good integration with systemd-resolved. Alternatives like systemd’s own systemd-networkd implement similar functionality. But the interface is open and other programs can do the appropriate D-Bus calls.

Alternatively, resolvectl can be used for this (it is just a wrapper around the D-Bus API). Finally, resolvconf provides similar functionality in a form compatible with a tool in Debian with the same name.
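
For example, the per-interface state could be set by hand with resolvectl (a sketch; these settings are normally pushed by NetworkManager and may be overwritten by it):

# resolvectl dns tun0 10.45.248.15 10.38.5.26
# resolvectl domain tun0 '~company.com'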

Scenario: Local connection more trusted than VPN

The important thing is that in the common scenario, systemd-resolved follows the configuration specified by other tools, in particular NetworkManager. So to understand how systemd-resolved routes names, you need to see what NetworkManager tells it to do. Normally NetworkManager will tell systemd-resolved to use the name servers and search domains received in a DHCP lease on some interface. For example, look at the source of the configuration for the two listings shown above.

There are two connections: “Parkinson” wifi and “Brno (BRQ)” VPN. In the first panel DNS:Automatic is enabled, which means that the DNS server received as part of the DHCP lease (192.168.1.1) is passed to systemd-resolved. Additionally, 8.8.4.4 and 8.8.8.8 are listed as alternative name servers. This configuration is useful if you want to resolve the names of other machines in the local network, which 192.168.1.1 provides. Unfortunately the hotspot DNS server occasionally gets stuck, and the other two servers provide backup when that happens.

The second panel is similar, but doesn’t provide any special configuration. NetworkManager combines routing domains for a given connection from DHCP, SLAAC RDNSS, VPN, and finally manual configuration, and forwards this to systemd-resolved. This is the source of the search domain redhat.com in the listing above.

There is an important difference between the two interfaces though: in the second panel, “Use this connection only for resources on its network” is checked. This tells NetworkManager to tell systemd-resolved to only use this interface for names under the search domain received as part of the lease (Link 26 (tun0): redhat.com in the first listing above). In the first panel, this checkbox is unchecked, and NetworkManager tells systemd-resolved to use this interface for all other names (Link 4 (wlp4s0): ~.). This effectively means that the wireless connection is more trusted.

Scenario: VPN more trusted than local network

In a different scenario, a VPN would be more trusted than the local network and the domain routing configuration reversed. If a VPN without “Use this connection only for resources on its network” is active, NetworkManager tells systemd-resolved to attach the default routing domain to this interface. After unchecking the checkbox and restarting the VPN connection:

$ resolvectl domain
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): ~. redhat.com
$ resolvectl dns
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): 10.45.248.15 10.38.5.26

Now all domain names are routed to the VPN. The network management daemon controls systemd-resolved and the user controls the network management daemon.

Additional systemd-resolved functionality

As mentioned before, systemd-resolved provides a common name lookup mechanism for all programs running on the machine. Right now the effect is limited: a shared resolver and cache, and split DNS (the lookup routing logic described above). systemd-resolved provides additional resolution mechanisms beyond traditional unicast DNS: the local resolution protocols MulticastDNS and LLMNR, and an additional remote transport, DNS-over-TLS.

Fedora 33 does not enable MulticastDNS and DNS-over-TLS in systemd-resolved. MulticastDNS is already implemented by nss-mdns (the mdns4_minimal module) and Avahi. Future Fedora releases may enable these in systemd-resolved as the upstream project improves support.

Implementing this all in a single daemon which has runtime state allows smart behaviour: DNS-over-TLS may be enabled in opportunistic mode, with automatic fallback to classic DNS if the remote server does not support it. Without a daemon that can hold complex logic and runtime state, this would be much harder. When enabled, those additional features will apply to all programs on the system.
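
As a sketch, those features can be turned on through /etc/systemd/resolved.conf (these are not the Fedora 33 defaults):

[Resolve]
# Use DNS-over-TLS when the configured server supports it, otherwise fall back to plain DNS
DNSOverTLS=opportunistic
# Let systemd-resolved participate in MulticastDNS resolution
MulticastDNS=yes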

There is more to systemd-resolved: in particular LLMNR and DNSSEC, which only received brief mention here. A future article will explore those subjects.

Posted on Leave a comment

Use dnsmasq to provide DNS & DHCP services

Many tech enthusiasts find the ability to control their host name resolution important. Setting up servers and services usually requires some form of fixed address, and sometimes also requires special forms of resolution such as defining Kerberos or LDAP servers, mail servers, etc. All of this can be achieved with dnsmasq.

dnsmasq is a lightweight and simple program which enables issuing DHCP addresses on your network and registering the hostname & IP address in DNS. This configuration also allows external resolution, so your whole network will be able to speak to itself and find external sites too.

This article covers installing and configuring dnsmasq on either a virtual machine or small physical machine like a Raspberry Pi so it can provide these services in your home network or lab. If you have an existing setup and just need to adjust the settings for your local workstation, read the previous article which covers configuring the dnsmasq plugin in NetworkManager.

Install dnsmasq

First, install the dnsmasq package:

sudo dnf install dnsmasq

Next, enable and start the dnsmasq service:

sudo systemctl enable --now dnsmasq

Configure dnsmasq

First, make a backup copy of the dnsmasq.conf file:

sudo cp /etc/dnsmasq.conf /etc/dnsmasq.conf.orig

Next, edit the file and adjust the following values to reflect your network. In this example, mydomain.org is the domain name, 192.168.1.10 is the IP address of the dnsmasq server and 192.168.1.1 is the default gateway.

sudo vi /etc/dnsmasq.conf

Insert the following contents:

# Never forward plain names (without a dot or domain part) upstream
domain-needed
# Never forward reverse lookups for private IP ranges upstream
bogus-priv
# Ignore /etc/resolv.conf; use only the server= entries below
no-resolv
# Upstream DNS servers for external names
server=8.8.8.8
server=8.8.4.4
# Answer queries for the local domain from local data only
local=/mydomain.org/
# Addresses to listen on: loopback plus this server's LAN address
listen-address=::1,127.0.0.1,192.168.1.10
# Append the domain to plain hostnames from /etc/hosts
expand-hosts
domain=mydomain.org
# DHCP pool and lease time
dhcp-range=192.168.1.100,192.168.1.200,24h
# Default gateway handed out to DHCP clients
dhcp-option=option:router,192.168.1.1
# This is the only DHCP server on the network
dhcp-authoritative
dhcp-leasefile=/var/lib/dnsmasq/dnsmasq.leases

Test the config to check for typos and syntax errors:

$ sudo dnsmasq --test
dnsmasq: syntax check OK.

Now edit the hosts file, which can contain both statically- and dynamically-allocated hosts. Static addresses should lie outside the DHCP range you specified earlier. Hosts using DHCP but which need a fixed address should be entered here with an address within the DHCP range.

sudo vi /etc/hosts

The first two lines should be there already. Add the remaining lines to configure the router, the dnsmasq server, and two additional servers.

127.0.0.1   localhost localhost.localdomain
::1         localhost localhost.localdomain
192.168.1.1    router
192.168.1.10   dnsmasq
192.168.1.20   server1
192.168.1.30   server2

Restart the dnsmasq service:

sudo systemctl restart dnsmasq

Next add the services to the firewall to allow the clients to connect:

sudo firewall-cmd --add-service={dns,dhcp}
sudo firewall-cmd --runtime-to-permanent

Test name resolution

First, install bind-utils to get the nslookup and dig tools. These allow you to perform both forward and reverse lookups. You could use ping if you’d rather not install extra packages, but these tools are worth installing for the additional troubleshooting functionality they provide.

sudo dnf install bind-utils

Now test the resolution. First, test the forward (hostname to IP address) resolution:

$ nslookup server1
Server:       127.0.0.1
Address:    127.0.0.1#53
Name:    server1.mydomain.org
Address: 192.168.1.20

Next, test the reverse (IP address to hostname) resolution:

$ nslookup 192.168.1.20
20.1.168.192.in-addr.arpa    name = server1.mydomain.org.

Finally, test resolving hostnames outside of your network:

$ nslookup fedoramagazine.org
Server:       127.0.0.1
Address:    127.0.0.1#53
Non-authoritative answer:
Name:    fedoramagazine.org
Address: 35.196.109.67

Test DHCP leases

To test DHCP leases, you need to boot a machine which uses DHCP to obtain an IP address. Any Fedora variant will do that by default. Once you have booted the client machine, check that it has an address and that it corresponds to the lease file for dnsmasq.
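
On the client itself, you can quickly confirm that it picked up an address in the expected range before checking the server, for example (the interface name will differ on your system):

$ ip -4 addr show dev enp1s0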

From the machine running dnsmasq:

$ sudo cat /var/lib/dnsmasq/dnsmasq.leases
1598023942 52:54:00:8e:d5:db 192.168.1.100 server3 01:52:54:00:8e:d5:db
1598019169 52:54:00:9c:5a:bb 192.168.1.101 server4 01:52:54:00:9c:5a:bb

Extending functionality

You can assign a host a fixed IP address via DHCP by adding it to your hosts file with the address you want (within your DHCP range) and then adding the following line to the dnsmasq.conf file, which tells dnsmasq to always hand that address to any host with that name:

dhcp-host=myhost

Alternatively, you can specify a MAC address which should always be given a fixed IP address:

dhcp-host=11:22:33:44:55:66,192.168.1.123

You can specify a PXE boot server if you need to automate machine builds:

tftp-root=/tftpboot
dhcp-boot=/tftpboot/pxelinux.0,boothost,192.168.1.240

These should point to the actual root directory and address of your TFTP server.

If you need to specify SRV or TXT records, for example for LDAP, Kerberos or similar, you can add these:

srv-host=_ldap._tcp.mydomain.org,ldap-server.mydomain.org,389
srv-host=_kerberos._udp.mydomain.org,krb-server.mydomain.org,88
srv-host=_kerberos._tcp.mydomain.org,krb-server.mydomain.org,88
srv-host=_kerberos-master._udp.mydomain.org,krb-server.mydomain.org,88
srv-host=_kerberos-adm._tcp.mydomain.org,krb-server.mydomain.org,749
srv-host=_kpasswd._udp.mydomain.org,krb-server.mydomain.org,464
txt-record=_kerberos.mydomain.org,KRB-SERVER.MYDOMAIN.ORG
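
After restarting dnsmasq, you can check that such a record is being served, for instance with dig from the bind-utils package installed earlier (the record name matches the examples above); the answer should look something like this:

$ dig +short SRV _ldap._tcp.mydomain.org @127.0.0.1
0 0 389 ldap-server.mydomain.org.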

There are many other options in dnsmasq. The comments in the original config file describe most of them. For full details, read the man page, either locally or online.

Posted on Leave a comment

Announcing the release of Fedora 33 Beta

The Fedora Project is pleased to announce the immediate availability of Fedora 33 Beta, the next step towards our planned Fedora 33 release at the end of October.

Download the prerelease from our Get Fedora site. Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3.

Beta Release Highlights

BTRFS by default

All of the desktop variants of Fedora 33 Beta – including Fedora Workstation, Fedora KDE, and others – will use BTRFS as the default filesystem. This is a big shift: we’ve been using ext filesystems since Fedora Core 1. BTRFS offers some really compelling features for users, including transparent compression and copy-on-write. For Fedora 33, we’re only defaulting to the basic features of BTRFS, but we’ll build out the default feature set to include more goodies in future releases.
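
If you install the Beta and want to confirm what you got, a quick check of the root filesystem type is enough:

$ findmnt -no FSTYPE /
btrfs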

Fedora Workstation

Fedora 33 Workstation Beta includes GNOME 3.38, the newest release of the GNOME desktop environment. It is full of performance enhancements and improvements. GNOME 3.38 now includes a welcome tour after installation to help users learn about all of the great features this desktop environment offers. It also improves screen recording and multi-monitor support. For a full list of GNOME 3.38 highlights, see the release notes.

Fedora 33 Workstation Beta also provides better thermal management and peak performance on Intel CPUs by including thermald in the default install. And because your desktop should be fun to look at as well as easy to use, Fedora 33 Workstation Beta includes animated backgrounds (a time-of-day slideshow with hue changes) by default.

Fedora IoT

With Fedora 33 Beta, Fedora IoT is now an official Fedora Edition. Fedora IoT is geared toward edge devices on a wide variety of hardware platforms. It is based on ostree technology for safe update and rollback. It includes the Platform AbstRaction for SECurity (PARSEC), an open-source initiative to provide a common API to hardware security and cryptographic services in a platform-agnostic way.

Other updates

Fedora 33 Beta defaults to using nano as the editor. nano is a more approachable editor that is more welcoming to new users. Of course, those who want to use vim, emacs, or any other editor are still able to.
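
One generic way to switch is simply to set the EDITOR variable in your shell profile (an ordinary shell setting, not something specific to this release):

$ echo 'export EDITOR=vim' >> ~/.bash_profile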

Fedora 33 KDE Beta enables earlyOOM by default, as Fedora Workstation did in the previous release. This helps improve system responsiveness on systems that are running out of memory. 

Fedora 33 Beta includes updated versions of many popular packages like Ruby, Python, and Perl. .NET Core will now be available on Fedora on aarch64, in addition to x86_64. We’re also dropping a few older versions: Python 2.6 and Python 3.4 are retired. The httpd module mod_php is also dropped, as php-fpm is a more performant and more secure PHP module.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in the #fedora-qa channel on Freenode IRC. As testing progresses, common issues are tracked on the Common F33 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new in the Fedora 33 Beta release, you can consult the Fedora 33 Change set. It contains more technical information about the new packages and improvements shipped with this release.