This article shows how easy it is to get started using pods with Podman on Fedora. But what is Podman? Podman is a container engine developed by Red Hat, and yes, if you thought of Docker when you read "container engine," you are on the right track. Docker kicked off a whole new wave of containerization, and Kubernetes added the concept of pods to container orchestration for containers that share common resources. But hold on! Is Docker really the only effective way to run containers? Podman can also manage pods on Fedora, as well as the containers used in those pods.
Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.
From the official Podman documentation at http://docs.podman.io/en/latest/
Why should we switch to Podman?
Podman is a daemonless container engine for developing, managing, and running OCI Containers on your Linux System. Containers can either be run as root or in rootless mode. Podman directly interacts with an image registry, containers and image storage.
Install Podman:
sudo dnf -y install podman
Creating a Pod:
To start using a pod, we first need to create it. The basic command structure is:
$ podman pod create
The command above contains no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a relevant name. For that, just modify the command slightly:
$ podman pod create --name climoiselle
Podman creates the pod and reports back its ID. In the example shown, the pod was given the name ‘climoiselle’. Viewing the newly created pod is easy using the command shown below:
$ podman pod list
Newly created pods have been deployed
As you can see, there are two pods listed here: one named darshna, and the one created in the example, named climoiselle. No doubt you notice that both pods already include one container, yet we haven’t deployed any containers to them.
What is that extra container inside the pod? This randomly generated container is an infra container. Every podman pod includes this infra container and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The other purpose of the infra container is to allow the pod to keep running when all associated containers have been stopped.
You can also view the individual containers within a pod with the command:
$ podman ps -a --pod
Add a container
The cool thing is, you can add more containers to your newly deployed pod. Remember the name of your pod; you’ll need it in order to deploy the container in that pod. We’ll use the official Ubuntu image and deploy a container from it running the top command.
$ podman run -dt --pod climoiselle ubuntu top
Everything in a Single Command:
Podman has an agile characteristic when it comes to deploying a container in a pod which you created. You can create a pod and deploy a container to the said pod with a single command using Podman. Let’s say you want to deploy an NGINX container, exposing external port 8080 to internal port 80 to a new pod named test_server.
$ podman run -dt --pod new:test_server -p 8080:80 nginx
Created a new pod and deployed a container together
Let’s check all pods that have been created and the number of containers running in each of them …
$ podman pod list
List of the pods, their state, and the number of containers running in them
Do you want to know a detailed configuration of the pods which are running? Just type in the command shown below:
$ podman pod inspect [pod's name/id]
Make it stop!
To stop a pod, we need the name or ID of the pod. The podman pod list command shows the pods and their infra IDs. Simply use podman with the stop command and give the name or infra ID of the particular pod.
$ podman pod stop climoiselle
Hey take a look!
My pod climoiselle stopped
After following this short tutorial, you can see how quickly you can use pods with Podman on Fedora. It’s an easy and convenient way to use containers that share resources and interact together.
There are, like most things in the Unix/Linux world, many ways of doing things with Vagrant, but here are some examples of ways to grow your Vagrantfile portfolio and increase your knowledge and use.
If you have not yet installed Vagrant, you can follow the first part of this series.
Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end
In the example above, all libvirt VMs will be created with a single CPU and 512MB of memory unless specifically overridden.
The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with ‘config’. All sections or parts of sections have an ‘end’ statement to close them off.
Creating multiple machines at once
Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:
(1..5).each do |i|
  config.vm.define "server#{i}" do |server|
    server.vm.hostname = "server#{i}.example.com"
  end
end
This will create 5 servers, named server1, server2, server3 etc.
Of note, using Ruby style “for i in 1..3 do” doesn’t work despite Vagrantfile syntax actually being Ruby, so use the method from the example above.
If you need servers with different hostnames, different hardware etc then you’ll need to specify them individually, or at least in groups if the situation lends itself to that. Let’s say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:
# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
  loadbal.vm.hostname = "loadbal"
end

# Database
config.vm.define "db" do |db|
  db.vm.hostname = "db"
end

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
  end
end
This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single ‘vagrant up’ command.
Networking
Vagrant generally creates its own network for VM access, and you use this with ‘vagrant ssh’. If you create more than one VM then you must use the VM name to identify which one you wish to connect to – vagrant ssh vmname.
There are a number of configuration options available which allow you to interact with your VMs in various ways.
The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.
If you use DHCP for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt
Port Forwarding
The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: config.vm.network “forwarded_port”, guest: 80, host: 8080
This listens to port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: “udp” to the end of that line (notice that comma which should come immediately after the second port number).
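(For example, the following forwards a UDP port; the port numbers here are purely illustrative.)
config.vm.network "forwarded_port", guest: 53, host: 5300, protocol: "udp"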
Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.
In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.
Public Network
This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.
To configure a public network, add config.vm.network “public_network” to your Vagrantfile. This will use DHCP to obtain a network address.
If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network “public_network”, ip: “192.168.0.1”
If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:
Vagrant.configure("2") do |config|
config.vm.box = "centos/8"
config.vm.provider :libvirt do |libvirt|
libvirt.qemu_use_session = false
end
# Servers x2
(1..2).each do |i|
config.vm.define "server#{i}" do |server|
server.vm.hostname = "server#{i}"
server.vm.network "public_network", ip: "192.168.122.20#{i}"
end
end
end
Private Network
This works very much like the Public Network option, except that the network is only available to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network “private_network”, type: “dhcp”
This will create a new network in libvirt, usually named something like “vagrant-private-dhcp” – you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the vagrant guests.
Again, the network config can be specified for all guests, or per guest as shown in the public network example above.
Provisioning
Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a ‘vagrant destroy’ command any changes will be lost. This is where automated provisioning comes in.
You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. Many of the main methods can be used, but I’ll cover the simple ones here – if you need to use something else please read the documentation as it’s all covered.
File uploads
To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:
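(A minimal example; the source and destination paths below are illustrative only.)
config.vm.provision "file", source: "folder", destination: "$HOME/remote/newfolder"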
The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.
Note: If you add a trailing slash to the destination path, the source path will be placed under this so make sure you only do this if you want that outcome. For example, if the above destination was “$HOME/remote/newfolder/”, then the result would see “$HOME/remote/newfolder/folder” created with the contents of the source placed here.
Shell commands
You can include individual commands, inline scripts or external scripts to perform provisioning tasks.
A single command would take this form, and any valid command line command can be used here: config.vm.provision “shell”, inline: “sudo dnf update -y”
An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:
$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT
Vagrant.configure("2") do |config|
config.vm.provision "shell", inline: $script
end
More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:
config.vm.provision “shell”, path: “script.sh”
The file need not be local to the Vagrant host either:
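(The URL below is a placeholder; any script location reachable from the host works.)
config.vm.provision "shell", path: "https://example.com/provisioner.sh"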
You specify an Ansible playbook to provision your VM in the following way:
config.vm.provision "ansible" do |ansible|
ansible.playbook = "playbook.yml"
end
This then calls the playbook, which will run as any externally-run ansible playbook would.
If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:
# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
    web.vm.provision "ansible" do |ansible|
      ansible.playbook = "web.yml"
    end
  end
end
Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and connects remotely to your guest VMs to provision them. This means all necessary ssh authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which therefore requires Ansible to be installed on each of the guests you want to provision. Vagrant will try to install Ansible on the guests in order to do this (this can be controlled with the install option, which is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision call to choose the style you need.
Synced Folders
Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data etc. By default, the folder containing the Vagrant file is shared and mounted under /vagrant on each guest.
To configure additional synced folders, use the config.vm.synced_folder command:
config.vm.synced_folder "src/", "/srv/website"
The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.
Options for synced folders allow you to configure them better, including the option to disable them completely. Other options allow you to specify a group owner of the folder (group), the folder owner (owner), plus mount options. There are others but these are the main ones.
You can disable the default share with the following command:
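(This is the standard Vagrant syntax for disabling the default project-folder share.)
config.vm.synced_folder ".", "/vagrant", disabled: true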
When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS with non-Linux hosts, simply specify the folder type as ‘nfs’:
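(The paths here are the same illustrative ones used earlier.)
config.vm.synced_folder "src/", "/srv/website", type: "nfs"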
Rsync synced folders are the easiest to use as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest, performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as ‘rsync’:
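(Again with illustrative paths.)
config.vm.synced_folder "src/", "/srv/website", type: "rsync"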
LVM is a tool for logical volume management, which includes allocating disks, striping, mirroring and resizing logical volumes. It is commonly used on Fedora installations (prior to BTRFS becoming the default, it was LVM+Ext4). But have you ever started up your system to find a message like the image above after you logged in? Uh oh, GNOME just said the home volume is almost out of space! Luckily, there is likely some space sitting around in another volume, unused and ready to reallocate. Here’s how to reclaim hard-drive space with LVM.
The key to easily reallocating space between volumes is the Logical Volume Manager (LVM). Fedora 32 and earlier use LVM to divide disk space by default. This technology is similar to standard hard-drive partitions, but LVM is a lot more flexible. LVM enables not only flexible volume size management, but also advanced capabilities such as read-write snapshots, striping or mirroring data across multiple drives, using a high-speed drive as a cache for a slower drive, and much more. All of these advanced options can get a bit overwhelming, but resizing a volume is straightforward.
LVM basics
The volume group serves as the main container in the LVM system. By default Fedora only defines a single volume group, but there can be as many as needed. Actual hard drives and hard-drive partitions are added to the volume group as physical volumes. Physical volumes add available free space to the volume group. A typical Fedora install has one formatted boot partition, and the rest of the drive is a partition configured as an LVM physical volume.
Out of this pool of available space, the volume group allocates one or more logical volumes. These volumes are similar to hard-drive partitions, but without the limitation of contiguous space on the disk. LVM logical volumes can even span multiple devices! Just like hard-drive partitions, logical volumes have a defined size and can contain any filesystem which can then be mounted to specific directories.
What’s needed
Confirm the system uses LVM with the gnome-disks application, and make sure there is free space available in some other volume. Without space to reclaim from another volume, this guide isn’t useful. A Fedora live CD/USB is also needed. Any file system that needs to shrink must be unmounted. Running from a live image allows all the volumes on the hard-disk to remain unmounted, even important directories like / and /home.
Use gnome-disks to verify free space
A word of warning
No data should be lost by following this guide, but it does muck around with some very low-level and powerful commands. One mistake could destroy all data on the hard-drive. So backup all the data on the disk first!
Resizing LVM volumes
To begin, boot the Fedora live image and select Try Fedora at the dialog. Next, use the Run Command to launch the blivet-gui application (accessible by pressing Alt-F2, typing blivet-gui, then pressing enter). Select the volume group on the left under LVM. The logical volumes are on the right.
Explore logical volumes in blivet-gui
The logical volume labels consist of both the volume group name and the logical volume name. In the example, the volume group is “fedora_localhost-live” and there are “home”, “root”, and “swap” logical volumes allocated. To find the full volume, select each one, click on the gear icon, and choose resize. The slider in the resize dialog indicates the allowable sizes for the volume. The minimum value on the left is the space already in use within the file system, so this is the minimum possible volume size (without deleting data). The maximum value on the right is the greatest size the volume can have based on available free space in the volume group.
Resize dialog in blivet-gui
A grayed out resize option means the volume is full and there is no free space in the volume group. It’s time to change that! Look through all of the volumes to find one with plenty of extra space, like in the screenshot above. Move the slider to the left to set the new size. Free up enough space to be useful for the full volume, but still leave plenty of space for future data growth. Otherwise, this volume will be the next to fill up.
Click resize and note that a new item appears in the volume listing: free space. Now select the full volume that started this whole endeavor, and move the slider all the way to the right. Press resize and marvel at the new improved volume layout. However, nothing has changed on the hard drive yet. Click on the check-mark to commit the changes to disk.
Review changes in blivet-gui
Review the summary of the changes, and if everything looks right, click Ok to proceed. Wait for blivet-gui to finish. Now reboot back into the main Fedora install and enjoy all the new space in the previously full volume.
Planning for the future
It is challenging to know how much space any particular volume will need in the future. Instead of immediately allocating all available free space, consider leaving it free in the volume group. In fact, Fedora Server reserves space in the volume group by default. Extending a volume is possible while it is online and in use. No live image or reboot needed. When a volume is almost full, easily extend the volume using part of the available free space and keep working. Unfortunately the default disk manager, gnome-disks, does not support LVM volume resizing, so install blivet-gui for a graphical management tool. Alternatively, there is a simple terminal command to extend a volume:
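(A sketch: the volume group and logical volume names match the earlier example and may differ on your system; the -r flag also grows the filesystem, and +1G takes one extra gigabyte from the free space in the volume group.)
sudo lvresize -r -L +1G fedora_localhost-live/home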
Reclaiming hard-drive space with LVM just scratches the surface of LVM capabilities. Most people, especially on the desktop, probably don’t need the more advanced features. However, LVM is there when the need arises, though it can get a bit complex to implement. BTRFS is the default filesystem, without LVM, starting with Fedora 33. BTRFS can be easier to manage while still flexible enough for most common usages. Check out the recent Fedora Magazine articles on BTRFS to learn more.
COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open-source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.
This article presents a few new and interesting projects in COPR. If you’re new to using COPR, see the COPR User Documentation for how to get started.
Dialect
Dialect translates text to foreign languages using Google Translate. It remembers your translation history and supports features such as automatic language detection and text to speech. The user interface is minimalistic and mimics the Google Translate tool itself, so it is really easy to use.
Installation instructions
The repo currently provides Dialect for Fedora 33 and Fedora Rawhide. To install it, use these commands:
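(The commands follow the standard Copr pattern; the repository owner, and possibly the package name, below are placeholders, so check the project’s Copr page for the exact names.)
sudo dnf copr enable <copr-owner>/dialect
sudo dnf install dialect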
gh is an official GitHub command-line client. It provides fast access and full control over your project issues, pull requests, and releases, right in the terminal. Issues (and everything else) can also be easily viewed in the web browser for a more standard user interface or sharing with others.
Installation instructions
The repo currently provides gh for Fedora 33 and Fedora Rawhide. To install it, enable the project’s Copr repository and install the package with dnf, following the same pattern shown above for Dialect.
Glide is a minimalistic media player based on GStreamer. It can play both local and remote files in any multimedia format supported by GStreamer itself. If you are in need of a multi-platform media player with a simple user interface, you might want to give Glide a try.
Installation instructions
The repo currently provides Glide for Fedora 32, 33, and Rawhide. To install it, enable the project’s Copr repository and install the package with dnf, following the same pattern shown above.
ALE is a plugin for the Vim text editor, providing syntax and semantic error checking. It also brings support for fixing code and many other IDE-like features such as TAB-completion, jumping to definitions, finding references, viewing documentation, etc.
Installation instructions
The repo currently provides vim-ale for Fedora 31, 32, 33, and Rawhide, as well as for EPEL8. To install it, enable the project’s Copr repository and install the vim-ale package with dnf, following the same pattern shown above.
Silverblue is an operating system for your desktop built on Fedora. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update to Fedora 33 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.
Prior to actually doing the rebase to Fedora 33, you should apply any pending updates. Enter the following in the terminal:
$ rpm-ostree update
or install updates through GNOME Software and reboot.
Rebasing using GNOME Software
GNOME Software shows you that there is a new version of Fedora available on the Updates screen.
Fedora 33 is available
The first thing you need to do is download the new image, so click on the Download button. This will take some time, and after it’s done you will see that the update is ready to install.
Fedora 33 is ready for installation
Click on the Install button. This step will take only a few moments and then you will be prompted to restart your computer.
Restart is needed to rebase to Fedora 33 Silverblue
Click on the Restart button and you are done. After the restart you will end up in the new and shiny release of Fedora 33. Easy, isn’t it?
Rebasing using terminal
If you prefer to do everything in a terminal, then this next guide is for you.
Rebasing to Fedora 33 using terminal is easy. First, check if the 33 branch is available:
$ ostree remote refs fedora
You should see the following in the output:
fedora:fedora/33/x86_64/silverblue
Next, rebase your system to the Fedora 33 branch.
$ rpm-ostree rebase fedora:fedora/33/x86_64/silverblue
Finally, the last thing to do is restart your computer and boot to Fedora 33.
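(Optional: before restarting, you can confirm that the new deployment is staged.)
$ rpm-ostree status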
How to roll back
If anything bad happens—for instance, if you can’t boot to Fedora 33 at all—it’s easy to go back. Pick the previous entry in the GRUB menu at boot, and your system will start in its previous state before switching to Fedora 33. To make this change permanent, use the following command:
$ rpm-ostree rollback
That’s it. Now you know how to rebase Silverblue to Fedora 33 and roll back. So why not do it today?
The kernel team is working on final integration for kernel 5.9. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, October 26, 2020 through Monday, November 02, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.
How does a test week work?
A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
Download test materials, which include some large files
Read and follow directions step by step
The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that provides all of the steps in writing.
Happy testing, and we hope to see you on test day.
Many computers use the Network Time Protocol (NTP) to synchronize their system clocks over the internet. NTP is one of the few unsecured internet protocols still in common use. An attacker that can observe network traffic between a client and server can feed the client with bogus data and, depending on the client’s implementation and configuration, force it to set its system clock to any time and date. Some programs and services might not work if the client’s system clock is not accurate. For example, a web browser will not work correctly if the web servers’ certificates appear to be expired according to the client’s system clock. Use Network Time Security (NTS) to secure NTP.
Fedora 33¹ is the first Fedora release to support NTS. NTS is a new authentication mechanism for NTP. It enables clients to verify that the packets they receive from the server have not been modified while in transit. The only thing an attacker can do when NTS is enabled is drop or delay packets. See RFC8915 for further details about NTS.
NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients.
NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks.
The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default.
Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod. The Cloudflare servers are in various places around the world. They use anycast addresses that should allow most clients to reach a close server. The Netnod servers are located in Sweden. In the future we will probably see more public NTP servers with NTS support.
A general recommendation for configuring NTP clients for best reliability is to have at least three working servers. For best accuracy, it is recommended to select close servers to minimize network latency and asymmetry caused by asymmetric network routing. If you are not concerned about fine-grained accuracy, you can ignore this recommendation and use any NTS servers you trust, no matter where they are located.
If you do want high accuracy, but you don’t have a close NTS server, you can mix distant NTS servers with closer non-NTS servers. However, such a configuration is less secure than a configuration using NTS servers only. The attackers still cannot force the client to accept arbitrary time, but they do have a greater control over the client’s clock and its estimate of accuracy, which may be unacceptable in some environments.
Enable client NTS in the installer
When installing Fedora 33, you can enable NTS in the Time & Date dialog in the Network Time configuration. Enter the name of the server and check the NTS support before clicking the + (Add) button. You can add one or more servers or pools with NTS. To remove the default pool of servers (2.fedora.pool.ntp.org), uncheck the corresponding mark in the Use column.
Network Time configuration in the Fedora installer
Enable client NTS in the configuration file
If you upgraded from a previous Fedora release, or you didn’t enable NTS in the installer, you can enable NTS directly in /etc/chrony.conf. Specify the server with the nts option in addition to the recommended iburst option. For example:
server time.cloudflare.com iburst nts
server nts.sth1.ntp.se iburst nts
server nts.sth2.ntp.se iburst nts
You should also allow the client to save the NTS keys and cookies to disk, so it doesn’t have to repeat the NTS-KE session on each start. Add the following line to chrony.conf, if it is not already present:
ntsdumpdir /var/lib/chrony
If you don’t want NTP servers provided by DHCP to be mixed with the servers you have specified, remove or comment out the following line in chrony.conf:
sourcedir /run/chrony-dhcp
After you have finished editing chrony.conf, save your changes and restart the chronyd service:
systemctl restart chronyd
Check client status
Run the following command under the root user to check whether the NTS key establishment was successful:
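(The authdata report is available in chronyc from chrony 4.0 onward; the -N option prints the server names as specified in the configuration.)
chronyc -N authdata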
The KeyID, Type, and KLen columns should have non-zero values. If they are zero, check the system log for error messages from chronyd. One possible cause of failure is a firewall blocking the client’s connection to the server’s TCP port (port 4460).
Another possible cause of failure is a certificate that is failing to verify because the client’s clock is wrong. This is a chicken-or-the-egg type problem with NTS. You may need to manually correct the date or temporarily disable NTS in order to get NTS working. If your computer has a real-time clock, as almost all computers do, and it’s backed up by a good battery, this operation should be needed only once.
If the computer doesn’t have a real-time clock or battery, as is common with some small ARM computers like the Raspberry Pi, you can add the -s option to /etc/sysconfig/chronyd to restore the time saved on the last shutdown or reboot. The clock will be behind the true time, but if the computer wasn’t shut down for too long and the server’s certificates were not renewed too close to their expiration, it should be sufficient for the time checks to succeed. As a last resort, you can disable the time checks with the nocerttimecheck directive. See the chrony.conf(5) man page for details.
Run the following command to confirm that the client is making NTP measurements:
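(Again, the -N option just keeps the configured server names in the output.)
chronyc -N sources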
The Reach column should have a non-zero value; ideally 377. The value 377 is an octal number. It indicates that the last eight requests all had a valid response. The validation check will include NTS authentication if enabled. If the value only rarely or never gets to 377, it indicates that NTP requests or responses are getting lost in the network. Some major network operators are known to have middleboxes that block or limit the rate of large NTP packets as a mitigation for amplification attacks that exploit the monitoring protocol of ntpd. Unfortunately, this impacts NTS-protected NTP packets, even though they don’t cause any amplification. The NTP working group is considering an alternative port for NTP as a workaround for this issue.
Enable NTS on the server
If you have your own NTP server running chronyd, you can enable server NTS support to allow its clients to be synchronized securely. If the server is a client of other servers, it should use NTS or a symmetric key for its own synchronization. The clients assume the synchronization chain is secured between all servers up to the primary time servers.
Enabling server NTS is similar to enabling HTTPS on a web server. You just need a private key and certificate. The certificate could be signed by the Let’s Encrypt authority using the certbot tool, for example. When you have the key and certificate file (including intermediate certificates), specify them in chrony.conf with the following directives:
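(The directive names are from chrony 4.0; the file paths below are placeholders for your own certificate and key.)
ntsservercert /etc/pki/tls/certs/foo.example.net.crt
ntsserverkey /etc/pki/tls/private/foo.example.net.key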
Make sure the ntsdumpdir directive mentioned previously in the client configuration is present in chrony.conf. It allows the server to save its keys to disk, so the clients of the server don’t have to get new keys and cookies when the server is restarted.
Restart the chronyd service:
systemctl restart chronyd
If there are no error messages in the system log from chronyd, it should be accepting client connections. If the server has a firewall, it needs to allow both the UDP 123 and TCP 4460 ports for NTP and NTS-KE respectively.
You can perform a quick test from a client machine with the following command:
$ chronyd -Q -t 3 'server foo.example.net iburst nts maxsamples 1'
2020-10-13T12:00:52Z chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
2020-10-13T12:00:52Z Disabled control of system clock
2020-10-13T12:00:55Z System clock wrong by -0.001032 seconds (ignored)
2020-10-13T12:00:55Z chronyd exiting
If you see a System clock wrong message, it’s working correctly.
On the server, you can use the following command to check how many NTS-KE connections and authenticated NTP packets it has handled:
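(The serverstats report is provided by chronyc; the NTS-related counters appear in chrony 4.0 and later.)
chronyc serverstats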
If you see non-zero NTS-KE connections accepted and Authenticated NTP packets, it means at least some clients were able to connect to the NTS-KE port and send an authenticated NTP request.
— Cover photo by Louis. K on Unsplash —
1. The Fedora 33 Beta installer contains an older chrony prerelease which doesn’t work with current NTS servers because the NTS-KE port has changed. Consequently, in the Network Time configuration in the installer, the servers will always appear as not working. After installation, the chrony package needs to be updated before it will work with current servers.
This article explains how to make incremental or differential backups, with a catalog available to restore (or export) at the point you want, with Butterfly Backup.
Requirements
Butterfly Backup is a simple wrapper around rsync written in Python; the first requirement is Python 3.3 or higher (plus the cryptography module for the init action). Other requirements are OpenSSH and rsync (version 2.5 or higher). Ok, let’s go!
[Editors note: rsync version 3.2.3 is already installed on Fedora 33 systems]
After that, installing Butterfly Backup is very simple by using the following commands to clone the repository locally, and set up Butterfly Backup for use:
$ git clone https://github.com/MatteoGuadrini/Butterfly-Backup.git
$ cd Butterfly-Backup
$ sudo python3 setup.py
$ bb --help
$ man bb
To upgrade, use the same commands.
Example
Butterfly Backup is a server-to-client tool and is installed on a server (or workstation). The restore process restores the files onto the specified client. This process shares some of the options available to the backup process.
Backups are organized according to a precise catalog; this is an example:
$ tree destination/of/backup
.
├── destination
│ ├── hostname or ip of the PC under backup
│ │ ├── timestamp folder
│ │ │ ├── backup folders
│ │ │ ├── backup.log
│ │ │ └── restore.log
│ │ ├─── general.log
│ │ └─── symlink of last backup
│
├── export.log
├── backup.list
└── .catalog.cfg
Butterfly Backup has six main operations, referred to as actions; you can get information about them with the --help option.
$ bb --help
usage: bb [-h] [--verbose] [--log] [--dry-run] [--version]
          {config,backup,restore,archive,list,export} ...

Butterfly Backup

optional arguments:
  -h, --help            show this help message and exit
  --verbose, -v         Enable verbosity
  --log, -l             Create a log
  --dry-run, -N         Dry run mode
  --version, -V         Print version

action:
  Valid action

  {config,backup,restore,archive,list,export}
                        Available actions
    config              Configuration options
    backup              Backup options
    restore             Restore options
    archive             Archive options
    list                List options
    export              Export options
Configuration
Configuration mode is straightforward; if you’re already familiar with exchanging keys and OpenSSH, you probably won’t need it. First, you must create a configuration (RSA keys), for instance:
$ bb config --new
SUCCESS: New configuration successfully created!
After creating the configuration, the keys will be installed (copied) on the hosts you want to backup:
$ bb config --deploy host1
Copying configuration to host1; write the password:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/arthur/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
arthur@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'arthur@host1'"
and check to make sure that only the key(s) you wanted were added.

SUCCESS: Configuration copied successfully on host1!
Backup
There are two backup modes: single and bulk. The most relevant features of the two backup modes are the parallelism and the retention of old backups. See the two parameters --parallel and --retention in the documentation.
Single backup
The backup of a single machine consists of taking the files and folders indicated on the command line and putting them into the cataloging structure shown above. In other words, it copies all the files and folders of a machine into a path.
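(A hedged example of a single-machine backup; the flags mirror the bulk and restore examples elsewhere in this article, so check bb backup --help for the exact options in your version.)
$ bb backup --computer host1 --destination /mnt/backup --data User Config --type Unix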
Bulk mode backups share the same options as single mode, with the difference that they accept a file containing a list of hostnames or IPs. In this mode, backups are performed in parallel (five machines at a time by default). If you want to run fewer or more machines in parallel, specify the --parallel parameter.
For instance, here is an incremental backup based on the previous one:
$ cat /home/arthur/pclist.txt
host1
host2
host3
$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type Unix
ERROR: The port 22 on host2 is closed!
ERROR: The port 22 on host3 is closed!
Start backup on host1
SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2020_09_19__10_28 arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_50
There are four backup modes, which you specify with the --mode flag: Full (backup all files), Mirror (backup all files in mirror mode), Differential (based on the latest Full backup) and Incremental (based on the latest backup). The default mode is Incremental; if no previous backup exists, a full copy is made.
Listing catalog
The first time you run backup commands, the catalog is created. The catalog is used for future backups and all the restores that are made through Butterfly Backup. To query this catalog use the list command. First, let’s query the catalog in our example:
$ bb list --catalog /mnt/backup

BUTTERFLY BACKUP CATALOG

Backup id: aba860b0-9944-11e8-a93f-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:28:12

Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:50:59
To export the catalog list for use with an external tool like cat, include the --log flag:
$ bb list --catalog /mnt/backup --log
$ cat /mnt/backup/backup.list
Restore
The restore process is the exact opposite of the backup process. It takes the files from a specific backup and pushes them to the destination computer. This command performs a restore on the same machine as the backup, for instance:
$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host1 --log
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/etc? To continue [Y/N]? y
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/home? To continue [Y/N]? y
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/etc arthur@host1:/restore_2020_09_19__10_50
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/home/* arthur@host1:/home
If you do not specify the type flag, which indicates the operating system onto which the data is being restored, Butterfly Backup selects it directly from the catalog via the backup-id.
Archive old backup
Archive operations are used to store backups by saving disk space.
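(The flags below are an assumption for illustration only; consult bb archive --help for the exact options available in your version.)
$ bb archive --catalog /mnt/backup --days 90 --destination /mnt/archive --log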
Butterfly Backup was born from a very complex need; this tool gives rsync superpowers and automates the backup and restore process. In addition, the catalog allows you to have a system similar to a “time machine”.
In conclusion, Butterfly Backup is a lightweight, versatile, simple and scriptable backup tool.
The previous article looked at how the Web of Trust works in concept, and how the Web of Trust is implemented at Fedora. In this article, you’ll learn how to do it yourself. The power of this system lies in everybody being able to validate the actions of others—if you know how to validate somebody’s work, you’re contributing to the strength of our shared security.
Choosing a project
Remmina is a remote desktop client written in GTK+. It aims to be useful for system administrators and travelers who need to work with lots of remote computers in front of either large monitors or tiny netbooks. In the current age, where many people must work remotely or at least manage remote servers, the security of a program like Remmina is critical. Even if you do not use it yourself, you can contribute to the Web of Trust by checking it for others.
The question is: how do you know that a given version of Remmina is good, and that the original developer—or distribution server—has not been compromised?
For this tutorial, you’ll use Flatpak and the Flathub repository. Flatpak is intentionally well-suited for making verifiable rebuilds, which is one of the tenets of the Web of Trust. It’s easier to work with since it doesn’t require users to download independent development packages. Flatpak also uses techniques to prevent in‑flight tampering, using hashes to validate its read‑only state. As far as the Web of Trust is concerned, Flatpak is the future.
For this guide, you use Remmina, but this guide generally applies to every application you use. It’s also not exclusive to Flatpak, and the general steps also apply to Fedora’s repositories. In fact, if you’re currently reading this article on Debian or Arch, you can still follow the instructions. If you want to follow along using traditional RPM repositories, make sure to check out this article.
Installing and checking
To install Remmina, use the Software Center or run the following from a terminal:
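(Assuming the Flathub remote is configured and the application is published under the ID org.remmina.Remmina.)
flatpak install flathub org.remmina.Remmina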
The installed Flatpak’s files live in a deploy directory on disk. Open a terminal, change to that directory, and list the contents using ls -la:
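(The path below assumes a system-wide Flathub installation; user installs live under ~/.local/share/flatpak/app/ instead.)
cd /var/lib/flatpak/app/org.remmina.Remmina/current/active/files
ls -la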
total 44
drwxr-xr-x. 2 root root 4096 Jan 1 1970 bin
drwxr-xr-x. 3 root root 4096 Jan 1 1970 etc
drwxr-xr-x. 8 root root 4096 Jan 1 1970 lib
drwxr-xr-x. 2 root root 4096 Jan 1 1970 libexec
-rw-r--r--. 2 root root 18644 Aug 25 14:37 manifest.json
drwxr-xr-x. 2 root root 4096 Jan 1 1970 sbin
drwxr-xr-x. 15 root root 4096 Jan 1 1970 share
Getting the hashes
In the bin directory you will find the main binaries of the application, and in lib you find all the dependencies that Remmina uses. Now calculate a hash for each file in ./bin/:
sha256sum ./bin/*
This will give you a list of numbers: checksums. Copy them to a temporary file, as this is the current version of Remmina that Flathub is distributing. These numbers have something special: only an exact copy of Remmina can give you the same numbers. Any change in the code—no matter how minor—will produce different numbers.
Like Fedora’s Koji and Bodhi build and update services, Flathub has all its build servers in plain view. In the case of Flathub, look at Buildbot to see who is responsible for the official binaries of a package. Here you will find all of the logs, including all the failed builds and their paper trail.
Getting the source
The main Flathub project is hosted on GitHub, where the exact compile instructions (“manifest” in Flatpak terms) are visible for all to see. Open a new terminal in your Home folder. Clone the instructions, and possible submodules, using one command:
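(Assuming the Flathub packaging repository follows the usual convention of being named after the application ID.)
git clone --recurse-submodules https://github.com/flathub/org.remmina.Remmina.git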
The manifest in the cloned repository indicates that you need the GNOME SDK, which you can install with:
flatpak install org.gnome.Sdk//3.38
This provides the latest versions of the Free Desktop and GNOME SDKs. There are also additional SDKs for other options, but those are beyond the scope of this tutorial.
Generating your own hashes
Now that everything is set up, compile your version of Remmina by running:
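(A sketch assuming the manifest in the cloned directory is named org.remmina.Remmina.json; newer Flathub manifests may use YAML instead. Run the build from inside the cloned repository.)
cd ./org.remmina.Remmina
flatpak-builder --force-clean build-dir org.remmina.Remmina.json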
After this, your terminal will print a lot of text, your fans will start spinning, and you’re compiling Remmina. If things do not go so smoothly, refer to the Flatpak Documentation; troubleshooting is beyond the scope of this tutorial.
Once complete, you should have the directory ./build-dir/files/, which should contain the same layout as above. Now the moment of truth: it’s time to generate the hashes for the built project:
sha256sum ./bin/*
You should get exactly the same numbers. This proves that the version on Flathub is indeed the version that the Remmina developers and maintainers intended for you to run. This is great, because this shows that Flathub has not been compromised. The web of trust is strong, and you just made it a bit better.
Going deeper
But what about the ./lib/ directory? And what version of Remmina did you actually compile? This is where the Web of Trust starts to branch. First, you can also double-check the hashes of the ./lib/ directory. Repeat the sha256sum command using a different directory.
But what version of Remmina did you compile? Well, that’s in the Manifest. In that text file you’ll find (usually at the bottom) the git repository and branch that you just used. At the time of this writing, that is the Remmina repository on GitLab at tag v1.4.8.
Here, you can decide to look at the Remmina code itself:
git clone --recurse-submodules https://gitlab.com/Remmina/Remmina.git
cd ./Remmina
git checkout tags/v1.4.8
The last two commands are important, since they ensure that you are looking at the right version of Remmina. Make sure you use the tag corresponding to the one in the Manifest file. Now you can see everything that you just built.
What if…?
The question on some minds is: what if the hashes don’t match? Quoting a famous novel: “Don’t Panic.” There are multiple legitimate reasons as to why the hashes do not match.
It might be that you are not looking at the same version. If you followed this guide to a T, it should give matching results, but minor errors will cause vastly different results. Repeat the process, and ask for help if you’re unsure if you’re making errors. Perhaps Remmina is in the process of updating.
But if that still doesn’t justify the mismatch in hashes, go to the maintainers of Remmina on Flathub and open an issue. Assume good intentions, but you might be onto something that isn’t totally right.
The most obvious upstream issue is that Remmina does not properly support reproducible builds yet. The code of Remmina needs to be written in such a way that repeating the same action twice gives the same result. For developers, there is an entire guide on how to do that. If this is the case, there should be an issue on the upstream bug-tracker; if it is not there, make sure that you create one by explaining your steps and the impact.
If all else fails, and you’ve informed upstream about the discrepancies and they don’t know what is happening either, then it’s time to send an email to the administrators of Flathub and the developer in question.
Conclusion
At this point, you’ve gone through the entire process of validating a single piece of a bigger picture. Here, you can branch off in different directions:
Try another Flatpak application you like or use regularly
Try the RPM version of Remmina
Do a deep dive into the C code of Remmina
Relax for a day, knowing that the Web of Trust is a collective effort
In the grand scheme of things, we can all carry a small part of responsibility in the Web of Trust. By taking free/libre open source software (FLOSS) concepts and applying them in the real world, you can protect yourself and others. Last but not least, by understanding how the Web of Trust works you can see how FLOSS software provides unique protections.
Fedora 33 switches the default DNS resolver to systemd-resolved. In simple terms, this means that systemd-resolved will run as a daemon. All programs wanting to translate domain names to network addresses will talk to it. This replaces the current default lookup mechanism where each program individually talks to remote servers and there is no shared cache.
If necessary, systemd-resolved will contact remote DNS servers. systemd-resolved is a “stub resolver”—it doesn’t resolve all names itself (by starting at the root of the DNS hierarchy and going down label by label), but forwards the queries to a remote server.
A single daemon handling name lookups provides significant benefits. The daemon caches answers, which speeds answers for frequently used names. The daemon remembers which servers are non-responsive, while previously each program would have to figure this out on its own after a timeout. Individual programs only talk to the daemon over a local transport and are more isolated from the network. The daemon supports fancy rules which specify which name servers should be used for which domain names—in fact, the rest of this article is about those rules.
Split DNS
Consider the scenario of a machine that is connected to two semi-trusted networks (wifi and ethernet), and also has a VPN connection to your employer. Each of those three connections has its own network interface in the kernel. And there are multiple name servers: one from a DHCP lease from the wifi hotspot, two specified by the VPN and controlled by your employer, plus some additional manually-configured name servers. Routing is the process of deciding which servers to ask for a given domain name. Do not mistake this for the process of deciding where to send network packets, which is also called routing.
The network interface is king in systemd-resolved. systemd-resolved first picks one or more interfaces which are appropriate for a given name, and then queries one of the name servers attached to that interface. This is known as “split DNS”.
There are two flavors of domains attached to a network interface: routing domains and search domains. They both specify that the given domain and any subdomains are appropriate for that interface. Search domains have the additional function that single-label names are suffixed with that search domain before being resolved. For example, a lookup for “server” is treated as a lookup for “server.example.com” if the search domain is “example.com.” In systemd-resolved config files, routing domains are prefixed with the tilde (~) character.
Specific example
Now consider a specific example: your VPN interface tun0 has a search domain private.company.com and a routing domain ~company.com. If you ask for mail.private.company.com, it is matched by both domains, so this name would be routed to tun0.
A request for www.company.com is matched by the second domain and would also go to tun0. If you ask for www, (in other words, if you specify a single-label name without any dots), the difference between routing and search domains comes into play. systemd-resolved attempts to combine the single-label name with the search domain and tries to resolve www.private.company.com on tun0.
If you have multiple interfaces with search domains, single-label names are suffixed with all search domains and resolved in parallel. For multi-label names, no suffixing is done; search and routing domains are used to route the name to the appropriate interface. The longest match wins. When there are multiple matches of the same length on different interfaces, they are resolved in parallel.
A special case is when an interface has a routing domain ~. (a tilde for a routing domain and a dot for the root DNS label). Such an interface always matches any names, but with the shortest possible length. Any interface with a matching search or routing domain has higher priority, but the interface with ~. is used for all other names. Finally, if no routing or search domains matched, the name is routed to all interfaces that have at least one name server attached.
Lookup routing in systemd-resolved
Domain routing
This seems fairly complex, partially because of the historic names which are confusing. In actual practice it’s not as complicated as it seems.
To introspect a running system, use the resolvectl domain command. For example:
$ resolvectl domain
Global:
Link 4 (wlp4s0): ~.
Link 18 (hub0):
Link 26 (tun0): redhat.com
You can see that www would resolve as www.redhat.com. over tun0. Anything ending with redhat.com resolves over tun0. Everything else would resolve over wlp4s0 (the wireless interface). In particular, a multi-label name like www.foobar would resolve over wlp4s0, and most likely fail because there is no foobar top-level domain (yet).
Server routing
Now that you know which interface or interfaces should be queried, the server or servers to query are easy to determine. Each interface has one or more name servers configured. systemd-resolved will send queries to the first of those. If the server is offline and the request times out or if the server sends a syntactically-invalid answer (which shouldn’t happen with “normal” queries, but often becomes an issue when DNSSEC is enabled), systemd-resolved switches to the next server on the list. It will use that second server as long as it keeps responding. All servers are used in a round-robin rotation.
To introspect a running system, use the resolvectl dns command:
$ resolvectl dns
Global:
Link 4 (wlp4s0): 192.168.1.1 8.8.4.4 8.8.8.8
Link 18 (hub0):
Link 26 (tun0): 10.45.248.15 10.38.5.26
When combined with the previous listing, you know that for www.redhat.com, systemd-resolved will query 10.45.248.15, and—if it doesn’t respond—10.38.5.26. For www.google.com, systemd-resolved will query 192.168.1.1 or the two Google servers 8.8.4.4 and 8.8.8.8.
Differences from nss-dns
Before going into further detail, you may ask how this differs from the previous default implementation (nss-dns). With nss-dns there is just one global list of up to three name servers and a global list of search domains (specified as nameserver and search in /etc/resolv.conf).
Each name to query is sent to the first name server. If it doesn’t respond, the same query is sent to the second name server, and so on. systemd-resolved implements split-DNS and remembers which servers are currently considered active.
For single-label names, the query is performed with each of the search domains suffixed. This is the same with systemd-resolved. For multi-label names, a query for the unsuffixed name is performed first, and if that fails, a query for the name suffixed by each of the search domains in turn is performed. systemd-resolved doesn’t do that last step; it only suffixes single-label names.
A second difference is that with nss-dns, this module is loaded into each process. The process itself communicates with remote servers and implements the full DNS stack internally. With systemd-resolved, the nss-resolve module is loaded into the process, but it only forwards the query to systemd-resolved over a local transport (D-Bus) and doesn’t do any work itself. The systemd-resolved process is heavily sandboxed using systemd service features.
The third difference is that with systemd-resolved all state is dynamic and can be queried and updated using D-Bus calls. This allows very strong integration with other daemons or graphical interfaces.
Configuring systemd-resolved
So far, this article talked about servers and the routing of domains without explaining how to configure them. systemd-resolved has a configuration file (/etc/systemd/resolved.conf) where you specify name servers with DNS= and routing or search domains with Domains= (routing domains with ~, search domains without). This corresponds to the Global: lists in the two listings above.
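(An illustrative snippet only, not a recommended configuration; the addresses and domains are placeholders.)
[Resolve]
DNS=192.168.1.1 8.8.8.8
Domains=~company.com private.company.com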
In this article’s examples, both lists are empty. Most of the time configuration is attached to specific interfaces, and “global” configuration is not very useful. Interfaces come and go and it isn’t terribly smart to contact servers on an interface which is down. As soon as you create a VPN connection, you want to use the servers configured for that connection to resolve names, and as soon as the connection goes down, you want to stop.
How, then, does systemd-resolved acquire the configuration for each interface? This happens dynamically, with the network management service pushing this configuration over D-Bus into systemd-resolved. The default in Fedora is NetworkManager and it has very good integration with systemd-resolved. Alternatives like systemd’s own systemd-networkd implement similar functionality. But the interface is open and other programs can do the appropriate D-Bus calls.
Alternatively, resolvectl can be used for this (it is just a wrapper around the D-Bus API). Finally, resolvconf provides similar functionality in a form compatible with a tool in Debian with the same name.
Scenario: Local connection more trusted than VPN
The important thing is that in the common scenario, systemd-resolved follows the configuration specified by other tools, in particular NetworkManager. So to understand how systemd-resolved names, you need to see what NetworkManager tells it to do. Normally NM will tell systemd-resolved to use the name servers and search domains received in a DHCP lease on some interface. For example, look at the source of configuration for the two listings shown above:
There are two connections: “Parkinson” wifi and “Brno (BRQ)” VPN. In the first panel DNS:Automatic is enabled, which means that the DNS server received as part of the DHCP lease (192.168.1.1) is passed to systemd-resolved. Additionally, 8.8.4.4 and 8.8.8.8 are listed as alternative name servers. This configuration is useful if you want to resolve the names of other machines in the local network, which 192.168.1.1 provides. Unfortunately the hotspot DNS server occasionally gets stuck, and the other two servers provide backup when that happens.
The second panel is similar, but doesn’t provide any special configuration. NetworkManager combines routing domains for a given connection from DHCP, SLAAC RDNSS, and VPN, and finally manual configuration, and forwards this to systemd-resolved. This is the source of the search domain redhat.com in the listing above.
There is an important difference between the two interfaces though: in the second panel, “Use this connection only for resources on its network” is checked. This tells NetworkManager to tell systemd-resolved to only use this interface for names under the search domain received as part of the lease (Link 26 (tun0): redhat.com in the first listing above). In the first panel, this checkbox is unchecked, and NetworkManager tells systemd-resolved to use this interface for all other names (Link 4 (wlp4s0): ~.). This effectively means that the wireless connection is more trusted.
Scenario: VPN more trusted than local network
In a different scenario, a VPN would be more trusted than the local network and the domain routing configuration reversed. If a VPN without “Use this connection only for resources on its network” is active, NetworkManager tells systemd-resolved to attach the default routing domain to this interface. After unchecking the checkbox and restarting the VPN connection:
$ resolvectl domain
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): ~. redhat.com
$ resolvectl dns
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): 10.45.248.15 10.38.5.26
Now all domain names are routed to the VPN. The network management daemon controls systemd-resolved and the user controls the network management daemon.
Additional systemd-resolved functionality
As mentioned before, systemd-resolved provides a common name lookup mechanism for all programs running on the machine. Right now the effect is limited: shared resolver and cache and split DNS (the lookup routing logic described above). systemd-resolved provides additional resolution mechanisms beyond the traditional unicast DNS. These are the local resolution protocols MulticastDNS and LLMNR, and an additional remote transport DNS-over-TLS.
Fedora 33 does not enable MulticastDNS and DNS-over-TLS in systemd-resolved. MulticastDNS is implemented by nss-mdns4_minimal and Avahi. Future Fedora releases may enable these as the upstream project improves support.
Implementing this all in a single daemon which has runtime state allows smart behaviour: DNS-over-TLS may be enabled in opportunistic mode, with automatic fallback to classic DNS if the remote server does not support it. Without the daemon which can contain complex logic and runtime state this would be much harder. When enabled, those additional features will apply to all programs on the system.
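(For example, opportunistic DNS-over-TLS can be switched on with a single setting in /etc/systemd/resolved.conf; this is optional and not the Fedora 33 default.)
[Resolve]
DNSOverTLS=opportunistic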
There is more to systemd-resolved: in particular LLMNR and DNSSEC, which only received brief mention here. A future article will explore those subjects.