
GitHub Actions: Use Podman to run Fedora Linux

Introduction

GitHub enables distributed and collaborative code development. To ensure software works correctly, many projects use continuous integration to build and test each new contribution before including it. The continuous integration service on GitHub is GitHub Actions.

Background

GitHub offers testing on Ubuntu, macOS and Windows operating systems. However, there is a wide variety of other operating systems and you may want to ensure that an open source project developed on GitHub runs well on another operating system, in particular Fedora Linux.

Podman is a command line tool that can run a different Linux operating system in a container. This provides a convenient way to test software on other operating systems. The article Getting Started with Podman in Fedora Linux introduces how to run Podman on Fedora.

This article demonstrates how to run Fedora Linux in a container using Podman. The host operating system can be any distro that has Podman installed, even macOS or Windows. In the following demo, the host operating system is Ubuntu. This allows us to test that projects developed on GitHub work correctly on Fedora, even though Fedora is not available as a base operating system for GitHub Actions.

Example GitHub Actions Configuration

As an example, we add continuous integration for Fedora Linux to RedAmber, a project enabling the use of dataframes for machine learning and other data science applications in Ruby. This project relies on Apache Arrow release 10 or greater, so we need to use Fedora Linux Rawhide (F38) since Fedora Linux 37 currently has Apache Arrow release 9 in the Fedora repositories.

GitHub has great documentation on using GitHub Actions. In summary, we need to create a yaml file in the .github/workflows directory of the project, and then enable GitHub Actions if it is not already enabled. A sample yaml file which you can easily modify is below:

name: CI
on:
  push:
    branches:
      - main
  pull_request:
jobs:
  test:
    name: fedora
    runs-on: ubuntu-latest
    steps:
      - name: Setup Podman
        run: |
          sudo apt update
          sudo apt-get -y install podman
          podman pull fedora:38
      - name: Get source
        uses: actions/checkout@v3
        with:
          path: 'red_amber'
      - name: Create container and run tests
        run: |
          {
            echo 'FROM fedora:38'
            echo '# set TZ to ensure the test using timestamp'
            echo 'ENV TZ=Asia/Tokyo'
            echo 'RUN dnf -y update'
            echo 'RUN dnf -y install gcc-c++ git libarrow-devel libarrow-glib-devel ruby-devel'
            echo 'RUN dnf clean all'
            echo 'COPY red_amber red_amber'
            echo 'WORKDIR /red_amber'
            echo 'RUN bundle install'
            echo 'RUN bundle exec rake test'
          } > podmanfile
          podman build --tag fedora38test -f ./podmanfile
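The grouped echo statements in the final step simply write out a Containerfile named podmanfile. As a local sanity check, the same file can be generated with a heredoc, which is arguably easier to read. This is a sketch; the podman build step is omitted so the snippet runs without a container runtime:

```shell
# Generate the same Containerfile the workflow writes, using a heredoc
# instead of grouped echo statements (equivalent content).
cat > podmanfile <<'EOF'
FROM fedora:38
# set TZ to ensure the test using timestamp
ENV TZ=Asia/Tokyo
RUN dnf -y update
RUN dnf -y install gcc-c++ git libarrow-devel libarrow-glib-devel ruby-devel
RUN dnf clean all
COPY red_amber red_amber
WORKDIR /red_amber
RUN bundle install
RUN bundle exec rake test
EOF
grep -c '^RUN' podmanfile   # the file defines five RUN steps
```

From there, the same podman build command used in the workflow can consume this file on any machine with Podman installed.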

Adding the above yaml file enables testing on Fedora Linux running as a guest on Ubuntu. Similar workflows should work for other projects developed on GitHub, thereby ensuring a wide variety of software will run well on Fedora Linux.

Acknowledgements

Benson Muite is grateful to Hirokazu SUZUKI for creating RedAmber, improving the workflow, and using it to test RedAmber on Fedora Linux.


Getting ready for an exciting 2023

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

This “love letter to the community” started in 2020 as a way to shine a little light in a very dark time, and to encourage everyone — including me — by reminding us all of the great work done by great people in Fedora. But it’s become one of my favorite things to do all year. We’re no longer just trying to get through a dark time. We’re looking forward to an exciting era in Fedora’s future.

The work we did this year sets a great foundation for building our future. I don’t just mean the Fedora Linux 36 and 37 releases, although we should definitely be proud of those. But there’s a continued sense of excitement around the community. We’re growing and bringing new energy.

This year, Nest With Fedora grew even more in a time where everyone is feeling virtual event fatigue. And we introduced Hatch — regional events where you could meet with other local-ish contributors. Reading the recaps, I wish I could have gone to all of them. But it was great to spend time with some of you in Rochester. I’ve really, really missed our in-person interactions. Virtual events help keep our global community connected, and help bring in new people who might not be able to join us otherwise, but they can’t substitute for face-to-face meetups. More on that in a moment.

It’s not just a few days of events that has me excited, though. When I look around the project, I see a lot happening. The Fedora CoreOS and Cloud teams promoted their deliverables to Edition status. We wrapped up a huge revamp of our community outreach that began in 2020. The Docs team is more active than it has been in years (and they’ve added a search bar to the site!). We have a complete renovation of our websites in the works. The Marketing team is exploring new ways to promote Fedora, including a presence in the Fediverse. We’re finally almost ready to merge Ask Fedora and Fedora Discussion, bringing more of our conversations together.

That’s a lot of work for one year. The best part is how organic this work is. This wasn’t some demand from on high (that’s not how Fedora works), but it was people in the community saying “I see work to be done. I’m going to do it!” Fedora is us.

We will celebrate so much more in 2023. We’re still working on the details, but we expect to have a greater in-person experience next year, including funding for hackfests and the return of Flock to Fedora! And of course, it’s the 20th anniversary of Fedora. The world — and the technology that drives it — has changed so much since then. But our values haven’t. The Fedora community remains an inclusive, welcoming, and open-minded community. I’m proud to be a part of it. Happy new year, everyone!


Setting up Fedora IoT on Raspberry Pi and rootless Podman containers

Introduction

Fedora IoT is a foundation for Internet of Things (IoT) and Device Edge ecosystems. It’s a secure, immutable, and image-based operating system that supports the deployment of containerized applications. We’ll discuss how you can run Fedora IoT on a Raspberry Pi to deploy a rootless Podman container.

Running Fedora IoT on Raspberry Pi

Prerequisites:

  • PC (with Fedora)
  • SD-Card and SD-Card Reader
  • Raspberry Pi 3 or 4

Download the IoT image & CHECKSUM for your CPU from getfedora.org.

Screenshot of Fedora IoT image download.

After you download your Fedora IoT image, click Verify your Download to download the CHECKSUM file.

Screenshot to show where to find the "Verify your download." button.

Place the CHECKSUM file in the same location where you downloaded your Fedora IoT image.

Then, install gnupg and the arm image installer:

$ sudo dnf install gnupg2 arm-image-installer

Next, import Fedora’s GPG keys to verify the image you downloaded:

$ curl -O https://getfedora.org/static/fedora.gpg

Then, verify the CHECKSUM file has a good signature:

$ gpgv --keyring ./fedora.gpg *-CHECKSUM

You should see something similar to the following in the output:

$ gpgv --keyring ./fedora.gpg *-CHECKSUM
gpgv: Signature made Fri 19 Mar 2021 10:10:28 AM EDT
gpgv: using RSA key 8C5BA6990BDB26E19F2A1A801161AE6945719A39
gpgv: Good signature from "Fedora (34) <fedora-34-primary@fedoraproject.org>"

Lastly, verify the checksum of your download to verify that the signature matches:

$ sha256sum -c *-CHECKSUM
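Under the hood, sha256sum -c recomputes each listed file's digest and compares it with the recorded value. Here is a minimal sketch of that round trip using a throwaway file (sample.img is hypothetical, standing in for the Fedora IoT image):

```shell
# Create a throwaway file and record its digest in CHECKSUM style.
printf 'hello\n' > sample.img
sha256sum sample.img > sample-CHECKSUM

# Recompute the digest and compare it with the recorded value;
# prints "sample.img: OK" on success, and exits non-zero on mismatch.
sha256sum -c sample-CHECKSUM
```

The real CHECKSUM file works the same way, except that it is additionally signed by the Fedora GPG key verified in the previous step.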

Now, find the name of the SD-Card. You can use various tools, but in this article we recommend the udisks command line tool, udisksctl. First, verify that you have NOT inserted your SD-Card into your SD-Card reader.

Then, enter the following command:

udisksctl status 

The output displays all the connected devices on your machine. Review what devices are currently displayed. Next, plug in your SD-Card and enter the command again. Write down the name of the device that’s been added to the previous list.

Use caution when flashing your SD-Card. If you choose the wrong device, you might overwrite your hard drive.
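The before/after comparison can also be scripted. The following sketch demonstrates the technique with plain files; on a real system you would capture the output of udisksctl status into the two snapshot files instead of using the printf lines (the device names here are hypothetical):

```shell
# Snapshot the device list before and after inserting the card.
# comm requires sorted input, so sort both snapshots.
printf 'nvme0n1\nsda\n' | sort > before.txt            # before inserting the card
printf 'mmcblk0\nnvme0n1\nsda\n' | sort > after.txt    # after inserting the card

# Print only lines unique to the "after" snapshot: the newly added device.
comm -13 before.txt after.txt
```

Whatever device name this prints is the one to pass to the flashing step; double-check it before writing anything.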

Flash the image onto the SD-Card.

$ arm-image-installer --image=</path/to/fedora_image> \
    --target=<RPi_Version> --media=/dev/<sd_card_device> \
    --addkey=/path/to/pubkey \
    --resizefs
  • image – File path to the image you downloaded.
  • target – Type of ARM board you are using (in this example, either the Raspberry Pi 3 or 4).
  • media – SD-Card device path you identified.
  • addkey – Path to your public SSH key.
  • resizefs – Resizes the image to fill the SD-Card unless you have another partition to add.

The image won’t have a pre-configured user or password.

Zezere is a provisioning service that can deploy devices without a physical console. Use Zezere to set up and deploy your device.

Navigate to provision.fedoraproject.org, then click the Claim Unowned Devices tab, and claim your device (i.e. your SD-Card). Click the Home tab to view your claimed device, then click the SSH Key Management tab to add your SSH key. This allows you to copy your SSH key to any of your Fedora IoT devices. The keys generated in the SSH Key Management tab are public, so they can be shared without risk to the security of your devices.

Image of Zezere to use as reference for instructions on how to deploy your device.

Return to the Home tab and click Submit provision request on your SD-Card to set up a provisioning request. Select fedora-iot-stable from the drop-down and click Schedule to copy your SSH Key onto your Fedora IoT device.

You’re now ready to run your applications.

Setting up rootless Podman containers

Fedora IoT uses Podman to develop, manage, and run Open Container Initiative (OCI) containers. Rootless containers run under an unprivileged user account, so a process that breaks out of the container does not gain root privileges on the host. This makes them safer to run and to share between machines.

Install slirp4netns, fuse-overlayfs, and shadow-utils to begin setup of a rootless Podman container:

 sudo dnf -y install slirp4netns fuse-overlayfs shadow-utils

Rootless Podman containers require the root user to have a range of UIDs/GIDs listed in the /etc/subuid and /etc/subgid files. Update the /etc/subuid and /etc/subgid for each non-root user.

sudo usermod --add-subuids START-END --add-subgids START-END USERNAME
  • START – First UID/GID in the subordinate range (e.g. 100000).
  • END – Last UID/GID in the subordinate range (e.g. 165535, which gives a range of 65536 IDs).
  • USERNAME – The username you’re updating.
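The command above records the range in /etc/subuid and /etc/subgid, where each line has the form user:start:count. The following sketch reads such an entry back and computes the last ID in the range (the entry itself is a hypothetical example, not taken from a real system):

```shell
# A sample /etc/subuid line: user:start:count
entry="ansible:100000:65536"

# Split out the starting UID and the range size.
start=$(echo "$entry" | cut -d: -f2)
count=$(echo "$entry" | cut -d: -f3)

# The last UID in the subordinate range is start + count - 1.
echo "last subordinate UID: $((start + count - 1))"
```

On a real system you could apply the same parsing to `grep "^$USER:" /etc/subuid` to verify your own range.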

Podman is now set up to run rootless containers.

More setup recommendations

View the following resources for additional ways you can improve the setup of your containers:


Automate container management on Fedora Linux with the Podman Linux System Role

Containers are a popular way to distribute and run software on Linux. One of the tools included in Fedora Linux to work with containers is the Pod Manager tool, also known as Podman. This article describes the use of the Ansible Podman Linux System Roles to automate container management.

With Podman, you can quickly and easily download container images and run containers. For more information on Podman, check out the Getting Started section on the podman.io site.

While Podman is very easy to use, many people are interested in automating Podman for a variety of reasons. For example, maybe you have multiple Fedora Linux systems that you would like to deploy a container workload across, or perhaps you’re a developer and would like to setup an automated process to deploy containers on your local workstation for testing purposes. Whether you are working with containers on a single system, or need to manage containers across a number of systems, automation can be critical to being efficient and saving time.

Overview of Linux System Roles

Linux System Roles are a set of Ansible roles/collections that can help automate the configuration and management of several aspects of Fedora Linux, CentOS Stream, RHEL, and RHEL derivatives. Linux System Roles is packaged in Fedora as an RPM (linux-system-roles) and is also available on Ansible Galaxy. For more information on Linux System Roles, and to see a list of included roles, refer to the Linux System Roles project page.

Linux System Roles recently added a new podman role for automating the management of Podman containers. One of Podman’s unique features is that it is daemonless, so the podman role directly sets the desired configuration on each host, and is capable of configuring the containers.conf, containers-registries.conf, containers-storage.conf, and containers-policy.json settings.

Podman systemd integration and Kubernetes YAML support

The podman system role utilizes the systemd integration with Kubernetes YAML introduced in Podman version 4.2. Podman supports the ability to run containers based on Kubernetes YAML, which can make it easier to transition between Podman and Kubernetes. Podman 4.2 introduced a new podman-kube@.service systemd template unit which uses systemd to manage containers defined in Kubernetes YAML. You’ll see an example of how the podman system role utilizes this functionality below.

Demo environment overview

In my environment I have four systems running Fedora Linux. The fedora-controlnode.example.com system will be the Ansible control node — this is where I’ll install Ansible and Linux System Roles. The other three systems, fedora-node1.example.com, fedora-node2.example.com, and fedora-node3.example.com are the systems that I would like to deploy container workloads on to.

On these three systems, I would like to deploy a Nextcloud container. I would also like to deploy a web server container on these systems and run this as a non-privileged user (also referred to as a rootless container). I’ll use the httpd-24 container image that is a Red Hat Universal Base Image (UBI).

Setting up the control node system

Starting on the fedora-controlnode.example.com system, I’ll need to install the linux-system-roles and ansible packages:

[ansible@fedora-controlnode ~]$ sudo dnf install linux-system-roles ansible 

I’ll also need to configure SSH keys and the sudo configuration so that a user on the fedora-controlnode.example.com host can authenticate and escalate to root privileges on each of the three managed nodes. In this example, I am using an account named ansible.

Defining the Kubernetes YAML for the Nextcloud container

I’ll create a Kubernetes YAML file named nextcloud.yml with the following content that defines how I want the Nextcloud container configured:

apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
    - name: nextcloud
      image: docker.io/library/nextcloud
      ports:
        - containerPort: 80
          hostPort: 8000
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: nextcloud-html
  volumes:
    - name: nextcloud-html
      hostPath:
        path: /nextcloud-html

The key parts of this YAML specify:

  • the name of the container,
  • the URL for the container image,
  • that the container’s port 80 will be published on the host as port 8000,
  • that the /var/www/html directory should use a volume mount using the /nextcloud-html directory on the host.

Defining the Kubernetes YAML for the web server

I’d also like to deploy a container running a web server, so I’ll define the following Kubernetes YAML file for it, named ubi8-httpd.yml:

apiVersion: v1
kind: Pod
metadata:
  name: ubi8-httpd
spec:
  containers:
    - name: ubi8-httpd
      image: registry.access.redhat.com/ubi8/httpd-24
      ports:
        - containerPort: 8080
          hostPort: 8080
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: ubi8-html
  volumes:
    - name: ubi8-html
      hostPath:
        path: ubi8-html

This is similar to the nextcloud.yml file:

  • specifying the name of the container,
  • the URL for the container image,
  • that the container’s port 8080 should be published on the host as port 8080,
  • that the /var/www/html directory should use a volume mount using the ubi8-html directory on the host.

Note that later on we’ll configure this container to run as a non-privileged user, so this path will be relative to the user’s home directory.

Defining the Ansible inventory file

I need to define an Ansible inventory file that lists the host names of the systems I would like to deploy the containers on. I’ll create a simple inventory file, named inventory, with the list of my three managed nodes:

fedora-node1.example.com
fedora-node2.example.com
fedora-node3.example.com

Defining the Ansible playbook

The final file I need to create is the actual Ansible playbook file, which I’ll name podman.yml with the following content:

- name: Run the podman system role
  hosts: all
  vars:
    podman_firewall:
      - port: 8080/tcp
        state: enabled
      - port: 8000/tcp
        state: enabled
    podman_create_host_directories: true
    podman_host_directories:
      "ubi8-html":
        owner: ansible
        group: ansible
        mode: "0755"
    podman_kube_specs:
      - state: started
        run_as_user: ansible
        run_as_group: ansible
        kube_file_src: ubi8-httpd.yml
      - state: started
        kube_file_src: nextcloud.yml
  roles:
    - fedora.linux_system_roles.podman

- name: Create index.html file
  hosts: all
  tasks:
    - ansible.builtin.copy:
        content: "Hello from {{ ansible_hostname }}"
        dest: /home/ansible/ubi8-html/index.html
        owner: ansible
        group: ansible
        mode: 0644
        serole: object_r
        setype: container_file_t
        seuser: system_u

This playbook contains two plays, the first is named Run the podman system role. This play defines variables that control the podman system role, which is called as part of this play. The variables defined are:

  • podman_firewall: specifies that port 8080/tcp and 8000/tcp should be enabled. These ports are used by the ubi8-httpd and nextcloud containers, respectively.
  • podman_create_host_directories: specifies that host directories defined in the Kubernetes files will be created if they don’t exist
  • podman_host_directories: Within the ubi8-httpd.yml Kubernetes YAML file, I defined a ubi8-html volume. This variable specifies that this ubi8-html directory on the hosts will be created with the ansible owner and group, and with a 0755 mode. Note that the nextcloud-html volume, defined in the nextcloud.yml file, is not listed here so the default ownership and permissions will be used when the directory is created on the hosts.
  • podman_kube_specs: This lists the Kubernetes YAML files that the podman system role should manage. It refers to the two files that were previously explained, ubi8-httpd.yml and nextcloud.yml. Note that the container defined in ubi8-httpd.yml is also specified to run as the ansible user and group.

The second play, Create index.html file, uses the ansible.builtin.copy module to deploy an index.html file to the /home/ansible/ubi8-html/ directory. This gives the web server running in the ubi8-httpd containers content to serve.

Running the playbook

The next step is to run the playbook from the fedora-controlnode.example.com host with the following command:

[ansible@fedora-controlnode ~]$ ansible-playbook -i inventory -b podman.yml

I’ll verify that the playbook completes successfully with no failed tasks.

At this point, the nextcloud and ubi8-httpd containers should be deployed on each of the three managed nodes.

Validating the Nextcloud containers

Now, I’ll validate the successful deployment of the nextcloud containers on the three managed nodes. I can validate that Nextcloud is accessible by connecting to each host on port 8000 using a web browser, which shows the Nextcloud configuration screen on each host.

I’ll further investigate the fedora-node1.example.com host by connecting to it over SSH and using sudo to access a root shell:

[ansible@fedora-controlnode ~]$ ssh fedora-node1.example.com
[ansible@fedora-node1 ~]$ sudo su -
[root@fedora-node1 ~]#

Run podman ps to validate that the nextcloud container is running:

[root@fedora-node1 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b6b131a652d localhost/podman-pause:4.2.1-1662580699 4 minutes ago Up 4 minutes ago 0aa0edcf4b08-service
71a2a1a48232 localhost/podman-pause:4.2.1-1662580699 4 minutes ago Up 4 minutes ago 0.0.0.0:8000->80/tcp 8b226e4ad5c1-infra
c307a07c7cae docker.io/library/nextcloud:latest apache2-foregroun... 4 minutes ago Up 4 minutes ago 0.0.0.0:8000->80/tcp nextcloud-nextcloud

Validate that the /nextcloud-html directory on the host has been populated with content from the container:

[root@fedora-node1 ~]# ls -al /nextcloud-html/
total 112
drwxr-xr-x. 1 33 tape 420 Nov 7 13:16 .
dr-xr-xr-x. 1 root root 186 Nov 7 13:12 ..
drwxr-xr-x. 1 33 tape 880 Nov 7 13:16 3rdparty
drwxr-xr-x. 1 33 tape 1182 Nov 7 13:16 apps
-rw-r--r--. 1 33 tape 19327 Nov 7 13:16 AUTHORS
drwxr-xr-x. 1 33 tape 408 Nov 7 13:17 config
-rw-r--r--. 1 33 tape 4095 Nov 7 13:16 console.php
-rw-r--r--. 1 33 tape 34520 Nov 7 13:16 COPYING
drwxr-xr-x. 1 33 tape 440 Nov 7 13:16 core
...
...

I can also see that a systemd unit has been created for this container:

[root@fedora-node1 ~]# systemctl list-units | grep nextcloud
  podman-kube@-etc-containers-ansible\x2dkubernetes.d-nextcloud.yml.service  loaded active running  A template for running K8s workloads via podman-play-kube
[root@fedora-node1 ~]# systemctl status podman-kube@-etc-containers-ansible\\x2dkubernetes.d-nextcloud.yml.service
● podman-kube@-etc-containers-ansible\x2dkubernetes.d-nextcloud.yml.service - A template for running K8s workloads via podman-play-kube
     Loaded: loaded (/usr/lib/systemd/system/podman-kube@.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2022-11-07 13:16:52 MST; 7min ago
       Docs: man:podman-play-kube(1)
   Main PID: 7601 (conmon)
      Tasks: 3 (limit: 4655)
     Memory: 31.1M
        CPU: 2.562s
...
...

Note that the name of the service is quite long because it refers to the name of the Kubernetes YAML file, /etc/containers/ansible-kubernetes.d/nextcloud.yml. This file was deployed by the podman system role. If I display the contents of the file, it matches the contents of the nextcloud.yml Kubernetes YAML file I created on the control node host.
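The escaping that produces these long unit names can be illustrated in a few lines of shell. This is a sketch of the convention (it should mirror what the systemd-escape tool does): a literal '-' in the path is escaped as \x2d, and each '/' separator then becomes '-':

```shell
# Derive the systemd instance name from the Kubernetes YAML path.
# Step 1: escape literal '-' characters as '\x2d'.
# Step 2: turn each '/' separator into '-'.
path="/etc/containers/ansible-kubernetes.d/nextcloud.yml"
escaped=$(printf '%s' "$path" | sed -e 's/-/\\x2d/g' -e 's|/|-|g')
echo "podman-kube@${escaped}.service"
```

Running this prints exactly the long service name shown in the systemctl output above, which makes the naming scheme less mysterious when you need to type it by hand.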

[root@fedora-node1 ~]# cat /etc/containers/ansible-kubernetes.d/nextcloud.yml
apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
    - image: docker.io/library/nextcloud
      name: nextcloud
      ports:
        - containerPort: 80
          hostPort: 8000
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: nextcloud-html
  volumes:
    - hostPath:
        path: /nextcloud-html
      name: nextcloud-html

Validating the ubi8-httpd containers

I’ll also validate that the ubi8-httpd container, which was deployed to run as the ansible user and group, is working properly. Back on the fedora-controlnode.example.com host, I’ll validate that I can access the web server on port 8080 on each of the three managed nodes:

[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do curl ${server}:8080; echo; done
Hello from fedora-node1
Hello from fedora-node2
Hello from fedora-node3

I’ll also connect to one of the managed nodes as the ansible user to further investigate:

[ansible@fedora-controlnode ~]$ ssh fedora-node1.example.com
[ansible@fedora-node1 ~]$ whoami
ansible

I’ll run podman ps and validate that the ubi8-httpd container is running:

[ansible@fedora-node1 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b42efd7c9c0 localhost/podman-pause:4.2.1-1662580699 20 minutes ago Up 20 minutes ago 1b46d9874ed0-service
f62b9a2ef9b8 localhost/podman-pause:4.2.1-1662580699 20 minutes ago Up 20 minutes ago 0.0.0.0:8080->8080/tcp 0938dc63acfd-infra
4b3a64783aeb registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 20 minutes ago Up 20 minutes ago 0.0.0.0:8080->8080/tcp ubi8-httpd-ubi8-httpd

This container was deployed as a non-privileged user (the ansible user) so there is a systemd user instance running as the ansible user. I’ll need to specify the --user option on the systemctl command when validating that the systemd unit was created and is running:

[ansible@fedora-node1 ~]$ systemctl --user list-units | grep ubi8
  podman-kube@-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml.service  loaded active running  A template for running K8s workloads via podman-play-kube
[ansible@fedora-node1 ~]$ systemctl --user status podman-kube@-home-ansible-.config-containers-ansible\\x2dkubernetes.d-ubi8\\x2dhttpd.yml.service
● podman-kube@-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml.service - A template for running K8s workloads via podman-play-kube
     Loaded: loaded (/usr/lib/systemd/user/podman-kube@.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2022-11-07 13:12:31 MST; 24min ago
       Docs: man:podman-play-kube(1)
   Main PID: 5260 (conmon)
      Tasks: 17 (limit: 4655)
     Memory: 9.3M
        CPU: 1.245s
...
...

As previously mentioned, the systemd unit name is so long because it contains the path to the Kubernetes YAML file, which in this case is /home/ansible/.config/containers/ansible-kubernetes.d/ubi8-httpd.yml. This file was deployed by the podman system role and contains the contents of the ubi8-httpd.yml file previously configured on the fedora-controlnode.example.com host.

Validating containers automatically start at boot

I’ll reboot the three managed nodes to validate that the containers automatically start up at boot.

After the reboot, the nextcloud containers are still accessible on each host on port 8000, and the ubi8-httpd containers are accessible on each host at port 8080.

The systemd units for the nextcloud containers and ubi8-httpd containers are both enabled to start at boot. However, note that the ubi8-httpd container is running as a non-privileged user (the ansible user), so the podman system role has automatically enabled user lingering for the ansible user. This setting enables a systemd user instance to be started at boot, and to keep running when the user logs out, so that the container will automatically start at boot.

Conclusion

The podman Linux System Role can help automate the deployment of Podman containers across your Fedora Linux environment. You can also combine the podman system role with the other Linux System Roles in the Fedora linux-system-roles package to automate even more. For example, you could write a playbook that utilizes the storage Linux System Role to configure filesystems across your environment, and then use the podman system role to deploy containers that utilize those filesystems.


Working with Btrfs – Subvolumes

This article is part of a series of articles that takes a closer look at Btrfs, the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

In case you missed it, here’s the previous article from the series: https://fedoramagazine.org/working-with-btrfs-general-concepts/

Introduction

Subvolumes allow for the partitioning of a Btrfs filesystem into separate sub-filesystems. This means that you can mount subvolumes from a Btrfs filesystem as if they were independent filesystems. In addition, you can, for example, define the maximum space a subvolume may take up via qgroups (We’ll talk about this in another article in this series), or use subvolumes to specifically include or exclude files from snapshots (We’ll talk about this, too, in another article in this series). Every default Fedora Workstation and Fedora Silverblue installation since Fedora Linux 33 makes use of subvolumes. In this article we will explore how it works.

Below you will find a lot of examples related to subvolumes. If you want to follow along, you must have access to some Btrfs filesystem and root access. You can verify whether your /home/ directory is Btrfs via the following command:

$ findmnt -no FSTYPE /home
btrfs

This command will output the name of the filesystem of your /home/ directory. If it says btrfs, you’re all set. Let’s create a new directory to perform some experiments in:

$ mkdir ~/btrfs-subvolume-test
$ cd ~/btrfs-subvolume-test

In the text below, you will find lots of command outputs in boxes such as shown above. Please keep in mind while reading/comparing command outputs that the box contents are wrapped at the end of the line. This makes it difficult to recognize long lines that are broken across multiple lines for readability. When in doubt, try to resize your browser window and see how the text behaves!

Creating and playing with subvolumes

We can create a Btrfs subvolume with the following command:

$ sudo btrfs subvolume create first
Create subvolume './first'

When we inspect the current directory we will see that it now has a new folder named first. Note the first character d in the output below:

$ ls -l
total 0
drwxr-xr-x. 1 root root 0 Oct 15 18:09 first

We can handle this like any regular folder: We can rename it, move it, create new files and folders inside, etc. Note that the folder belongs to root, so we must be root to do these things.

If it acts like a folder and looks like a folder, how do we know whether it’s a Btrfs subvolume? We can use the btrfs tools to list all subvolumes:

$ sudo btrfs subvolume list .
ID 256 gen 30 top level 5 path home
ID 257 gen 30 top level 5 path root
ID 258 gen 25 top level 257 path root/var/lib/machines
ID 259 gen 29 top level 256 path hartan/btrfs-subvolume-test/first

If you’re on a recent and unmodified Fedora Linux installation you will likely see the same output as above. We will inspect home and root as well as the meaning of all the numbers later. For now, we see that there is a subvolume at the path we specified. We can limit the output to the subvolumes below our current location:

$ sudo btrfs subvolume list -o .
ID 259 gen 29 top level 256 path home/hartan/btrfs-subvolume-test/first

Let’s rename the subvolume:

$ sudo mv first second
$ sudo btrfs subvolume list -o .
ID 259 gen 29 top level 256 path home/hartan/btrfs-subvolume-test/second

We can also nest subvolumes:

$ sudo btrfs subvolume create second/third
Create subvolume 'second/third'
$ sudo btrfs subvolume list .
ID 256 gen 34 top level 5 path home
ID 257 gen 37 top level 5 path root
ID 258 gen 25 top level 257 path root/var/lib/machines
ID 259 gen 37 top level 256 path hartan/btrfs-subvolume-test/second
ID 260 gen 37 top level 259 path hartan/btrfs-subvolume-test/second/third

And we can also remove subvolumes, either like we remove folders:

$ sudo rm -r second/third

or via special Btrfs commands:

$ sudo btrfs subvolume delete second
Delete subvolume (no-commit): '/home/hartan/btrfs-subvolume-test/second'

Handling Btrfs subvolumes like separate filesystems

The introduction mentioned that Btrfs subvolumes act like separate filesystems. This means that we can mount subvolumes and pass some mount options to them. First we will create a small folder structure to get a better understanding of what happens:

$ mkdir -p a a/1 a/1/b
$ sudo btrfs subvolume create a/2
Create subvolume 'a/2'
$ sudo touch a/1/c a/1/b/d a/2/e

Here’s what the structure looks like:

$ tree
.
└── a
    ├── 1
    │   ├── b
    │   │   └── d
    │   └── c
    └── 2
        └── e

4 directories, 3 files

Verify that there is now a new Btrfs subvolume:

$ sudo btrfs subvolume list -o .
ID 261 gen 41 top level 256 path home/hartan/btrfs-subvolume-test/a/2

To mount the subvolume we must know the path of the block device where the Btrfs filesystem subvolume resides. The following command tells us:

$ findmnt -vno SOURCE /home/
/dev/vda3

Now we can mount the subvolume. Make sure you replace the arguments with the values for your PC:

$ sudo mount -o subvol=home/hartan/btrfs-subvolume-test/a/2 /dev/vda3 a/1/b

Observe that we use the -o flag to give additional options to the mount program. In this case we tell it to mount the subvolume with name home/hartan/btrfs-subvolume-test/a/2 from the btrfs filesystem on device /dev/vda3. This is a Btrfs-specific option and isn’t available in other filesystems.
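Once mounted, the subvolume also shows up in /proc/self/mounts with a subvol= entry among the mount options. The following sketch parses a sample mounts line (device and paths taken from the example above, so they are illustrative rather than from a real system) the same way you would parse the real file:

```shell
# A sample line as it would appear in /proc/self/mounts:
# device, mount point, fstype, options, dump, pass
line='/dev/vda3 /home/hartan/btrfs-subvolume-test/a/1/b btrfs rw,relatime,subvol=/home/hartan/btrfs-subvolume-test/a/2 0 0'

# Field 4 holds the comma-separated mount options.
opts=$(echo "$line" | awk '{print $4}')

# Split the options onto separate lines and pull out the subvol= value.
subvol=$(echo "$opts" | tr ',' '\n' | grep '^subvol=' | cut -d= -f2)
echo "$subvol"
```

On a real system, `grep btrfs /proc/self/mounts` gives you the live lines to feed through the same pipeline, or you can simply use `findmnt -t btrfs -o TARGET,OPTIONS`.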

We see that the directory structure has changed:

$ tree
.
└── a
    ├── 1
    │   ├── b
    │   │   └── e
    │   └── c
    └── 2
        └── e

4 directories, 3 files

Note that the file e exists twice now and d is gone. We are now able to access the same Btrfs subvolume by two different paths. All changes we perform in either of the paths are immediately reflected in all other locations:

$ sudo touch a/1/b/x
$ ls -lA a/2
total 0
-rw-r--r--. 1 root root 0 Oct 15 18:14 e
-rw-r--r--. 1 root root 0 Oct 15 18:16 x

Let’s play some more with the mount options. For example we can mount the subvolume as read-only under a/1/b like this (Insert arguments for your PC!):

$ sudo umount a/1/b
$ sudo mount -o subvol=home/hartan/btrfs-subvolume-test/a/2,ro /dev/vda3 a/1/b

We use the same command as above, except that we add ro at the end. Now we can no longer create files via this mount:

$ sudo touch a/1/b/y
touch: cannot touch 'a/1/b/y': Read-only file system

but accessing the subvolume directly still works like before:

$ sudo touch a/2/y
$ tree
.
└── a
    ├── 1
    │   ├── b
    │   │   ├── e
    │   │   ├── x
    │   │   └── y
    │   └── c
    └── 2
        ├── e
        ├── x
        └── y

4 directories, 7 files

Don’t forget to clean up before we move on:

$ sudo rm -rf a
rm: cannot remove 'a/1/b/e': Read-only file system
rm: cannot remove 'a/1/b/x': Read-only file system
rm: cannot remove 'a/1/b/y': Read-only file system

Oh no, what happened? Well, since we mounted the subvolume read-only above, we cannot delete its contents. A deletion is, from a filesystem’s perspective, a write operation: to delete a/1/b/e, we remove the directory entry for e from the contents of its parent directory, a/1/b in this case. In other words, we must write to a/1/b to tell it that e doesn’t exist any longer. So first we unmount the subvolume again, and then we remove the folder:

$ sudo umount a/1/b
$ sudo rm -rf a
$ tree
.

0 directories, 0 files

Subvolume IDs

Remember the first output of the subvolume list subcommand? That contained a lot of numbers, so let’s see what that is all about. I copied the output here to take another look:

ID 256 gen 30 top level 5 path home
ID 257 gen 30 top level 5 path root
ID 258 gen 25 top level 257 path root/var/lib/machines
ID 259 gen 29 top level 256 path hartan/btrfs-subvolume-test/first

We see there are three columns of numbers, each prefixed with a few letters describing what they are. The first column is a subvolume’s ID. Subvolume IDs are unique within a Btrfs filesystem and thus uniquely identify subvolumes. This means that the subvolume named home can also be referred to by its ID 256. In the mount command above we wrote:

$ sudo mount -o subvol=hartan/...

Another perfectly legal option is to use subvolume IDs:

$ sudo mount -o subvolid=...

Subvolume IDs start at 256 and increase by 1 for every created subvolume. There is however one exception to this: The filesystem root always has the subvolume name / and the subvolume ID 5. That is right, even the root of a Btrfs filesystem is technically a subvolume. This is just implicitly known, hence it doesn’t show up in the output of btrfs subvolume list. If you mount a Btrfs filesystem without the subvol or subvolid argument, the root subvolume with subvolid=5 is assumed as default. Below we’ll see an example of when one may want to explicitly mount the filesystem root.
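Translating between subvolume paths and IDs is easy to script. The helper below is a hypothetical convenience, not a Btrfs tool; the sample list output is inlined via a here-document so the sketch is self-contained, and on a real system you would pipe sudo btrfs subvolume list / into it instead:

```shell
# Hypothetical helper: look up a subvolume's ID by its path in the output of
# `btrfs subvolume list`. The last field of each line is the path, the second
# field is the ID. The here-document carries sample output for demonstration.
lookup_subvolid() {
    awk -v name="$1" '$NF == name { print $2 }'
}

lookup_subvolid home <<'EOF'
ID 256 gen 30 top level 5 path home
ID 257 gen 30 top level 5 path root
EOF
```

This prints 256, so a mount by ID would look like sudo mount -o subvolid=256 /dev/vda3 mnt (the device path is the demo value from above).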

The second column of numbers is the generation counter, which is incremented on every Btrfs transaction. This is mostly an internal counter and won’t be discussed further here.

Finally, the third column of numbers is the subvolume ID of the subvolume’s parent. In the output above we see that both the home and root subvolumes have 5 as their parent subvolume ID. Remember that ID 5 has a special meaning: it is the filesystem root. So we know that home and root are children of the root subvolume. hartan/btrfs-subvolume-test/first, on the other hand, is a child of the subvolume with ID 256, which in our case is home.
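These parent relationships can be extracted mechanically from the same listing. Here is a small sketch; it relies on the fixed column layout shown above (field 2 is the ID, field 7 the “top level” parent ID, the last field the path), with the sample output inlined so it runs as-is:

```shell
# Sketch: print each subvolume together with its parent ID by parsing the
# columns of `btrfs subvolume list` output. On a real system, pipe the output
# of `sudo btrfs subvolume list /` into awk instead of the here-document.
awk '{ printf "%s (ID %s) has parent ID %s\n", $NF, $2, $7 }' <<'EOF' | sort
ID 256 gen 30 top level 5 path home
ID 257 gen 30 top level 5 path root
ID 259 gen 29 top level 256 path hartan/btrfs-subvolume-test/first
EOF
```

The output makes the tree explicit: home and root hang off the filesystem root (ID 5), while the test subvolume hangs off home (ID 256).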

In the next section we have a look at where the subvolumes root and home come from.

Inspecting default subvolumes in Fedora Linux

When you create a new Btrfs filesystem from scratch, there will be no subvolumes in it (except, of course, for the root subvolume). So where do the home and root subvolumes in Fedora Linux come from?

These are created by the installer at install time. Traditional installations often include separate filesystem partitions for the / and /home directories. During boot, these are appropriately mounted to assemble one full filesystem. But there is an issue with this approach: unless you use technologies such as LVM, it is very hard to change a partition’s size at some point in the future. As a consequence, you may end up in a situation where either / or /home runs out of space, while the respective other partition has lots of unused, free space left.

Since Btrfs subvolumes are all part of the same filesystem, they share the space that the underlying filesystem offers. Remember when we created the subvolumes above? We never told Btrfs how big they are: by default, nothing keeps a subvolume from taking up all the space the filesystem has. However, we can dynamically impose size limits via Btrfs qgroups, which can also be modified at runtime (we’ll see how in a later article in this series).

Another advantage of separating / and /home is that we can take snapshots of them separately. A subvolume is a boundary for snapshots: a snapshot will never contain the contents of other subvolumes nested below the subvolume that the snapshot is taken of. More details on snapshots follow in the next article in this series.

Enough of the theory! Let’s see what this is all about. First ensure that your root filesystem is in fact of type Btrfs:

$ findmnt -no FSTYPE /
btrfs

And then get the partition it resides on:

$ findmnt -vno SOURCE /
/dev/vda3

Remember we can mount the filesystem root by its special subvolume ID 5 (Adapt the filesystem partition!):

$ mkdir fedora-rootsubvol
$ sudo mount -o subvolid=5 /dev/vda3 ./fedora-rootsubvol
$ ls fedora-rootsubvol/
home root

And there are the subvolumes of our Fedora Linux installation! But how does Fedora Linux know that the subvolume root belongs to /, and home belongs to /home?

The file /etc/fstab contains so-called static information about the filesystem. In simple terms, during booting your system reads this file, line by line, and mounts all the filesystems listed there. On my system, the file looks like this:

$ cat /etc/fstab
# [ ... ]
# /etc/fstab
# Created by anaconda on Sat Oct 15 12:01:57 2022
# [ ... ]
#
UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 / btrfs subvol=root,compress=zstd:1 0 0
UUID=e3a798a8-b8f2-40ca-9da7-5e292a6412aa /boot ext4 defaults 1 2
UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 /home btrfs subvol=home,compress=zstd:1 0 0


The UUID at the beginning of each line is simply a means to identify disks and filesystem partitions in your system (roughly equivalent to /dev/vda3 as I used above). The second column is the path in the filesystem tree where this filesystem should be mounted. The third column is the filesystem type. We see that the entries for / and /home are of type btrfs, just what we expect! Finally, in the fourth column we see the magic: These are the mount options, and there it says to mount / with the option subvol=root. That is exactly the subvolume we saw in the output of btrfs subvolume list / all the time!
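Filtering fstab for its Btrfs lines is a quick way to see which subvolume mounts where. The sketch below inlines the sample fstab entries from above so it is self-contained; on a real system, point awk at /etc/fstab instead:

```shell
# Sketch: print mount point and mount options for every Btrfs entry in an
# fstab. The here-document carries the sample entries from above; replace it
# with `< /etc/fstab` on a real system. Field 3 is the filesystem type,
# field 2 the mount point, and field 4 the mount options.
awk '$3 == "btrfs" { print $2 " <- " $4 }' <<'EOF'
UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 / btrfs subvol=root,compress=zstd:1 0 0
UUID=e3a798a8-b8f2-40ca-9da7-5e292a6412aa /boot ext4 defaults 1 2
UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 /home btrfs subvol=home,compress=zstd:1 0 0
EOF
```

On the sample data this prints `/ <- subvol=root,compress=zstd:1` and `/home <- subvol=home,compress=zstd:1`, while the ext4 /boot line is skipped.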

With this information, we can reconstruct the call to mount that creates this filesystem entry:

$ sudo mount -o subvol=root,compress=zstd:1 UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 /

And that is how Fedora Linux uses Btrfs subvolumes! If you’re curious as to why Fedora Linux decided to use Btrfs as the default filesystem, refer to the change proposal linked below [1].

More on Btrfs subvolumes

The Btrfs wiki has additional information on subvolumes and, most importantly, on the mount options that can be applied to them. Some options, like compress, can only be applied at the filesystem level and thus affect all subvolumes of a Btrfs filesystem. You can find the entry linked below [2].

If you find it confusing to tell which directories are plain directories and which are subvolumes, you can feel free to adopt a special naming convention for your subvolumes. For example, you could prefix your subvolume names with an “@” to make them easily distinguishable.

Now that you know that subvolumes behave like filesystems, you may ask how best to place a subvolume in a certain location. Say you want a Btrfs subvolume under ~/games, where your home directory (~) is itself a subvolume. How can you achieve that? Following the examples above, you could use a command like sudo btrfs subvolume create ~/games. This creates a so-called nested subvolume: inside your subvolume ~, there is now a subvolume games. That is a perfectly fine way to approach this situation.

Another valid solution is to do what Fedora does by default: Create all subvolumes under the root subvolume (i.e. such that their parent subvolume ID is 5), and mount them into the appropriate locations. The Btrfs wiki has an overview of these approaches along with a short discussion about their respective implications on filesystem management [5].

Conclusion

In this article we discovered Btrfs subvolumes, which act like separate Btrfs filesystems inside a Btrfs filesystem. We learned how to create, mount and delete subvolumes. Finally, we explored how Fedora Linux makes use of subvolumes – without us noticing at all.

The next articles in this series will deal with:

  • Snapshots – Going back in time
  • Compression – Transparently saving storage space
  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

If there are other topics related to Btrfs that you want to know more about, have a look at the Btrfs Wiki [3] and Docs [4]. Don’t forget to check out the first article of this series, if you haven’t already! If you feel that there is something missing from this article series, let us know in the comments below. See you in the next article!

Sources

[1]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault#Benefit_to_Fedora
[2]: https://btrfs.readthedocs.io/en/latest/Subvolumes.html
[3]: https://btrfs.wiki.kernel.org/index.php/Main_Page
[4]: https://btrfs.readthedocs.io/en/latest/Introduction.html
[5]: https://btrfs.wiki.kernel.org/index.php/SysadminGuide#Layout


Anaconda Web UI preview image now public!

We are excited to announce the first public preview image of the new Anaconda web interface! Our vision is to reimagine and modernize our installer’s user experience (see our blog post “Anaconda is getting a new suit”). We are doing this by redesigning the user experience on all fronts to make it easier and more approachable for everyone to use.

Today, we would like to introduce our plans for the public preview release. Our new project has reached a point where the core functionality is developed and the new interface can be used for real installations.

So, we’re giving you something to play with! 🙂

Installation progress of the preview image

Why public preview image?

By giving you a working ISO as soon as we can, we give you the opportunity to help us define this new UI. It also allows us to rethink what we have and find new ways to overcome the UI’s challenges instead of re-creating what we already had. Please take this opportunity and reach out to us with your feedback to help us create the best OS installer ever!

Please let us know what you require from Anaconda. Which features are important to you, and why? That will allow us to prioritize our development and design focus. See below for how to contact us.

How to get public preview image?

Download the Anaconda preview image here

Thanks a lot to the Image Builder team for providing us with a way to build an ISO with the Fedora 37 Workstation GA content. We plan to provide additional images with an updated installer at the link above, to give you the newest features and fixes. There are no updates to the installation payload (installed system data) yet. We will announce important updates of the ISO image by mail to [email protected] with CC to [email protected]. Please subscribe to either of these lists to stay informed. This way we will be able to iterate on your feedback.

What you will get with the preview ISO

The ISO will allow you to install the system and let you get a taste of the new UI, so you can provide us early feedback. However, it is pretty early in the development cycle. We advise you to not use this ISO to install critical infrastructure or machines where you have important data.

Let’s go to the more interesting part of what you can do with the ISO:

  • Choose installation language
  • Select your disks
  • Automatically partition the disks. BEWARE! This will erase everything on the selected disks.
  • Automatically install Fedora 37 GA Workstation system
  • Basic review screen of your selections
  • Installation progress screen
  • Built-in help (on Installation destination screen only)

Known issues:

  • In the bootloader menu you’ll see “Install Fedora 38”. This is expected, because the installation environment is from Rawhide. The installed content will still be Fedora 37 GA, so don’t worry.
  • VirtualBox on Mac might have resolution issues. We are working on resolving this issue.
  • Aspect ratio and window handling still need work. We know we need to solve this better; feedback is welcome.

How to provide feedback?

Your feedback is critical to building a project that both you and we can be proud of, so please share it with us.

Please take your time to play with the UI and tell us what you think. What works great, what is not working and what you would like to have. Ideally, follow future updates and tell us if the situation is better or worse. 

We are really counting on your feedback and we are thankful to have you all supporting us in this journey!


How to rebase to Fedora Linux 37 on Silverblue

Fedora Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update or rebase to Fedora Linux 37 on your Fedora Silverblue system (these instructions are similar for Fedora Kinoite), this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.

Prior to actually doing the rebase to Fedora Linux 37, you should apply any pending updates. Enter the following in the terminal:

$ rpm-ostree update

or install updates through GNOME Software and reboot.

Rebasing using GNOME Software

GNOME Software shows you that there is a new version of Fedora Linux available on the Updates screen.

Fedora 37 update available

The first thing you need to do is download the new image, so click on the Download button. This will take some time. When it’s done, you will see that the update is ready to install.

Fedora 37 update ready to install

Click on the Restart & Upgrade button. This step will take only a few moments, and the computer will restart at the end. After the restart you will end up in the new and shiny release of Fedora Linux 37. Easy, isn’t it?

Rebasing using terminal

If you prefer to do everything in a terminal, then this part of the guide is for you.

Rebasing to Fedora Linux 37 using the terminal is easy. First, check if the 37 branch is available:

$ ostree remote refs fedora

You should see the following in the output:

fedora:fedora/37/x86_64/silverblue

If you want to pin the current deployment (this deployment will stay as an option in GRUB until you remove it), you can do so by running:

# 0 is entry position in rpm-ostree status
$ sudo ostree admin pin 0

To remove the pinned deployment use the following command:

# 2 is entry position in rpm-ostree status
$ sudo ostree admin pin --unpin 2

where 2 is the position in the rpm-ostree status.

Next, rebase your system to the Fedora Linux 37 branch.

$ rpm-ostree rebase fedora:fedora/37/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora Linux 37.
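The terminal steps above can be wrapped into a small guard script. This is an illustrative sketch, not an official tool: the function takes a refs listing as its argument and only prints the rebase command it would run, so it stays side-effect free until you remove the echo:

```shell
# Sketch: guard the rebase behind a branch-existence check. The function takes
# the output of `ostree remote refs fedora` as its argument and prints the
# rebase command instead of executing it.
BRANCH_REF="fedora/37/x86_64/silverblue"

rebase_if_available() {
    if printf '%s\n' "$1" | grep -qx "fedora:$BRANCH_REF"; then
        echo "rpm-ostree rebase fedora:$BRANCH_REF"
    else
        echo "branch $BRANCH_REF not found" >&2
        return 1
    fi
}

# On a real Silverblue system you would call:
#   rebase_if_available "$(ostree remote refs fedora)"
rebase_if_available "fedora:fedora/37/x86_64/silverblue"
```

Checking for the branch first mirrors the manual ostree remote refs step, so the script refuses to rebase to a branch that doesn’t exist yet.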

How to roll back

If anything bad happens—for instance, if you can’t boot to Fedora Linux 37 at all—it’s easy to go back. Pick the previous entry in the GRUB menu at boot (if you don’t see it, try to press ESC during boot), and your system will start in its previous state before switching to Fedora Linux 37. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase Fedora Silverblue to Fedora Linux 37 and roll back. So why not do it today?


Announcing Fedora Linux 37

Today I’m excited to share the results of the hard work of thousands of Fedora Project contributors: the Fedora Linux 37 release is here! Let’s see what the latest release brings you. As always, you should make sure your system is fully up-to-date before upgrading from a previous release. Can’t wait to get started? Download while you read!

New editions

Fedora Editions are flagship offerings targeted at a particular “market”. With Fedora Linux 37, we’re adding two new Editions. Fedora CoreOS is the successor to what you may remember as Atomic Host. Drawing from Project Atomic and the original CoreOS work, it provides an automatic update mechanism geared toward hosting container-based workloads. With atomic updates and easy rollback, it adds peace of mind to your infrastructure.

Fedora Cloud is also back as an Edition. The Cloud Working Group has seen a resurgence in activity. Cloud provides a great Fedora base to run in your favorite public or private cloud. AMIs will be available in the AWS Marketplace later this week and community channels are available now. Check the website for images in other cloud providers or for your own cloud!

Desktop improvements

Fedora Workstation focuses on the desktop experience. As usual, Fedora Workstation features the latest GNOME release. GNOME 43 includes a new device security panel in Settings, providing the user with information about the security of hardware and firmware on the system. Building on the previous release, more core GNOME apps have been ported to the latest version of the GTK toolkit, providing improved performance and a modern look. 

With this release, we’ve made a few changes to allow you to slim down your installation a bit. We split the language packs for the Firefox browser into subpackages. This means you can remove the “firefox-langpacks” package if you don’t need the localization. The runtime packages for gettext — the tools that help other packages produce multilingual text — are split into a separate, optional subpackage.

Of course, we produce more than just the Editions. Fedora Spins and Labs target a variety of audiences and use cases, including Fedora Comp Neuro, which provides tools for computational neuroscience, and desktop environments like Fedora LXQt, which provides a lightweight desktop environment. And, don’t forget our alternate architectures: ARM AArch64, Power, and S390x.

Sysadmin improvements

Fedora Server now produces a KVM disk image to make running Server in a virtual machine easier. If you’ve disabled SELinux (it’s okay — we still love you!), you can turn it back on with less impact. The autorelabel now runs in parallel, making the “fixfiles” operation much faster.

In order to keep up with advances in cryptography, this release introduces a TEST-FEDORA39 policy that previews changes planned for future releases. The new policy includes a move away from SHA-1 signatures. Researchers have long known that this hash (like MD5 before it) is not safe to use for many security purposes.

In the future, we are likely to remove SHA-1 from the list of acceptable security algorithms in Fedora Linux. (As the name TEST-FEDORA39 implies, perhaps as soon as next year.) We know there are still SHA-1 hashes in use today, however. The new policy helps you test your critical applications now so that you’ll be ready. Please try it out, and let us know where you encounter problems.

Speaking of cryptography, the openssl1.1 package is now deprecated. It will remain available, but we recommend you update your code to work with openssl 3.

Other updates

The Raspberry Pi 4 is now officially supported in Fedora Linux, including accelerated graphics. In other ARM news, Fedora Linux 37 drops support for the ARMv7 architecture (also known as arm32 or armhfp).

Following our “First” foundation, we’ve updated key programming language and system library packages, including Python 3.11, Golang 1.19, glibc 2.36, and LLVM 15.

We’re excited for you to try out the new release! Go to https://getfedora.org/ and download it now. Or if you’re already running Fedora Linux, follow the easy upgrade instructions. For more information on the new features in Fedora Linux 37, see the release notes.

In the unlikely event of a problem…

If you run into a problem, visit our Ask Fedora user-support forum. This includes a category for common issues.

Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle. We love having you in the Fedora community.


Fedora Linux 37 update

Fedora Linux 37 is going to be late; very late. Here’s why. As you may have heard, the OpenSSL project announced a version due to be released on Tuesday. It will include a fix for a critical-severity bug. We won’t know the specifics of the issue until Tuesday’s release, but it could be significant. As a result, we decided to delay the release of Fedora Linux 37. We are now targeting a release day of 15 November.

Imperfect information

Most decisions happen with imperfect information. This one is particularly imperfect. If you’re not familiar with the embargo process, you might not understand why. When a security issue is discovered, this information is often shared with the project confidentially. This allows the developers to fix the issue before more people know about it and can exploit it. Projects then share information with downstreams so they can be ready.

Ironically, Fedora’s openness means we can’t start preparing ahead of time. All of our build pipelines and artifacts are open. If we were to start building updates, this would disclose the vulnerability before the embargo lifts. As a result, we only know that OpenSSL considers this the highest level of severity and Red Hat’s Product Security team strongly recommended we wait for a fix before releasing Fedora Linux 37.

Balancing time and quality

As the Fedora Program Manager, our release schedule is my responsibility. I take pride in the on-time release streak I inherited from my predecessor. We kept it going through Fedora Linux 34 in April 2021. In that time, we made big technical changes (like switching to Btrfs as the default for most variants) and kept each other going through a pandemic. I’m proud of what the community was able to accomplish under difficult circumstances.

But being on time isn’t the only factor. We know that you rely on Fedora Linux for work and for play, so quality is always a consideration. Knowing that we were going to delay for the OpenSSL vulnerability, the question became “how long”?

We make the “go/no-go” decision on Thursdays for a release the following Tuesday. This gives time for the images to update to the mirrors. The OpenSSL project team plans to publish the security fix about 48 hours before we’d make the go/no-go decision for an 8 November target. Factoring in time to build the updated openssl package and generate a release candidate, that gives us about a day and a half to do testing. That’s not enough time to be comfortable with a change to such an important package.

As a result, we’re giving ourselves an extra week so that we can be confident that Fedora Linux 37 has the same level of quality you’ve come to expect.

Was it the right decision?

Time will tell if we made the right decision or not. Today’s Go/No-Go meeting was lively and not everyone agrees that we should delay the release because of this. Like I said, we have little information to go on. It’s important to note that the decision was made as a team, and not the dictate of a single person. Fedora values collaborative decision making, and this is a good example.

When the details are released Tuesday, it may turn out we go “wow, that was not worth delaying the release.” But I think we made the best decision we could with the information we have available.

In the meantime, please join us November 4–5 for the Fedora Linux 37 Release Party. It will be a lot of fun, even if the release isn’t quite out yet.


What’s new in Fedora Workstation 37

Fedora Workstation 37 is the latest version of the Fedora Project’s desktop operating system, made by a worldwide community dedicated to pushing forward innovation in open source. This article describes some of the new user-facing features in Fedora Workstation 37. Upgrade today from GNOME Software, or by using dnf system-upgrade in your favourite terminal emulator!

GNOME 43

Fedora Workstation 37 features the latest version of the GNOME desktop environment which sees more core applications ported to GTK 4, user interface tweaks, and performance tune-ups. Check out the GNOME 43 release notes for more information!

Redesigned Quick Settings menu

No need to open Settings just to change to and from Dark Mode

The new Quick Settings menu offers more control and convenience. You can now easily switch your Wi-Fi network in the menu instead of being taken to a full-screen dialogue box, change between default and dark modes, and enable Night Light without opening the Settings app. A convenient button for taking screenshots and screencasts is also now present.

Core applications

The GNOME core applications included in Fedora Workstation 37 have seen a round of tweaks and improvements.

  • Files has been ported to GTK 4, and the user interface has seen many improvements. Here are just some of them:
    • It is now adaptive – meaning it automatically adjusts to a narrower size, making better use of the available space.
    • The list view has been re-architected to make rubber-band selections easier.
    • The “Properties” and “Open With…” dialogues have been redesigned.
Rubber-band selection in Files 43
  • Calendar features a new sidebar that shows your upcoming events at a glance. It, along with Contacts, now feature adaptive user interfaces.
  • Characters now shows you different skin tone, hair colour, and gender options for emoji.
  • The package source selector in Software has been redesigned and moved to a more visible location.
  • Maps has been ported to GTK 4.
  • Settings includes a new Device Security panel, allowing you to easily see the hardware security features your device offers – or lacks!
Uh oh!

New supplemental default wallpapers

Fedora Workstation 37 ships with a new set of supplemental wallpapers. See how they were made here!

The six new wallpapers come in both light and dark variants

Under-the-hood changes throughout Fedora Linux 37

Fedora Linux 37 features many under-the-hood changes. Here are some notable ones:

  • The Raspberry Pi 4 single-board computer is now officially supported, including 3D acceleration!
  • New installs on BIOS systems will use the GPT disk layout instead of the legacy MBR layout. The installer images will also now use GRUB instead of syslinux to boot on BIOS systems.
  • If you disable and then re-enable SELinux, or run the fixfiles onboot command, the file system relabelling processes will now be done in parallel, allowing for a significant speed boost.
  • The default fonts for Persian have been changed from DejaVu and Noto Sans Arabic to Vazirmatn, providing a more consistent experience for those who use Fedora Linux in Persian.

Also check out…

Cool happenings throughout the Fedora Project!

  • Fedora CoreOS and Fedora Cloud Base have been promoted to Edition status!
  • Preview installer images with a new GUI for Anaconda, the Fedora Linux system installer, will become available in about a week. An article will be published with more details, so watch this space!