
Welcome to our new Fedora Community Action and Impact Coordinator

Great news, Fedora Friends! I am excited to announce that we have completed our search for a new Fedora Community Action and Impact Coordinator (FCAIC). He joins the Open Source Program Office (OSPO) team at Red Hat to work with the Fedora Community today. Please give a warm welcome to Justin W. Flory (he/him).

If you’re a contributor to Fedora, you may have already worked with Justin on a variety of teams and projects. Although I couldn’t possibly list them all in one post, Justin’s Fedora contribution highlights include co-founding CommOps, the Diversity, Equity and Inclusion (D.E.I.) Team, and Mindshare Committee. More contribution highlights include former editor-in-chief of the Fedora Magazine and Community Blog, former Council Member, leading the Marketing team, contributing as a packager, and traveling to events and conferences worldwide as a Fedora Ambassador. He has attended many Flocks: organizing workshops, presenting sessions, and coordinating informal socials like the international candy swap. Most recently Justin presented “5 Lessons Learned from 5 years of Fedora’s D.E.I. Events” at Nest with Fedora 2022.

Justin is new to Red Hat, joining us after seven years of involvement with the Fedora Community. He was first introduced to Fedora as a high school student and later through Open@RIT at the Rochester Institute of Technology (formerly the FOSSBox and FOSS@MAGIC). Justin’s most recent role was at UNICEF’s Office of Innovation, supporting and mentoring startup companies across the world in open sourcing their innovations. He mentored 23 companies from 19 countries on community strategies for their Open Source products. Of those, 14 achieved global recognition as Digital Public Goods (like Fedora Linux). He also designed a fixed-term Open Source mentoring program for startup companies and developer communities to follow best practices and industry standards when launching Open Source communities.

Justin’s extensive experience supporting Open Source community building, program management, and his involvement with the Fedora Project make him an excellent fit for this position. I am excited to work with him as both a colleague on the OSPO team at Red Hat and as a Fedora contributor. Feel free to reach out to Justin with your congratulations, but give him a bit of time to get up to speed with his new FCAIC duties.

Congratulations, Justin!


Working with Btrfs – General Concepts

This article is part of a series that takes a closer look at Btrfs, which has been the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

Introduction

Filesystems are one of the foundations of modern computers. They are an essential part of every operating system and they usually work unnoticed. However, modern filesystems such as Btrfs offer many great features that make working with computers more convenient. Among other things, they can transparently compress your files for you or provide a solid foundation for incremental backups.

This article gives you a high-level overview of how the Btrfs filesystem works and some of the features it has. It will not go into much technical detail nor look at the implementation. More detailed explanations of some highlighted features follow in later articles of this series.

What is a filesystem?

If you’ve heard before how filesystems work on the most basic level, then this isn’t new to you and you can skip to the next section. Otherwise, read on for a short introduction to what makes a filesystem in the first place.

In simple terms, a filesystem allows your PC to find the data that it stores on disk. This sounds like a trivial task, but in essence any type of non-volatile storage device today (HDDs, SSDs, SD cards, and so on) is still mostly what it was back in the 1970s: a (huge) collection of storage blocks.

Blocks are the most granular addressable storage unit. Every file on your PC is stored across one or more blocks. A block is typically 4096 bytes in size, although the exact size depends on your hardware and the software (i.e. the filesystem) on top of it.

Filesystems allow us to find the contents of our files from the vast amount of available storage blocks. This is done via so-called inodes. An inode contains information about a file in a specially formatted storage block. This includes the file’s size, where to find the storage blocks that make up the file contents, its access rules (i.e. who can read, write or execute the file) and much more.
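
You can peek at a file’s inode with standard tools. Here is an illustrative stat invocation (output abbreviated; the values will differ on your system):

$ stat myfile.txt
  File: myfile.txt
  Size: 100        Blocks: 8          IO Block: 4096   regular file
Device: 0,38       Inode: 258        Links: 1
Access: (0644/-rw-r--r--)  Uid: (1000/user)  Gid: (1000/user)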

Below is an example of what this looks like:

A text file “myfile.txt” and a hypothetical example of its representation on disk. All the squares are individual storage blocks.

The structure of an inode has big implications for a filesystem’s capabilities, so it is one of the central data structures of any filesystem. For this reason every filesystem has its own inode structure. If you want to know more about this, have a look at the inode structure of the Btrfs filesystem linked below [1]. For a more detailed explanation of what the individual fields mean, you can refer to the inode structure of the ext4 filesystem [2].

Copy-on-Write filesystems

One of the outstanding features of Btrfs, compared to ext4, for example, is that it is a CoW (Copy-on-Write) filesystem. When a file is changed and written back to disk, it intentionally is not written back to where it was before. Instead, it is copied and stored in an entirely new location on disk. In this sense, it may be simpler to think of CoW as a kind of “redirection”, because the file write is redirected to different storage blocks.

This may sound wasteful, but in practice it isn’t. This is because the modified data must be written back to the disk in any case, regardless of how the filesystem works. Btrfs merely makes sure that the data is written to previously unoccupied blocks, so the old data remains intact. The only real drawback is that this behavior can lead to file fragmentation more quickly than on other filesystems. In regular desktop usage scenarios it is unlikely you will notice a difference.

What is the advantage of CoW? In simple terms: a history of the modified and edited files can be kept. Btrfs will keep the references to the old file versions (inodes) somewhere they can be easily accessed. This reference is a snapshot: An image of the filesystem state at some point in time. This will be the topic of a separate article in this series, so it will be left at that for now.

Beyond keeping file histories, CoW filesystems are always in a consistent state, even if a previous filesystem transaction (like writing to a file) didn’t complete due to, for example, a power loss. That is because filesystem metadata updates are also CoW: the filesystem itself is never overwritten, so an interruption can’t leave it in a partially written state.

Copy-on-Write for files

You can think of filenames as pointers to the inodes of the file they belong to. Upon writing to a file, Btrfs creates a copy of the modified file content (the data), along with a new inode (the metadata), and then makes your filename point to this new inode. The old inode remains untouched. Below you see another hypothetical example to illustrate this:

Continuation of the example above: 3 more bytes of data were added

Here “myfile.txt” has had three bytes appended. A traditional filesystem would have updated the “Data” block in the middle to contain the new contents. A CoW filesystem keeps the old blocks intact (greyed out) and writes (copies) changed data and metadata somewhere new. It is important to note that only changed data blocks are copied, and not the whole file.

If there are no more unused blocks to write new contents to, Btrfs will reclaim space from data blocks occupied by old file versions (unless they are part of a snapshot; see a later article in this series).
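
You can observe CoW behavior from the command line. As a minimal sketch (the file names here are hypothetical), cp on Btrfs can create a reflink copy that shares all data blocks with the original until either file is modified:

$ cp --reflink=always bigfile bigfile-copy  # completes instantly; no data blocks are duplicated
$ echo "more data" >> bigfile-copy          # only the changed blocks get newly written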

Copy-on-Write for folders

From a filesystem’s point of view, a folder is a special type of file. In contrast to regular files, the filesystem interprets the underlying contents directly. A folder has some metadata associated with it (an inode, as seen for files above) that governs access permissions or modification time. In the simplest case, the data stored in a folder (the so-called “directory entries”) is a list of references to inodes, where each inode is in turn another file or folder. However, modern filesystems store at least a filename, together with a reference to an inode of the file in question, in a directory entry.

Earlier it was pointed out that writing to a file creates a copy of the previous inode and modifies the contents accordingly. In essence, this yields a new inode that isn’t related to its predecessor. To make the modified file show up in the filesystem, all the directory entries containing a reference to it are updated as well.

This is a recursive process! Since a folder is itself a file with an inode, modifying any of its directory entries creates a new inode for the folder file. This recursion occurs all the way up the filesystem tree, until it arrives at the filesystem root.

As a consequence, as long as a reference is kept to any of the old directories and they are not deleted or overwritten, the filesystem tree can be traversed in its previous state. This, again, is exactly what snapshots do.
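
To give a small taste of the later article, creating such a preserved reference is a one-liner on Btrfs. This sketch (the target path is hypothetical, and it assumes /home is a subvolume, as on a default Fedora installation) creates a read-only snapshot:

$ sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-backup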

What to expect in future articles

Btrfs is more than just a CoW filesystem. It aims to implement “advanced features while also focusing on fault tolerance, repair and easy administration” (See [3]). Future articles of this series will have a look at these features in particular:

  • Subvolumes – Filetrees within your filetree
  • Snapshots – Going back in time
  • Compression – Transparently saving storage space
  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

This is far from an exhaustive list of Btrfs features. If you want the full overview of available features, check out the Wiki [4] and Docs [3].

Conclusion

I hope that I managed to whet your appetite for getting to know your PC’s filesystem. If you have questions, please leave a comment so they can be discussed in future articles. In the meantime, feel free to study the resources linked in the text. If you stumble over a Btrfs feature that you find particularly intriguing, please add a comment below, too. If there’s enough interest in a particular topic, maybe I’ll add an article to the series. See you in the next article!

Sources

[1]: https://btrfs.wiki.kernel.org/index.php/Data_Structures#btrfs_inode_item
[2]: https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Inode_Table
[3]: https://btrfs.readthedocs.io/en/latest/Introduction.html
[4]: https://btrfs.wiki.kernel.org/index.php/Main_Page


Using Python and NetworkManager to control the network

NetworkManager is the default network management service on Fedora and several other Linux distributions. Its main purpose is to take care of things like setting up interfaces, adding addresses and routes to them and configuring other network related aspects of the system, such as DNS.

There are other tools that offer similar functionality. However one of the advantages of NetworkManager is that it offers a powerful API. Using this API, other applications can inspect, monitor and change the networking state of the system.

This article first introduces the API of NetworkManager and shows how to use it from a Python program. The second part presents some practical examples: how to programmatically connect to a wireless network or add an IP address to an interface via NetworkManager.

The API

NetworkManager provides a D-Bus API. D-Bus is a message bus system that allows processes to talk to each other; using D-Bus, a process that wants to offer some services can register on the bus with a well-known name (for example, “org.freedesktop.NetworkManager”) and expose some objects, each identified by a path. Using d-feet, a graphical tool to inspect D-Bus objects, we can see the object tree exposed by the NetworkManager service.

Each object has properties, methods and signals, grouped into different interfaces. Consider, for example, a simplified view of the interfaces for a device object.

We see that there are different interfaces; the org.freedesktop.NetworkManager.Device interface contains some properties common to all devices, like the state, the MTU and IP configurations. Since this device is Ethernet, it also has an org.freedesktop.NetworkManager.Device.Wired D-Bus interface containing other properties, such as the link speed.

The full documentation for the D-Bus API of NetworkManager is here.

A client can connect to the NetworkManager service using the well-known name and perform operations on the exposed objects. For example, it can invoke methods, access properties or receive notifications via signals. In this way, it can control almost every aspect of network configuration. In fact, all the tools that interact with NetworkManager – nmcli, nmtui, GNOME control center, the KDE applet, Cockpit – use this API.
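
You can experiment with this API directly from a shell using busctl, which ships with systemd. For example, reading the Version property of the service (the output shown is illustrative):

$ busctl get-property org.freedesktop.NetworkManager \
    /org/freedesktop/NetworkManager \
    org.freedesktop.NetworkManager Version
s "1.41.0"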

libnm

When developing a program, it can be convenient to automatically instantiate objects from the objects available on D-Bus and keep their properties synchronized; or to be able to have method calls on those objects automatically dispatched to the corresponding D-Bus method. Such objects are usually called proxies and are used to hide the complexity of D-Bus communication from the developer.

For this purpose, the NetworkManager project provides a library called libnm, written in C and based on GNOME’s GLib and GObject. The library provides C language bindings for functionality provided by NetworkManager. Being a GLib library, it is usable from other languages as well via GObject introspection, as explained below.

The library maps fairly closely to the D-Bus API of NetworkManager. It wraps remote D-Bus objects as native GObjects, and D-Bus signals and properties to GObject signals and properties. Furthermore, it provides helpful accessors and utility functions.

Overview of libnm objects

The diagram below shows the most important objects in libnm and their relationship:

NMClient caches all the objects instantiated from D-Bus. The object is typically created at the beginning of the program and provides a way to access other objects.

A NMDevice represents a network interface, physical (such as Ethernet, InfiniBand, or Wi-Fi) or virtual (such as a bridge or an IP tunnel). Each device type supported by NetworkManager has a dedicated subclass that implements type-specific properties and methods. For example, a NMDeviceWifi has properties related to the wireless configuration and to access points found during the scan, while a NMDeviceVlan has properties describing its VLAN-id and the parent device.

NMClient also provides a list of NMRemoteConnection objects. NMRemoteConnection is one of the two implementations of the NMConnection interface. A connection (or connection profile) contains all the configuration needed to connect to a specific network.

The difference between a NMRemoteConnection and a NMSimpleConnection is that the former is a proxy for a connection existing on D-Bus while the latter is not. In particular, NMSimpleConnection can be instantiated when a new blank connection object is required. This is useful, for example, when adding a new connection to NetworkManager.

The last object in the diagram is NMActiveConnection. This represents an active connection to a specific network using settings from a NMRemoteConnection.

GObject introspection

GObject introspection is a layer that acts as a bridge between a C library using GObject and programming language runtimes such as JavaScript, Python, Perl, Java, Lua, .NET, Scheme, etc.

When the library is built, sources are scanned to generate introspection metadata describing, in a language-agnostic way, all the constants, types, functions, signals, etc. exported by the library. The resulting metadata is used to automatically generate bindings to call into the C library from other languages.

One form of metadata is a GObject Introspection Repository (GIR) XML file. GIRs are mostly used by languages that generate bindings at compile time. The GIR can be translated into a machine-readable format called Typelib that is optimized for fast access and lower memory footprint; for this reason it is mostly used by languages that generate bindings at runtime.

This page lists all the introspection bindings for other languages. For a Python example we will use PyGObject which is included in the python3-gobject RPM on Fedora.

A basic example

Let’s start with a simple Python program that prints information about the system:

import gi

gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

client = NM.Client.new(None)
print("version:", client.get_version())

At the beginning we import the introspection module and then the GLib and NM modules. Since there could be multiple versions of the NM module on the system, we make certain to load the right one. Then we create a client object and print the version of NetworkManager.

Next, we want to get a list of devices and print some of their properties:

devices = client.get_devices()
print("devices:")
for device in devices:
    print(" - name:", device.get_iface())
    print("   type:", device.get_type_description())
    print("   state:", device.get_state().value_nick)

The device state is an enum of type NMDeviceState and we use value_nick to get its description. The output is something like:

version: 1.41.0
devices:
 - name: lo
   type: loopback
   state: unmanaged
 - name: enp1s0
   type: ethernet
   state: activated
 - name: wlp4s0
   type: wifi
   state: activated

In the libnm documentation we see that the NMDevice object has a get_ip4_config() method which returns a NMIPConfig object and provides access to addresses, routes and other parameters currently set on the device. We can print them with:

    ip4 = device.get_ip4_config()
    if ip4 is not None:
        print("   addresses:")
        for a in ip4.get_addresses():
            print("     - {}/{}".format(a.get_address(), a.get_prefix()))
        print("   routes:")
        for r in ip4.get_routes():
            print("     - {}/{} via {}".format(r.get_dest(), r.get_prefix(), r.get_next_hop()))

From this, the output for enp1s0 becomes:

 - name: enp1s0
   type: ethernet
   state: activated
   addresses:
     - 192.168.122.191/24
     - 172.26.1.1/16
   routes:
     - 172.26.0.0/16 via None
     - 192.168.122.0/24 via None
     - 0.0.0.0/0 via 192.168.122.1

Connecting to a Wi-Fi network

Now that we have mastered the basics, let’s try something more advanced. Suppose we are in the range of a wireless network and we want to connect to it.

As mentioned before, a connection profile describes all the settings required to connect to a specific network. Conceptually, we’ll need to perform two different operations: first insert a new connection profile into NetworkManager’s configuration and second activate it. Fortunately, the API provides the method nm_client_add_and_activate_connection_async(), which does everything in a single step. When calling the method we need to pass at least the following parameters:

  • the NMConnection we want to add, containing all the needed properties;
  • the device to activate the connection on;
  • the callback function to invoke when the method completes asynchronously.

We can construct the connection with:

def create_connection():
    connection = NM.SimpleConnection.new()
    ssid = GLib.Bytes.new("Home".encode("utf-8"))

    s_con = NM.SettingConnection.new()
    s_con.set_property(NM.SETTING_CONNECTION_ID, "my-wifi-connection")
    s_con.set_property(NM.SETTING_CONNECTION_TYPE, "802-11-wireless")

    s_wifi = NM.SettingWireless.new()
    s_wifi.set_property(NM.SETTING_WIRELESS_SSID, ssid)
    s_wifi.set_property(NM.SETTING_WIRELESS_MODE, "infrastructure")

    s_wsec = NM.SettingWirelessSecurity.new()
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_KEY_MGMT, "wpa-psk")
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_PSK, "z!q9at#0b1")

    s_ip4 = NM.SettingIP4Config.new()
    s_ip4.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    s_ip6 = NM.SettingIP6Config.new()
    s_ip6.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    connection.add_setting(s_con)
    connection.add_setting(s_wifi)
    connection.add_setting(s_wsec)
    connection.add_setting(s_ip4)
    connection.add_setting(s_ip6)
    return connection

The function creates a new NMSimpleConnection and sets all the needed properties. All the properties are grouped into settings. In particular, the NMSettingConnection setting contains general properties such as the profile name and its type. NMSettingWireless indicates the wireless network name (SSID) and that we want to operate in “infrastructure” mode, that is, as a wireless client. The wireless security setting specifies the authentication mechanism and a password. We set both IPv4 and IPv6 to “auto” so that the interface gets addresses via DHCP and IPv6 autoconfiguration.

All the properties supported by NetworkManager are described in the nm-settings man page and in the “Connection and Setting API Reference” section of the libnm documentation.

To find a suitable interface, we loop through all devices on the system and return the first Wi-Fi device.

def find_wifi_device(client):
    for device in client.get_devices():
        if device.get_device_type() == NM.DeviceType.WIFI:
            return device
    return None

What is missing now is a callback function, but it’s easier if we look at it later. We can proceed by invoking the add_and_activate_connection_async() method:

import gi

gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
connection = create_connection()
device = find_wifi_device(client)
client.add_and_activate_connection_async(
    connection, device, None, None, add_and_activate_cb, None
)
main_loop.run()

To support multiple asynchronous operations without blocking execution of the whole program, libnm uses an event loop mechanism. For an introduction to event loops in GLib see this tutorial. The call to main_loop.run() waits until there are events (such as the callback for our method invocation, or any update from D-Bus). Event processing continues until the main loop is explicitly terminated. This happens in the callback:

def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
        print("ActiveConnection {}".format(ac.get_path()))
        print("State {}".format(ac.get_state().value_nick))
    except Exception as e:
        print("Error:", e)
    main_loop.quit()

Here, we use client.add_and_activate_connection_finish() to get the result for the asynchronous method. The result is a NMActiveConnection object and we print its D-Bus path and state.

Note that the callback is invoked as soon as the active connection is created. It may still be attempting to connect. In other words, when the callback runs we don’t have a guarantee that the activation completed successfully. To ensure that, we would need to monitor the active connection state until it changes to activated (or to deactivated in case of failure). In this example, we just print that the activation started, or why it failed, and then quit the main loop; after that, the main_loop.run() call ends and the program terminates.

Adding an address to a device

Once there is a connection active on a device, we might decide that we want to configure an additional IP address on it.

There are different ways to do that. One way would be to modify the profile and activate it again, similar to what we saw in the previous example. Another way is to change the runtime configuration of the device without updating the profile on disk.

To do that, we use the reapply() method. It requires at least the following parameters:

  • the NMDevice on which to apply the new configuration;
  • the NMConnection containing the configuration.

Since we only want to change the IP address and leave everything else unchanged, we first need to retrieve the current configuration of the device (also called the “applied connection”). Then we update it with the static address and reapply it to the device.

The applied connection, not surprisingly, can be queried with the get_applied_connection() method of the NMDevice. Note that the method also returns a version id that can be useful during the reapply to avoid race conditions with other clients. For simplicity we are not going to use it.

In this example we suppose that we already know the name of the device we want to update:

import gi
import socket

gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
device = client.get_device_by_iface("enp1s0")
device.get_applied_connection_async(0, None, get_applied_cb, None)
main_loop.run()

The callback function retrieves the applied connection from the result, changes the IPv4 configuration and reapplies it:

def get_applied_cb(device, result, data):
    (connection, v) = device.get_applied_connection_finish(result)
    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET, "172.25.12.1", 24))
    device.reapply_async(connection, 0, 0, None, reapply_cb, None)

Omitting exception handling for brevity, the reapply callback is as simple as:

def reapply_cb(device, result, data):
    device.reapply_finish(result)
    main_loop.quit()

When the program quits, we will see the new address configured on the interface.

Conclusion

This article introduced the D-Bus and libnm APIs of NetworkManager and presented some practical examples of their usage. Hopefully it will be useful when you need to develop your next project that involves networking!

Besides the examples presented here, the NetworkManager git tree includes many others for different programming languages. To stay up-to-date with news from the NetworkManager world, follow the blog.


Manual action required to update Fedora Silverblue, Kinoite and IoT (version 36)

Due to an unfortunate combination of issues, the Fedora Silverblue, Kinoite and IoT variants that are running version 36.20220810.0 or later are no longer able to update to the latest version.

You can use these two commands to work around the bugs:

$ sudo find /boot/efi -exec touch '{}' ';'
$ sudo touch /etc/kernel/cmdline

Afterwards, you can update your system as usual with GNOME Software (on Silverblue) or via:

$ sudo rpm-ostree upgrade

These two issues are rooted in GRUB2 bugs that have only landed in Fedora and do not affect CentOS Stream 9 or RHEL. This also does not affect Fedora CoreOS for different reasons.

You can get more details about those issues in the tracker for Fedora Silverblue: https://github.com/fedora-silverblue/issue-tracker/issues/322


The business case for supporting EPEL

EPEL stands for Extra Packages for Enterprise Linux. EPEL is a collection of packages built and maintained by the community for use in Red Hat Enterprise Linux (RHEL), CentOS Stream, and RHEL-like distributions like Rocky Linux and Alma Linux.

I am going to make the case that if you use EPEL as part of your organization’s infrastructure, you have an interest in keeping those packages available and as secure as they can be.

Who is this article for? I’m thinking of the team leads, managers, and directors in IT departments who make decisions about the tools their organizations have access to.

If you don’t use or know about EPEL, it’s likely that you don’t have to think about these things. In that case, this article isn’t for you. However, it might contain ideas for promoting sustainable uses of free and open source software that you can apply to other situations that are more relevant to you.

Reason 1: Unmaintained packages may be removed from EPEL

Packages must be built and maintained in order to be available to users of any distro. If no one is doing the work of maintaining a package, it becomes increasingly out of date. Eventually it may even be removed from the repository because of the security risk. This is avoidable as long as a package has a maintainer.

If you or someone in your organization is the maintainer of a package that you use, then you don’t have to worry about it falling by the wayside and potentially becoming a vulnerability. You get to make sure that the package stays in the repo, is up to date, and remains compatible with the rest of your infrastructure or deployments. Plain and simple.

Of course, there needs to be room to manage bandwidth. How critical an application is to your organization’s operations determines how important it is to make sure that you either maintain it yourself or that someone else is looking after it. Xfce may just be a nice-to-have for you, but Ansible might be mission critical.

Reason 2: You’re the first to have any security patches

Cyber threats continue to grow, both in the number of vulnerabilities found and in the speed at which they are exploited. Security is on every IT person’s mind. Patching vulnerabilities is something that increasingly can’t wait, and this extends to EPEL packages as well.

If you’re the maintainer for a required application, you have the ability to respond quickly to newly discovered vulnerabilities and protect your organization. Additionally, acting in your own self-interest now protects all the other organizations that also depend on that package.

Reason 3: Everyone else who uses that package can help you keep the package running well

When you maintain a package, other users will alert you to bugs as they arise. These are bugs that you may not have realized were there. Arguably it may not be critical to squash bugs that you don’t experience. However, by becoming the hub for feedback for that package, you will also be smoothing out the experience for your own users who may not have thought to report the bug. You benefit from crowd-sourcing quality control.

Reason 4: You can prepare for future releases before they come out

All future LTS releases of RHEL and RHEL-like distros will have their start as CentOS Stream. If you plan on migrating to a release that is represented by the current version of CentOS Stream, as the maintainer you can and should be building against it. This allows you to ensure continuity by packaging the application yourself for your next upgrade. You will know, ahead of time, whether your must-have packages will work in the latest release of your enterprise distro of choice.

Reason 5: You’re contributing to the long-term confidence in EPEL as a platform

The only reason we have packages in EPEL to begin with is because individuals are volunteering their time to maintain them. In a few cases companies commit resources to maintain packages, but they are a small minority. If people don’t believe that EPEL will stick around for as long as RHEL releases, maintainers can lose steam or burn out. By committing resources to EPEL, you are shoring up confidence in the project – confidence that can encourage other organizations and people to invest in EPEL.

Potential solutions

If at this point you are thinking to yourself, “I would like to give back in some way, but what would that look like?”, here are some ideas. Some require lower commitment than others if you want to help but need to remain flexible about involvement.

  1. Maintain at least one package of the ones you use in your organization. The average maintainer looks after 10 packages, so covering at least one should be an easier hurdle to cross.
  2. If everything you use is already covered, find at least one package without a maintainer so that you can support other users just as other maintainers are supporting you.
  3. Report bugs for the packages you’re using.
  4. Request packages from older EPEL branches in newer EPEL branches, e.g. EPEL 9.
  5. Provide testing feedback for packages in the epel-testing repositories.
  6. Depending on the number and importance of packages you use, consider how much employee time you want to dedicate to EPEL maintenance.
  7. Integrate any EPEL maintenance you provide into the job descriptions of the responsible employees so that your team can continue being a responsible open source contributor into the future.

Become a package maintainer

You can start by checking out the Fedora documentation on how to become a package maintainer!

If you need support, or assistance getting started, help is available in the EPEL Matrix channel (with IRC bridge). Here are other ways to get in touch with the EPEL community.

Since you’ve made it this far…

Please take this quick survey about EPEL! We’re looking for feedback on how to improve EPEL in the future.

Here are additional resources you can check out on EPEL and how you can leverage it more.

What do you think?

Do you think these reasons are valid? Are there others you think should be mentioned? Do you disagree with this idea? Continue the conversation in the comments below or in the Fedora Discussion board!


Manage containers on Fedora Linux with Podman Desktop

Podman Desktop is an open-source GUI application for managing containers on Linux, macOS, and Windows.

Historically, developers have used Docker Desktop for graphical management of containers. This worked for those who had the Docker daemon and Docker CLI installed. However, for those who used the daemon-less Podman tool, there was no official application, although there were a few Podman frontends like Pods, Podman Desktop Companion, and Cockpit. This is not the case anymore. Enter Podman Desktop!

This article will discuss features, installation, and use of Podman Desktop, which is developed by developers from Red Hat and other open-source contributors.

Installation

To install Podman Desktop on Fedora Linux, head over to podman-desktop.io and click the Download for Linux button. You will be presented with two options: Flatpak and zip. In this example we are using Flatpak. After clicking Flatpak, open it in GNOME Software by double-clicking the file (if you are using GNOME). You can also install it via the terminal:

flatpak install podman-desktop-X.X.X.flatpak

In the above command, replace X.X.X with the specific version you have downloaded. If you downloaded the zip file, then extract the archive, and launch the Podman Desktop application binary. You can also find pre-release versions by going to the project’s releases page on GitHub.

Features

Podman Desktop is still in its early days. Yet, it supports many common container operations like creating container images, running containers, etc. In addition, you can find a Podman extension under Extensions Catalog in Preferences, which you can use to manage Podman virtual machines on macOS and Windows. Furthermore, Podman Desktop has support for Docker Desktop extensions.

You can install such extensions in the Docker Desktop Extensions section under Preferences. The application window has two panes. The narrow left pane shows the different features of the application, and the right pane is the content area, which displays information relevant to what is selected on the left.

Podman Desktop 0.0.6 running on Fedora 36

Demo

To get an overall view of Podman Desktop’s capabilities, we will create an image from a Dockerfile and push it to a registry, then pull and run it, all from within Podman Desktop.

Build image

The first step is to create a simple Dockerfile by entering the following lines in the command line:

cat <<EOF>>Dockerfile
FROM docker.io/library/httpd:2.4
COPY . /var/www/html
WORKDIR /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
EOF

Now, go to the Images section and press the Build Image button. You will be taken to a new page to specify the Dockerfile, build context and image name. Under Containerfile path, click and browse to pick your Dockerfile. Under image name, enter a name for your image. You can specify a fully qualified image name (FQIN) in the form example.com/username/repo:tag if you want to push the image to a container registry. In this example, I enter quay.io/codezombie/demo-httpd:latest, because I have a public repository named demo-httpd on quay.io. You can follow a similar format to specify your FQIN pointing to your container registry (Quay, Docker Hub, GitHub Container Registry, etc.). Now, press Build and wait for the build to complete.
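
For comparison, the same build can be done with the podman CLI. A rough equivalent of this GUI step, using the FQIN from this demo, would be:

podman build -t quay.io/codezombie/demo-httpd:latest .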

Push image

Once the build is finished, it’s time to push the image. So, we need to configure a registry in Podman Desktop. Go to Preferences, Registries and press Add registry.

Add Registry dialog

In the Add Registry dialog, enter your registry server address, and your user credentials and click ADD REGISTRY.

Now, I go back to my image in the list of images and push it to the repository by pressing the upload icon. When you hover over the image name that starts with the name of the registry added in the settings (quay.io in this demo), a push button appears alongside the image name.

The push button that appears when you hover over the image name
Image pushed to repository via Podman Desktop

Once the image is pushed, anyone with access to the image repository can pull it. Since my image repository is public, you can easily pull it in Podman Desktop.

Pull image

So, to make sure things work, remove this image locally and pull it in Podman Desktop. Find the image in the list and remove it by pressing the delete icon. Once the image is removed, click the Pull Image button. Enter the fully qualified name in the Image to Pull section and press Pull image.

Our container image is successfully pulled

Create a container

As the last part of our Podman Desktop demo, let us spin up a container from our image and check the result. Go to Containers and press Create Container. This will open up a dialog with two choices: From Containerfile/Dockerfile, and From existing image. Press From existing image. This takes us to the list of images. There, select the image we pulled.

Create a container in Podman Desktop

Now, we select our recently-pulled image from the list and press the Play button in front of it. In the dialog that appears, I enter demo-web as Container Name and 8000 as Port Mapping, and press Start Container.

Container configuration

The container starts running and we can check out our Apache server’s default page by running the following command:

curl http://localhost:8000 
It works!

You should also be able to see the running container in the Containers list, with its status changed to Running. There, you will find available operations in front of the container. For example, you can click the terminal icon to open a TTY into the container!

Display of running container demo-web in Podman Desktop with available operations for the container
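
Incidentally, opening a TTY this way corresponds roughly to the following podman command (assuming the container name demo-web from above):

podman exec -it demo-web /bin/bash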

What Comes Next

Podman Desktop is still young and under active development. There is a project roadmap on GitHub with a list of exciting and on-demand features including:

  • Kubernetes Integration
  • Support for Pods
  • Task Manager
  • Volumes Support
  • Support for Docker Compose
  • Kind Support

Contribute at the i18n, Release Validation, CryptoPolicy and GNOME 43 Final test weeks for Fedora Linux 37

There are four test days/weeks coming up in the next few weeks. The first, Wed 31 August through Wed 7 Sept, is for pre-beta release validation testing. The second, Tuesday 6 Sept through Monday 12 Sept, focuses on testing i18n. The third, Monday 5 Sept, is the Crypto Policy test day. The fourth, Wed 7 Sept through Wed 14 Sept, tests GNOME 43 Final. Please come and test with us to make the upcoming Fedora Linux 37 even better. Read more below on how to participate.

Pre-Beta Release Validation

Fedora Linux is foremost a community-powered distribution that runs on all sorts of off-the-shelf hardware. The QA team relies on bugs and edge cases reported from community-owned hardware, so testing pre-release composes is a crucial part of the release process, and we try to fix as many of those bugs as we can! Please participate in the pre-beta release validation test week, now through 7 September. You can help us catch bugs at an early stage. A detailed post can be found here.

GNOME 43 Final test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of the planned change, GNOME 43 Final will land in Fedora and will be shipped with Fedora Linux 37. To ensure that everything works fine, the Workstation Working Group and QA team will hold this test week Wed 7 Sept through Wed 14 Sept. Refer to the GNOME 43 test week wiki page for links and resources needed to participate.

i18n test week

i18n test week focuses on testing internationalization features in Fedora Linux. The test week is Tuesday 6 Sept through Monday 12 Sept.

StrongCryptoSettings3 test day

This is a new and unconventional test day. The change, however small, will have impacts across many areas, and we want our users to help spot as many bugs as possible. The advice is to test exotic VPNs, proprietary chat apps, different email providers, and even git workflows. Guidance for testing these can be found here. This test day is Monday 5 Sept.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results.

Come and test with us to make the upcoming Fedora 37 even better.


Help improve GNOME!

gnome-info-collect is a new tool that collects anonymous data about how GNOME systems are configured. It sends that information back to GNOME servers where it can be analyzed. The goal of this tool is to help improve GNOME, by providing data that can inform design decisions, influence where resources are invested, and generally help GNOME understand its users better.

The more people who provide data, the better! So, if you would like to help us improve GNOME, please consider installing and running gnome-info-collect on your system. It only takes a second.

As of last week, gnome-info-collect is ready to be used, and we are asking all GNOME users to install and run it!

How to run the tool

Simply install the package from the Fedora Copr repository by running the following commands:

$ sudo dnf copr enable vstanek/gnome-info-collect
$ sudo dnf install gnome-info-collect

The Copr repository also contains instructions on how to install on systems without dnf (useful for Silverblue users).

After installing, simply run

$ gnome-info-collect

from the Terminal. The tool will show you what information will be shared and won’t upload anything until you give your consent.

There are packages for other distributions as well. See the project’s README for more information.

How it works

gnome-info-collect is a simple client-server application. The client can be run on any GNOME system. There, it collects various system data including:

  • Hardware information, including the manufacturer and model
  • Various system settings, including workspace configuration, and which sharing features are enabled
  • Application information, such as which apps are installed, and which ones are favourited
  • Which GNOME shell extensions are installed and enabled

You can find the full list of collected information in the gnome-info-collect README. The tool shows you the data that will be collected prior to uploading; if you consent to the upload, the data is then securely sent to GNOME’s servers for processing.

Data privacy

The collected data is completely anonymous and will be used only for the purpose of enhancing usability and user experience of GNOME. No personal information, like usernames or email addresses, is recorded. Any potentially identifying information, such as the IP address of the sender and the precise time of receiving the data, is discarded on the server side. To prevent the same client from sending data multiple times, a salted hash of the machine ID and username is used.
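
As a rough illustration of that deduplication idea (this is not the tool’s actual implementation, and the salt value here is made up), such a salted hash could be computed like this:

$ echo -n "$(cat /etc/machine-id)${USER}my-salt" | sha256sum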

All of this ensures that the collected data is confidential and untraceable.

Spread the word!

The best way to help is to take part by running gnome-info-collect and uploading your anonymous data.

You can also help by sharing this article with other GNOME users, and by encouraging others to run the collection tool themselves. The more users running gnome-info-collect, the better conclusions we can make from the collected data. This will result in an improved GNOME system that is more comfortable for its users.

So, do not hesitate to help improve GNOME. Simply install gnome-info-collect, run it and go tell all your GNOME friends about it! Thank you!


4 cool new projects to try in Copr for August 2022

Copr is a build system for anyone in the Fedora community. It hosts thousands of projects for various purposes and audiences. Some of them should never be installed by anyone, some are already being transitioned to the official Fedora Linux repositories, and the rest are somewhere in between. Copr gives you the opportunity to install third-party software that is not available in Fedora Linux repositories, try nightly versions of your dependencies, use patched builds of your favorite tools to support some non-standard use cases, and just experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

This article takes a closer look at interesting projects that recently landed in Copr.

Ntfy

Ntfy is a simple HTTP-based notification service that allows you to send notifications to your devices from scripts on any computer. To send notifications, ntfy uses PUT/POST requests, and you can also send them via the ntfy CLI without any registration or login. Because anyone who knows a topic name can publish and subscribe to it, choose a hard-to-guess topic name; it is essentially a password.

Sending a notification is as simple as this:

$ ntfy publish beer-lovers "Hi folks. I love beer!"
{"id":"4ZADC9KNKBse", "time":1649963662, "event":"message", "topic":"beer-lovers", "message":"Hi folks. I love beer!"}

And a listener who subscribes to this topic will receive:

$ ntfy subscribe beer-lovers
{"id":"4ZADC9KNKBse", "time":1649963662, "event":"message", "topic":"beer-lovers", "message":"Hi folks. I love beer!"}

If you wish to receive notifications on your phone, then ntfy also has a mobile app for Android so you can send notifications from your laptop to your phone.

Installation instructions

The repo currently provides ntfy for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable cyqsimon/ntfysh
sudo dnf install ntfysh

Koi

If you use light mode during the day but want to protect your eyesight at night by switching to dark mode, you don’t have to do it manually anymore. Koi will do it for you!

Koi provides KDE Plasma Desktop functionality to automatically switch between light and dark mode according to your preferences. Just set the time and themes.

Installation instructions

The repo currently provides Koi for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable birkch/Koi
sudo dnf install Koi

SwayNotificationCenter

SwayNotificationCenter provides a simple and nice looking GTK GUI for your desktop notifications.

You will find key features such as a do-not-disturb mode, a panel to view previous notifications, trackpad/mouse gestures, support for keyboard shortcuts, and customizable widgets. SwayNotificationCenter can also be configured and customized via JSON and CSS files.

More information is available at https://github.com/ErikReider/SwayNotificationCenter, with screenshots at the bottom of the page.

Installation instructions

The repo currently provides SwayNotificationCenter for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable erikreider/SwayNotificationCenter
sudo dnf install SwayNotificationCenter

Webapp Manager

Ever want to launch your favorite websites from one place? With WebApp Manager, you can save your favorite websites and run them later as if they were apps.

You can set a browser in which you want to open the website and much more. For example, with Firefox, all links are always opened within the WebApp.

WebApp manager showcase

Installation instructions

The repo currently provides WebApp Manager for Fedora Linux 35, 36, 37, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable perabyte/webapp-manager
sudo dnf install webapp-manager

Hibernation in Fedora Workstation

This article walks you through the manual setup for hibernation in Fedora Linux 36 Workstation using Btrfs. It is based on a gist by eloylp on GitHub.

Goal and Rationale

Hibernation stores the current runtime state of your machine, effectively the contents of your RAM, onto disk and does a clean shutdown. Upon next boot this state is restored from disk to memory so that everything, including open programs, is how you left it.

Fedora Workstation uses ZRAM. This is a sophisticated approach to swap, using compression inside a portion of your RAM to avoid slower on-disk swap files. Unfortunately this means you don’t have persistent swap space to move your RAM to upon hibernation, when powering off your machine.

How it works

The technique configures systemd and dracut to store and restore the contents of your RAM in a temporary swap file on disk. The swap file is created just before hibernation and removed right after resume to avoid trouble with ZRAM. A persistent swap file is not recommended in conjunction with ZRAM, as it creates some confusing problems that compromise your system’s stability.

A word on compatibility and expectations

Hibernation following this guide might not work flawlessly on your particular machine(s). Due to possible shortcomings of certain drivers, you might experience glitches like non-working Wi-Fi or display after resuming from hibernation. In that case, feel free to reach out in the comment section of the gist on GitHub, or try the tips from the troubleshooting section at the bottom of this article.

The changes introduced in this article are tied to the systemd-hibernate.service and hibernate.target units and hence won’t execute on their own, nor interfere with your system, if you don’t initiate a hibernation. That being said, if it does not work, the setup still adds some small bloat which you might want to remove.

Hibernation in Fedora Workstation

The first step is to create a Btrfs subvolume to contain the swap file.

$ btrfs subvolume create /swap

In order to calculate the size of your swap file, use swapon to get the size of your zram device.

$ swapon
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 8G 0B 100

In this example the machine has 16G of RAM and an 8G zram device. A zram device holds roughly double its own size in uncompressed data within that portion of your RAM. Let that sink in for a moment. This means that in total the memory of this machine can hold 8G * 2 + 8G of RAM, which equals 24G of uncompressed data. Create and configure the 24G swapfile using the following commands.

$ touch /swap/swapfile
# Disable Copy On Write on the file
$ chattr +C /swap/swapfile
$ fallocate --length 24G /swap/swapfile
$ chmod 600 /swap/swapfile
$ mkswap /swap/swapfile

Modify the dracut configuration and rebuild your initramfs to include the resume module, so it can later restore the state at boot.

$ cat <<-EOF | sudo tee /etc/dracut.conf.d/resume.conf
add_dracutmodules+=" resume "
EOF
$ dracut -f

In order to configure grub to tell the kernel to resume from hibernation using the swapfile, you need the UUID and the physical offset.

Use the following command to determine the UUID of the swap file and take note of it.

$ findmnt -no UUID -T /swap/swapfile
dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8

Calculate the correct offset. In order to do this you’ll unfortunately need gcc and the source of the btrfs_map_physical tool, which computes the physical offset of the swapfile on disk. Invoke gcc in the directory where you placed the source and run the tool.

$ gcc -O2 -o btrfs_map_physical btrfs_map_physical.c
$ ./btrfs_map_physical /path/to/swapfile
FILE OFFSET  EXTENT TYPE  LOGICAL SIZE  LOGICAL OFFSET  PHYSICAL SIZE  DEVID  PHYSICAL OFFSET
0            regular      4096          2927632384      268435456      1      <4009762816>
4096         prealloc     268431360     2927636480      268431360      1      4009766912
268435456    prealloc     268435456     3251634176      268435456      1      4333764608
536870912    prealloc     268435456     3520069632      268435456      1      4602200064
805306368    prealloc     268435456     3788505088      268435456      1      4870635520
1073741824   prealloc     268435456     4056940544      268435456      1      5139070976
1342177280   prealloc     268435456     4325376000      268435456      1      5407506432
1610612736   prealloc     268435456     4593811456      268435456      1      5675941888

The first value in the PHYSICAL OFFSET column is the relevant one. In the above example it is 4009762816.

Take note of the pagesize you get from getconf PAGESIZE.

Calculate the kernel resume_offset by dividing the physical offset by the page size. In this example that is 4009762816 / 4096 = 978946.
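
Both steps can be performed directly in the shell:

$ getconf PAGESIZE
4096
$ echo $((4009762816 / 4096))
978946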

Update your grub configuration file and add the resume and resume_offset kernel cmdline parameters.

$ grubby --args="resume=UUID=dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8 resume_offset=978946" --update-kernel=ALL

The created swapfile is only used during the hibernation stage of system shutdown and boot, and hence is not configured in fstab. Systemd units control this behavior, so create the two units hibernate-preparation.service and hibernate-resume.service.

$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-preparation.service
[Unit]
Description=Enable swap file and disable zram before hibernate
Before=systemd-hibernate.service

[Service]
User=root
Type=oneshot
ExecStart=/bin/bash -c "/usr/sbin/swapon /swap/swapfile && /usr/sbin/swapoff /dev/zram0"

[Install]
WantedBy=systemd-hibernate.service
EOF
$ systemctl enable hibernate-preparation.service
$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-resume.service
[Unit]
Description=Disable swap after resuming from hibernation
After=hibernate.target

[Service]
User=root
Type=oneshot
ExecStart=/usr/sbin/swapoff /swap/swapfile

[Install]
WantedBy=hibernate.target
EOF
$ systemctl enable hibernate-resume.service

Systemd does memory checks on login and hibernation. In order to avoid issues when moving the memory back and forth between the swapfile and zram, disable some of them.

$ mkdir -p /etc/systemd/system/systemd-logind.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-logind.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF
$ mkdir -p /etc/systemd/system/systemd-hibernate.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-hibernate.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF

Reboot your machine for the changes to take effect. The following SELinux configuration won’t work if you don’t reboot first.

SELinux won’t like hibernation attempts just yet. Change that with a new policy. An easy, although “brute”, approach is to initiate hibernation and use the audit log of this failed attempt via audit2allow. The following command will fail, returning you to a login prompt.

systemctl hibernate

After you’ve logged in again check the audit log, compile a policy and install it. The -b option filters for audit log entries from last boot. The -M option compiles all filtered rules into a module, which is then installed using semodule -i.

$ audit2allow -b
#============= systemd_sleep_t ==============
allow systemd_sleep_t unlabeled_t:dir search;
$ cd /tmp
$ audit2allow -b -M systemd_sleep
$ semodule -i systemd_sleep.pp

Check that hibernation is working via systemctl hibernate again. After resuming, check that ZRAM is indeed the only active swap device.

$ swapon
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 8G 0B 100

You now have hibernation configured.

GNOME Shell hibernation integration

You might want to add a hibernation button to the GNOME Shell “Power Off / Logout” section. Check out the extension Hibernate Status Button to do so.

Troubleshooting

A first place to troubleshoot any problems is journalctl -b. Have a look around the end of the log, after trying to hibernate, to pinpoint log entries that tell you what might be wrong.

Another source of information on errors is the Problem Reporting tool, especially for problems that are not common but specific to your hardware configuration. Have a look at it before and after attempting hibernation and see if something comes up. Follow up on any issues via Bugzilla and see if others experience similar problems.

Revert the changes

To reverse the changes made above, follow this check-list:

  • remove the swapfile
  • remove the swap subvolume
  • remove the dracut configuration and rebuild dracut
  • remove kernel cmdline args via grubby --remove-args=
  • disable and remove hibernation preparation and resume services
  • remove systemd overrides for systemd-logind.service and systemd-hibernation.service
  • remove SELinux module via semodule -r systemd_sleep

Credits and Additional Resources

This article is a community effort based primarily on the work of eloylp. As the author of this article, I’d like to make transparent that I participated in the discussion to advance the gist behind this, but many more minds contributed to make it work. Make certain to check out the discussion on GitHub.

There are already some Ansible playbooks and shell scripts to automate the process depicted in this guide. For example, check out the shell scripts by krokwen and pietryszak or the Ansible playbook by jorp.

See the Arch wiki for the full guide on how to calculate the swapfile offset.