
EPEL 8 Modularity is going away

EPEL 8 Modularity was set up shortly after the main EPEL 8 release. It attempted to use the Fedora module ecosystem with RHEL modules, but the strange mixture of the two never worked properly. There were routine instances of modules that wouldn’t install, modules that overwrote RHEL modules, Fedora maintainers surprised to find their modules in EPEL, and the constant issue that EPEL modules couldn’t depend on RHEL modules.

Many people have attempted to fix EPEL modularity over the years, but none of these attempts have succeeded. The EPEL Steering Committee has therefore concluded that the experiment with modules in EPEL has not worked, and we are decommissioning EPEL 8 modularity.

Decommission Plan

  • October 31, 2022
    • An updated epel-release will be pushed to the epel8 repo.
      • This sets “enabled = 0” for epel-modular, if you haven’t already changed your config.
      • The epel-modular repository’s full name will include “DEPRECATED”.
  • February 15, 2023
    • The infrastructure for building and publishing epel8 modules will be removed.
    • The EPEL 8 modules will be archived and removed.
    • The mirror manager will be pointed to the archive.
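
If you would rather not wait for the updated epel-release package, you can disable the repository yourself. A minimal example, assuming the stock repo id epel-modular and the config-manager plugin from dnf-plugins-core:

# Disable the deprecated modular repo ahead of the epel-release update
sudo dnf config-manager --set-disabled epel-modular

# Verify it is no longer enabled
dnf repolist --disabled | grep epel-modular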

Archive Access

Systems will still be able to access archived modules, but their use is not recommended. The modules will not receive any further security or bug fixes.


You’re invited to the Fedora Linux 37 Release Party!

I am pleased to announce we will celebrate the final release of Fedora Linux 37 with a virtual Release Party. The virtual release parties are a great way to learn more about the latest Fedora Linux release. More importantly, they’re a chance to spend time with the wonderful Fedora community. Please register on Hopin and join us on November 4th and 5th for a short program of informational sessions and social activities. Make sure to save the dates, share the registration, and show up to party with Fedora Friends!

The program is still in the works, but we hope to include informational sessions that will feature updates about Fedora CoreOS, the new installer interface preview, and a bunch more current community activities. We’ll also meet our new Fedora Community Action and Impact Coordinator, Justin W. Flory. Last, but certainly not least, we will have social sessions, including hanging out in the Fedora Museum WorkAdventure. Thanks to our amazing community for all your contributions to the latest release of Fedora Linux. Let’s celebrate!

Don’t forget to register for free any time and join us November 4th and 5th.


Contribute at the Fedora Linux Test Week for Kernel 6.0

The kernel team is working on final integration for Linux kernel 6.0. This version was recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week running now through Sunday, October 16, 2022. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document which provides all the necessary steps.
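
For example, after booting a test image or installing the candidate kernel, it’s worth confirming which kernel you are actually running before logging results:

# Print the running kernel version for your test report
uname -r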

Happy testing, and we hope to see you on test day.


Welcome to our new Fedora Community Action and Impact Coordinator

Great news, Fedora Friends! I am excited to announce that we have completed our search for a new Fedora Community Action and Impact Coordinator (FCAIC). Please give a warm welcome to Justin W. Flory (he/him), who joins the Open Source Program Office (OSPO) team at Red Hat to work with the Fedora community.

If you’re a contributor to Fedora, you may have already worked with Justin on a variety of teams and projects. Although I couldn’t possibly list them all in one post, Justin’s Fedora contribution highlights include co-founding CommOps, the Diversity, Equity and Inclusion (D.E.I.) Team, and the Mindshare Committee. Other highlights include serving as editor-in-chief of the Fedora Magazine and Community Blog, serving on the Fedora Council, leading the Marketing team, contributing as a packager, and traveling to events and conferences worldwide as a Fedora Ambassador. He has attended many Flocks: organizing workshops, presenting sessions, and coordinating informal socials like the international candy swap. Most recently, Justin presented “5 Lessons Learned from 5 years of Fedora’s D.E.I. Events” at Nest with Fedora 2022.

Justin is new to Red Hat, joining us after seven years of involvement with the Fedora community. He was first introduced to Fedora as a high school student and later through Open@RIT at the Rochester Institute of Technology (formerly the FOSSBox and FOSS@MAGIC). Justin’s most recent role was at UNICEF’s Office of Innovation, supporting and mentoring startup companies across the world in open sourcing their innovations. He mentored 23 companies from 19 countries on community strategies for their Open Source products, and 14 of those achieved global recognition as Digital Public Goods (like Fedora Linux). He also designed a fixed-term Open Source mentoring program to help startup companies and developer communities follow best practices and industry standards when launching Open Source communities.

Justin’s extensive experience with supporting Open Source community building, program management, and involvement with the Fedora Project makes him an excellent fit for this position. I am excited to work with him as both a colleague on the OSPO team at Red Hat and as a Fedora contributor. Feel free to reach out to Justin with your congratulations, but give him a bit to get up to speed with his new FCAIC duties. 

Congratulations, Justin!


Working with Btrfs – General Concepts

This article is part of a series that takes a closer look at Btrfs, which has been the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

Introduction

Filesystems are one of the foundations of modern computers. They are an essential part of every operating system and they usually work unnoticed. However, modern filesystems such as Btrfs offer many great features that make working with computers more convenient. Among other things, they can, for example, transparently compress your files or provide a solid foundation for incremental backups.

This article gives you a high-level overview of how the Btrfs filesystem works and some of the features it has. It will not go into much technical detail nor look at the implementation. More detailed explanations of some highlighted features follow in later articles of this series.

What is a filesystem?

If you already know how filesystems work at the most basic level, then this isn’t new to you and you can skip to the next section. Otherwise, read on for a short introduction to what makes a filesystem in the first place.

In simple terms, a filesystem allows your PC to find the data that it stores on disk. This sounds like a trivial task, but in essence any type of non-volatile storage device today (such as HDDs, SSDs, and SD cards) is still mostly what it was back in the 1970s: a (huge) collection of storage blocks.

Blocks are the most granular addressable storage unit. Every file on your PC is stored across one or more blocks. A block is typically 4096 bytes in size, although the exact size depends on the hardware you have and the software (i.e. the filesystem) on top of it.
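
You can check the block size a mounted filesystem uses with the stat utility. For example, to inspect the filesystem mounted at /:

# Print the fundamental block size of the filesystem mounted at /
stat -f -c "fundamental block size: %S bytes" /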

Filesystems allow us to find the contents of our files from the vast amount of available storage blocks. This is done via so-called inodes. An inode contains information about a file in a specially formatted storage block. This includes the file’s size, where to find the storage blocks that make up the file contents, its access rules (i.e. who can read, write or execute the file) and much more.

Below is an example of what this looks like:

A text file “myfile.txt” and a hypothetical example of its representation on disk. All the squares are individual storage blocks.

The structure of an inode has big implications for a filesystem’s capabilities, so it is one of the central data structures of any filesystem. For this reason, every filesystem has its own inode structure. If you want to know more about this, have a look at the inode structure of the Btrfs filesystem linked below [1]. For a more detailed explanation of what the individual fields mean, you can refer to the inode structure of the ext4 filesystem [2].
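
You can inspect some of this inode metadata for any file with the stat utility:

# Show the inode number, size, block count and access rules of a file
stat myfile.txt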

Copy-on-Write filesystems

One of the outstanding features of Btrfs, compared to ext4, for example, is that it is a CoW (Copy-on-Write) filesystem. When a file is changed and written back to disk, it intentionally is not written back to where it was before. Instead, it is copied and stored in an entirely new location on disk. In this sense, it may be simpler to think of CoW as a kind of “redirection”, because the file write is redirected to different storage blocks.

This may sound wasteful, but in practice it isn’t. This is because the modified data must be written back to the disk in any case, regardless of how the filesystem works. Btrfs merely makes sure that the data is written to previously unoccupied blocks, so the old data remains intact. The only real drawback is that this behavior can lead to file fragmentation quicker than on other filesystems. In regular desktop usage scenarios it is unlikely you will notice a difference.

What is the advantage of CoW? In simple terms: a history of the modified and edited files can be kept. Btrfs will keep the references to the old file versions (inodes) somewhere they can be easily accessed. This reference is a snapshot: An image of the filesystem state at some point in time. This will be the topic of a separate article in this series, so it will be left at that for now.

Beyond keeping file histories, CoW filesystems are always in a consistent state, even if a previous filesystem transaction (like writing to a file) didn’t complete due to, for example, a power loss. That is because filesystem metadata updates are also CoW: the filesystem itself is never overwritten, so an interruption can’t leave it in a partially written state.

Copy-on-Write for files

You can think of filenames as pointers to the inodes of the file they belong to. Upon writing to a file, Btrfs creates a copy of the modified file content (the data), along with a new inode (the metadata), and then makes your filename point to this new inode. The old inode remains untouched. Below you see another hypothetical example to illustrate this:

Continuation of the example above: 3 more bytes of data were added

Here “myfile.txt” has had three bytes appended. A traditional filesystem would have updated the “Data” block in the middle to contain the new contents. A CoW filesystem keeps the old blocks intact (greyed out) and writes (copies) changed data and metadata somewhere new. It is important to note that only changed data blocks are copied, and not the whole file.

If there are no more unused blocks to write new contents to, Btrfs will reclaim space from data blocks occupied by old file versions (unless they are part of a snapshot; see a later article in this series).
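
On Btrfs you can make this block sharing explicit: cp can create a “reflink” copy whose data blocks are shared with the original file until either copy is modified. A small demonstration, assuming the current directory is on a Btrfs filesystem:

# Create a file, then make a CoW (reflink) copy of it.
# Both files share the same data blocks until one of them is written to.
echo "hello btrfs" > myfile.txt
cp --reflink=always myfile.txt mycopy.txt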

Copy-on-Write for folders

From a filesystem’s point of view, a folder is a special type of file. In contrast to regular files, the filesystem interprets a folder’s contents directly. A folder has some metadata associated with it (an inode, as seen for files above) that governs access permissions or modification time. In the simplest case, the data stored in a folder (the so-called “directory entries”) is a list of references to inodes, where each inode is in turn another file or folder. However, modern filesystems store at least a filename, together with a reference to an inode of the file in question, in a directory entry.

Earlier it was pointed out that writing to a file creates a copy of the previous inode and modifies the contents accordingly. In essence, this yields a new inode that isn’t related to its predecessor. To make the modified file show up in the filesystem, all the directory entries containing a reference to it are updated as well.

This is a recursive process! Since a folder is itself a file with an inode, modifying any of its directory entries creates a new inode for the folder file. This recursion occurs all the way up the filesystem tree, until it arrives at the filesystem root.

As a consequence, as long as a reference is kept to any of the old directories and they are not deleted or overwritten, the filesystem tree can be traversed in its previous state. This, again, is exactly what snapshots do.

What to expect in future articles

Btrfs is more than just a CoW filesystem. It aims to implement “advanced features while also focusing on fault tolerance, repair and easy administration” (See [3]). Future articles of this series will have a look at these features in particular:

  • Subvolumes – Filetrees within your filetree
  • Snapshots – Going back in time
  • Compression – Transparently saving storage space
  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

This is by no means an exhaustive list of Btrfs features. If you want the full overview of available features, check out the Wiki [4] and Docs [3].

Conclusion

I hope that I managed to whet your appetite for getting to know your PC’s filesystem. If you have questions, please leave a comment so they can be discussed in future articles. In the meantime, feel free to study the linked resources in the text. If you stumble over a Btrfs feature that you find particularly intriguing, please add a comment below, too. If there’s enough interest in a particular topic, maybe I’ll add an article to the series. See you in the next article!

Sources

[1]: https://btrfs.wiki.kernel.org/index.php/Data_Structures#btrfs_inode_item
[2]: https://ext4.wiki.kernel.org/index.php/Ext4_Disk_Layout#Inode_Table
[3]: https://btrfs.readthedocs.io/en/latest/Introduction.html
[4]: https://btrfs.wiki.kernel.org/index.php/Main_Page


Using Python and NetworkManager to control the network

NetworkManager is the default network management service on Fedora and several other Linux distributions. Its main purpose is to take care of things like setting up interfaces, adding addresses and routes to them and configuring other network related aspects of the system, such as DNS.

There are other tools that offer similar functionality. However one of the advantages of NetworkManager is that it offers a powerful API. Using this API, other applications can inspect, monitor and change the networking state of the system.

This article first introduces the API of NetworkManager and presents how to use it from a Python program. In the second part it shows some practical examples: how to connect to a wireless network or to add an IP address to an interface programmatically via NetworkManager.

The API

NetworkManager provides a D-Bus API. D-Bus is a message bus system that allows processes to talk to each other; using D-Bus, a process that wants to offer some services can register on the bus with a well-known name (for example, “org.freedesktop.NetworkManager”) and expose some objects, each identified by a path. Using d-feet, a graphical tool to inspect D-Bus objects, we can see the object tree exposed by the NetworkManager service.
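
The same tree can also be listed from a terminal with busctl, which ships with systemd:

# List all object paths exposed by the NetworkManager service
busctl tree org.freedesktop.NetworkManager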

Each object has properties, methods and signals, grouped into different interfaces. For example, consider a simplified view of the interfaces of the second device object.
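
busctl can produce such a view too (the device number below is an example; the paths depend on your system):

# Show the interfaces, methods, properties and signals of a device object
busctl introspect org.freedesktop.NetworkManager /org/freedesktop/NetworkManager/Devices/2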

We see that there are different interfaces; the org.freedesktop.NetworkManager.Device interface contains some properties common to all devices, like the state, the MTU and IP configurations. Since this device is Ethernet, it also has an org.freedesktop.NetworkManager.Device.Wired D-Bus interface containing other properties such as the link speed.

The full documentation for the D-Bus API of NetworkManager is here.

A client can connect to the NetworkManager service using the well-known name and perform operations on the exposed objects. For example, it can invoke methods, access properties or receive notifications via signals. In this way, it can control almost every aspect of network configuration. In fact, all the tools that interact with NetworkManager – nmcli, nmtui, GNOME control center, the KDE applet, Cockpit – use this API.

libnm

When developing a program, it can be convenient to automatically instantiate objects from the objects available on D-Bus and keep their properties synchronized; or to be able to have method calls on those objects automatically dispatched to the corresponding D-Bus method. Such objects are usually called proxies and are used to hide the complexity of D-Bus communication from the developer.

For this purpose, the NetworkManager project provides a library called libnm, written in C and based on GNOME’s GLib and GObject. The library provides C language bindings for functionality provided by NetworkManager. Being a GLib library, it is usable from other languages as well via GObject introspection, as explained below.

The library maps fairly closely to the D-Bus API of NetworkManager. It wraps remote D-Bus objects as native GObjects, and D-Bus signals and properties to GObject signals and properties. Furthermore, it provides helpful accessors and utility functions.

Overview of libnm objects

The diagram below shows the most important objects in libnm and their relationships:

NMClient caches all the objects instantiated from D-Bus. The object is typically created at the beginning of the program and provides a way to access other objects.

A NMDevice represents a network interface, either physical (such as Ethernet, InfiniBand or Wi-Fi) or virtual (such as a bridge or an IP tunnel). Each device type supported by NetworkManager has a dedicated subclass that implements type-specific properties and methods. For example, a NMDeviceWifi has properties related to the wireless configuration and to access points found during the scan, while a NMDeviceVlan has properties describing its VLAN-id and the parent device.

NMClient also provides a list of NMRemoteConnection objects. NMRemoteConnection is one of the two implementations of the NMConnection interface. A connection (or connection profile) contains all the configuration needed to connect to a specific network.

The difference between a NMRemoteConnection and a NMSimpleConnection is that the former is a proxy for a connection existing on D-Bus while the latter is not. In particular, NMSimpleConnection can be instantiated when a new blank connection object is required. This is useful, for example, when adding a new connection to NetworkManager.

The last object in the diagram is NMActiveConnection. This represents an active connection to a specific network using settings from a NMRemoteConnection.

GObject introspection

GObject introspection is a layer that acts as a bridge between a C library using GObject and programming language runtimes such as JavaScript, Python, Perl, Java, Lua, .NET, Scheme, etc.

When the library is built, sources are scanned to generate introspection metadata describing, in a language-agnostic way, all the constants, types, functions, signals, etc. exported by the library. The resulting metadata is used to automatically generate bindings to call into the C library from other languages.

One form of metadata is a GObject Introspection Repository (GIR) XML file. GIRs are mostly used by languages that generate bindings at compile time. The GIR can be translated into a machine-readable format called Typelib that is optimized for fast access and lower memory footprint; for this reason it is mostly used by languages that generate bindings at runtime.

This page lists all the introspection bindings for other languages. For a Python example we will use PyGObject which is included in the python3-gobject RPM on Fedora.

A basic example

Let’s start with a simple Python program that prints information about the system:

import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

client = NM.Client.new(None)
print("version:", client.get_version())

At the beginning we import the introspection module and then the GLib and NM modules. Since there could be multiple versions of the NM module on the system, we make certain to load the right one. Then we create a client object and print the version of NetworkManager.

Next, we want to get a list of devices and print some of their properties:

devices = client.get_devices()
print("devices:")
for device in devices:
    print(" - name:", device.get_iface())
    print("   type:", device.get_type_description())
    print("   state:", device.get_state().value_nick)

The device state is an enum of type NMDeviceState and we use value_nick to get its description. The output is something like:

version: 1.41.0
devices:
 - name: lo
   type: loopback
   state: unmanaged
 - name: enp1s0
   type: ethernet
   state: activated
 - name: wlp4s0
   type: wifi
   state: activated

In the libnm documentation we see that the NMDevice object has a get_ip4_config() method which returns a NMIPConfig object and provides access to addresses, routes and other parameters currently set on the device. We can print them with:

    ip4 = device.get_ip4_config()
    if ip4 is not None:
        print("   addresses:")
        for a in ip4.get_addresses():
            print("    - {}/{}".format(a.get_address(), a.get_prefix()))
        print("   routes:")
        for r in ip4.get_routes():
            print("    - {}/{} via {}".format(r.get_dest(), r.get_prefix(), r.get_next_hop()))

From this, the output for enp1s0 becomes:

 - name: enp1s0
   type: ethernet
   state: activated
   addresses:
    - 192.168.122.191/24
    - 172.26.1.1/16
   routes:
    - 172.26.0.0/16 via None
    - 192.168.122.0/24 via None
    - 0.0.0.0/0 via 192.168.122.1
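
Other objects from the overview above are reachable in the same way. For example, here is a minimal sketch (reusing the same client object) that lists each NMActiveConnection together with the devices it uses:

# Each NMActiveConnection links a connection profile to the device(s) using it
for ac in client.get_active_connections():
    devices = ", ".join(d.get_iface() for d in ac.get_devices())
    print("{} on {}".format(ac.get_id(), devices))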

Connecting to a Wi-Fi network

Now that we have mastered the basics, let’s try something more advanced. Suppose we are in the range of a wireless network and we want to connect to it.

As mentioned before, a connection profile describes all the settings required to connect to a specific network. Conceptually, we’ll need to perform two different operations: first, insert a new connection profile into NetworkManager’s configuration and, second, activate it. Fortunately, the API provides the method nm_client_add_and_activate_connection_async() that does everything in a single step. When calling the method we need to pass at least the following parameters:

  • the NMConnection we want to add, containing all the needed properties;
  • the device to activate the connection on;
  • the callback function to invoke when the method completes asynchronously.

We can construct the connection with:

def create_connection():
    connection = NM.SimpleConnection.new()
    ssid = GLib.Bytes.new("Home".encode("utf-8"))

    s_con = NM.SettingConnection.new()
    s_con.set_property(NM.SETTING_CONNECTION_ID, "my-wifi-connection")
    s_con.set_property(NM.SETTING_CONNECTION_TYPE, "802-11-wireless")

    s_wifi = NM.SettingWireless.new()
    s_wifi.set_property(NM.SETTING_WIRELESS_SSID, ssid)
    s_wifi.set_property(NM.SETTING_WIRELESS_MODE, "infrastructure")

    s_wsec = NM.SettingWirelessSecurity.new()
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_KEY_MGMT, "wpa-psk")
    s_wsec.set_property(NM.SETTING_WIRELESS_SECURITY_PSK, "z!q9at#0b1")

    s_ip4 = NM.SettingIP4Config.new()
    s_ip4.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    s_ip6 = NM.SettingIP6Config.new()
    s_ip6.set_property(NM.SETTING_IP_CONFIG_METHOD, "auto")

    connection.add_setting(s_con)
    connection.add_setting(s_wifi)
    connection.add_setting(s_wsec)
    connection.add_setting(s_ip4)
    connection.add_setting(s_ip6)
    return connection

The function creates a new NMSimpleConnection and sets all the needed properties. All the properties are grouped into settings. In particular, the NMSettingConnection setting contains general properties such as the profile name and its type. NMSettingWireless indicates the wireless network name (SSID) and that we want to operate in “infrastructure” mode, that is, as a wireless client. The wireless security setting specifies the authentication mechanism and a password. We set both IPv4 and IPv6 to “auto” so that the interface gets addresses via DHCP and IPv6 autoconfiguration.

All the properties supported by NetworkManager are described in the nm-settings man page and in the “Connection and Setting API Reference” section of the libnm documentation.

To find a suitable interface, we loop through all devices on the system and return the first Wi-Fi device.

def find_wifi_device(client):
    for device in client.get_devices():
        if device.get_device_type() == NM.DeviceType.WIFI:
            return device
    return None

What is missing now is a callback function, but it’s easier if we look at it later. We can proceed invoking the add_and_activate_connection_async() method:

import gi
gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
connection = create_connection()
device = find_wifi_device(client)
client.add_and_activate_connection_async(
    connection, device, None, None, add_and_activate_cb, None
)
main_loop.run()

To support multiple asynchronous operations without blocking execution of the whole program, libnm uses an event loop mechanism. For an introduction to event loops in GLib see this tutorial. The call to main_loop.run() waits until there are events (such as the callback for our method invocation, or any update from D-Bus). Event processing continues until the main loop is explicitly terminated. This happens in the callback:

def add_and_activate_cb(client, result, data):
    try:
        ac = client.add_and_activate_connection_finish(result)
        print("ActiveConnection {}".format(ac.get_path()))
        print("State {}".format(ac.get_state().value_nick))
    except Exception as e:
        print("Error:", e)
    main_loop.quit()

Here, we use client.add_and_activate_connection_finish() to get the result for the asynchronous method. The result is a NMActiveConnection object and we print its D-Bus path and state.

 Note that the callback is invoked as soon as the active connection is created. It may still be attempting to connect. In other words, when the callback runs we don’t have a guarantee that the activation completed successfully. If we want to ensure that, we would need to monitor the active connection state until it changes to activated (or to deactivated in case of failure). In this example, we just print that the activation started, or why it failed, and then we quit the main loop; after that, the main_loop.run() call will end and our program will terminate.
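
As a sketch of what such monitoring could look like, the callback could subscribe to the active connection’s state-changed signal and quit the main loop only once a final state is reached (a hedged example; error handling omitted):

def on_ac_state_changed(ac, state, reason):
    # Quit only when the activation reaches a final state
    if state == NM.ActiveConnectionState.ACTIVATED:
        print("Activation succeeded")
        main_loop.quit()
    elif state == NM.ActiveConnectionState.DEACTIVATED:
        print("Activation failed, reason code:", reason)
        main_loop.quit()

# Inside add_and_activate_cb, instead of quitting immediately:
# ac.connect("state-changed", on_ac_state_changed)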

Adding an address to a device

Once there is a connection active on a device, we might decide that we want to configure an additional IP address on it.

There are different ways to do that. One way would be to modify the profile and activate it again similar to what we saw in the previous example. Another way is by changing the runtime configuration of the device without updating the profile on disk.

To do that, we use the reapply() method. It requires at least the following parameters:

  • the NMDevice on which to apply the new configuration;
  • the NMConnection containing the configuration.

Since we only want to change the IP address and leave everything else unchanged, we first need to retrieve the current configuration of the device (also called the “applied connection”). Then we update it with the static address and reapply it to the device.

The applied connection, not surprisingly, can be queried with the get_applied_connection() method of NMDevice. Note that the method also returns a version id that can be useful during the reapply to avoid race conditions with other clients. For simplicity we are not going to use it.

In this example we suppose that we already know the name of the device we want to update:

import gi
import socket

gi.require_version("NM", "1.0")
from gi.repository import GLib, NM

# other functions here...

main_loop = GLib.MainLoop()
client = NM.Client.new(None)
device = client.get_device_by_iface("enp1s0")
device.get_applied_connection_async(0, None, get_applied_cb, None)
main_loop.run()

The callback function retrieves the applied connection from the result, changes the IPv4 configuration and reapplies it:

def get_applied_cb(device, result, data):
    (connection, v) = device.get_applied_connection_finish(result)
    s_ip4 = connection.get_setting_ip4_config()
    s_ip4.add_address(NM.IPAddress.new(socket.AF_INET, "172.25.12.1", 24))
    device.reapply_async(connection, 0, 0, None, reapply_cb, None)

Omitting exception handling for brevity, the reapply callback is as simple as:

def reapply_cb(device, result, data):
    device.reapply_finish(result)
    main_loop.quit()

When the program quits, we will see the new address configured on the interface.

Conclusion

This article introduced the D-Bus and libnm API of NetworkManager and presented some practical examples of its usage. Hopefully it will be useful when you need to develop your next project that involves networking!

Besides the examples presented here, the NetworkManager git tree includes many others for different programming languages. To stay up to date with news from the NetworkManager world, follow the blog.


Announcing the release of Fedora Linux 37 Beta

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 37 Beta, the next step towards our planned Fedora Linux 37 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for specific use cases like Computational Neuroscience.

Beta Release Highlights

Fedora Workstation

Fedora 37 Workstation Beta includes a beta release of GNOME 43. (We expect the final GNOME 43 release in a few weeks.) GNOME 43 includes a new device security panel in Settings, providing the user with information about the security of hardware and firmware on the system. Building on the previous release, more core GNOME apps have been ported to the latest version of the GTK toolkit, providing improved performance and a modern look. 

Other updates

The Raspberry Pi 4 is now officially supported in Fedora Linux, including accelerated graphics. In other ARM news, Fedora Linux 37 Beta drops support for the ARMv7 architecture (also known as arm32 or armhfp).

We are preparing to promote two of our most popular variants, Fedora CoreOS and Fedora Cloud Base, to Editions. Fedora Editions are our flagship offerings targeting specific use cases.

In order to keep up with advances in cryptography, this release introduces a TEST-FEDORA39 policy that previews changes planned for Fedora Linux 39. The new policy includes a move away from SHA-1 signatures.
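
If you want to preview the policy on a test machine, it can be applied (and reverted) with the update-crypto-policies tool; note that this is a test policy, not something to run in production:

# Switch to the preview policy, check it, and switch back when done
sudo update-crypto-policies --set TEST-FEDORA39
update-crypto-policies --show
sudo update-crypto-policies --set DEFAULT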

Of course, there’s the usual update of programming languages and libraries: Python 3.11, Perl 5.36, Golang 1.19, and more!

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the test mailing list or in the #quality channel on Matrix (bridged to #fedora-qa on Libera.chat). As testing progresses, we track common issues on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

More information

For more detailed information about what’s new in the Fedora Linux 37 Beta release, you can consult the Fedora Linux 37 Change set. It contains more technical information about the new packages and improvements shipped with this release.


Manual action required to update Fedora Silverblue, Kinoite and IoT (version 36)

Due to an unfortunate combination of issues, the Fedora Silverblue, Kinoite and IoT variants running version 36.20220810.0 or later are no longer able to update to the latest version.

You can use these two commands to work around the bugs:

$ sudo find /boot/efi -exec touch '{}' ';'
$ sudo touch /etc/kernel/cmdline

Afterwards, you can update your system as usual with GNOME Software (on Silverblue) or via:

$ sudo rpm-ostree upgrade

These two issues are rooted in GRUB2 bugs that have landed only in Fedora and do not affect CentOS Stream 9 or RHEL. Fedora CoreOS is also unaffected, for different reasons.

You can get more details about those issues in the tracker for Fedora Silverblue: https://github.com/fedora-silverblue/issue-tracker/issues/322


The business case for supporting EPEL

EPEL stands for Extra Packages for Enterprise Linux. EPEL is a collection of packages built and maintained by the community for use in Red Hat Enterprise Linux (RHEL), CentOS Stream, and RHEL-like distributions like Rocky Linux and Alma Linux.

I am going to make the case that if you use EPEL as part of your organization’s infrastructure, you have an interest in keeping those packages available and as secure as they can be.

Who is this article for? I’m thinking of the team leads, managers, and directors in IT departments who make decisions about the tools their organizations have access to.

If you don’t use or know about EPEL, it’s likely that you don’t have to think about these things. In this case this article isn’t for you. However, it might contain ideas for promoting sustainable uses of free and open source software that you can apply to other situations that are more relevant to you.

Reason 1: Unmaintained packages may be removed from EPEL

Packages must be built and maintained in order to be available to a distro’s users. If no one is doing the work of maintaining a package, it becomes increasingly out of date. Eventually it may even be removed from the repository because of the security risk. This is avoidable as long as a package has a maintainer.

If you or someone in your organization is the maintainer of a package that you use, then you don’t have to worry about it falling by the wayside and potentially becoming a vulnerability. You get to make sure that the package stays in the repo, is up to date, and remains compatible with the rest of your infrastructure or deployments. Plain and simple.

Of course, there needs to be room to manage bandwidth. How critical an application is to the operations of your organization determines how important it should be for you to make sure that either you maintain it or it is being looked after. Xfce may just be a nice-to-have for you, but Ansible might be mission critical.

Reason 2: You’re the first to have any security patches

Cyber threats continue to grow, both in the number of exploits found and in the speed at which they are exploited. Security is on every IT person’s mind. Patching vulnerabilities is something that increasingly can’t wait, and this extends to EPEL packages as well.

If you’re the maintainer for a required application, you have the ability to respond quickly to newly discovered vulnerabilities and protect your organization. Additionally, acting in your own self-interest now protects all the other organizations that also depend on that package.

Reason 3: Everyone else who uses that package can help you keep the package running well

If you maintain a package, others who also use it will alert you to bugs as they arise. These are bugs that you may not have realized were there. Arguably it may not be critical to squash bugs that you don’t experience. However, by becoming the hub for feedback for that package, you will also be smoothing out the experience for your own users who may not have thought to report the bug. You benefit from crowd-sourced quality control.

Reason 4: You can prepare for future releases before they come out

All future LTS releases of RHEL and RHEL-like distros will have their start as CentOS Stream. If you plan on migrating to a release that is represented by the current version of CentOS Stream, as the maintainer you can and should be building against it. This allows you to ensure continuity by packaging the application yourself for your next upgrade. You will know, ahead of time, whether your must-have packages will work in the latest release of your enterprise distro of choice.

Reason 5: You’re contributing to the long-term confidence in EPEL as a platform

The only reason we have packages in EPEL to begin with is that individuals are volunteering their time to maintain them. In a few cases companies commit resources to maintain packages, but they are a small minority. If people don’t believe that EPEL will stick around for as long as RHEL releases, maintainers can lose steam or burn out. By committing resources to EPEL, you are shoring up confidence in the project, confidence that can encourage other organizations and people to invest in EPEL.

Potential solutions

If at this point you are thinking to yourself, “I would like to give back in some way, but what would that look like?”, here are some ideas. Some require lower commitment than others if you want to help but need to remain flexible about involvement.

  1. Maintain at least one of the packages you use in your organization. The average maintainer looks after 10 packages, so covering at least one should be an easy hurdle to cross. (See the snippet after this list for a quick way to identify them.)
  2. If everything you use is already covered, find at least one package without a maintainer so that you can support other users just as other maintainers are supporting you.
  3. Report bugs for the packages you’re using.
  4. Request packages from older EPEL branches in newer EPEL branches, e.g. EPEL 9.
  5. Provide testing feedback for packages in the epel-testing repositories.
  6. Depending on the number and importance of packages you use, consider how much employee time you want to dedicate to EPEL maintenance.
  7. Integrate any EPEL maintenance you provide into the job descriptions of the responsible employees so that your team can continue being a responsible open source contributor into the future.
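
If you are unsure which of your installed packages actually come from EPEL, a quick check (assuming the standard repo id epel) is:

# Installed packages report the repository they came from as "@repo"
dnf list --installed | grep '@epel'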

Become a package maintainer

You can start by checking out the Fedora documentation on how to become a package maintainer!

If you need support, or assistance getting started, help is available in the EPEL Matrix channel (with IRC bridge). Here are other ways to get in touch with the EPEL community.

Since you’ve made it this far…

Please take this quick survey about EPEL! We’re looking for feedback on how to improve EPEL in the future.

Here are additional resources you can check out on EPEL and how you can leverage it more.

What do you think?

Do you think these reasons are valid? Are there others you think should be mentioned? Do you disagree with this idea? Continue the conversation in the comments below or in the Fedora Discussion board!


Manage containers on Fedora Linux with Podman Desktop

Podman Desktop is an open-source GUI application for managing containers on Linux, macOS, and Windows.

Historically, developers have used Docker Desktop for graphical management of containers. This worked for those who had the Docker daemon and Docker CLI installed. However, for those who used the daemonless Podman tool, there were a few Podman frontends like Pods, Podman desktop companion, and Cockpit, but no official application. This is not the case anymore. Enter Podman Desktop!

This article will discuss features, installation, and use of Podman Desktop, which is developed by developers from Red Hat and other open-source contributors.

Installation

To install Podman Desktop on Fedora Linux, head over to podman-desktop.io, and click the Download for Linux button. You will be presented with two options: Flatpak and zip. In this example we are using Flatpak. After clicking Flatpak, open it in GNOME Software by double clicking the file (if you are using GNOME). You can also install it via the terminal:

flatpak install podman-desktop-X.X.X.flatpak

In the above command, replace X.X.X with the specific version you have downloaded. If you downloaded the zip file, then extract the archive, and launch the Podman Desktop application binary. You can also find pre-release versions by going to the project’s releases page on GitHub.

Features

Podman Desktop is still in its early days. Yet, it supports many common container operations like creating container images, running containers, etc. In addition, you can find a Podman extension under Extensions Catalog in Preferences, which you can use to manage Podman virtual machines on macOS and Windows. Furthermore, Podman Desktop has support for Docker Desktop extensions.

You can install such extensions in the Docker Desktop Extensions section under Preferences. The application window has two panes. The left narrow pane shows different features of the application and the right pane is the content area, which will display relevant information given what is selected on the left.

Podman Desktop 0.0.6 running on Fedora 36

Demo

To get an overall view of Podman Desktop’s capabilities, we will create an image from a Dockerfile and push it to a registry, then pull and run it, all from within Podman Desktop.

Build image

The first step is to create a simple Dockerfile by entering the following lines in the command line:

cat <<EOF>>Dockerfile
FROM docker.io/library/httpd:2.4
COPY . /var/www/html
WORKDIR /var/www/html
CMD ["httpd", "-D", "FOREGROUND"]
EOF

Now, go to the Images section and press the Build Image button. You will be taken to a new page to specify the Dockerfile, build context and image name. Under Containerfile path, click and browse to pick your Dockerfile. Under image name, enter a name for your image. You can specify a fully qualified image name (FQIN) in the form example.com/username/repo:tag if you want to push the image to a container registry. In this example, I enter quay.io/codezombie/demo-httpd:latest, because I have a public repository named demo-httpd on quay.io. You can follow a similar format to specify your FQIN pointing to your container registry (Quay, Docker Hub, GitHub Container Registry, etc.). Now, press Build and wait for the build to complete.

Push image

Once the build is finished, it’s time to push the image. So, we need to configure a registry in Podman Desktop. Go to Preferences, Registries and press Add registry.

Add Registry dialog

In the Add Registry dialog, enter your registry server address, and your user credentials and click ADD REGISTRY.

Now, I go back to my image in the list of images and push it to the repository by pressing the upload icon. When you hover over the image name that starts with the name of the registry added in the settings (quay.io in this demo), a push button appears alongside the image name.

The push button that appears when you hover over the image name
Image pushed to repository via Podman Desktop

Once the image is pushed, anyone with access to the image repository can pull it. Since my image repository is public, you can easily pull it in Podman Desktop.

Pull image

So, to make sure things work, remove this image locally and pull it in Podman Desktop. Find the image in the list and remove it by pressing the delete icon. Once the image is removed, click the Pull Image button. Enter the fully qualified name in the Image to Pull section and press Pull image.

Our container image is successfully pulled

Create a container

As the last part in our Podman Desktop demo, let us spin up a container from our image and check the result. I go to Containers and press Create Container. This will open up a dialog with two choices: From Containerfile/Dockerfile, and From existing image. Press From existing image. This takes us to the list of images. There, select the image we pulled.

Create a container in Podman Desktop

Now, we select our recently-pulled image from the list and press the Play button in front of it. In the dialog that appears, I enter demo-web as Container Name and 8000 as Port Mapping, and press Start Container.

Container configuration

The container starts running and we can check out our Apache server’s default page by running the following command:

curl http://localhost:8000 
It works!

You should also be able to see the running container in the Containers list, with its status changed to Running. There, you will find available operations in front of the container. For example, you can click the terminal icon to open a TTY into the container!

Display of running container demo-web in Podman Desktop with available operations for the container
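
Because Podman Desktop is a frontend for the regular Podman backend, anything created in the GUI is also visible from the command line. A quick cross-check:

# The container started from the GUI shows up in the Podman CLI as well
podman ps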

What Comes Next

Podman Desktop is still young and under active development. There is a project roadmap on GitHub with a list of exciting and on-demand features including:

  • Kubernetes Integration
  • Support for Pods
  • Task Manager
  • Volumes Support
  • Support for Docker Compose
  • Kind Support