
Linux bcache with writeback cache (how it works and doesn’t work)

bcache is a simple and effective way to make large disks (typically slow, rotational drives) perform much like an SSD, using a small SSD or a small portion of one.

In general, bcache is a system for building block devices out of slow, large disks, with fast, small disks attached as a cache.

This article will discuss performance and some optimization tips as well as configuration of bcache.

The following terms are used in bcache to describe how it works and the parts of bcache:

backing device: the slow, large disk (the disk intended to actually hold the data)
cache device: the fast, small disk (the cache)
dirty cache: data present only on the cache device
writeback: writing to the cache device first and to the backing device later (much later)
writeback rate: the speed at which dirty data is written from the cache to the backing device

A disk data cache has always existed: it is the free RAM in the operating system. When data is read from the disk, it is copied to RAM. If the data is already in RAM, it is read from there rather than from the disk again. When data is written to the disk, it is first written to RAM and, a few moments later, written to the disk as well. Data spends very little time only in RAM, since RAM is volatile.

bcache is similar, but it has various modes of cache operation. The mode that is fastest for writing data is writeback. It works like the RAM cache, except that instead of RAM there is a SATA or NVMe SSD. Data may reside only in the cache for much longer, even forever, so it is somewhat riskier: if the SSD fails, the data that resided only in the cache is lost, with a good chance that the whole filesystem becomes inaccessible.

Performance Comparison

It is very difficult to gather reliable data from any test, whether with real workloads or with special programs. Results are always extremely variable and unstable. The various caches present and the type of filesystem (btrfs, journaled, etc.) make the values vary widely, so it is advisable to ignore small differences (say 5-10%).

The following performance data refers to the test below (random and multiple reads/writes), trying to always maintain the same conditions and repeating three times in immediate sequence.

$ sysbench fileio --file-total-size=2G --file-test-mode=rndrw --time=30 --max-requests=0 run
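Note that sysbench fileio operates on test files that must be created beforehand and removed afterwards; a typical full sequence (assuming sysbench is installed, for example via dnf install sysbench) looks like:

```
$ sysbench fileio --file-total-size=2G prepare
$ sysbench fileio --file-total-size=2G --file-test-mode=rndrw --time=30 --max-requests=0 run
$ sysbench fileio --file-total-size=2G cleanup
```

The prepare step creates the 2 GB of test files in the current directory, and cleanup removes them.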

The tables below show the performance of the separate devices:

Performance of the backing device (RAID 1 with 1 TB rotational disks), three consecutive runs:

Throughput:
  read, MiB/s:     0.22 / 0.23 / 0.19
  written, MiB/s:  0.15 / 0.16 / 0.13
Latency (ms):
  max:             174.92 / 879.59 / 1335.30
  95th percentile: 87.56 / 87.56 / 89.16

Performance of the cache device (SATA SSD, 100 GB), three consecutive runs:

Throughput:
  read, MiB/s:     7.28 / 7.21 / 7.51
  written, MiB/s:  4.86 / 4.81 / 5.01
Latency (ms):
  max:             126.55 / 102.39 / 107.95
  95th percentile: 1.47 / 1.47 / 1.47

The theoretical expectation that a bcache device will be as fast as the cache device is (physically) impossible to achieve. On average, bcache is significantly slower and only sometimes approaches the same performance as the cache device. Improved performance almost always requires various compromises.

Consider an example assuming there is a 1TB bcache device and a 100GB cache. When writing a 1TB file, the cache device is filled, then partially emptied to the backing device, and refilled again, until the file is fully written.

Because of this (and also because part of the cache serves reads as well), there is a limit on the amount of sequential data from a file that is written to the cache. Once the limit is exceeded, the file data is written (or read) directly on the backing device, bypassing the cache.

bcache also limits the response delay of the disks, but it does so disproportionately, especially with SATA SSDs, degrading the performance of the cache.

The dirty cache should be emptied to decrease the risk of data loss and to have cache available when it is needed. This should only be done when the devices exhibit little or no activity, otherwise the performance available for normal use collapses.

Unfortunately, the default settings are too conservative, and the automatic writeback rate adjustment is crude. Improving the writeback rate adjustment requires writing a program (I wrote a script for this purpose).
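That script is not reproduced here, but the idea can be sketched roughly as follows. The sysfs paths are the real bcache attributes; the idle-detection logic and thresholds are a simplified illustration, not the actual script:

```shell
#!/bin/bash
# Simplified illustration of a writeback rate adjustment loop.
# SYSFS, STAT and INTERVAL can be overridden for testing.
SYSFS="${SYSFS:-/sys/block/bcache0/bcache}"
STAT="${STAT:-/sys/block/bcache0/stat}"
INTERVAL="${INTERVAL:-5}"

io_count() {
    # Fields 1 and 5 of a block device's stat file are the reads and
    # writes completed since boot.
    awk '{print $1 + $5}' "$STAT"
}

adjust_writeback() {
    local before after
    before=$(io_count)
    sleep "$INTERVAL"
    after=$(io_count)
    if [ $((after - before)) -eq 0 ]; then
        # Device idle: drop the dirty-data target so the cache flushes now.
        echo 0 > "$SYSFS/writeback_percent"
    else
        # Device busy: keep a high dirty-data target so writes stay in
        # the cache and normal I/O is not slowed by writeback.
        echo 40 > "$SYSFS/writeback_percent"
    fi
}

# Only act when the bcache sysfs directory actually exists.
if [ -d "$SYSFS" ]; then
    adjust_writeback
fi
```

Run periodically (for example from a loop or a systemd timer), this flushes dirty data when the devices are quiet and backs off when they are busy.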

The following commands provide the necessary optimizations (required at each startup) to get better performance from the bcache device.

# echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us
# echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us
# echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff
# echo 40 > /sys/block/bcache0/bcache/writeback_percent
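Rather than retyping these at every startup, one option is a small systemd oneshot unit. The following is a sketch: the unit name is arbitrary, and the paths assume a single bcache0 device as above:

```ini
# /etc/systemd/system/bcache-tune.service (example name)
[Unit]
Description=Apply bcache performance tuning
After=local-fs.target

[Service]
Type=oneshot
ExecStart=/bin/sh -c 'echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us'
ExecStart=/bin/sh -c 'echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us'
ExecStart=/bin/sh -c 'echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff'
ExecStart=/bin/sh -c 'echo 40 > /sys/block/bcache0/bcache/writeback_percent'

[Install]
WantedBy=multi-user.target
```

Enable it with systemctl enable bcache-tune.service so the settings are reapplied at each boot.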

The following tables compare the performance with the default values and the optimization results.

Performance with default values (SATA SSD 100 GB cache), three consecutive runs:

Throughput:
  read, MiB/s:     3.37 / 2.67 / 2.61
  written, MiB/s:  2.24 / 1.78 / 1.74
Latency (ms):
  max:             128.51 / 102.61 / 142.04
  95th percentile: 9.22 / 10.84 / 11.04

Performance with optimizations (SATA SSD 100 GB cache), three consecutive runs:

Throughput:
  read, MiB/s:     5.96 / 3.89 / 3.81
  written, MiB/s:  3.98 / 2.59 / 2.54
Latency (ms):
  max:             131.95 / 133.23 / 117.76
  95th percentile: 2.61 / 2.66 / 2.66

Performance with the writeback rate adjustment script (SATA SSD 100 GB cache), three consecutive runs:

Throughput:
  read, MiB/s:     6.25 / 4.29 / 5.12
  written, MiB/s:  4.17 / 2.86 / 3.41
Latency (ms):
  max:             130.92 / 115.96 / 122.69
  95th percentile: 2.61 / 2.66 / 2.61

In single operations (without anything else happening in the system) on large files, adjusting the writeback rate becomes irrelevant.

Prepare the backing, cache and bcache device

To create a bcache device you need to install the bcache-tools package. The command for this is:

# dnf install bcache-tools

bcache devices are visible as /dev/bcacheN (for example, /dev/bcache0). Once created, they are managed like any other disk.

More details are available at https://docs.kernel.org/admin-guide/bcache.html

CAUTION: Any operation performed can immediately destroy the data on the partitions and disks on which you are operating. Backup is advised.

In the following example /dev/md0 is the backing device and /dev/sda7 is the cache device.

WARNING: A bcache device cannot be resized.
NOTE: bcache refuses to use partitions or disks with a filesystem already present.

To delete an existing filesystem you can use:

# wipefs -a /dev/md0
# wipefs -a /dev/sda7

Create the backing device (and therefore the bcache device)

# bcache make -B /dev/md0
if necessary (device status is inactive)
# bcache register /dev/md0

Create the cache device (and attach the cache to the backing device)

# bcache make -C /dev/sda7
if necessary (device status is inactive)
# bcache register /dev/sda7
# bcache attach /dev/sda7 /dev/md0
# bcache set-cachemode /dev/md0 writeback

Check the status

# bcache show

The output from this command includes information similar to the following:
(if the status of a device is inactive, it means that it must be registered)

Name       Type       State           Bname    AttachToDev
/dev/md0   1 (data)   clean(running)  bcache0  /dev/sda7
/dev/sda7  3 (cache)  active          N/A      N/A

Optimize

# echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us
# echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us
# echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff
# echo 40 > /sys/block/bcache0/bcache/writeback_percent

In closing

Hopefully this article has provided some insight into the benefits of bcache and whether it suits your needs.

As always, nothing fits all cases and all preferences. However, understanding (even roughly) how things work, and especially how they don't work, as well as how to adapt them, makes the difference between satisfactory results and disappointing ones.


Addendum

The following tables show the performance with an NVMe SSD cache device rather than the SATA SSD shown above.

Performance of the cache device (NVMe SSD, 100 GB), three consecutive runs:

Throughput:
  read, MiB/s:     16.31 / 16.17 / 15.77
  written, MiB/s:  10.87 / 10.78 / 10.51
Latency (ms):
  max:             17.50 / 15.30 / 46.61
  95th percentile: 1.10 / 1.10 / 1.10

Performance with optimizations (NVMe SSD 100 GB cache), three consecutive runs:

Throughput:
  read, MiB/s:     7.96 / 6.87 / 7.73
  written, MiB/s:  5.31 / 4.58 / 5.15
Latency (ms):
  max:             50.79 / 84.40 / 108.71
  95th percentile: 2.00 / 2.03 / 2.00

Performance with the writeback rate adjustment script (NVMe SSD 100 GB cache), three consecutive runs:

Throughput:
  read, MiB/s:     8.43 / 7.52 / 7.34
  written, MiB/s:  5.62 / 5.02 / 4.89
Latency (ms):
  max:             72.71 / 78.60 / 50.61
  95th percentile: 2.00 / 2.03 / 2.11

Contribute at the Fedora CoreOS, Upgrade, and IoT Test Days

Fedora test days are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five upcoming test days in the next two weeks covering three topics:

  • Tuesday 28 March through Sunday 02 April, to test Fedora CoreOS.
  • Wednesday 28 March through 31 March, to test upgrades.
  • Monday 03 April through 07 April, to test Fedora IoT.

Come and test with us to make Fedora 38 even better. Read more below on how to do it.

Fedora 38 CoreOS Test Week

The Fedora 38 CoreOS Test Week focuses on testing FCOS based on Fedora 38. The FCOS next stream is already rebased on Fedora 38 content, which will be coming soon to testing and stable. To prepare for the content being promoted to the other streams, the Fedora CoreOS and QA teams have organized test days starting Tues, March 28, 2023 (results accepted through Sun, November 12). Refer to the wiki page for links to the test cases and materials you'll need to participate. The FCOS and QA teams will meet and communicate with the community synchronously on Google Meet at the beginning of the test week, and asynchronously over multiple Matrix/Element channels. Read more about them in this announcement.

Upgrade test day

As we come closer to the Fedora Linux 38 release date, it's time to test upgrades. This release has a lot of changes, so it is essential that we test the graphical upgrade methods as well as the command line. As a part of these test days, we will test upgrading from fully updated F36 and F37 systems to F38 for all architectures (x86_64, ARM, aarch64) and variants (Workstation, Cloud, Server, Silverblue, IoT).

IoT test week

For this test week, the focus is all-around; test all the bits that come in a Fedora IoT release as well as validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, rebasing
  • Basic container manipulation with Podman.

We welcome all different types of hardware, but have a specific list of target hardware for convenience.

How do test days work?

A test day is an event where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.


Fedora Linux editions part 3: Labs

Everyone uses their computer in different ways, according to their needs. You may work as a designer, so you need various design software on your computer. Or maybe you're a gamer, so you need an operating system that supports the games you like. Sometimes we don't have enough time to prepare an operating system that supports our needs. The Fedora Linux Lab editions are here for that reason. Fedora Labs is a selection of purpose-driven software and content bundles, curated and maintained by members of the Fedora community. This article will go into a little more detail about the Fedora Linux Lab editions.

You can find an overview of all the Fedora Linux variants in my previous article Introduce the different Fedora Linux editions.


Astronomy

Fedora Astronomy is made for both amateur and professional astronomers. You can do various activities related to astronomy with this Fedora Linux. Some of the applications in Fedora Astronomy are Astropy, Kstars, Celestia, Virtualplanet, Astromatic, etc. Fedora Astronomy comes with KDE Plasma as its default desktop environment.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/astronomy/


Comp Neuro

Fedora Comp Neuro was created by the NeuroFedora Team to support computational neuroscience. Some of the applications included in Fedora Linux are Neuron, Brian, Genesis, SciPy, Moose, NeuroML, NetPyNE, etc. Those applications can support your work, such as modeling software, analysis tools, and general productivity tools.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/comp-neuro/


Design Suite

This Fedora Linux is for you if you are a designer. You will get a complete Fedora Linux with various tools for designing, such as GIMP, Inkscape, Blender, Darktable, Krita, Pitivi, etc. You are ready to create various creative works with those tools, such as web page designs, posters, flyers, 3D models, videos, and animations. This Fedora Design Suite is created by designers, for designers.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/design-suite/


Games

Playing games is fun, and you can do it with Fedora Games. This Fedora Linux comes with games from various genres, such as first-person shooters, real-time and turn-based strategy games, and puzzle games. Some of the games included are Extreme Tux Racer, Wesnoth, Hedgewars, Colossus, BZFlag, Freeciv, Warzone 2100, MegaGlest, and Fillets.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/games/


Jams

Almost everyone likes music. Some of you may be a musician or music producer. Or maybe you are someone who likes to play with audio. Then this Fedora Jam is for you, as it comes with JACK, ALSA, PulseAudio, and various support for audio and music. Some of the default applications in Fedora Jam are Ardour, Qtractor, Hydrogen, MuseScore, TuxGuitar, SooperLooper, etc.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/jam/


Python Classroom

Fedora Python Classroom will make your Python-related work easier, especially if you are a Python developer, teacher, or instructor. Fedora Python Classroom comes with various important tools pre-installed. Some of the default applications are IPython, Jupyter Notebook, git, tox, Python 3 IDLE, etc. Fedora Python Classroom has three variants: you can run it graphically with GNOME, or with Vagrant or Docker containers.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/python-classroom/


Security Lab

Fedora Security Lab is Fedora Linux for security testers and developers. Xfce comes as a default desktop environment with customizations to suit the needs of security auditing, forensics, system rescue, etc. This Fedora Linux provides several applications that are installed by default to support your work in the security field, such as Etherape, Ettercap, Medusa, Nmap, Scap-workbench, Skipfish, Sqlninja, Wireshark, and Yersinia.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/security/


Robotics Suite

Fedora Robotic Suite is Fedora Linux with a wide variety of free and open robotics software packages. This Fedora Linux is suitable for professionals or hobbyists related to robotics. Some of the default applications are Player, SimSpark, Fawkes, Gazebo, Stage, PCL, Arduino, Eclipse, and MRPT.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/robotics/


Scientific

Your scientific and numerical work will become easier with Fedora Scientific. This Fedora Linux features a variety of useful open source scientific and numerical tools. KDE Plasma is the default desktop environment along with various applications that will support your work, such as IPython, Pandas, Gnuplot, Matplotlib, R, Maxima, LaTeX, GNU Octave, and GNU Scientific Library.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/scientific/


Conclusion

You have many choices of Fedora Linux to suit your work or hobby. Fedora Labs makes that easy. You don’t need to do a lot of configuration from scratch because Fedora Labs will do it for you. You can find complete information about Fedora Labs at https://labs.fedoraproject.org/.


Contribute at the Fedora Kernel, GNOME, i18n, and DNF test days

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five upcoming test days in the upcoming weeks:

  • Sunday 05 March through Sunday 12 March, to test Kernel 6.2.
  • Monday 06 March through Friday 10 March, two test day periods focusing on testing GNOME Desktop and Core Apps.
  • Tuesday 07 March through Monday 13 March, to test i18n.
  • Tuesday 14 March, to test DNF 5.

Come and test with us to make the upcoming Fedora 38 even better. Read more below on how to do it.

Kernel 6.2 test week

The kernel team is working on final integration for kernel 6.2. This recently released version will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week.

Sunday 05 March through Sunday 12 March will be the Kernel test week. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps.

GNOME 44 test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As a part of the planned change, GNOME 44 landed in Fedora and will ship with Fedora 38. Since GNOME is such a huge part of the user experience and requires a lot of testing, the Workstation WG and Fedora QA team have decided to split the test week into two parts:

Mon March 06 through Wed March 08, we will be testing GNOME Desktop and Core Apps. You can find the test day page here.
Thurs March 09 and Fri March 10, the focus will be on testing GNOME Apps in general, as shipped by default. The test day page is here.

i18n test week

The i18n test week focuses on testing internationalization features in Fedora Linux.

The test week is Tuesday 7 March through Monday 13 March. The test week page is available here.

DNF 5

Since the brand new dnf5 package has landed in F38, we would like to organize a test day to get some initial feedback on it. We will be testing DNF 5 to iron out any rough edges.

The test day will be Tuesday 14 March. The test day page is available here.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you're available on or around the days of the events, please do some testing and report your results. All the test day pages receive some final touches that complete about 24 hours before the test day begins. We urge you to be patient about resources that are, in most cases, uploaded hours before the test day starts.

Come and test with us to make the upcoming Fedora 38 even better.


4 cool new projects to try in Copr for March 2023

This article introduces four new projects available in Copr, with installation instructions.

Copr is a build-system for anyone in the Fedora community. It hosts thousands of projects for various purposes and audiences. Some of them should never be installed by anyone, some are already being transitioned to the official Fedora Linux repositories, and the rest are somewhere in between. Copr gives you the opportunity to install 3rd party software that is not available in Fedora Linux repositories, try nightly versions of your dependencies, use patched builds of your favorite tools to support some non-standard use-cases, and just experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

This article takes a closer look at interesting projects that recently landed in Copr.

Sticky

Do you always forget your passwords, write them on sticky notes and post them all around your monitor? Well, please don’t use Sticky for that. But it is a great note-taking application with support for checklists, text formatting, spell-checking, backups, and so on. It also supports adjusting note visibility and organizing notes into groups.

Installation instructions

The repo currently provides Sticky for Fedora 36, 37, 38, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable a-random-linux-lover/sticky
sudo dnf install sticky

Webapp-manager

Generations of programmers spent over three decades creating, improving, and re-inventing window managers for us to disregard all of that, and live inside of a web browser with dozens of tabs. Webapp-manager allows you to run websites as if they were applications, and return to the previous paradigm.

Installation instructions

The repo currently provides webapp-manager for Fedora 36, 37, 38, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable kylegospo/webapp-manager
sudo dnf install webapp-manager

Umoria

Umoria (The Dungeons of Moria) is a single-player dungeon crawl game inspired by J. R. R. Tolkien’s novel The Lord of the Rings. It is considered to be the first roguelike game ever created. A player begins their epic adventure by acquiring weapons and supplies in the town level and then descends to the dungeons to face the evil that lurks beneath.

Installation instructions

The repo currently provides Umoria for Fedora 36, 37, 38, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable whitehara/umoria
sudo dnf install umoria

PyCharm

JetBrains PyCharm is a popular IDE for the Python programming language. It provides intelligent code completion, on-the-fly error checking, quick fixes, and much more. The phracek/PyCharm repository is a great example of a well-maintained project that lives in Copr and has for a long time. Created eight years ago for Fedora 20, it provided support for every subsequent Fedora release. It is now a part of the Third-Party Repositories that can be opted into during the Fedora installation.

Installation instructions

The repo currently provides PyCharm for Fedora 36, 37, 38, Fedora Rawhide, EPEL 7, 8, and 9. To install it, use these commands:

sudo dnf copr enable phracek/PyCharm
sudo dnf install pycharm-community

Working with Btrfs – Compression

This article will explore transparent filesystem compression in Btrfs and how it can help with saving storage space. This is part of a series that takes a closer look at Btrfs, the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

In case you missed it, here’s the previous article from this series: https://fedoramagazine.org/working-with-btrfs-snapshots

Introduction

Most of us have probably experienced running out of storage space already. Maybe you want to download a large file from the internet, or you need to quickly copy over some pictures from your phone, and the operation suddenly fails. While storage space is steadily becoming cheaper, an increasing number of devices are either manufactured with a fixed amount of storage or are difficult to extend by end-users.

But what can you do when storage space is scarce? Maybe you will resort to cloud storage, or you find some means of external storage to carry around with you.

In this article I’ll investigate another solution to this problem: transparent filesystem compression, a feature built into Btrfs. Ideally, this will solve your storage problems while requiring hardly any modification to your system at all! Let’s see how.

Transparent compression explained

First, let’s investigate what transparent compression means. You can compress files with compression algorithms such as gzip, xz, or bzip2. This is usually an explicit operation: You take a compression utility and let it operate on your file. While this provides space savings, depending on the file content, it has a major drawback: When you want to access the file to read or modify it, you have to decompress it first.

This is not only a tedious process, but also temporarily defeats the space savings you had achieved previously. Moreover, you end up (de)compressing parts of the file that you didn’t intend to touch in the first place. Clearly there is something better than that!
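To make the contrast concrete, here is a small Python sketch of explicit compression using gzip: the stored data is smaller, but reading even a single byte back requires decompressing the whole blob first.

```python
import gzip

# Repetitive text compresses well; already-compressed data would not.
original = b"transparent compression demo " * 1000

compressed = gzip.compress(original)
print(f"compressed to {len(compressed) / len(original):.1%} of original size")

# Explicit compression: to read anything, decompress everything first.
restored = gzip.decompress(compressed)
assert restored == original
```

Transparent compression avoids exactly this: the filesystem decompresses only the extents you touch and recompresses them as they are written back.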

Transparent compression on the other hand takes place at the filesystem level. Here, compressed files still look like regular uncompressed files to the user. However, they are stored with compression applied on disk. This works because the filesystem selectively decompresses only the parts of a file that you access and makes sure to compress them again as it writes changes to disk.

The compression here is transparent in that it isn’t noticeable to the user, except possibly for a small increase in CPU load during file access. Hence, you can apply this to existing systems without performing hardware modifications or resorting to cloud storage.

Comparing compression algorithms

Btrfs offers multiple compression algorithms to choose from. For technical reasons it cannot use arbitrary compression programs. It currently supports:

  • zstd
  • lzo
  • zlib

The good news is that, due to how transparent compression works, you don’t have to install these programs for Btrfs to use them. In the following paragraphs, you will see how to run a simple benchmark to compare the individual compression algorithms. In order to perform the benchmark, however, you must install the necessary executables. There’s no need to keep them installed afterwards, so you’ll use a podman container to make sure you don’t leave any traces in your system.

Because typing the same commands over and over is a tedious task, I have prepared a ready-to-run bash script that is hosted on Gitlab (https://gitlab.com/hartang/btrfs-compression-test). This will run a single compression and decompression with each of the above-mentioned algorithms at varying compression levels.

First, download the script:

$ curl -LO https://gitlab.com/hartang/btrfs-compression-test/-/raw/main/btrfs_compression_test.sh

Next, spin up a Fedora Linux container that mounts your current working directory so you can exchange files with the host and run the script in there:

$ podman run --rm -it --security-opt label=disable -v "$PWD:$PWD" \
    -w "$PWD" registry.fedoraproject.org/fedora:37

Finally run the script with:

$ chmod +x ./btrfs_compression_test.sh
$ ./btrfs_compression_test.sh

The output on my machine looks like this:

[INFO] Using file 'glibc-2.36.tar' as compression target
[INFO] Target file 'glibc-2.36.tar' not found, downloading now...
################################################################### 100.0%
[ OK ] Download successful!
[INFO] Copying 'glibc-2.36.tar' to '/tmp/tmp.vNBWYg1Vol/' for benchmark...
[INFO] Installing required utilities
[INFO] Testing compression for 'zlib'

 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.322 s |          18.324 % |           0.659 s
     2 |         0.342 s |          17.738 % |           0.635 s
     3 |         0.473 s |          17.181 % |           0.647 s
     4 |         0.505 s |          16.101 % |           0.607 s
     5 |         0.640 s |          15.270 % |           0.590 s
     6 |         0.958 s |          14.858 % |           0.577 s
     7 |         1.198 s |          14.716 % |           0.561 s
     8 |         2.577 s |          14.619 % |           0.571 s
     9 |         3.114 s |          14.605 % |           0.570 s

[INFO] Testing compression for 'zstd'

 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.492 s |          14.831 % |           0.313 s
     2 |         0.607 s |          14.008 % |           0.341 s
     3 |         0.709 s |          13.195 % |           0.318 s
     4 |         0.683 s |          13.108 % |           0.306 s
     5 |         1.300 s |          11.825 % |           0.292 s
     6 |         1.824 s |          11.298 % |           0.286 s
     7 |         2.215 s |          11.052 % |           0.284 s
     8 |         2.834 s |          10.619 % |           0.294 s
     9 |         3.079 s |          10.408 % |           0.272 s
    10 |         4.355 s |          10.254 % |           0.282 s
    11 |         6.161 s |          10.167 % |           0.283 s
    12 |         6.670 s |          10.165 % |           0.304 s
    13 |        12.471 s |          10.183 % |           0.279 s
    14 |        15.619 s |          10.075 % |           0.267 s
    15 |        21.387 s |           9.989 % |           0.270 s

[INFO] Testing compression for 'lzo'

 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.447 s |          25.677 % |           0.438 s
     2 |         0.448 s |          25.582 % |           0.438 s
     3 |         0.444 s |          25.582 % |           0.441 s
     4 |         0.444 s |          25.582 % |           0.444 s
     5 |         0.445 s |          25.582 % |           0.453 s
     6 |         0.438 s |          25.582 % |           0.444 s
     7 |         8.990 s |          18.666 % |           0.410 s
     8 |        34.233 s |          18.463 % |           0.405 s
     9 |        41.328 s |          18.450 % |           0.426 s

[INFO] Cleaning up...
[ OK ] Benchmark complete!

It is important to note a few things before making decisions based on the numbers from the script:

  • Not all files compress equally well. Modern multimedia formats such as images or movies compress their contents already and don’t compress well beyond that.
  • The script performs each compression and decompression exactly once. Running it repeatedly on the same input file will generate slightly different outputs. Hence, the times should be understood as estimates, rather than an exact measurement.

Given the numbers in my output, I decided to use the zstd compression algorithm with compression level 3 on my systems. Depending on your needs, you may want to choose higher compression levels (for example, if your storage devices are comparatively slow). To get an estimate of the achievable read/write speeds, you can divide the source archive's size (about 260 MB) by the (de)compression times.
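For example, using the zstd level 3 numbers from the run above (0.709 s to compress and 0.318 s to decompress the roughly 260 MB archive), the estimate works out to:

```python
archive_mb = 260        # approximate size of glibc-2.36.tar
compress_s = 0.709      # zstd level 3 compression time from the table
decompress_s = 0.318    # zstd level 3 decompression time

print(f"estimated write speed: ~{archive_mb / compress_s:.0f} MB/s")
print(f"estimated read speed:  ~{archive_mb / decompress_s:.0f} MB/s")
```

On these numbers, that is roughly 367 MB/s written and 818 MB/s read.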

The compression test works on the GNU libc 2.36 source code by default. If you want to see the results for a custom file, you can give the script a file path as the first argument. Keep in mind that the file must be accessible from inside the container.

Feel free to read the script code and modify it to your liking if you want to test a few other things or perform a more detailed benchmark!

Configuring compression in Btrfs

Transparent filesystem compression in Btrfs is configurable in a number of ways:

  • As mount option when mounting the filesystem (applies to all subvolumes of the same Btrfs filesystem)
  • With Btrfs file properties
  • During btrfs filesystem defrag (not permanent, not shown here)
  • With the chattr file attribute interface (not shown here)

I’ll only take a look at the first two of these.

Enabling compression at mount-time

There is a Btrfs mount option that enables file compression:

$ sudo mount -o compress=<ALGORITHM>:<LEVEL> ...

For example, to mount a filesystem and compress it with the zstd algorithm on level 3, you would write:

$ sudo mount -o compress=zstd:3 ...

Setting the compression level is optional. It is important to note that the compress mount option applies to the whole Btrfs filesystem and all of its subvolumes. Additionally, it is the only currently supported way of specifying the compression level to use.

In order to apply compression to the root filesystem, it must be specified in /etc/fstab. The Fedora Linux Installer, for example, enables zstd compression on level 1 by default, which looks like this in /etc/fstab:

$ cat /etc/fstab
[ ... ]
UUID=47b03671-39f1-43a7-b0a7-db733bfb47ff / btrfs subvol=root,compress=zstd:1,[ ... ] 0 0

Enabling compression per-file

Another way of specifying compression is via Btrfs filesystem properties. To read the compression setting for any file, folder or subvolume, use the following command:

$ btrfs property get <PATH> compression

Likewise, you can configure compression like this:

$ sudo btrfs property set <PATH> compression <VALUE>

For example, to enable zlib compression for all files under /etc:

$ sudo btrfs property set /etc compression zlib

You can get a list of supported values with man btrfs-property. Keep in mind that this interface doesn’t allow specifying the compression level. In addition, if a compression property is set, it overrides other compression configured at mount time.

Compressing existing files

At this point, if you apply compression to your existing filesystem and check the space usage with df or similar commands, you will notice that nothing has changed. That is because Btrfs, by itself, doesn’t “recompress” all your existing files. Compression will only take place when writing new data to disk. There are a few ways to perform an explicit recompression:

  1. Wait and do nothing: As files are modified and written back to disk, Btrfs compresses the newly written file contents as configured. If you wait long enough, an increasing share of your files will have been rewritten and, hence, compressed.
  2. Move files to a different filesystem and back again: Depending on which files you want to apply compression to, this can become a rather tedious operation.
  3. Perform a Btrfs defragmentation

The last option is probably the most convenient, but it comes with a caveat on Btrfs filesystems that already contain snapshots: it will break shared extents between snapshots. In other words, all the content shared between two snapshots, or between a snapshot and its parent subvolume, will be present multiple times after a defrag operation.

Hence, if you already have a lot of snapshots on your filesystem, you shouldn’t run a defragmentation on the whole filesystem. This isn’t necessary either, since with Btrfs you can defragment specific directories or even single files, if you wish to do so.

You can use the following command to perform a defragmentation:

$ sudo btrfs filesystem defragment -r /path/to/defragment

For example, you can defragment your home directory like this:

$ sudo btrfs filesystem defragment -r "$HOME"

In case of doubt, it’s a good idea to start by defragmenting individual large files and to continue with increasingly large directories while monitoring free space on the filesystem.

Measuring filesystem compression

At some point, you may wonder just how much space you have saved thanks to file system compression. But how do you tell? First, to tell if a Btrfs filesystem is mounted with compression applied, you can use the following command:

$ findmnt -vno OPTIONS /path/to/mountpoint | grep compress

If you get a result, the filesystem at the given mount point is using compression! Next, the command compsize can tell you how much space your files need:

$ sudo compsize -x /path/to/examine

On my home directory, the result looks like this:

$ sudo compsize -x "$HOME"
Processed 942853 files, 550658 regular extents (799985 refs), 462779 inline.
Type Perc Disk Usage Uncompressed Referenced
TOTAL 81% 74G 91G 111G
none 100% 67G 67G 77G
zstd 28% 6.6G 23G 33G

The individual lines tell you the “Type” of compression applied to files. The “TOTAL” is the sum of all the lines below it. The columns, on the other hand, tell you how much space your files need:

  • “Disk Usage” is the actual amount of storage allocated on the hard drive,
  • “Uncompressed” is the amount of storage the files would need without compression applied,
  • “Referenced” is the total size of all uncompressed files added up.

“Referenced” can differ from the numbers in “Uncompressed” if, for example, one has deduplicated files previously, or if there are snapshots that share extents. In the example above, you can see that 91 GB worth of uncompressed files occupy only 74 GB of storage on my disk! Depending on the type of files stored in a directory and the compression level applied, these numbers can vary significantly.
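If you want the savings as plain numbers, the arithmetic is simple; here is a quick sketch using the TOTAL line from my compsize output above:

```python
# Savings computed from the compsize TOTAL line shown above.
disk_usage_gb = 74      # "Disk Usage": storage actually allocated on disk
uncompressed_gb = 91    # "Uncompressed": storage the raw data would need

saved_gb = uncompressed_gb - disk_usage_gb
ratio = disk_usage_gb / uncompressed_gb   # matches the "Perc" column

print(f"saved {saved_gb} GB, using {ratio:.0%} of the uncompressed size")
# → saved 17 GB, using 81% of the uncompressed size
```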

Additional notes about file compression

Btrfs uses a heuristic algorithm to detect compressed files. This is done because compressed files usually do not compress well, so there is no point in wasting CPU cycles in attempting further compression. To this end, Btrfs measures the compression ratio when compressing data before writing it to disk. If the first portions of a file compress poorly, the file is marked as incompressible and no further compression takes place.
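The general idea can be illustrated in a few lines of Python (this is only a sketch of the concept using zlib, not Btrfs’s actual heuristic): compress a small leading sample of the data and give up if it barely shrinks.

```python
import os
import zlib

def looks_compressible(data: bytes, sample_size: int = 4096,
                       threshold: float = 0.9) -> bool:
    """Compress only a small leading sample; treat the data as
    incompressible if the sample doesn't shrink below the threshold."""
    sample = data[:sample_size]
    compressed = zlib.compress(sample, level=1)  # cheap, fast level
    return len(compressed) < threshold * len(sample)

text = b"hello world " * 1000    # repetitive data, compresses well
random_blob = os.urandom(16384)  # random data, essentially incompressible

print(looks_compressible(text))         # True
print(looks_compressible(random_blob))  # False
```

The payoff is the same as in Btrfs: CPU time is only spent on full compression when the sample suggests it is worthwhile.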

If, for some reason, you want Btrfs to compress all data it writes, you can mount a Btrfs filesystem with the compress-force option, like this:

$ sudo mount -o compress-force=zstd:3 ...

When configured like this, Btrfs will compress all data it writes to disk with the zstd algorithm at compression level 3.

An important thing to note is that a Btrfs filesystem with a lot of data and compression enabled may take a few seconds longer to mount than without compression applied. This has technical reasons and is normal behavior which doesn’t influence filesystem operation.

Conclusion

This article detailed transparent filesystem compression in Btrfs. It is a built-in, comparatively cheap, way to get some extra storage space out of existing hardware without needing modifications.

The next articles in this series will deal with:

  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

If there are other topics related to Btrfs that you want to know more about, have a look at the Btrfs Wiki [1] and Docs [2]. Don’t forget to check out the first three articles of this series, if you haven’t already! If you feel that there is something missing from this article series, let me know in the comments below. See you in the next article!

Sources

[1]: https://btrfs.wiki.kernel.org/index.php/Main_Page
[2]: https://btrfs.readthedocs.io/en/latest/Introduction.html


Podman Checkpoint

Podman is a tool which runs, manages and deploys containers under the OCI standard. Running containers with rootless access and creating pods (a pod is a group of containers) are additional features of Podman. This article describes and explains how to use checkpointing in Podman to save the state of a running container for later use.

Checkpointing containers: Checkpoint/Restore In Userspace, or CRIU, is Linux software available in the Fedora Linux repository as the “criu” package. It can freeze a running container (or an individual application) and checkpoint its state to disk (reference: https://criu.org/Main_Page). The saved data can be used to restore the container and run it exactly as it was at the time of the freeze. Using this, we can achieve live migration, snapshots, or remote debugging of applications or containers. This capability requires CRIU 3.11 or later installed on the system.

Podman Checkpoint

# podman container checkpoint <containername> 

This command will create a checkpoint of the container and freeze its state. Checkpointing a container also stops it, so if you run podman ps afterwards, there will no longer be a running container named <containername>.

You can export the checkpoint to a specific location as a file and copy that file to a different server:

# podman container checkpoint <containername> -e /tmp/mycheckpoint.tar.gz

Podman Restore

# podman container restore --keep <containername> 

The --keep option will restore the container with all its temporary files.

To import the container checkpoint you can use:

# podman container restore -i /tmp/mycheckpoint.tar.gz

Live Migration using Podman Checkpoint

This section describes how to migrate a container from client1 to client2 using the podman checkpoint feature. This example uses the https://lab.redhat.com/tracks/rhel-system-roles playground provided by Red Hat, as it has multiple hosts with SSH keys already configured.

The example will run a container with some process on client1, create a checkpoint, and migrate it to client2. First run a container on the client1 machine with the commands below:

podman run --name=demo1 -d docker.io/httpd
podman exec -it demo1 bash
sleep 600 &   # run a process for verification
exit

The above snippet runs a container named demo1 with the httpd process and starts a sleep process for 600 seconds (10 minutes) in the background. You can verify this by running:

# podman top demo1
USER       PID   PPID   %CPU    ELAPSED            TTY   TIME   COMMAND
root       1     0      0.000   5m40.61208846s     ?     0s     httpd -DFOREGROUND
www-data   3     1      0.000   5m40.613179941s    ?     0s     httpd -DFOREGROUND
www-data   4     1      0.000   5m40.613258012s    ?     0s     httpd -DFOREGROUND
www-data   5     1      0.000   5m40.613312515s    ?     0s     httpd -DFOREGROUND
root       88    1      0.000   16.613370018s      ?     0s     sleep 600

Now create a container checkpoint and export it to a specific file:

# podman container checkpoint demo1 -e /tmp/mycheckpoint.tar.gz
# scp /tmp/mycheckpoint.tar.gz client2:/tmp/

Then on client2:

# cd /tmp
# podman container restore -i mycheckpoint.tar.gz
# podman top demo1

You should see the output as follows:

USER       PID   PPID   %CPU    ELAPSED            TTY   TIME   COMMAND
root       1     0      0.000   5m40.61208846s     ?     0s     httpd -DFOREGROUND
www-data   3     1      0.000   5m40.613179941s    ?     0s     httpd -DFOREGROUND
www-data   4     1      0.000   5m40.613258012s    ?     0s     httpd -DFOREGROUND
www-data   5     1      0.000   5m40.613312515s    ?     0s     httpd -DFOREGROUND
root       88    1      0.000   16.613370018s      ?     0s     sleep 600

In this way you can achieve a live migration using the podman checkpoint feature.


Join the conversation

U.S. politician Daniel Webster described the U.S. government as, “… the people’s government, made for the people, made by the people, and answerable to the people.”[1] Similarly, the Fedora Project is “a community of people working together”[2] and it is “led by contributors from across the community.”[3] In other words, “It is what you make of it.”[4]

The Fedora community invites you to join the conversation and help advance the Fedora Project and free software in general. Traditionally much of the collaboration in the Fedora Project had occurred over IRC. And IRC support will continue for the foreseeable future. But Fedora is also rolling out some newer technologies that we think might improve the user experience. Fedora has moved primary communications to Matrix for real time communication and collaboration. If you haven’t already done so, we encourage you to sign up for a Fedora account, open the Fedora Matrix space at chat.fedoraproject.org, and explore the vast world that is the Fedora Project via Matrix. As much as possible, the Fedora Project strives to be an open community. Anyone can contribute to Fedora and everyone of good will is welcome to join.

A high-level overview of the Fedora communication channels

As the saying goes, “Communication is key.” But communication comes in many forms. One subdivision of the various forms of communication is synchronous and asynchronous. Traditionally, the Fedora Project has used email for asynchronous communication and IRC for synchronous communication. The forum discussion.fedoraproject.org is a new option for asynchronous communication and Matrix via chat.fedoraproject.org is a new option for synchronous communication.

Regarding the synchronous versus asynchronous distinction: it is worth spelling out what these terms do and do not mean for our tools. Synchronous does not mean that you get a reaction immediately; it can take a few days, if only because of the different time zones. You can also “ping” someone specifically or invite them to a direct conversation. After some time, however, a topic is often forgotten as it scrolls off the timeline. Asynchronous tools, on the other hand, are organized thematically, bringing a topic to the front again and again as something is added to it. This provides a more systematic approach.

Importantly, the new tools are being provided as an option that you can choose. There is no requirement to use the new tools. You can expect both email and IRC to be around for a long time to come.

If you prefer email, you might want to check out the post: Guide to interacting with [discussion.fedoraproject.org] by email. If you prefer the IRC chat protocol, many of the rooms on Matrix at chat.fedoraproject.org are bridged to corresponding rooms on libera.chat.

Blog posts are yet another form of communication that will continue to be available at communityblog.fedoraproject.org and fedoramagazine.org. The former provides information expected to be of interest to the Fedora developers and Fedora special interest groups (SIGs). Posts about the tools used to build Fedora Linux, for example, are often found on the Community Blog. In contrast, this site — fedoramagazine.org — hosts articles expected to be of interest to the general Fedora community.

In a way, blog posts can be thought of as a super-asynchronous form of communication. The trade-off, as the forms of communication go from less-synchronous to more-synchronous, is that they tend to become somewhat lower in quality. That is, you can expect a much quicker response on IRC or chat.fedoraproject.org than if you request that a blog post be written about a subject on communityblog.fedoraproject.org. But of course, there is no guarantee that you will get a response on any of the channels. All contributions to the Fedora Project are voluntary. No one is ever obliged to provide any service to anyone else. But also don’t take a lack of response personally. Your question might just be outside the area of expertise of those who noticed it.

I like to think of the relationship between the various forms of communication that the Fedora Project uses as having an inversely proportional relation between frequency and contemplativeness.

The point is that these communication methods each serve different needs but they are complementary. You won’t want to limit yourself to just one of the communications channels. If you need rapid responses to simple questions, use the chat server. If you want to go in-depth on a complex topic, it might be something that would make a good blog post. And the forum is for everything in between.

So what are you waiting for? Sign up! Explore the community! If you come across something you think you can help out with, or even just something you might want to get involved with, jump in and offer to help! And above all, have FUN!

See also: What can I do for Fedora?

Thanks to Peter Boy, Kevin Fenzi, and others who provided helpful feedback and content for this announcement.


References

  1. Webster-Hayne debate
  2. Fedora’s Mission and Foundations — What is Fedora
  3. Fedora Leadership
  4. Paraphrased version of “A man’s life is what his thoughts make of it.” (Marcus Aurelius)

Announcing the Display/HDR hackfest

Hi all,

This is Carlos Soriano, Engineering Manager at the GPU team at Red Hat. I’m here together with Sebastian Wick, primary HDR developer at Red Hat, and Niels de Graef, GPU team Product Owner at Red Hat, to announce that we’re organizing the Display/HDR hackfest in Brno in the Czech Republic, April 24-26! The focus will be on planning and development of the technical infrastructure needed for various display technologies, specifically those that need GNOME Shell to work in tandem with the GPU stack. One of the main examples of this is HDR support, which we know you have all been waiting for!

Details

The purpose of the hackfest is to bring together contributors from across the display/GPU stack. Attendees will include those from projects such as Freedesktop, GNOME, KDE, Mesa, Wayland and the Linux kernel. This is going to be a great opportunity to meet and collaborate on the holistic approach necessary to make these technologies work well across various vendors and projects. 

The proposed length of the hackfest is 2 full days, and a third day for wrapping up during the morning and doing a local activity during the afternoon.

Now, you might be asking why are these technologies, such as HDR, important for us? And what is the plan to integrate them in Fedora? Well, let’s take HDR as a primary example, and start by explaining what HDR is.

What is HDR? – By Sebastian Wick

When most people talk about High Dynamic Range (HDR) they often refer not only to HDR but also to Wide Color Gamut (WCG). Both of these terms describe an improvement of display technologies.

The dynamic range of a display describes the ratio between the highest and the lowest luminance it can produce at the same time. A high dynamic range thus means an increase in the highest luminance (colloquially called brighter whites) or a decrease of the lowest luminance (darker blacks), or both.
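Because the dynamic range is a ratio, it is often quoted on a logarithmic scale, in “stops” (each stop is a doubling of luminance). A small sketch with illustrative values (the luminance numbers are made up for the example, not measurements of real displays):

```python
import math

def dynamic_range_stops(peak_nits: float, black_nits: float) -> float:
    """Dynamic range in stops: log2 of the peak-to-black luminance ratio."""
    return math.log2(peak_nits / black_nits)

# Illustrative values only: a typical SDR monitor vs. an HDR display
sdr = dynamic_range_stops(peak_nits=300, black_nits=0.3)     # ~10 stops
hdr = dynamic_range_stops(peak_nits=1000, black_nits=0.005)  # ~17.6 stops

print(f"SDR: {sdr:.1f} stops, HDR: {hdr:.1f} stops")
```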

The color gamut of a display describes the colors it is able to reproduce. If a gamut is “wide”, it means the display is able to reproduce more chromaticities compared to a small gamut. To put it colloquially, it can show more colors.

We humans are able to perceive images which have up to a certain dynamic range and color gamut. The closer displays get to those capabilities the more immersive the resulting images are. HDR, in its broader meaning, is therefore all about being able to show more colors, and we can use HDR modes on displays to unlock their full potential.

From a technical perspective, enabling those modes and presenting content is not hard, as long as the display only has to show a single source. While this may work for some use cases, a general purpose desktop requires composition of various Standard Dynamic Range (SDR) signals, color managed SDR signals and various HDR signals to various SDR displays, color managed displays and HDR displays at the same time. You can take a look at Apple’s EDR concept if you want to see how this looks when done right.

There are no industry standards yet for this kind of composition and most HDR modes, unfortunately all the common modes, are also not designed for this use case. Instead they focus on presenting a single HDR source.

With the increase in composition complexity, offloading the composition and achieving a zero-copy direct-scanout scenario becomes much harder. But this is required to keep power consumption in check and thus improve battery life.

Wow, that sounds complex

Yeah, this all sounds more complex than someone could imagine, but we’re confident we can get there. Now, why is HDR important for us, and what is our plan for integrating it in Fedora once it is ready? Niels de Graef has been working as the primary HDR feature owner at Red Hat for a couple of months now and can help us understand that.

Why is HDR important for us, and what is our plan for integrating it in Fedora? – By Niels de Graef

By adding support for HDR, we want to be an enabler for several key groups.

On one hand, we want to support content creators who see HDR as a very interesting feature. It allows them to present their work to people the way they intend it to be seen, eliminating the effect of “washed-out” colors caused by a monitor that only supports a relatively small color space. For example: as an artist, you might want to specify exactly how bright the sun in a desert scene should look while making sure the rest of the scene does not degrade in detail.

As content creators go, an important stakeholder is the VFX industry, which consists of big players like Disney. Red Hat closely collaborates with the industry, which also recommends Red Hat Enterprise Linux (RHEL) as their choice of distribution. We want to make certain we get the industry’s feedback so we can incorporate it in this story and make sure we get this right from the start.

On the other hand, we want to enable Linux users. Hardware that supports HDR is becoming more commonplace and is becoming more affordable as of late. HDR is becoming more supported, and an increasing amount of content is making use of it. As long as we don’t have HDR support, Linux users will have a degraded experience compared to Windows and Mac users.

Finally, supporting HDR fits into the foundations of Fedora, where we want to do the right thing, making sure everyone is free to enjoy the latest innovations and features. This follows our move to Wayland, which, as a modern graphics stack, allows us to build new features like this.

Wrapping up

We hope that you enjoy the work we’re doing to enable the Linux ecosystem and users to make use of the latest technologies. We’re definitely excited with what we are aiming to achieve. Thanks to everyone who is contributing to this effort, and the organization of the hackfest.

We hope to see you all in Brno in April!


Using .NET 7 on Fedora Linux

.NET 7 is now available in Fedora Linux. This article briefly describes what .NET is, some of its recent and interesting features, how to install it, and presents some examples showing how it can be used.

.NET 7

.NET is a platform for building cross platform applications. It allows you to write code in C#, F#, or VB.NET. You can easily develop applications on one platform and deploy and execute them on another platform or architecture.

In particular, you can develop applications on Windows and run them on Fedora Linux instead! This is one less hurdle if you want to move from a proprietary platform to Fedora Linux. It’s also possible to develop on Fedora and deploy to Windows. Please note that in this last scenario, some Windows-specific application types, such as GUI Windows applications, are not available.

.NET 7 includes a number of new and exciting features. It includes a large number of performance enhancements to the runtime and the .NET libraries, better APIs for working with Unix file permissions and tar files, better support for observability via OpenTelemetry, and compiling applications ahead-of-time. For more details about all the new features in .NET 7, see https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-7.

Fedora Linux builds of .NET 7 can even run on the IBM Power (ppc64le) architecture. This is in addition to support for 64-bit ARM/Aarch64 (which Fedora Linux calls aarch64 and .NET calls arm64), IBM Z (s390x) and 64-bit Intel/AMD platforms (which Fedora Linux calls x86_64 and .NET calls x64).

.NET 7 is a Standard Term Support (STS) release, which means upstream will stop maintaining it in May 2024. .NET in Fedora Linux will follow that end date. If you want to use a Long Term Support (LTS) release, please use .NET 6 instead; .NET 6 reaches its end of life in November 2024. For more details about the .NET lifecycle, see https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core.

If you are looking to set up a development environment for developing .NET applications on Fedora Linux, take a look at https://fedoramagazine.org/set-up-a-net-development-environment/.

The .NET Special Interest Group (DotNetSIG) maintains .NET in Fedora Linux. Please come and join us to improve .NET on Fedora Linux! You can reach us via IRC (#fedora-devel) or mailing lists ([email protected]) if you have any feedback, questions, ideas or suggestions.

How to install .NET 7

To build C#, F# or VB.NET code on Fedora Linux, you will need the .NET SDK. If you only want to run existing applications, you will only need the .NET Runtime.

Install the .NET 7 Software Development Kit (SDK) using this command:

sudo dnf install -y dotnet-sdk-7.0

This installs all the dependencies, including a .NET runtime.

If you don’t want to install the entire SDK but just want to run .NET 7 applications, you can install either the ASP.NET Core runtime or the .NET runtime using one of the following commands:

sudo dnf install -y aspnetcore-runtime-7.0
sudo dnf install -y dotnet-runtime-7.0

This style of package name applies to all versions of .NET on all versions of Fedora Linux. For example, you can install .NET 6 using the same style of package name:

sudo dnf install -y dotnet-sdk-6.0

To make certain .NET 7 is installed, run dotnet --info to see all the SDKs and Runtimes installed.

License and Telemetry

The .NET packages in Fedora Linux are built from fully Open Source source code. The primary license is MIT. The .NET packages in Fedora Linux do not contain any closed source or proprietary software. The Fedora .NET team builds .NET offline in the Fedora Linux build system and removes all binaries present in the source code repositories before building .NET. This gives us a high degree of confidence that .NET is built from reviewed sources.

The .NET packages in Fedora Linux do not collect any data from users. All telemetry is disabled in the Fedora builds of .NET. No data is collected from anyone running .NET and no data is sent to Microsoft. We run tests to verify this for every build of .NET in Fedora Linux.

“Hello World” in .NET

After installing .NET 7, you can use it to create and run applications. For example, you can use the following steps to create and run the classic “Hello World” application.

Create a new .NET 7 project in the C# language:

dotnet new console -o HelloWorldConsole

This will create a new directory named HelloWorldConsole containing a trivial C# program that prints “Hello, World!”.

Then, switch to the project directory:

cd HelloWorldConsole

Finally, build and run the application:

dotnet run

.NET 7 will build your program and run it. You should see a “Hello world” output from your program.

“Hello Web” in .NET

You can also use .NET to create web applications. Let’s do that now.

First, create a new web project, in a separate directory (not under our previous project):

dotnet new web -o HelloWorldWeb

This will create a simple Hello-World style application based on .NET’s built-in web (Empty ASP.NET Core) template.

Now, switch to that directory:

cd HelloWorldWeb

Finally, build and run the application:

dotnet run

You should see output like the following that shows the web application is running.

Building…
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5105
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /home/omajid/temp/HelloWorldWeb

Use a web browser to access the application. You can find the URL in the output at the “Now listening on:” line. In my case that’s http://localhost:5105:

firefox http://localhost:5105

You should see a “Hello World” message in your browser.

Using .NET with containers

At this point, you have successfully created, built and run .NET applications locally. What if you want to isolate your application and everything about it? What if you want to run it in a non-Fedora OS? Or deploy it to a public/private/hybrid cloud? You can use containers! Let’s build a container image for running your .NET program and test it out.

First, create a new project:

dotnet new web -o HelloContainer

Then, switch to that project directory:

cd HelloContainer

Then add a Dockerfile that describes how to build a container for our application.

FROM fedora:37
RUN dnf install -y dotnet-sdk-7.0 && dnf clean all
RUN mkdir /HelloContainer/
WORKDIR /HelloContainer/
COPY . /HelloContainer/
RUN dotnet publish -c Release
ENV ASPNETCORE_URLS="http://0.0.0.0:8080"
CMD ["dotnet", "bin/Release/net7.0/publish/HelloContainer.dll"]

This will start with a default Fedora Linux container, install .NET 7 in it, copy your source code into it and use the .NET in the container to build the application. Finally, it will set things up so that running the container runs your application and exposes it via port 8080.

You can build and run this container directly. However, if you are familiar with Dockerfiles, you might have noticed that it is quite inefficient. It will re-download all dependencies and re-build everything on any change to any source file. It produces a large container image at the end which even contains the full .NET SDK. An option is to use a multi-stage build to make it faster to iterate on the source code. You can also produce a smaller container at the end that contains just your application and .NET dependencies.

Overwrite the Dockerfile with this:

FROM registry.fedoraproject.org/fedora:37 as dotnet-sdk
RUN dnf install -y dotnet-sdk-7.0 && dnf clean all

FROM registry.fedoraproject.org/fedora:37 as aspnetcore-runtime
RUN dnf install -y aspnetcore-runtime-7.0 && dnf clean all

FROM dotnet-sdk as build-env
RUN mkdir /src
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /publish

FROM aspnetcore-runtime as app
WORKDIR /publish
COPY --from=build-env /publish .
ENV ASPNETCORE_URLS="http://0.0.0.0:8080"
EXPOSE 8080
ENTRYPOINT ["dotnet", "HelloContainer.dll"]

Now install podman so you can build and run the Dockerfile:

sudo dnf install -y podman

Build the container image:

podman build -t hello-container .

Now, run the container we just built:

podman run -it -p 8080:8080 hello-container

A note about the arguments. The port is configured with the -p flag so that port 8080 from inside the container is available as port 8080 outside too. This allows you to connect to the application directly. The container is run interactively (-it) so you can see the output and any errors that come up. Running interactively is usually not needed when deploying an application to production.

Finally, connect to the container using a web browser. For example:

firefox http://localhost:8080

You should see a “Hello World” message.

Congratulations! You now have a .NET application running inside a Fedora container!

Conclusion

This was a whirlwind overview of .NET 7 in Fedora Linux. It covered building and running an application using plain Fedora RPM packages, as well as building a container image for a .NET application using only Fedora Linux.

If you have an interest in using or improving .NET on Fedora Linux, please join us!