
Creating Data Analysis Pipelines using DuckDB and RStudio

Motivation and Vision

The core motivation behind data analysis pipelines, and the focus of this article, is the need to establish a clear path from unprocessed data to actionable insights for contributor engagement and impact. The key question is “what are we trying to measure to ensure the continuity of community work?”

As a side note, my preparation for the ADSP (Advanced Data Analysis Semi-Professional) certification in Korea utilized RStudio Desktop running on Fedora Linux. I got hands-on experience with R’s core statistical toolkit, using base functions such as summary()1 and lm()2 as the basis for fundamental hypothesis testing and regression analysis3. I became even more intrigued by R’s power after testing its data manipulation packages (especially the key package dplyr).

With this background in mind, the article focuses on the design of an analysis pipeline that fulfills three objectives:

  • it leverages the power of DuckDB4 and S3 storage,
  • it redefines the workflow,
  • it ensures scalable data transformation and analysis capabilities

Establishing such a robust foundation is essential for producing reliable and validated metrics for the contributor community, which itself is subject to ongoing definition and validation.

Acknowledgement: I extend my sincere gratitude to Justin Wheeler for connecting me with the Fedora Data Working Group (FDWG), and to Michael Winters and K Naraian for their guidance, discussion, and support throughout the design and validation of this data analysis pipeline.

Scope and Tool Selection: Please note that this analysis pipeline represents a combination of tools and methods chosen from my perspective as a data analyst, particularly one with a background in the CRM (Customer Relationship Management) domain and consumer electronics industry. Therefore, other analysts or data engineers may utilize different combinations of technologies based on their own expertise or project requirements.

Why data analysts must engage with ELT5/ETL6

The role of the analyst is undergoing a fundamental transformation in 2025. We are moving beyond the traditional responsibility of performing statistical analysis and presenting visualizations on pre-cleaned data. Today, the modern analyst must evolve toward a “DataOps” (Data Operations) role. This requires a holistic understanding of the data lifecycle and bridging the gap between business context and data engineering. This expansion mandates familiarity with ELT/ETL processes in order to examine the quality and structure of the data source.

Moreover, data analysts must be adept at processing diverse data types, such as semi-structured data (for example, schema-less JSON strings or variant types), and must understand various data access methods, such as favoring the efficiency of in-situ processing over the constraints of loading entire datasets into memory.

RStudio: The Unified IDE for Hybrid R and Python workflows

My ADSP examination requirements motivated my initial deep dive into RStudio. However, it is worth highlighting its utility as a tool for any data professional. The most significant benefit of using RStudio is its seamless ability to leverage the best tools from both the R and Python ecosystems. This eliminates the need for the analyst to switch environments, which leads to dramatically higher operational efficiency. This unified approach streamlines the analysis lifecycle from code execution to final reporting.

Python for data engineering

Use Python libraries like Pandas for efficient ETL/ELT operations, data manipulation on large datasets, and integration with production environments or machine learning workflows (TensorFlow/PyTorch).

R for analysis and visualization

Utilize R’s statistical packages and its superior data visualization capabilities (ggplot2, R Shiny) for statistical modeling, polished reporting, and customized, publication-ready graphics.
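As a quick illustration of the kind of publication-ready graphic meant here, the following minimal ggplot2 sketch plots a small, invented contributor-count data frame; the team names and numbers are purely hypothetical and not part of the actual pipeline.

library(ggplot2)

# Hypothetical contributor counts, invented for illustration only
contrib <- data.frame(
  team = c("Docs", "Infra", "QA"),
  contributors = c(42, 17, 29)
)

ggplot(contrib, aes(x = reorder(team, contributors), y = contributors)) +
  geom_col() +                      # simple bar chart
  coord_flip() +                    # horizontal bars read better with labels
  labs(x = NULL, y = "Active contributors",
       title = "Contributors by team (example data)") +
  theme_minimal()                   # clean, publication-friendly theme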

RStudio Desktop: Installation Instructions7 for Fedora Linux

Install R base packages using the terminal and verify:

$ sudo dnf install R
$ R --version

Now, install RStudio from the Fedora COPR repository. Note that these COPR packages conflict with the binary package provided by Posit. Remove the existing Posit RStudio binary package if you installed it.

$ sudo dnf copr enable iucar/rstudio
$ sudo dnf install rstudio-desktop

Development tools and library dependencies are also required, so install them:

$ sudo dnf install @development-tools
$ sudo dnf install libxml2-devel openssl-devel curl-devel

Launch RStudio. When the > prompt appears in the RStudio Console, enter the following commands. Note that this prompt appears in the bottom-left pane of the default layout.

Install the reticulate package and execute the function reticulate::py_install() to manage Python dependencies:

install.packages("reticulate")
reticulate::py_install(packages = c("duckdb", "pandas"))

Set the global options for all code chunks within the R Markdown document:

knitr::opts_chunk$set(echo = TRUE, message = FALSE, warning = FALSE)

ragg is an indirect but critical dependency of core Tidyverse packages (such as ggplot2):

install.packages("ragg")

Install base packages for data manipulation:

install.packages("tidyverse")

DBI (database interface) is an essential R package that provides a standardized, vendor-agnostic interface for connecting to and interacting with various database systems (both local and remote):

install.packages("DBI")

Install tools for Parquet files and S3 data lake access:

install.packages("arrow")

Install R Markdown for combining R code with its execution results and explanatory text, and install Quarto for doing the same with R, Python, and SQL, building reproducible data pipelines directly within the environment. A Quarto (.qmd) file runs code chunks in R, Python, and SQL in a single document.

install.packages(c("rmarkdown","quarto"))
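As a rough, hypothetical sketch of what such a document looks like (the title and chunk contents are invented, not part of the actual pipeline), a minimal .qmd file interleaves prose with executable chunks; {python} and {sql} chunks can sit alongside the {r} chunk in the same file:

---
title: "Contributor metrics (illustrative sketch)"
format: html
---

Some explanatory text goes here.

```{r}
# R chunk: summarise a built-in data frame with dplyr
library(dplyr)
mtcars |> count(cyl)
```

Rendering the file (the Render button in RStudio, or quarto::quarto_render() from R) executes every chunk and weaves the results into a single HTML report.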

Load packages for ELT and EDA:

library(tidyverse)
library(arrow)

RStudio environment showing the contents of a Parquet file displayed in a data frame

Data architecture pipeline

The specific dataset chosen, Datanommer (Fedora Messaging Streams), aligns with the strategic objectives of the Fedora Data Working Group, where I contribute. The data lands in the Bronze Data Layer, where raw data from source systems is ingested and stored as-is in scalable data lake storage. The Bronze Layer allows for schema evolution without breaking downstream processes.

To provide the Working Group with transparent access and initial insight into this data, I have prepared a shared Initial Exploratory Data Analysis (EDA) Notebook. This notebook serves as the initial public view of the data quality and patterns, and it informed the subsequent architectural decisions for the scalable pipeline I am about to outline.

Given the complexity of the architecture, I will proceed with an outline of the core components, organized by their role in the ELT pipeline:

Data Architecture Diagram: Assisted by Figma ‘Infinite Canvas’

This restructured pipeline, leveraging the new Lakehouse architecture, unlocks several core benefits crucial for scaling contributor analysis and enabling future insights:

Elimination of Memory Constraints via In-Situ Processing

DuckDB acts as a high-performance analytical engine that enables In-Situ Processing. It queries data directly from storage (specifically the Parquet files) without requiring the entire dataset to be loaded into RAM. This not only solves the memory problem but also delivers rapid query execution and significantly lowers operational costs associated with large computational clusters hosted on the OpenShift/Fedora AWS infrastructure.

Quarto runs R code chunks to connect to DuckDB
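For illustration, here is a minimal sketch of such an in-situ query from R. It assumes the duckdb R package is installed (install.packages("duckdb")), that the httpfs extension and S3 credentials are already configured, and that the bucket path and column names are hypothetical:

library(DBI)

# In-process DuckDB connection: no server or cluster required
con <- dbConnect(duckdb::duckdb())

# httpfs lets DuckDB read Parquet directly from S3 (skip these for local files)
dbExecute(con, "INSTALL httpfs;")
dbExecute(con, "LOAD httpfs;")

# The data stays in object storage; only the small aggregated result
# is returned to the R session
top_topics <- dbGetQuery(con, "
  SELECT topic, COUNT(*) AS n
  FROM read_parquet('s3://example-bucket/datanommer/*.parquet')
  GROUP BY topic
  ORDER BY n DESC
  LIMIT 10
")

dbDisconnect(con, shutdown = TRUE)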

Future-Proofing

The shift to a Lakehouse model ensures the pipeline is ready for growth and evolving data complexity. Future integration of Apache Iceberg and Apache Polaris will provide schema evolution capabilities. This ensures the pipeline is fully future-proofed against changes in underlying data structures.

Streamlined ELT Workflow and Multi-Lingual Access

I have redefined the processing workflow from a bottlenecked ETL model to a resilient Extract-Load-Transform (ELT) pattern. Storing semi-structured data (such as JSON and nested structures) in Parquet files with the variant type, loaded raw into S3, simplifies the ingestion stage. When using R, it is recommended to read Parquet files with the Apache Arrow library.

Exploratory Data Analysis (EDA) using data frames in the Tidyverse
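As a minimal sketch of this step (the local directory and column name are hypothetical), Arrow scans the Parquet files lazily and hands a small summary back to the Tidyverse:

library(arrow)
library(dplyr)

# open_dataset() only indexes the files; nothing is loaded into RAM yet
ds <- open_dataset("data/datanommer_parquet/")

# Filters and aggregations are pushed down to Arrow; collect() materialises
# just the summarised result as a regular tibble for further EDA
topic_counts <- ds |>
  filter(!is.na(topic)) |>
  group_by(topic) |>
  summarise(n = n()) |>
  arrange(desc(n)) |>
  collect()

head(topic_counts)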

The parsed data is then accessible by multiple analytical platforms (R Shiny, Python, BI tools) without duplication or manual preparation. This multi-lingual access maximizes the utility of the clean data layer, supporting a growing number of analytical users and more complex queries necessary for defining long-term contributor metrics.

Initial EDA Notebook

The preliminary Exploratory Data Analysis (EDA) was conducted within the Jupyter Notebook format. This allowed broad compatibility with the existing execution and review environment of the Fedora Data Working Group.

The Initial EDA Notebook is documented to ensure complete reproducibility, including all necessary steps for Python library installation and environment setup. Any standard Python script containing ELT logic can be run seamlessly within RStudio’s Python mode, by “knitting8” an R Markdown document, or by rendering a Quarto file.

Conclusion

The establishment of this analysis pipeline represents a crucial step in transforming unprocessed Fedora data into actionable insights. By addressing the core challenges of scaling and in-memory processing through DuckDB, and enabling transparent analysis via the hybrid RStudio/Jupyter workflow, I have demonstrated viable methods for performing Exploratory Data Analysis (EDA) and Extract, Load, Transform (ELT) processes on vast community datasets. Ultimately, the purpose of this work is to foster deeper engagement across a broader community by analyzing data through a lens relevant to the Fedora Project community.

I hope this pipeline will serve as the technical foundation that activates and focuses the community discussion around the specific variables and metrics needed to define and ensure the continuity of community contributions.

AI Assistance

The ideation, structural planning, and terminology refinement of the pipelines were assisted by Gemini and Figma.

Software version

RStudio Desktop 2025.05.1 Build 513 (Fedora COPR repository)

R version 4.5.2 (2025-10-31) / Python 3.14.0

Notes

  1. summary(): When used on a data object (for example, a data frame), it provides basic statistics (min, max, mean, median). When used on a fitted linear model object (lm), it delivers key diagnostic information like coefficient estimates and p-values. ↩
  2. lm(): Stands for Linear Model. This is the core function for fitting linear regression models in R, allowing the user to examine and model the linear relationship between variables. ↩
  3. Regression analysis examines which factors affect an outcome and which are irrelevant, in both statistical and business contexts. ↩
  4. DuckDB is a column-oriented analytical database.
    – Direct Querying: It directly queries data from file formats such as Parquet, CSV, and JSON.
    – Local compute engine: It is widely used as a high-performance local compute engine for analytical workloads. It runs in-process, meaning it operates within your application (like a Python script or R session) without needing a separate server or cluster management.
    – Cloud Integration: It supports querying data stored in cloud storage services like AWS S3, GCS (Google Cloud Storage), and Azure Blob Storage.
    ↩
  5. ELT (Extract, Load, Transform): In a modern data environment like a Lakehouse, ELT is preferred: data is first extracted from the source and loaded raw into the cloud data lake (S3), and then transformed in place by the processing engine like DuckDB. ↩
  6. ETL (Extract, Transform, Load): transformations occur before loading the data into the final destination. ↩
  7. Key Advantages of RStudio over Jupyter Notebook for Production Workflows:

    Even with its slightly more complex initial setup compared to Jupyter Notebooks, the advantages become significant when moving from exploration (Jupyter’s strength) to reproducible, production-ready workflows (RStudio’s strength).

    – Integrated Console, Source, Environment, and Files: RStudio offers a cohesive, four-pane layout that allows for seamless navigation between writing code, running commands, inspecting variables, and managing files/plots. Jupyter requires constant shifting between code cells and external tabs.
    – Superior Debugging Tools: RStudio includes a powerful, visual debugger that allows you to set breakpoints, step through code line-by-line, and inspect variable states directly in the environment pane. Jupyter’s debugging is typically cell-based and less intuitive.
    – Native Project Management: RStudio Projects (.Rproj files) automatically manage the working directory and history. This makes it easy to switch between different analytical tasks without conflicts.
    – Integrated Environment Management (renv): RStudio integrates seamlessly with tools like renv (R Environment) to create isolated, reproducible R environments. This addresses dependency hell by ensuring the exact package versions used in development are used in production, which is crucial for data pipeline version control.
    – Quarto/R Markdown Integration: RStudio provides dedicated tools and buttons for easily compiling and rendering complex analytical documents (like your Quarto file) into HTML, PDF, or presentation slides.
    – Shiny Integration: RStudio is the native environment for developing Shiny web applications—interactive dashboards and tools that turn analysis into deployable products. Jupyter requires separate frameworks (like Dash or Streamlit) for similar deployment.
    – Focus on Scripting: RStudio’s source editor is optimized for writing clean, structured R/Python scripts, which are preferred for building robust, scheduled pipeline components (like those managed by Airflow).
    – Code Chunk Execution (Quarto): Even when using Quarto, RStudio allows for superior navigation and execution of code chunks compared to the often sequential and state-dependent nature of Jupyter Notebook cells. ↩

  8. knitr executes code in an R Markdown (.Rmd) file chunk by chunk or as a whole (typically by clicking the “Knit” button in RStudio or by calling rmarkdown::render() in R). ↩

4 cool new projects to try in Copr for December 2025

4 packages to try from the Copr repos

This article series takes a closer look at interesting projects that recently landed in Copr.

Copr is a build-system for anyone in the Fedora community. It hosts thousands of projects with a wide variety of purposes, targeting diverse groups of users. Some of them should never be installed by anyone, some are already transitioning into the official Fedora repositories, and others fall somewhere in between. Copr allows you to install third-party software not found in the standard Fedora repositories, try nightly versions of your dependencies, use patched builds of your favourite tools to support some non-standard use-cases, and experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

Vicinae

Vicinae is a fast application launcher written in C++/Qt. Inspired by Raycast, it provides instant app and file search and clipboard history. It also includes built-in utilities such as a calculator and web search, along with support for extensions written in TypeScript. It is designed to be highly responsive and native to the Wayland environment. Therefore, if you like keeping your hands on the keyboard or want a customizable, extensible launcher for your desktop, Vicinae may be worth trying.

Vicinae launcher in action.

Installation instructions

The repo currently provides vicinae for Fedora 42, 43, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable scottames/vicinae
sudo dnf install vicinae

UZDoom

UZDoom is a modern DOOM source port that builds upon the classic GZDoom engine, offering hardware-accelerated rendering, an updated scripting system, improved mod support, and high-quality audio playback. At the same time, it maintains compatibility with classic WAD files while making the experience smooth on current systems.

Whether you are playing the original episodes or diving into extensive mod packs, UZDoom offers a convenient way to enjoy them.

Installation instructions

The repo currently provides uzdoom for Fedora 42, 43, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable nalika/uzdoom
sudo dnf install uzdoom

Plasma Panel Colorizer

Plasma Panel Colorizer is a widget for KDE Plasma that allows you to customize the panel’s appearance. It offers options for background tinting, blur, custom opacity levels, shadows, floating panels, and themes that differ from the stock Plasma look. It is updated for Plasma 6, making it easy to adjust your panel exactly the way you want.

Different looks you can get with the Plasma Panel Colorizer.

Installation instructions

The repo currently provides plasma-panel-colorizer  for Fedora 42, 43, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable peridot-augustus/plasma-panel-colorizer
sudo dnf install plasma-panel-colorizer

sfizz-ui

Sfizz-ui is the graphical interface for the sfizz sampler engine, which is an open-source player for SFZ instrument libraries. The UI provides an accessible way to load SFZ instruments, adjust parameters, and integrate the sampler into your workflow. It also supports plugin formats such as LV2 and VST3, making it suitable for music creation in a Linux DAW environment.

For musicians, sound designers, or anyone using SFZ sample libraries, sfizz-ui offers a polished interface.

Installation instructions

The repo currently provides sfizz-ui for Fedora 41, 42, and 43. To install it, use these commands:

sudo dnf copr enable lexridge/sfizz-ui
sudo dnf install sfizz-ui


Talk to your Fedora system with the linux-mcp-server!

Talk to your Fedora system with the Linux MCP Server!

Generative AI systems are changing the way people interact with computers. MCP (Model Context Protocol) is a standard that lets LLMs run commands and use tools, enabling live, conversational interaction with systems. Using the new linux-mcp-server, let’s walk through how you can talk with your Fedora system to understand it and get help troubleshooting it!

Introduction

Large language models (LLMs) can be an invaluable tool when investigating an issue on a Linux system. However, this can involve a lot of copy/pasting of information from the Linux terminal into a web-based interface to an LLM.

The model context protocol (MCP) acts as a bridge, enabling LLMs to interact with external tools and data sources.  The linux-mcp-server utilizes this protocol to give LLMs the ability to interact with a Fedora Linux system.  Instead of you manually copying and pasting terminal output, the linux-mcp-server enables the LLM to directly query system information and log entries.  

Giving an LLM direct access to system information and logs transforms it into an active part of the investigation process when troubleshooting an issue. It can directly query the system state, helping it to identify performance bottlenecks and spot important log entries that might be missed in a manual review.

What is the model context protocol (MCP)?

Anthropic introduced MCP in November 2024 as an open standard for LLM tool use. This provides a way for LLMs to interact with outside systems and data sources.  

Prior to MCP, there wasn’t as strong a standard and ecosystem for LLM systems to call tools. LLMs were thus frequently limited to the information contained in their training data and isolated from the outside world. For example, if you asked an LLM “what is the weather going to be next week”, the LLM would respond with a message indicating that it doesn’t know what the weather will be, as it doesn’t have access to that information. MCP helps solve this problem by providing a standardized way for an LLM to access an outside data source, such as the weather forecast.

At a high level, users can use an AI agent application, such as Goose (open source), or Claude Desktop, and specify which MCP servers they would like to use.  The AI agent application informs the LLM that there are tools available via these MCP servers that can be used to help answer the requests from the user. The LLM model can then decide when to invoke these tools.  

MCP is an open standard. You have the flexibility to use MCP servers, such as linux-mcp-server, with either open source-licensed LLM models, or hosted proprietary LLM models. 

What is the linux-mcp-server?

The linux-mcp-server is a project started by Red Hat’s RHEL Engineering team. It provides a number of tools that enable an LLM to query information from a Linux system, such as system information, service status and logs, process information, journald and other logs, network information, and storage and disk information. For a full list of the tools provided, refer to the project’s GitHub page.

These tools, provided by linux-mcp-server, are focused on providing the LLM access to read-only information.  In the future, we’ll be exploring expanding past these read-only use cases.  

The linux-mcp-server can be used to interact with the local Fedora Linux system that it is running on. It can also be used to interact with remote Fedora Linux systems over SSH. For example, if you have SSH key authentication set up with the remote systems, you could make a request to your AI agent application such as “Determine the current memory usage on the fedora1.example.com, fedora2.example.com, and fedora3.example.com servers”.

Prerequisites

The main components needed are an AI agent application, access to LLM model inference, and the linux-mcp-server. 

There are a number of options for the AI agent application, both open source and proprietary.  An example of an open source AI agent is Goose, which provides an RPM that can be installed on Fedora.

There are several LLM model inference options, ranging from locally hosted open source models, to proprietary hosted LLM models.  If hosting a model locally, you generally need to have GPU/accelerator hardware available.  Open source tools that can be used to locally host LLM models include RamaLama, llama.cpp, and Ollama. There are a number of open source LLM models capable of tool calling. These include models such as gpt-oss-20b and Granite 4 small.

Once you’ve installed your preferred AI agent application, the next step is to install the linux-mcp-server and configure your AI agent application to use it. The linux-mcp-server Github page includes installation instructions, and configuration details for Goose and Claude Desktop.

Use cases for linux-mcp-server

⚠ NOTE

The example scenarios below include LLM generated output. The output is presented here as it was generated by the AI systems cited so you can assess the quality of the output they are capable of, particularly in a non-cloud, self-hosted scenario.

The sample LLM output provided here is not intended to be run by readers without the same level of review we recommend anyone perform on AI generated content prior to use on their systems.

Dear Fedora, why are you so slow today?

Has this happened to you? Time to hop on a call and give a demo. You’re pumped. It’s your go-time aaaaand… your system starts lagging. Folks on the call say your video is choppy or they can’t understand you. OH NO.

You’ve got to assess what’s up, quickly. Run `top`, check `dmesg`, check your network connection (read the man page for `netstat` because you can’t remember the flag you want), check `journalctl`, cross-reference what you’re seeing, then start killing processes, restarting services, reloading kernel modules willy-nilly based on hastily-crafted theories, see if it improves… 

This is a case where you could actually ask the system to tell you why it’s slow and give it the ability to do the analysis via the linux-mcp-server tools. The LLM you connect with won’t just run the commands through the MCP server; it will understand what the numbers it gets mean in context and be able to explain them in human terms.

So let’s start working with a relatively new laptop. It’s a ThinkPad T14s with an AMD Ryzen™ AI 7 PRO 360 chip. Recently, one of the Wi-Fi networks it must connect to regularly was upgraded, and it now has issues with network connection stability. So let’s talk to Goose + gpt-oss:20b + linux-mcp-server about it. Here’s the initial prompt I used:

My Wifi doesn't work very well. It cuts out after a few minutes. Help me fix it. Can you find the error messages in the system logs and diagnose the problem?

It responded with the following:

🤖 LLM output 🤖

I’ll help you diagnose your WiFi connectivity issues by examining the system logs for error messages related to your network interfaces and wireless connectivity. Let me start by gathering information about your system and checking the relevant logs.

It then made a series of tool calls, including Get System Information, Get Network Interfaces, Get Journal Logs, Get Hardware Information, Get Service Status service_name:NetworkManager, Get Service Status service_name:wpa_supplicant, Read Log File. The result was this analysis, which is the actual problem I was hoping it could find!


🤖 LLM output 🤖

Problem Summary – ATH12K PCI Wi‑Fi Issue

⚡ Problem Summary

Your ATH12K PCI (Qualcomm 802.11ax) wireless card repeatedly reports

 
ath12k_pci 0000:c2:00.0: failed to enqueue rx buf: -28

every few minutes, which triggers a drop of the wireless link. The driver is trying to enqueue an RX buffer for the hardware but fails with errno ‑28, which corresponds to -ENOMEM (out‑of‑memory for the driver buffer); the kernel cannot allocate a DMA‑coherent RX descriptor.

✅ Likely Root Causes

Each item lists the cause, the evidence or typical scenario, and the suggested fix or work‑around:

  • Driver bug or missing patch – The kernel’s ath12k driver has had a long‑running issue with RX ring management in older 5.15/5.4 series. Fix: update the kernel or apply the latest patch from the linux tree (5.18+ or the ath12k backports).
  • Out‑of‑memory in the driver – The error itself is “‑28”, a memory‑allocation failure inside the driver. Fix: reduce the number of RX descriptors, or enable fast‑RX disabled mode (modprobe ath12k_pci ath12k_pci_disable_fast_rx=1).
  • Firmware mismatch / corruption – Old firmware (ath12k_hwgen files) don’t match the driver version. Fix: reinstall firmware from your distro (sudo apt‑install –reinstall linux‑firmware) or copy the latest from kernel source.
  • System memory pressure – If many big processes run, the DMA‑coherent memory pool can be exhausted. Fix: free RAM, close memory‑heavy apps, or add swap.
  • ACPI/PCIe power‑management quirk – Some laptops throttle the Wi‑Fi device after idle, resetting the buffer queue. Fix: add pci_bus_reset = off or set ath12k_pci_disable_ps=1 in a modprobe conf.
  • Hardware issue (e.g., faulty PCIe slot or NIC) – Persistent failure after multiple driver/kernel upgrades. Fix: test on another machine/PCIe slot or replace the card.

🔎 Diagnostic Checklist

  1. Kernel & Driver Version
     
    uname -r
    modinfo ath12k_pci | grep ^version:
  2. Driver Parameters
     
    cat /sys/module/ath12k_pci/parameters/*

    – See if

    ath12k_pci_disable_fast_rx

    or similar is already set.

  3. Firmware Location
     
    ls /lib/firmware/ath12k/ath12k/ar9009/ | grep wlan0 | head

    – If the files are missing or older than the driver, copy newer ones from

    /lib/firmware

    of a newer kernel.

  4. Kernel Log Around the Error
     
    sudo dmesg --ctime | grep -A3 'ath12k_pci 0000:c2:00.0'

    – Observe if there are any other “-ENOMEM” or “resource exhaustion” messages.

  5. Memory Usage Stats
     
    free -h
    vmstat 1 5
  6. PCIe Latency / Link State
     
    sudo lspci -vvv -s c2:00.0 | grep -iE "link|latency"

🚀 Quick‑Fix Path (Try in this order)

  1. sudo apt update && sudo apt upgrade linux-firmware – Refresh firmware to the latest distro version.
  2. sudo apt upgrade linux-image-$(uname -r) – Apply latest kernel with patched ath12k driver (or switch to 5.18+ if still on 5.15).
  3. echo "ath12k_pci ath12k_pci_disable_fast_rx=1" | sudo tee /etc/modprobe.d/ath12k.conf – Force the driver to use a more conservative RX scheme.
  4. Reboot and monitor: dmesg -T | grep ath12k_pci – Verify the error disappears.
  5. Optional: test on another machine or use a USB‑WiFi dongle (USB 802.11ax / 802.11ac) as a temporary workaround. Ensure the issue is not hardware‑specific.

Tip: After reboot, you can also verify driver parameters:

cat /sys/module/ath12k_pci/parameters/ath12k_pci_disable_fast_rx

📺 Advanced Debugging (if basic fixes don’t help)

  1. Enable debug logs
     
    sudo bash -c 'echo 1 > /sys/module/ath12k_pci/parameters/debug'
    sudo dmesg -T | grep ath12k
  2. Check PCIe device state
     
    sudo lspci -vv -s c2:00.0
  3. Use ethtool on the Wi‑Fi interface
     
    sudo ethtool -S wlp4s0
  4. Look for firmware logs
     
    grep -i 'ath12k' /var/log/syslog

📌 Bottom Line

The “failed to enqueue rx buf: –28” is an internal driver allocation failure that typically follows one of two paths:

  1. The driver or firmware is out of date → upgrade kernel/firmware.
  2. The driver configuration is too aggressive for your system’s memory budget → tweak module parameters or free RAM.

If you're still stuck after following the quick‑fix path, it's worth capturing more detailed logs with the above diagnostic commands and sharing them. We can then cross‑reference the exact kernel version/driver commit to see if a known patch exists.

Feel free to let me know which step you’ve tried or if you encounter any new log messages, and we’ll dig deeper!


Not too bad an analysis and set of debugging steps! I’ll follow up in the comments and let you know if the solution works.

Where’d my disk space go, Fedora?

Trawling through `du` on the command line, converting the numbers in your head, sorting them – it can be tedious. We even have graphical tools like `baobab`, but it takes time to load in the results and explore the data. And sometimes you might have questions it can’t answer in terms of the risk of deleting a specific directory, or the age of a specific file. It’s easier to be sloppy and just let the disk fill up. But there comes a time where that is no longer an option.

You can ask the system using the linux-mcp-server to analyze your disk usage, give you a hitlist of the top space-eating places on the file system, and give you advice with a risk analysis of how to clean it up.

Here’s my prompt:

I need help analyzing disk usage on my system. Can you: 1. Show me the current disk usage across all filesystems using get_disk_usage
2. Use list_directories_by_size to show me the 10 largest directories (this will help identify where disk space is being consumed)
3. Check if there are any large log files consuming space using get_journal_logs or read_log_file
4. Review running processes with list_processes to see if any are consuming excessive resources or creating temporary files After gathering this information, help me identify potential space-saving opportunities and assess the safety of cleaning up specific directories. Please don't only examine top-level directories. I want you to go deep if needed to find out where the largest files are!

I ran this prompt on my Fedora 42 system using linux-mcp-server with Goose connected to locally-served gpt-oss:20b, and here’s the output I got:


🤖 LLM output 🤖

Disk‑space overview (as of Nov 24 2025 15:21)

Filesystem Size Used Avail Use % Mounted on
/dev/mapper/luks‑61cedb0e… 952.3 GB 548.9 GB 399.6 GB 57.9 % /
/dev/mapper/luks‑61cedb0e… 952.3 GB 548.9 GB 399.6 GB 57.9 % /home
/dev/nvme0n1p2 973.4 MB 462.8 MB 443.4 MB 51.1 % /boot
/dev/nvme0n1p1 598.8 MB 19.3 MB 579.6 MB 3.2 % /boot/efi

Overall usage is comfortably below 60 % of a 952 GB volume, so immediate capacity pressure isn’t a concern. The biggest contributors to disk space today are in the ~/. hierarchy.


1. Biggest directories on the system

Path (relative to the mount point) Size (≈) Notes
~/ – User home (2 mounts: /home + /) 1 TB total 57 % used; the bulk of the space is in ~/

| ~/duffy/.local/share | 238 GB | Containers sub-dir, followed by Trash (7GB) – typical podman/docker container images |
| ~/duffy/.local/share/containers | 238 GB | Container layers, images, root filesystems – the prime candidate for cleanup |
| ~/duffy/.local | 266 GB | share (237 GB) dominates it |
| ~/duffy/.local/share/containers (inside share) | 238 GB | Same as above – just a different view |
| ~/duffy/.local/lib | 13 GB | Likely a virtual‑environment library set |
| ~/duffy/.local/bin | 4 GB | Helper binaries |
| ~/duffy/.local/share/Trash | 7 GB | User‑deleted items still resident on disk |
| ~/duffy/.local/share/containers (nested) | 238 GB | Same as above; redundant listings from recursive depth‑1 scans |
| ~/duffy/.local/share (top‑level) | 238 GB | containers is the absolute largest item |

Take‑away – 75 % of the 238 GB live inside three nested directories:

  • ~/duffy/.local/share/containers (238 GB) → Rooted in Podman/Container‑Store
  • ~/duffy/.local/share (238 GB) → Same data exposed at a higher level
  • ~/duffy/.local/share/Trash (7 GB)

2. Possible space savers

Category Recommendation Rationale
Containers podman system prune -a (or docker system prune -a if Docker is in use) Removes unreferenced images, containers, and caches. This can reclaim 200–250 GB if most containers are outdated or no longer referenced.
Trash gio trash –empty The 7 GB in Trash is still taking physical space; simply emptying it will free that amount.
Large libs/binaries Check ~/duffy/.local/lib – 13 GB, and ~/duffy/.local/bin – 4 GB If these are not actively used by a project, consider uninstalling or moving them to a secondary, less‑frequented store.
Ramalama & UV 9 GB (ramalama), 8 GB (uv) Possibly temporary files or caches from development tools. Inspecting the contents of these directories (du -h) can reveal if they are still needed.
Other caches pnpm store prune, virtualenv or pip cache purge Development tool caches often accumulate; they’re usually safe to delete once you’re done with the current development sessions.
Logs The system journal shows normal activity, and syslog access is blocked by policy. Systemd journal stores are rotated automatically; no urgent action required.
Boot partitions Not relevant – they’re low‑usage (boot/efi only 3 % used). No action needed.

3. Disk‑IO perspective

  • Reads: 111.8 GB; Writes: 85.6 GB since boot (≈ 1.2 TB total).
  • If writes are high relative to free space, consider enabling a tmpfs overlay for temporary files or moving large data to a separate, faster SSD. But in this case, writes are not approaching the 399 GB free.

4. Suggested cleanup workflow

# 1. Empty Trash

gio trash --empty

# 2. Prune unused containers/images

podman system prune -a   # or docker system prune -a

# 3. Remove leftover caches

pnpm store prune         # if pnpm is installed
pip cache purge          # for Python
virtualenv --clean       # optional

# 4. Inspect large “RAM” directories manually if you need deeper cleanup

du -h --max-depth=1 ~/duffy/.local/share/containers

Tip: After each step run df -h /home or du -sh ~/duffy/.local/* to see the impact.


5. Safety checklist

Action Why it’s safe (or risk)
Emptying Trash Deletes only user‑deleted files; confirm you don’t need the content.
podman system prune -a Deletes container images, containers, and volumes that are not referenced by any running or stopped container. Verify with podman ps -a before pruning.
Removing lib/bin from .local Only safe if you’re sure those libraries/bin files aren’t part of a running application.
Caution with tmpfs or moving large directories If you move a directory on‑the‑fly, ensure no running application depends on that path.

Bottom line

  • Current free space: ≈ 400 GB on both / and /home.
  • Largest consumptions: User‑space under ~/duffy/.local/share/containers (≈ 238 GB).
  • Actionable clean‑ups: Empty Trash, prune unused container images, and review large libraries/binaries. This should reclaim over 200 GB and bring the usage from 57.9 % down to roughly 40‑45 % – plenty of headroom for the future.

Feel free to ask what next steps you’d like to dig into (e.g., detailed contents of the containers dir, specific application logs, or targeted pruning commands).


Upgrade planning

So you may have noticed a little detail above – the system is a Fedora 42 system… and Fedora 43’s been out a month now! 😅 So, time to upgrade.

This example shows where we have some tools missing from the set provided in the linux-mcp-server. We’re including it for two reasons:

  • So you can see how this works manually
    You can see that even when specific tools you might need are not available in the MCP server, you can have the response give you instructions on commands to run on your own, and copy/paste the command output back into your chat to get analysis alongside the data the system is able to pull via the MCP tool calls;
  • To encourage you to submit additional tools to our project!
    We would love your additions to the project! Here’s where to get started: https://github.com/rhel-lightspeed/linux-mcp-server/blob/main/docs/CONTRIBUTING.md

Here’s the prompt I started with, with the same Goose + gpt-oss:20b + linux-mcp-server combination:

You are a Linux system administrator assistant analyzing a Fedora system for upgrade readiness. TASK: Examine this Fedora 42 system and provide a comprehensive upgrade readiness report for Fedora 43. ANALYSIS CHECKLIST:
1. Check current Fedora version: cat /etc/fedora-release
2. Review system updates status: dnf check-update
3. Identify third-party repositories: dnf repolist
4. List installed packages from non-Fedora repos: dnf list installed | grep -v @fedora
5. Check for broken dependencies: dnf check
6. Review disk space on root partition: df -h /
7. Check for unsupported or deprecated packages
8. Identify custom kernel modules: lsmod | grep -v "^Module"
9. Review SELinux status: sestatus
10. Check for pending system updates: dnf upgrade --refresh --assumeno REPORT FORMAT: # Fedora 43 Upgrade Readiness Report ## Current System Status
- Fedora version: [version]
- Kernel: [kernel version]
- Updates status: [current/outdated] ## Potential Issues
[List blocking issues with HIGH/MEDIUM/LOW severity] ## Third-Party Software
[List non-Fedora packages that may need attention] ## Recommendations
[Specific pre-upgrade steps needed] ## Overall Readiness: [READY/NEEDS ATTENTION/NOT READY]
Run the necessary commands and provide this analysis. Be specific about any issues found and give actionable recommendations.

Now, right away the model came back to me to complain it doesn’t have access to `dnf`, `cat`, etc. And that’s expected here. What it did was give me a list of homework to run for it to complete the analysis… for example, `dnf check-update` and `cat /etc/fedora-release`. I had a little back and forth in Goose with the model where it would ask me to run a command and I’d copy the output into the context. This resulted, finally, in the following report:


🤖 LLM output 🤖

Fedora 43 Upgrade Readiness Report

Analyzed System – Fedora 42 (Kernel 6.17.7‑200.fc42.x86_64)
Date – 2025‑11‑24


1. Current System Status

Item Value
Fedora release 42 (Adams)
Kernel 6.17.7‑200.fc42.x86_64
Root filesystem 953 GB LUKS, 58 % used (411 GB free) – > 15 % free
/var /usr Same as root partition (mounted the same LUKS device)
SELinux enabled (policy targeted), currently permissive
System‑wide packages No held packages (dnf versionlock list empty)
System modules No modules held.

TL;DR – The base system is clean; the kernel is up‑to‑date for 42, there is ample disk space, and there are no version‑locked packages.


2. Third‑Party Repos & Packages

Repository Description Current dnf repolist status
rpm-fusion-free, rpm-fusion-nonfree, rpm-fusion-updates Widely used RPM‑Fusion components Enabled
google-cloud-cli Google Cloud SDK (RPM built for EL 10) Enabled
visual-studio-code Microsoft VS Code RPMs Enabled
command‑line‑assistant, google‑chrome, PyCharm, unsupported‑fedora‑packages, goose, leaktk Various Copr builds (mostly Fedora‑42 based) Enabled

What we need to verify

  1. Is a Fedora 43 build available for each of the non‑Fedora repos above?
  2. Does google-cloud-cli have a Fedora 43‑compatible package? (EL 10 build may not be usable).
  3. Are there any el10‑specific dependencies that could surface during the upgrade?
  4. Any custom kernel modules that will require re‑compilation against the 43 kernel.

3. Potential Issues & Severity

Issue Severity Why it matters Immediate Remedy
Non‑Fedora packages (e.g., google‑cloud‑cli) MEDIUM May contain packages built for EL 10; could conflict with Fedora 43 libs. Verify if a Fedora 43 build exists or plan to remove/re‑build it.
Third‑party repos without 43 builds MEDIUM dnf system-upgrade pulls all enabled repos; lacking 43 packages causes transaction failure. Disable such repos temporarily (dnf config-manager –set-disabled <repo>) or enable the 43 stream when available.
Custom kernel modules LOW Modules compiled against 42 kernel won’t load on 43; services may drop. Re‑compile or patch modules after upgrade.
Broken dependencies reported by dnf check HIGH Would block the upgrade transaction. Run sudo dnf check now; fix any reported issues.
Insufficient disk space HIGH Upgrade scripts need > 15 % free. Not an issue (58 % free).

Current High‑Risk Items – None detected yet.
Next Action – Confirm each third‑party repo has 43‑compatible packages and set SELinux to enforcing.


4. Recommendations (Pre‑Upgrade Checklist)

Step Command Purpose
Confirm 43‑stream packages `dnf list –available * grep <repo-id>`
Verify SELinux enforcement sestatus Should show enforcing.
Check for broken dependencies sudo dnf check No errors should appear.
Refresh all repos sudo dnf upgrade –refresh Pull latest metadata.
Dry‑run upgrade sudo dnf system-upgrade download –releasever=43 –allowerasing Detect any missing packages or conflicts.
Optional: Disable problematic repos sudo dnf config-manager –set-disabled <repo> Prevent upgrade crash if a repo has no 43 build.
Persist SELinux config sudo sed -i ‘s/SELINUX=permissive/SELINUX=enforcing/’ /etc/selinux/config Make enforcement permanent.

5. Post‑Upgrade Actions

  1. Reboot into the new kernel (dnf system-upgrade reboot).
  2. Re‑compile any custom kernel modules against the new kernel (yum install kernel-devel-$(uname -r) then rebuild modules).
  3. Re‑enable any disabled repos once the upgrade completes.
  4. Verify critical services (systemctl status <service>) and check logs.
  5. Run sudo dnf distro-sync to ensure all packages are the latest from Fedora 43.

6. Overall Readiness

Criteria Status Comment
Disk space Ready 58 % free
Package health Ready No held or broken packages
Third‑party repos Conditional Verify 43‑compatibility
Custom modules Pending Need to identify modules and re‑compile
Final verdict READY (after addressing above points)

Next step – Run the three verification commands below and share the output with me:

sudo dnf check
sudo dnf repolist
dnf list installed | grep -v @fedora

With those outputs I can pinpoint any remaining blockers and give you a final, ready‑to‑proceed plan.

Happy upgrading!


Give it a try, let us know what you think!

You can see from these few examples that tool calling with LLMs is a valuable aid for troubleshooting Linux systems. We could use your help building this and making it awesome! How can you help?

  • Give it a try, let us know what you think, file any bugs you find, and let us know what tools are missing that you could use! You can respond in the comments here, file a bug, or chat with us on Fedora matrix.
  • We’re primarily working with Goose as our client, but if you have another preferred client and want to help us support it better, we’d love to work with you!
  • Get involved in the linux-mcp-server project generally; we’d love to see your PRs!
  • Let us know what you’d like to see in the future. What workflows would you like to see supported? How do you see this making your Fedora or overall Linux experience better? What larger workflows do you see this plugging into?

Join us upstream at https://github.com/rhel-lightspeed/linux-mcp-server! And come chat with us in the Fedora AI/ML SIG chat room on Matrix!


Contribute to Fedora 39 Upgrade, Virtualization, and Cloud Test Day

Fedora test days are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are three test days occurring in the next two weeks covering three topics:

  • Tuesday October 03, is to test the Fedora Cloud
  • Thursday October 05, is to test the Fedora Upgrade
  • Monday October 09, is to test Virtualization

Come and test with us to make Fedora 39 even better. Read more below on how to do it.

Fedora Cloud test day

Fedora Linux 39 is coming close to the release date and the Fedora Cloud SIG would like to get the community together to find and squash some bugs.

The test day will occur on Tuesday October 03. This event will test Fedora Cloud Base content. See the wiki page for links to the Beta Cloud Base Images. We have qcow, AMI, and ISO images ready for testing.

Upgrade test day

As we come closer to Fedora Linux 39 release dates, it’s time to test upgrades. This release has a lot of changes and it becomes essential that we test the graphical upgrade methods as well as the command line methods.

This test day will happen on Thursday, October 05. It will test upgrading from a fully updated F37 and F38 to F39 for all architectures (x86_64, ARM, aarch64) and variants (WS, cloud, server, silverblue, IoT). See this wiki page for information and details.

Virtualization test day

This test day will happen on Monday, October 09 and will test all forms of virtualization possible in Fedora. The test day will focus on testing Fedora or your favorite distro inside a bare metal implementation of Fedora running Boxes, KVM, VirtualBox and whatever you have. The general features of installing the OS and working with it are outlined in the test cases which you will find on the results page.

How do test days work?

A test day is an event where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not contributed in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.


Share your game achievements with Gamerzilla

Gamerzilla is an open source game achievement system that stores and shares your game achievements. Games use libgamerzilla to easily add achievements. The Gamerzilla library is written in C but bindings exist for other languages.

Two years ago I described how to set up a Gamerzilla server. In addition to the .net implementation, PHP and Python implementations are available. But you probably don’t want to run your own server. Here is an introduction to a public server and the Gnome interface.

Public server

My web server now hosts an instance of Gamerzilla with public registration enabled. To create an account click on the Sign In link on the top right corner. From there click the Register link next to the Login button.

The user accounts collect very little information. Simply enter a username and password. You do not need to enter your real name or email address. As a result, a forgotten-password recovery feature is not implemented.

Accounts start as invisible. If you want the public to see your achievements after login, click on your username on the top right of the page and select Make Visible. New users must be approved before they can upload achievements. Currently no indication appears whether you are approved or not.

Getting an achievement

Before you get your first achievement, you need to install the Gamerzilla gnome shell extension with:

sudo dnf install gnome-shell-extension-gamerzilla

After installation you will need to log out and log back in. The extension needs to be enabled with the following:

gnome-extensions enable gamerzilla@gamerzilla.identicalsoftware.com 

The game controller icon appears in the top bar of the gnome shell.

Click on the controller icon and select Preferences. On the resulting screen, enter your Gamerzilla URL, including the trailing slash but without the ‘trophy’ destination for the UI. If using my server, the value will be ‘https://identicalsoftware.com/’. Fill in your username and password. Click Save. By default, Automatic Connect is enabled. If you don’t want to always connect, disable this.

Gamerzilla Controller menu

Unfortunately a bug prevents the shell extension from working right away. You need to logout and log back in again. Fixing this is high priority.

Currently Supported Games

Several games in the Fedora repository support Gamerzilla achievements. If you want to get one to try it out, I suggest Shippy 1984, Seahorse Adventures, or Anagramarama. Seahorse Adventures will store some progress as soon as you complete a single level. If you play Super Tux Kart, you may already have achievements. Simply starting the game will sync any previously completed achievements. Gamerzilla does not display any notification when achievements are completed, but games may implement it on their own.

Going Forward

Besides fixing the bug with connecting the first time, I want a local browser for achievements. That way you can view all your game achievements without having to upload them.

Achievements do have negative aspects. Some people feel compelled to complete them. This can lead to a negative play experience if the achievements are very difficult. I understand this concern, but some people enjoy these aspects as well. More importantly, to attract and retain players, I feel open source games need a game achievement system.

We need more games to implement Gamerzilla achievements. We need players to show their support for the system. I continue to add achievements to my games and other open source games I try out.


Announcing Fedora Linux 39 Beta

The Fedora Project is pleased to announce the immediate availability of Fedora Linux 39 Beta, the next step towards our planned Fedora Linux 39 release at the end of October.

Get the prerelease of any of our editions from our project website:

Or, try one of our many different desktop variants (like KDE Plasma, Xfce, or Cinnamon) from Fedora Linux Spins.

You can also update an existing system to the beta using DNF system-upgrade.

Beta release highlights

In some ways, this release might seem notable largely for what isn’t here. We’d planned to update the DNF package manager to a new, speedier version.  We also hoped to showcase a long-awaited refresh to the user interface for Anaconda, our installation program. However, we decided these things just weren’t ready in time.

Don’t let this get you down, though — this is a healthy process at work. Years ago, we didn’t always have a good way to alter course once we’d accepted a change proposal. We often found ourselves in a situation where the only reasonable way forward was to forge ahead, even if we weren’t happy enough with the change for general users. Now, even though it’s somewhat disappointing, we’re recognizing that these big changes need more time to bake, and putting them back into the oven is a good thing.

I’ve got a kid that always wants to get 100% (or higher!) in every class. I keep telling her, “Really, you learn best when you’re right 80% of the time. Otherwise, you’re not getting enough of a challenge.” To keep up with Fedora’s commitment to innovation, we also need to take risks. If everything went according to plan, that would mean we’re not trying hard enough. At the same time, our process now allows us to take these risks while still making sure the Fedora Linux OS we ship for general use is of A+ quality.

We still plan to bring you these features in the near future, and if they’re of interest to you, please keep your eyes open for upcoming test announcements.

In the meantime, enjoy the many updates across all of Fedora Linux, ready for you to test in this new beta.

Notable updates

Fedora Workstation 39 Beta brings us GNOME 45 (itself also in beta). For everyone who needs a free and open source office suite, there’s LibreOffice 7.6.

Fedora Cloud images for AWS now default to less-expensive gp3 storage volumes.

We also have an update to the GNU Toolchain (gcc 13.2, binutils 2.40, glibc 2.38, gdb 13.2). Of course, developers appreciate that we include the latest tools, but these updates also include improvements to security and performance that will benefit everyone who uses Fedora Linux.

Testing needed

Since this is a beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora Quality team via the test mailing list or in the #quality channel on Fedora Chat. As testing progresses, common issues are tracked in the “Common Issues” category on Ask Fedora.

For tips on reporting a bug effectively, read how to file a bug.

What is the beta release?

A beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora Linux users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora Linux, but the Linux ecosystem and free software as a whole.

More information

For more detailed information about what’s new on the Fedora Linux 39 Beta release, you can consult the Fedora Linux 39 Change set. It contains more technical information about the new packages and improvements shipped with this release.


Contribute at Passkey Auth, Fedora CoreOS and IoT Test Week

Fedora test days are events where anyone can help make certain that changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

There are several test periods in the upcoming weeks.

  • Thursday 21 September and Friday 22 September, is to test Passkey Auth.
  • Sunday 24 September through Sunday 01 October, is to test Fedora IoT Edition.
  • Monday 25 September through Monday 02 October, focuses on testing Fedora CoreOS.

Passkey Auth

Passwordless authentication methods for logging into Linux systems have become a hot topic in the past few years. Various organizations, including governments and regulated industries, have started to mandate more secure methods of authentication. FIDO2 tokens and smartcards are two passwordless authentication methods mandated by the US government in its Zero Trust architecture.

FreeIPA and SSSD in Fedora 39 enable the capability to log in to a desktop or a console terminal with a FIDO2-compatible device for centrally managed users enrolled in Active Directory. This is supported by the libfido2 library. Additionally, for FreeIPA, once the user is authenticated with the FIDO2-compatible device, a Kerberos ticket may be issued.

As a part of this changeset, we will be having test days on Thursday 21 September and Friday 22 September. The idea is to run through test cases and submit results here.

Fedora IoT

For this test week, the focus is all-around: test all the bits that come in a Fedora IoT release, as well as validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, and rebasing
  • Basic container manipulation with Podman (see the example commands after this list)
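
As a rough sketch of what those operations look like (the exact steps, package names, and ostree refs to use are in the test cases on the wiki, so treat these commands as illustrative only):

sudo rpm-ostree upgrade                           # fetch and stage the latest update
sudo rpm-ostree install htop                      # layer an extra package on top of the base image
sudo rpm-ostree rebase <ref-from-the-test-case>   # rebase to another branch
podman run --rm registry.fedoraproject.org/fedora:39 cat /etc/os-release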

We welcome all different types of hardware, but have a specific list of target hardware for convenience. This test week will occur Sunday 24 September through Sunday 01 October.

Fedora 39 CoreOS Test Week

The Fedora 39 CoreOS Test Week focuses on testing FCOS based on Fedora 39. The FCOS next stream is already rebased on Fedora 39 content, which will be coming soon to testing and stable. To prepare for the content being promoted to other streams, the Fedora CoreOS and QA teams have organized test days from Monday 25 September through Monday 02 October. Refer to the wiki page for links to the test cases and materials you’ll need to participate. The FCOS and QA teams will meet and communicate with the community asynchronously over multiple Matrix/Element channels. The announcements will be made 48 hours prior to the start of the test week. Stay tuned to official Fedora channels for more info.

How do test days work?

Test days or weeks are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.

Posted on Leave a comment

Quick Fedora shirt update and sale of last stock with the old logo

There are some updates on Fedora shirts and sweatshirts.

Two years after the announcement of the current Fedora logo, we decided to clear our stock of shirts with the old logo. Soon our shirts will only be made and stocked with the new Fedora logo.

The Fedora jackets and hoodies are back again.

The old Fedora polo shirts are almost out of stock, so we have a new type with black buttons.

We have improved delivery too. No more taxes and customs paperwork within the European Union, the United States and the United Kingdom.
If you have your own embroidery machine, the PES file for the Fedora embroidery is available here; for the Fedora Classic, here.

Check out the embroidered Fedora collection here and don’t forget to use the FEDORA5 coupon code for the $5 discount on every Fedora shirt and sweatshirt.

When ordering, note that the old logo style items are labelled “Fedora Classic”.

Posted on Leave a comment

Contribute at the Fedora Linux Test Week for Kernel 6.5 and Toolbx Test Day

Fedora test days are events where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora Linux before, this is a perfect way to get started.

There are several test periods in the upcoming weeks. Here are the first two:

  • Sunday 10 September through Sunday 17 September is to test Kernel 6.5.
  • Thursday 14 September focuses on testing Toolbx.

Kernel 6.5

The kernel team is working on final integration for Linux kernel 6.5. This recently released version will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, Sept 10, 2023 to Sunday, Sept 17, 2023. This wiki page contains links to the test images you’ll need to participate. This is also going to be the release kernel for Fedora Linux 39, and any help testing for regressions in this kernel will be very helpful.

Toolbx

Recently, Toolbx has been made a release-blocking deliverable and now has release-blocking test criteria. Given that Toolbx is very popular and has a wide variety of uses, we would like to run a test day to ensure nothing is broken. This test day encourages people to use containers and run apps in them across all platforms, i.e. Workstation, KDE, Silverblue, and CoreOS. The details are available on this wiki and results can be submitted on the events page.
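
If you want a feel for what the test cases exercise, a minimal Toolbx session looks roughly like this (the release number is just an example; follow the wiki for the actual scenarios):

toolbox create --release 39   # create a Fedora 39 toolbox container
toolbox enter --release 39    # get a shell inside it
toolbox list                  # list available toolbox containers and images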

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results. All the test day pages receive some final touches which are completed about 24 hours before the test day begins. We urge you to be patient about resources that are, in most cases, uploaded only hours before the test day starts.

Come and test with us to make the upcoming Fedora Linux 39 even better.

Posted on Leave a comment

Fedora Linux Flatpak cool apps to try for September

This article introduces projects available in Flathub with installation instructions.

Flathub is the place to get and distribute apps for all of Linux. It is powered by Flatpak, allowing Flathub apps to run on almost any Linux distribution.

Please read “Getting started with Flatpak”. To enable Flathub as your Flatpak provider, use the instructions on the Flatpak site.
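
For reference, the command published on the Flathub setup page to add the remote looks like the following (double-check the official instructions in case it has changed):

flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo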

Flatseal

Flatseal is a graphical utility to review and modify permissions of your Flatpak applications. It is one of the most used apps in the Flatpak world, and it allows you to improve the security of your Flatpak applications. However, it needs to be used with caution, because you can also make permissions too open.

It’s very simple to use: simply launch Flatseal, select an application, and modify its permissions. Restart the application after making the changes. If anything goes wrong, just press the reset button.
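
Under the hood, Flatseal edits the same per-application overrides that the flatpak override command manages, so you can inspect or reproduce a change from the terminal as well (org.example.App is a placeholder application ID):

flatpak override --user --nofilesystem=home org.example.App   # revoke home directory access
flatpak override --user --show org.example.App                # show the overrides currently applied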

You can install “Flatseal” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.github.tchx84.Flatseal
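
Once installed, you can also launch it from a terminal with:

flatpak run com.github.tchx84.Flatseal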

Flatseal is also available as an RPM in Fedora’s repositories.
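
If you prefer the RPM, it should be installable with dnf; assuming the package keeps the name flatseal in the Fedora repositories, that would be:

sudo dnf install flatseal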

Reco

Reco is an audio recording app that helps you recall and listen to things you listened to earlier.

Some of the features include:

  • Recording sounds from both your microphone and system at the same time.
  • Support for formats like ALAC, FLAC, MP3, Ogg Vorbis, Opus, and WAV.
  • Timed recording.
  • Autosaving or always-ask-where-to-save workflow.
  • Saving the recording when the app quits.

I used it a lot to help me record interviews for the Fedora Podcast.

You can install “Reco” by clicking the install button on the web site or manually using this command:

flatpak install flathub com.github.ryonakano.reco

Mini Text

Mini Text is a very small and minimalistic text viewer with minimal editing capabilities. It’s meant as a place to edit text to be pasted; it doesn’t have saving capabilities. It uses GTK4, and its interface integrates nicely with GNOME.

I found this to be very useful just for keeping text that I want to paste elsewhere; it doesn’t have unwanted or unneeded rich text capabilities, just plain text with minimal editing features.

You can install “Mini Text” by clicking the install button on the web site or manually using this command:

flatpak install flathub io.github.nokse22.minitext

Tagger

Tagger is a tag editor for those of us who still save music locally.

Some of the features are:

  • Edit tags and album art of multiple files, even across subfolders, all at once
  • Support for multiple music file types (mp3, ogg, flac, wma, and wav)
  • Convert filenames to tags and tags to filenames with ease

You can install “Tagger” by clicking the install button on the web site or manually using this command:

flatpak install flathub org.nickvision.tagger