post

Use a drop-down terminal for fast commands in Fedora

A drop-down terminal lets you tap a key and quickly enter any command on your desktop. Typically it slides down from the top of the screen in a smooth way, sometimes with effects. This article demonstrates how drop-down terminals like Yakuake, Tilda, Guake, and a GNOME extension help improve and speed up daily tasks.

Yakuake

Yakuake is a drop-down terminal emulator based on KDE Konsole technology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:

  • Smoothly rolls down from the top of your screen
  • Tabbed interface
  • Configurable dimensions and animation speed
  • Skinnable
  • Sophisticated D-Bus interface
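
That last item means you can script Yakuake from the shell. Here is a minimal sketch using the qdbus tool, assuming Yakuake is already running; the object paths and method names below reflect Yakuake's published org.kde.yakuake interface, so confirm them on your system with qdbus org.kde.yakuake:

# Toggle the terminal open/closed, like pressing F12
$ qdbus org.kde.yakuake /yakuake/window org.kde.yakuake.toggleWindowState

# Run a command in the active session
$ qdbus org.kde.yakuake /yakuake/sessions org.kde.yakuake.runCommand "ls -l"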

To install Yakuake, use the following command:

$ sudo dnf install -y yakuake

Startup and configuration

If you’re running KDE, open the System Settings and go to Startup and Shutdown. Add yakuake to the list of programs under Autostart, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:

$ yakuake &

The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select Configure Yakuake… to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful settings if you use two or more monitors: Open on screen: At mouse location.

Using Yakuake

The main shortcuts are:

  • F12 = Open/Retract Yakuake
  • Ctrl+F11 = Full Screen Mode
  • Ctrl+) = Split Top/Bottom
  • Ctrl+( = Split Left/Right
  • Ctrl+Shift+T = New Session
  • Shift+Right = Next Session
  • Shift+Left = Previous Session
  • Ctrl+Alt+S = Rename Session

Below is an example of Yakuake being used to split the session like a terminal multiplexer. Using this feature, you can run several shells in one session.

Tilda

Tilda is a drop-down terminal comparable to other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.

It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.

To install Tilda, run this command:

$ sudo dnf install -y tilda

Startup and configuration

Most users prefer to have a drop-down terminal available behind the scenes when they log in. To set this option, first go to the app launcher in your desktop, search for Tilda, and open it.

Next, open up the Tilda Config window. Select Start Tilda hidden, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to System Settings > Startup and Shutdown > Autostart and use Add a Program.

If you’re using GNOME, you can run this command in a terminal:

$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/

When you run Tilda for the first time, a wizard shows up to set your preferences. If you need to change something later, right click and go to Preferences in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:

$ tilda -C

Every time you use the above command, Tilda creates a new config file located in the ~/.config/tilda/ folder called config_0, config_1, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.
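
For example, assuming Tilda supports the -g (--config-file) option to point at an alternative config file, a second profile created with tilda -C could be launched like this sketch:

# Launch an extra Tilda instance against the second profile (path assumed)
$ tilda -g ~/.config/tilda/config_1 &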

Using Tilda

The main shortcuts are:

  • F1 = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, with a different open/retract shortcut like F1, F2, F3, and so on)
  • F11 = Full Screen Mode
  • F12 = Toggle Transparency
  • Ctrl+Shift+T = Add Tab
  • Ctrl+Page Up = Go to Next Tab
  • Ctrl+Page Down = Go to Previous Tab

GNOME Extension

The Drop-down Terminal GNOME Extension lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.

Installation

Open a browser and go to the site for this GNOME extension. Enable the extension setting to On, as shown here:

Then select Install to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you login to GNOME!

Configuration

After install, the Drop Down Terminal configuration window opens to set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need to change some preferences in the future, run the gnome-shell-extension-prefs command and choose Drop Down Terminal.

Using the extension

The shortcuts are simple:

  • ` (usually the key above Tab) = Open/Retract Terminal
  • F12 (customize as you prefer) = Open/Retract Terminal

Trace code in Fedora with bpftrace

bpftrace is a new eBPF-based tracing tool that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.

This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.

eBPF (extended Berkeley Packet Filter)

eBPF is a tiny virtual machine, or more precisely a virtual CPU, in the Linux kernel. It can load and run small programs in a safe and controlled way in kernel space, which makes it safer to use, even on production systems. The virtual machine has its own instruction set architecture (ISA) resembling a subset of modern processor architectures, which makes it easy to translate those programs to real hardware. The kernel performs just-in-time translation to native code for the main architectures to improve performance.

The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.

eBPF machine language is very powerful, but writing code directly in it is extremely painful, because it’s a low-level language. This is where bpftrace comes in. It provides a high-level language for writing eBPF tracing scripts. The tool translates these scripts to eBPF with the help of the clang/LLVM libraries, and then attaches them to the specified code points.

Installation and quick start

To install bpftrace, run the following command in a terminal using sudo:

$ sudo dnf install bpftrace

Try it out with a “hello world” example:

$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'

Note that you must run bpftrace as root due to the privileges required. Use the -e option to specify a program, and to construct the so-called “one-liners.” This example only prints hello world, and then waits for you to press Ctrl+C.

BEGIN is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces { } fires whenever the probe is hit — in this case, it’s just a printf.

Let’s jump now to a more useful example:

$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'

This example prints the parent process name (comm) and the name of every new process being created in the system. t:syscalls:sys_enter_execve is a kernel tracepoint. It’s a shorthand for tracepoint:syscalls:sys_enter_execve, but both forms can be used. The next section shows you how to list all available tracepoints.

comm is a bpftrace builtin that represents the process name. filename is a field of the t:syscalls:sys_enter_execve tracepoint. You can access these fields through the args builtin.

All available fields of the tracepoint can be listed with this command:

$ sudo bpftrace -lv "t:syscalls:sys_enter_execve"

Example usage

Listing probes

A central concept in bpftrace is the probe point. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fit into the following categories:

  • kprobe – kernel function start
  • kretprobe – kernel function return
  • uprobe – user-level function start
  • uretprobe – user-level function return
  • tracepoint – kernel static tracepoints
  • usdt – user-level static tracepoints
  • profile – timed sampling
  • interval – timed output
  • software – kernel software events
  • hardware – processor-level events

All available kprobe/kretprobe, tracepoints, software and hardware probes can be listed with this command:

$ sudo bpftrace -l

The uprobe/uretprobe and usdt probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.

The profile and interval probes fire at fixed time intervals. They are not covered in this article.

Counting system calls

Maps are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:

$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'

Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma separated list. In this example, the action block attaches to all tracepoints whose name starts with t:syscalls:sys_enter_, which means all available syscalls.

The bpftrace builtin function count() counts the number of times the probe fires. @[] represents a map (an associative array). The key of this map is probe, another bpftrace builtin that holds the full probe name.

Here, the same action block is attached to every syscall. Each time a syscall is called, the map entry for that syscall is incremented. When the program terminates, it automatically prints out all declared maps.

This example counts syscalls globally. It’s also possible to filter for a specific process by PID, using the bpftrace filter syntax:

$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'

Write bytes by process

Using these concepts, let’s analyze how many bytes each process is writing:

$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'

bpftrace attaches the action block to the write syscall return probe (t:syscalls:sys_exit_write). Then, it uses a filter to discard the negative values, which are error codes (/args->ret > 0/).

The map key comm represents the name of the process that called the syscall. The sum() builtin function accumulates the number of bytes written for each map entry, or process. args is a bpftrace builtin to access a tracepoint’s arguments and return values. Finally, if successful, the write syscall returns the number of written bytes; args->ret provides access to that number.

Read size distribution by process (histogram)

bpftrace supports the creation of histograms. Let’s analyze one example that creates a histogram of the read size distribution by process:

$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

Histograms are BPF maps, so they must always be assigned to a map (@). In this example, the map key is comm.

The example makes bpftrace generate one histogram for every process that calls the read syscall. To generate just one global histogram, assign the hist() function to ‘@’ alone (without any key).

bpftrace automatically prints out declared histograms when the program terminates. The value used as base for the histogram creation is the number of read bytes, found through args->ret.

Tracing userspace programs

You can also trace userspace programs with uprobes/uretprobes and USDT (User-level Statically Defined Tracing). The next example uses a uretprobe, which probes the end of a user-level function. It gets the command lines issued in every bash running on the system:

$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'

To list all available uprobes/uretprobes of the bash executable, run this command:

$ sudo bpftrace -l "uprobe:/bin/bash"

uprobe instruments the beginning of a user-level function’s execution, and uretprobe instruments the end (its return). readline() is a function of /bin/bash, and it returns the typed command line. retval is the return value for the instrumented function, and can only be accessed on uretprobe.

When using uprobes, you can access arguments with arg0..argN. A str() call is necessary to turn the char * pointer into a string.
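
As a small sketch of arguments in action, the following one-liner instruments the entry of the same readline() function. Its first argument is the prompt string, assumed here to be accessible through arg0:

$ sudo bpftrace -e 'uprobe:/bin/bash:readline { printf("prompt: \"%s\"\n", str(arg0)); }'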

Shipped Scripts

There are many useful scripts shipped with the bpftrace package. You can find them in the /usr/share/bpftrace/tools/ directory.

Among them, you can find:

  • killsnoop.bt – Trace signals issued by the kill() syscall.
  • tcpconnect.bt – Trace all TCP network connections.
  • pidpersec.bt – Count new processes (via fork) per second.
  • opensnoop.bt – Trace open() syscalls.
  • vfsstat.bt – Count some VFS calls, with per-second summaries.

You can directly use the scripts. For example:

$ sudo /usr/share/bpftrace/tools/killsnoop.bt

You can also study these scripts as you create new tools.

Photo by Roman Romashov on Unsplash.

post

How to run virtual machines with virt-manager

In the beginning there was dual boot. It was the only way to have more than one operating system on the same laptop. At the time, it was difficult for these operating systems to run simultaneously or interact with each other. Many years passed before it was possible, on common PCs, to run one operating system inside another through virtualization.

Recent PCs or laptops, including moderately-priced ones, have the hardware features to run virtual machines with performance close to the physical host machine.

Virtualization has therefore become normal: to test operating systems, as a playground for learning new techniques, to create your own home cloud, to create your own test environment, and much more. This article walks you through using Virt Manager on Fedora to set up virtual machines.

Introducing QEMU/KVM and Libvirt

Fedora, like all other Linux systems, comes with native support for virtualization extensions. This support is given by KVM (Kernel-based Virtual Machine), currently available as a kernel module.

QEMU is a complete system emulator that works together with KVM and allows you to create virtual machines with hardware and peripherals.

Finally, libvirt is the API layer that allows you to administer the infrastructure, i.e. create and run virtual machines.

The set of these three technologies, all open source, is what we’re going to install on our Fedora Workstation.

Installation

Step 1: install packages

Installation is a fairly simple operation. The Fedora repository provides the “virtualization” package group that contains everything you need.

 
sudo dnf install @virtualization

Step 2: edit the libvirtd configuration

By default, system administration is limited to the root user. If you want to enable a regular user, proceed as follows.

Open the /etc/libvirt/libvirtd.conf file for editing

 
sudo vi /etc/libvirt/libvirtd.conf

Set the domain socket group ownership to libvirt

 
unix_sock_group = "libvirt"

Adjust the UNIX socket permissions for the R/W socket

 
unix_sock_rw_perms = "0770"

Step 3: start and enable the libvirtd service

 
sudo systemctl start libvirtd
sudo systemctl enable libvirtd

Step 4: add user to group

In order to administer libvirt as a regular user, you must add the user to the libvirt group; otherwise, every time you start virt-manager you will be asked for the sudo password.

 
sudo usermod -a -G libvirt $(whoami)

This adds the current user to the group. You must log out and log in to apply the changes.

Getting started with virt-manager

The libvirt system can be managed either from the command line (virsh) or via the virt-manager graphical interface. The command line can be very useful if you want to do automated provisioning of virtual machines, for example with Ansible, but in this article we will concentrate on the user-friendly graphical interface.
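
For reference, here is a sketch of a few common virsh equivalents, assuming a machine named f29 has already been defined on the system connection:

$ virsh -c qemu:///system list --all      # list all defined virtual machines
$ virsh -c qemu:///system start f29       # start the machine named f29
$ virsh -c qemu:///system shutdown f29    # request a clean guest shutdown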

The virt-manager interface is simple. The main form shows the list of connections including the local system connection.

The connection settings include virtual networks and storage definitions. It is possible to define multiple virtual networks, and these networks can be used for communication between guest systems, and between guest systems and the host.

Creating your first virtual machine

To start creating a new virtual machine, press the button at the top left of the main form:

The first step of the wizard asks for the installation mode. You can choose between local installation media, network boot/installation, or importing an existing virtual disk:

If you choose local installation media, the next step asks for the ISO image path:

The subsequent two steps allow you to size the CPU, memory, and disk of the new virtual machine. The last step asks you to choose network preferences: choose the default network if you want the virtual machine to be separated from the outside world by NAT, or bridged if you want it to be reachable from the outside. Note that if you choose bridged, the virtual machine cannot communicate with the host machine.

Check “Customize configuration before install” if you want to review or change the configuration before starting the setup:

The virtual machine configuration form allows you to review and modify the hardware configuration. You can add disks, network interfaces, change boot options and so on. Press “Begin installation” when satisfied:

At this point you are redirected to the console, where you proceed with the installation of the operating system. Once the operation is complete, you have a working virtual machine that you can access from the console:

The virtual machine just created appears in the list on the main form, where you also get a graph of CPU and memory usage:

libvirt and virt-manager are powerful tools that allow great customization of your virtual machines, with enterprise-level management. If you’d like something even simpler, note that Fedora Workstation comes with GNOME Boxes pre-installed, which can be sufficient for basic virtualization needs.

post

Jupyter and data science in Fedora

In the past, kings and leaders used oracles and magicians to help them predict the future — or at least get some good advice due to their supposed power to perceive hidden information. Nowadays, we live in a society obsessed with quantifying everything. So we have data scientists to do this job.

Data scientists use statistical models, numerical techniques, and advanced algorithms that didn’t come from statistical disciplines, along with the data that exists in databases, to find, infer, and predict data that doesn’t exist yet. Sometimes this data is about the future. That is why we do a lot of predictive analytics and prescriptive analytics.

Here are some questions to which data scientists help find answers:

  1. Which students have a high propensity to abandon the class? For each one, what are the reasons for leaving?
  2. Which houses have a price above or below the fair price? What is the fair price for a certain house?
  3. What are the hidden groups that my clients classify themselves into?
  4. Which future problems will this premature child develop?
  5. How many calls will I get in my call center tomorrow at 11:43 AM?
  6. Should my bank lend money to this customer?

Note how the answers to all these questions are not sitting in any database waiting to be queried. This is all data that doesn’t exist yet and has to be calculated. That is part of the job we data scientists do.

Throughout this article you’ll learn how to prepare a Fedora system as a Data Scientist’s development environment and also a production system. Most of the basic software is RPM-packaged, but the most advanced parts can only be installed, nowadays, with Python’s pip tool.

Jupyter — the IDE

Most modern data scientists use Python. An important part of their work is EDA (exploratory data analysis). EDA is a manual and interactive process that retrieves data, explores its features, searches for correlations, uses plotted graphics to visualize and understand how the data is shaped, and prototypes predictive models.

Jupyter is a web application perfect for this task. Jupyter works with Notebooks, documents that mix rich text including beautifully rendered math formulas (thanks to mathjax), blocks of code and code output, including graphics.

Notebook files have the extension .ipynb, which means Interactive Python Notebook.

Setting up and running Jupyter

First, install essential packages for Jupyter (using sudo):

$ sudo dnf install python3-notebook mathjax sscg

You might want to install additional and optional Python modules commonly used by data scientists:

$ sudo dnf install python3-seaborn python3-lxml python3-basemap python3-scikit-image python3-scikit-learn python3-sympy python3-dask+dataframe python3-nltk

Set a password to log into the Notebook web interface and avoid those long tokens. Run the following commands anywhere in your terminal:

$ mkdir -p $HOME/.jupyter
$ jupyter notebook password

Now, type a password for yourself. This will create the file $HOME/.jupyter/jupyter_notebook_config.json with your encrypted password.

Next, prepare for SSL by generating a self-signed HTTPS certificate for Jupyter’s web server:

$ cd $HOME/.jupyter; sscg

Finish configuring Jupyter by editing your $HOME/.jupyter/jupyter_notebook_config.json file. Make it look like this:

{
  "NotebookApp": {
    "password": "sha1:abf58...87b",
    "ip": "*",
    "allow_origin": "*",
    "allow_remote_access": true,
    "open_browser": false,
    "websocket_compression_options": {},
    "certfile": "/home/aviram/.jupyter/service.pem",
    "keyfile": "/home/aviram/.jupyter/service-key.pem",
    "notebook_dir": "/home/aviram/Notebooks"
  }
}

Change the certfile, keyfile, and notebook_dir entries to match your own folders. The password entry was already there after you created your password, and certfile and keyfile point to the crypto-related files generated by sscg.

Create a folder for your notebook files, as configured in the notebook_dir setting above:

$ mkdir $HOME/Notebooks

Now you are all set. Just run Jupyter Notebook from anywhere on your system by typing:

$ jupyter notebook

Or add this line to your $HOME/.bashrc file to create a shortcut command called jn:

alias jn='jupyter notebook'

After running the command jn, access https://your-fedora-host.com:8888 from any browser on the network to see the Jupyter user interface. You’ll need to use the password you set up earlier. Start typing some Python code and markup text. This is how it looks:

Jupyter with a simple notebook

In addition to the IPython environment, you’ll also get a web-based Unix terminal provided by terminado. Some people find this useful, while others find it insecure. You can disable this feature in the config file.
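
If you’d rather turn the terminal off, one way is the NotebookApp.terminals_enabled option. As a sketch, you can also pass it on the command line instead of editing the config file:

$ jupyter notebook --NotebookApp.terminals_enabled=False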

JupyterLab — the next generation of Jupyter

JupyterLab is the next generation of Jupyter, with a better interface and more control over your workspace. It’s not RPM-packaged for Fedora at the time of writing, but you can use pip to install it easily:

$ pip3 install jupyterlab --user
$ jupyter serverextension enable --py jupyterlab

Then run your regular jupyter notebook command or the jn alias. JupyterLab will be accessible from https://your-linux-host.com:8888/lab.

Tools used by data scientists

In this section you can get to know some of these tools, and how to install them. Unless noted otherwise, the module is already packaged for Fedora and was installed as a prerequisite for the previous components.

Numpy

Numpy is an advanced and C-optimized math library designed to work with large in-memory datasets. It provides advanced multidimensional matrix support and operations, including math functions such as log() and exp(), trigonometric functions, and more.

Pandas

In this author’s opinion, Python is THE platform for data science, mostly because of Pandas. Built on top of numpy, Pandas makes the work of preparing and displaying data easy. You can think of it as a no-UI spreadsheet, but one ready to work with much larger datasets. Pandas helps with data retrieval from a SQL database, CSV or other types of files, column and row manipulation, data filtering and, to some extent, data visualization with matplotlib.

Matplotlib

Matplotlib is a library to plot 2D and 3D data. It has great support for notations in graphics, labels, and overlays.

matplotlib pair of graphics showing a cost function searching its optimal value through a gradient descent algorithm

Seaborn

Built on top of matplotlib, Seaborn’s graphics are optimized for a more statistical comprehension of data. It automatically displays regression lines or Gaussian curve approximations of plotted data.

Linear regression visualized with Seaborn

StatsModels

StatsModels provides algorithms for statistical and econometrics data analysis, such as linear and logistic regression. StatsModels is also home to the classical family of time series algorithms known as ARIMA.

Normalized number of passengers across time (blue) and ARIMA-predicted number of passengers (red)

Scikit-learn

The central piece of the machine-learning ecosystem, scikit-learn provides predictor algorithms for regression (Elasticnet, Gradient Boosting, Random Forest etc.) and classification and clustering (K-means, DBSCAN etc.). It features a very well designed API. Scikit-learn also has classes for advanced data manipulation, splitting datasets into train and test parts, dimensionality reduction, and data pipeline preparation.

XGBoost

XGBoost is one of the most advanced regressors and classifiers used nowadays. It’s not part of scikit-learn, but it adheres to scikit’s API. XGBoost is not packaged for Fedora, and should be installed with pip. XGBoost can be accelerated with your NVIDIA GPU, but not through its pip package; for that, you must compile it yourself against CUDA. Get it with:

$ pip3 install xgboost --user

Imbalanced Learn

imbalanced-learn provides ways to under-sample and over-sample data. It is useful in fraud detection scenarios where known fraud data is very small when compared to non-fraud data. In these cases, data augmentation is needed for the known fraud data, to make it more relevant for training predictors. Install it with pip:

$ pip3 install imblearn --user

NLTK

The Natural Language Toolkit, or NLTK, helps you work with human language data for purposes such as building chatbots (just to cite an example).

SHAP

Machine learning algorithms are very good at predicting, but aren’t good at explaining why they made a prediction. SHAP solves that by analyzing trained models.

Where SHAP fits into the data analysis process

Install it with pip:

$ pip3 install shap --user

Keras

Keras is a library for deep learning and neural networks. Install it with pip:

$ sudo dnf install python3-h5py
$ pip3 install keras --user

TensorFlow

TensorFlow is a popular library for building neural networks. Install it with pip:

$ pip3 install tensorflow --user

Photo courtesy of FolsomNatural on Flickr (CC BY-SA 2.0).

post

Building Flatpak apps in Gnome Builder on Fedora Silverblue

If you are developing software using Fedora Silverblue, and especially if what you are developing is a Gnome application, Gnome Builder 3.30.3 feels like an obvious choice of IDE.

In this article, I will show you how you can create a simple Gnome application, and how to build it and install it as a Flatpak app on your system.

Gnome and Flatpak applications

Builder has been a part of Gnome for a long time. To me, it is a very mature IDE in terms of consistency and completeness.

The Gnome Builder project website offers extensive documentation regarding Gnome application development — I highly recommend spending some time there to anyone interested.

Editor’s note: Getting Builder

Because the initial Fedora Silverblue installation doesn’t include Builder, let’s walk through the installation process first.

Starting with a freshly installed system, the first thing you’ll need to do is enable a repository providing Builder as a Flatpak. We’ll use Flathub, a popular 3rd-party repository with many desktop apps.

To enable Flathub on your system, download the repository file from the Fedora Quick Setup page, and double-click it. That opens Gnome Software, asking you to enable this repository on your system.

After you’re done with that, you can search for Builder in Gnome Software and install it.
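
If you prefer the command line, the same two steps look roughly like this sketch, assuming the standard Flathub repository URL and Builder’s Flatpak application ID, org.gnome.Builder:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.gnome.Builder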

Creating a new project

So let’s walk through the creation of a new project for our Gnome app. When you start Gnome Builder, the first display is oriented towards project management.

To create a new project, I clicked on the New… button at the top-left corner which showed me the following view.

You’ll need to fill out the project name, choose your preferred language (I chose C, but other languages will work for this example as well), and the license. Leave the version control on, and select Gnome Application as your template.

I chose gbfprtfsb as the name of my project, which stands for Hello from Gnome 3 on Fedora SilverBlue.

The IDE creates and opens the project once you press create.

Tweaking our new project

The newly created project is opened in the Builder IDE and on my system looks like the following.

This project could be run from within the IDE right now, and would give you the ever-popular “Hello World!” titled Gnome windowed application, with a label that says, yup, “Hello World!”.

Let’s get a little disruptive and mess up the title and greeting a bit. Complacency leads to mediocrity, which leads to entropy overcoming chaos to enforce order, stasis, then finally it all just comes to a halt. It’s therefore our duty to shake things up at every opportunity, if only to knock out any latent entropy that may have accumulated in our systems. Towards such lofty goals, we only need to change two lines of one file, and the file isn’t even a C language file: it’s an XML file used to describe the GUI, named gbfprtfsb-window.ui. All we have to do is open it, edit the title and label text, save, and then build our masterpiece!

Looking at the screenshot below, I have circled the text we are going to replace. The window is a GtkApplicationWindow, and uses a GtkHeaderBar and GtkLabel to display the text we are changing. In the GtkHeaderBar we will type GBFPRTFSB for the title property. In the GtkLabel we will type Hello from Gnome 3 on Fedora SilverBlue in the label property. Now save the file to record our changes.

Building the project

Well, we have made our changes, and expressed our individualism (cough) at the same time. All that is left is to build it and see what it looks like. The build panel is located near the top of the IDE, middle right, and is represented by an icon of a brick wall being built, as shown in the following picture.

Press the button, and the build process completes. You can also preview your application by clicking on the “play” button next to it.

Building a Flatpak

When we’re happy with our creation, the next step will be building it as a Flatpak. To do that, click on the title in the middle of the top bar, and then on the Export Bundle button.

Once the export has successfully completed, Gnome Builder will open a Nautilus file browser window showing the export directory, with the Flatpak bundle already selected.

To install the app on your system, simply double-click the icon, which opens Gnome Software and lets you install the app. On my system I had to enter my user password twice, which I take to be due to the fact that we had no GPG key configured for the project. After it was installed, the application was shown alongside all of the other applications on my system. It can be seen running below.
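
Installing the bundle also works from a terminal. A sketch, assuming the exported bundle is named gbfprtfsb.flatpak (the actual file name comes from your project):

$ flatpak install --user ./gbfprtfsb.flatpak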

I think this has successfully shown how easy it is to deploy a Gnome application as a Flatpak bundle using Builder, and then run it on Fedora Silverblue.

post

Fedora 29 on ARM on AWS

This week Amazon announced their new A1 arm64 EC2 instances powered by their arm64-based Graviton processors and, with a minor delay, the shiny new Fedora 29 for aarch64 (arm64) is now available to run there too!

Details on getting up and running on AWS are in this good article on using AWS tools on Fedora. Overall, using Fedora on an AWS arm64 EC2 instance is the same as on x86_64.

So while a new architecture on AWS is very exciting, it’s at the same time old and boring! You’ll get the same kernel version, the same features like SELinux, and the same toolchain stacks, like the latest gcc, golang, and rust, in Fedora 29, just like on all other architectures. You’ll also get all the usual container tools like podman, buildah, skopeo, and kubernetes, and orchestration tools like ansible. Basically, if you’re using Fedora on AWS, you should be able to use it in the same way on arm64.

Getting started

The initial launch of A1 aarch64 instances is available in the following four regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland). Direct links to launch the Fedora aarch64 AMIs are available on the Fedora Cloud site.
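
If you prefer the AWS command line interface over the web console, launching an instance looks roughly like this sketch. The AMI ID and key pair name are placeholders you must replace with real values for your region; a1.large is one of the new instance types:

$ aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type a1.large \
    --key-name my-key-pair \
    --count 1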

Getting help

Fedora support for aarch64 is very robust. It’s been widely used and tested across a number of platforms, but of course new users and new use cases will turn up issues that we’ve yet to encounter. So what is the best way to get help? If you’re seeing a crash in a particular application, it should be reported in the usual way through RH Bugzilla; we have an ARMTracker tracker alias to block against to help identify Arm issues. For assistance with Arm-specific queries and issues, there’s the Fedora Arm mailing list, and we have the #fedora-arm IRC channel on Freenode.

Known issues

We have one known issue: the instance takes a while to get started, up to 5 minutes. This is due to entropy, and has been a general problem in virtual environments across all architectures. We’re working to speed this up, and it should be fixed soon. Once things are up and running, though, everything runs as expected.

Upcoming features

Fedora 29 Atomic Host will be coming in the next Two Week Atomic release. We unfortunately missed their release this time by a small window, but it’ll be available in about two weeks with their next release, and will appear on the site once released. We can’t let you have all the fun at once 😉

post

VS Code Live Share plugin

Contributing to Open Source projects means collaborating with people around the world, traditionally via email or instant messages. But with the rise of extreme programming practices like pair programming, being able to remotely share a code editor is a great feature. VS Code has a plugin, Live Share, that does just that.

Getting Started with Live Share

If you do not have VS Code already installed, you can read our previous article to get started.

Using Visual Studio Code on Fedora

Installing the Live Share plugin

First let’s install the dependencies the Live Share plugin requires.

$ sudo dnf install openssl-libs krb5-libs libicu zlib gnome-keyring libsecret desktop-file-utils xorg-x11-utils

Once the installation completes, the Live Share plugin is ready to be installed from the Extensions marketplace.

More details on the role of each dependency can be found in the Documentation.

Finally reload VS Code to activate the new plugin.

Start a Collaboration Session

Starting a new collaboration session requires a project to be open in VS Code, so let’s clone the following git repository and open it:

$ git clone https://github.com/cverna/rss_feed_notifier.git

This repository contains the source code of the following Magazine article.

Never miss a Magazine article — build your own RSS notification system

In VS Code, use the [Ctrl+K, Ctrl+O] combination to select and open the git repository.

Then from the Live Share extension menu start a collaboration session.

Live Share requires its users to sign in so that it can display the identity of the session participants. You can easily sign in using a GitHub account, for example.

Ready to collaborate

To start collaborating, Live Share provides a link that can be shared with others. You can copy this link to your clipboard by clicking on the Invite participants … text.

Once one or more participants have joined the session, you can start collaborating.

Sharing Code

This is where Live Share really shines, as it clearly shows which participant is currently editing the file.

It is also possible to follow a participant. This opens in your editor all the files that the participant you are following opens.

Sharing Resources

Another feature of Live Share is sharing resources like a server, a terminal, or even a debugger session. For example, it is very easy to share a terminal session to help a participant set up a development environment; another example would be to share a database server and inspect the contents of a table.

Connectivity

By default, the Live Share connection automatically checks whether the host machine and the guest machine can communicate directly; if not, Live Share uses a relay hosted in the Azure cloud.

It is possible to configure Live Share to use only a direct connection between the host and the guest. For that, the host machine needs to open a port in the 5990-5999 range and accept inbound local network connections. The guest needs a network route and outbound access to the host on the same port.
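
On Fedora, opening that port range on the host is a firewalld rule away. A sketch using firewall-cmd:

$ sudo firewall-cmd --add-port=5990-5999/tcp              # runtime only
$ sudo firewall-cmd --permanent --add-port=5990-5999/tcp  # persist across reboots
$ sudo firewall-cmd --reload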

Finally set the connection mode to direct in the Live Share settings.

The documentation provides more details on the connectivity options.

Please note that the Live Share extension is available as a Public Preview. The extension’s source code is currently not open source, but this should change once the extension is generally available.

post

Design faster web pages, part 3: Font and CSS tweaks

Welcome back to this series of articles on designing faster web pages. Part 1 and part 2 of this series covered how to lose browser fat through optimizing and replacing images. This part looks at how to lose additional fat in CSS (Cascading Style Sheets) and fonts.

Tweaking CSS

First things first: let’s look at where the problem originates. CSS was once a huge step forward. You can use it to style several pages from a central style sheet. Nowadays, many web developers use frameworks like Bootstrap.

While these frameworks are certainly helpful, many people simply copy and paste the whole framework. Bootstrap is huge; the “minimal” version of 4.0 is currently 144.9 KB. Perhaps in the era of terabytes of data, this isn’t much. But as they say, even small cattle makes a mess.

Look back at the getfedora.org example. Recall in part 1, the first analysis showed the CSS files used nearly ten times more space than the HTML itself. Here’s a display of the stylesheets used:

That’s nine different stylesheets. Many of the styles in them are unused on this page.

Remove, merge, and compress/minify

The font-awesome CSS is an extreme example of included but unused styles. Only three glyphs of the font are used on the page. In terms of kilobytes, the font-awesome CSS used at getfedora.org is originally 25.2 KB. After cleaning out all unused styles, it’s only 1.3 KB, about 4% of its original size! For the Bootstrap CSS, the difference is 118.3 KB originally, and 13.2 KB after removing unused styles.

The next question is: must there be a bootstrap.css and a font-awesome.css? Or can they be combined? Yes, they can. That doesn’t save much file space, but the browser now requests fewer files to successfully render the page.

Finally, after merging the CSS files, try to remove unused styles and minify them. In this way, you save another 10.1 KB, for a final size of 4.3 KB.

Unfortunately, there’s no packaged “minifier” tool in Fedora’s repositories yet. However, there are hundreds of online services to do that for you. Or you can use CSS-HTML-JS Minify, which is written in Python, and therefore easy to install. There’s no packaged tool to purify CSS either, but there are web services like UnCSS.
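
As a sketch, installing and running that minifier could look like the following; the package and command names here are assumptions, so verify them against the version you install:

$ pip3 install css-html-js-minify --user
$ css-html-js-minify styles.css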

Font improvement

CSS3 came with something a lot of web developers like: they could define fonts the browser downloads in the background to render the page. Since then, a lot of web designers are very happy, especially after they discovered the usage of icon fonts for web design. Font sets like Font Awesome are quite popular today and widely used. Here’s the size of that content:

The current free version has 912 glyphs/icons. Sizes for the smallest set: TTF 30.9 KB, WOFF 14.7 KB, WOFF2 12.2 KB, SVG 107.2 KB, EOT 31.2 KB.

So the question is, do you need all the glyphs? In all probability, no. You can get rid of them with FontForge, but that’s a lot of work. You could also use Fontello. Use the public instance, or set up your own, as it’s free software and available on GitHub.

The downside of such customized font sets is that you must host the font yourself. You can’t use other online font services to provide updates. But this may not really be a downside, compared to faster performance.

Conclusion

Now you’ve done everything you can to the content itself, to minimize what the browser loads and interprets. From now on, only tricks with the administration of the server can help.

One easy step, which many people get wrong, is deciding on intelligent caching. For instance, a CSS or picture file can be cached for a week. Whatever you do, if you use a proxy service like Cloudflare or build your own proxy, minimize the pages first. Users like fast loading pages. They’ll (silently) thank you for it, and the server will have a smaller load, too.

post

Design faster web pages, part 2: Image replacement

Welcome back to this series on building faster web pages. The last article talked about what you can achieve just through image compression. The example started with 1.2 MB of browser fat, and reduced it down to a weight of 488.9 KB. That’s still not fast enough! This article continues the browser diet to lose more fat. You might think that partway through this process things look a bit crazy, but once finished, you’ll understand why.

Preparation

Once again this article starts with an analysis of the web pages. Use the built-in screenshot function of Firefox to make a screenshot of the entire page. You’ll also want to install Inkscape using sudo:

$ sudo dnf install inkscape

If you want to know how to use Inkscape, there are already several articles in Fedora Magazine. This article will only explain some basic tasks for optimizing an SVG for web use.

Analysis

Once again, this example uses the getfedora.org web page.

Getfedora page with graphics marked

This analysis is better done graphically, which is why it starts with a screenshot. The screenshot above marks all the graphical elements of the page. In two cases (or rather, four), the Fedora websites team already used measures to replace images: the icons for social media are glyphs from a font, and the language selector is an SVG.

There are several options for replacing images:

HTML5 Canvas

Briefly, HTML5 Canvas is an HTML element that allows you to draw with the help of scripts, mostly JavaScript, although it’s not widely used yet. As you draw with the help of scripts, the element can also be animated. Some examples of what you can achieve with HTML Canvas include this triangle pattern, animated wave, and text animation. In this case, though, it seems not to be the right choice.

CSS3

With Cascading Style Sheets you can draw shapes and even animate them. CSS is often used for drawing elements like buttons. However, more complicated graphics via CSS are usually only seen in technical demonstration pages. This is because graphics are still better done visually than with code.

Fonts

The usage of fonts for styling web pages is another way, and Fontawesome is quite popular. For instance, you could replace the Flavor and the Spin icons with a font in this example. There is a negative side to using this method, which will be covered in the next part of this series, but it can be done easily.

SVG

This graphics format has existed for a long time and was always supposed to be used in the browser. For a long time not all browsers supported it, but that’s history. So the best way to replace pictures in this example is with SVG.

Optimizing SVG for the web

Optimizing an SVG for internet use requires several steps.

SVG is an XML dialect. Components like circle, rectangle, or text paths are described with nodes. Each node is an XML element. To keep the code clean, an SVG should use as few nodes as possible.

The SVG example is a circular icon with a coffee mug on it. You have 3 options to describe it with SVG.

Circle element with the mug on top

<circle style="opacity:1;fill:#717d82;fill-opacity:1;stroke:none;stroke-width:9.51950836;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:markers fill stroke" id="path36" cx="68.414307" cy="130.71523" r="3.7620001" />

Circular path with the mug on top

<path style="opacity:1;fill:#717d82;fill-opacity:1;stroke:none;stroke-width:1.60968435;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:markers fill stroke" d="m 68.414044,126.95318 a 3.7618673,3.7618673 0 0 0 -3.76153,3.76204 3.7618673,3.7618673 0 0 0 3.76153,3.76205 3.7618673,3.7618673 0 0 0 3.76206,-3.76205 3.7618673,3.7618673 0 0 0 -3.76206,-3.76204 z" id="path20" />

Single path

<path style="opacity:1;fill:#717d82;fill-opacity:1;stroke:none;stroke-width:1.60968435;stroke-linecap:round;stroke-linejoin:round;stroke-miterlimit:4;stroke-dasharray:none;stroke-dashoffset:0;stroke-opacity:1;paint-order:markers fill stroke" d="m 68.414044,126.95318 a 3.7618673,3.7618673 0 0 0 -3.76153,3.76204 3.7618673,3.7618673 0 0 0 3.76153,3.76205 3.7618673,3.7618673 0 0 0 3.76206,-3.76205 3.7618673,3.7618673 0 0 0 -3.76206,-3.76204 z m -1.21542,0.92656 h 2.40554 c 0.0913,0.21025 0.18256,0.42071 0.27387,0.63097 h 0.47284 v 0.60099 h -0.17984 l -0.1664,1.05989 h 0.24961 l -0.34779,1.96267 -0.21238,-0.003 -0.22326,1.41955 h -2.12492 l -0.22429,-1.41955 -0.22479,0.003 -0.34829,-1.96267 h 0.26304 l -0.16692,-1.05989 h -0.1669 v -0.60099 h 0.44752 c 0.0913,-0.21026 0.18206,-0.42072 0.27336,-0.63097 z m 0.12608,0.19068 c -0.0614,0.14155 -0.12351,0.28323 -0.185,0.42478 h 2.52336 c -0.0614,-0.14155 -0.12248,-0.28323 -0.18397,-0.42478 z m -0.65524,0.63097 v 0.21911 l 0.0594,5.2e-4 h 3.35844 l 0.0724,-5.2e-4 v -0.21911 z m 0.16846,0.41083 0.1669,1.05937 h 2.80603 l 0.16693,-1.05937 -1.57046,0.008 z m -0.061,1.25057 0.27956,1.5782 1.34411,-0.0145 1.34567,0.0145 0.28059,-1.5782 z m 1.62367,1.75441 -1.08519,0.0124 0.19325,1.2299 h 1.79835 l 0.19328,-1.2299 z" id="path2714" inkscape:connector-curvature="0" />

You can probably see that the code becomes more complex and needs more characters to describe it. More characters in a file result, of course, in a larger size.

Node cleaning

If you open an example SVG in Inkscape and press F2, that activates the Node tool. You should see something like this:

Inkscape – Node tool activated

There are 5 nodes that aren’t necessary in this example: the ones in the middle of the lines. To remove them, select them one by one with the activated Node tool and press the Del key. After this, select the nodes which define these lines, and make them corners again using the toolbar tool.

Inkscape – Node tool make node a corner

Without fixing the corners, the handles that define the curve get saved, increasing the file size. You have to do this node cleaning by hand, as it can’t be effectively automated. Now you’re ready for the next stage.

Use the Save as function and choose Optimized SVG. A dialogue window opens where you can select what to remove or keep.

Inkscape – Dialog window for save as optimized SVG

Even the little SVG in this example went down from 3.2 KB to 920 bytes, less than a third of its original size.

Back to the getfedora page: the grey voronoi pattern used in the background of the main section, after the optimization from Part 1 of this series, is down to 164.1 KB, versus the original 211.12 KB.

The original SVG it was exported from is 1.9 MB in size. After these SVG optimization steps, it’s only 500.4KB. Too big? Well, the current blue background is 564.98 KB in size. But there’s only a small difference between the SVG and the PNG.

Compressed files

$ ls -lh
total 928K
-rw-r--r--. 1 user user 161K 19. Feb 19:44 grey-pattern.png
-rw-rw-r--. 1 user user 160K 18. Feb 12:23 grey-pattern.png.gz
-rw-r--r--. 1 user user 489K 19. Feb 19:43 greyscale-pattern-opti.svg
-rw-rw-r--. 1 user user 112K 19. Feb 19:05 greyscale-pattern-opti.svg.gz

This is the output of a small test I did to visualize this topic. You can see that the raster graphic, the PNG, is already compressed and can’t be compressed any further. The opposite is true for the SVG, an XML file. It’s just text, and can be compressed to less than a fourth of its size. As a result, it is now around 50 KB smaller than the PNG.

Modern browsers can handle compressed files natively. Therefore, a lot of web servers have switched on mod_deflate (Apache) or gzip (nginx). That’s how we save space during delivery. Check whether it’s enabled on your server here.
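
You can also check from your own terminal whether a server delivers compressed files. A small sketch with curl (replace the URL with your own page); if compression is on, the output includes a line like content-encoding: gzip:

$ curl -sI -H 'Accept-Encoding: gzip' https://getfedora.org/ | grep -i '^content-encoding'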

Tooling for production

First of all, nobody wants to optimize SVG in Inkscape all the time. You can run Inkscape without a GUI in batch mode, but there’s no option to convert from Inkscape SVG to optimized SVG; you can only export raster graphics this way. But there are alternatives:

  • SVGO (which seems not to be actively developed)
  • Scour

This example will use scour for optimization. To install it:

$ sudo dnf install scour

To automatically optimize an SVG file, run scour similarly to this:

[user@localhost ]$ scour INPUT.svg OUTPUT.svg -p 3 --create-groups --renderer-workaround --strip-xml-prolog --remove-descriptive-elements --enable-comment-stripping --disable-embed-rasters --no-line-breaks --enable-id-stripping --shorten-ids

This is the end of part two, in which you learned how to replace raster images with SVG and how to optimize it for web use. Stay tuned to the Fedora Magazine for part three, coming soon.

post

Design faster web pages, part 1: Image compression

Lots of web developers want to achieve fast-loading web pages. As more page views come from mobile devices, making websites look better on smaller screens using responsive design is just one side of the coin. Browser Calories can make the difference in loading times, which satisfies not just users, but also search engines that rank on loading speed. This article series covers how to slim down your web pages with tools Fedora offers.

Preparation

Before you start to slim down your web pages, you need to identify the core issues. For this, you can use Browserdiet. It’s a browser add-on available for Firefox, Opera, Chrome, and other browsers. It analyzes the performance values of the currently open web page, so you know where to start slimming down.

Next you’ll need some pages to work on. The example screenshot shows a test of getfedora.org. At first glance it looks very simple and responsive.

Browser Diet – values of getfedora.org

However, BrowserDiet’s page analysis shows there are 1.8MB in files downloaded. Therefore, there’s some work to do!

Web optimization

There are over 281 KB of JavaScript files, 203 KB more in CSS files, and 1.2 MB in images. Start with the biggest issue — the images. The tool set you need for this is GIMP, ImageMagick, and optipng. You can easily install them using the following command:

sudo dnf install gimp imagemagick optipng

For example, take the following file which is 6.4 KB:

First, use the file command to get some basic information about this image:

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced

The image — which is only in grey and white — is saved in 8-bit/color RGBA mode. That’s not as efficient as it could be.

Start GIMP so you can set a more appropriate color mode. Open cinnamon.png in GIMP. Then go to Image>Mode and set it to greyscale. Export the image as PNG with compression factor 9. All other settings in the export dialog should be the default.

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced

The output shows the file’s now in 8-bit gray+alpha mode. The file size has shrunk from 6.4 KB to 2.8 KB. That’s already only 43.75% of the original size. But there’s more you can do!

You can also use the ImageMagick tool identify to provide more information about the image.

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000

This tells you the file is 2831 bytes. Jump back into GIMP, and export the file. In the export dialog, disable the storing of the time stamp and the alpha channel color values to reduce the size a little more. Now the file output shows:

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000

Next, use optipng to losslessly optimize your PNG images. There are other tools that do similar things, including advdef (which is part of advancecomp), pngquant and pngcrush.

Run optipng on your file. Note that this will replace your original:

$ optipng -o7 cinnamon.png
** Processing: cinnamon.png
60x60 pixels, 2x8 bits/pixel, grayscale+alpha
Reducing image to 8 bits/pixel, grayscale
Input IDAT size = 2720 bytes
Input file size = 2812 bytes
Trying:
  zc = 9  zm = 8  zs = 0  f = 0   IDAT size = 1922
  zc = 9  zm = 8  zs = 1  f = 0   IDAT size = 1920
Selecting parameters:
  zc = 9  zm = 8  zs = 1  f = 0   IDAT size = 1920
Output IDAT size = 1920 bytes (800 bytes decrease)
Output file size = 2012 bytes (800 bytes = 28.45% decrease)

The option -o7 is the slowest to process, but provides the best end results. You’ve knocked 800 more bytes off the file size, which is now 2012 bytes.

To optimize all of the PNGs in a directory, use this command:

$ optipng -o7 -dir=<directory> *.png

The -dir option lets you give a target directory for the output. If this option is not used, optipng overwrites the original images.

Choosing the right file format

When it comes to pictures for use on the internet, you have the choice between formats such as PNG, JPG, JPG-LS, JPG 2000, and aPNG.

JPG-LS and JPG 2000 are not widely used. Only a few digital cameras support these formats, so they can be ignored. aPNG is an animated PNG, and not widely used either.

You could save a few more bytes by changing the compression rate or choosing another file format. The first option you can’t do in GIMP, as it’s already using the highest compression rate. As there are no alpha channels in the picture, you can choose JPG as the file format instead. For now, use the default value of 90% quality. You could lower it to 85%, but then aliasing effects become visible. This saves a few more bytes:

$ identify cinnamon.jpg
cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000

This conversion to the right color space and the choice of JPG as the file format alone brought the file size down from 23 KB to 12.3 KB, a reduction of nearly 50%.

PNG vs. JPG: quality and compression rate

So what about the rest of the images? This method would work for all the other pictures, except the Fedora “flavor” logos and the logos for the four foundations. Those are presented on a white background.

One of the main differences between PNG and JPG is that JPG has no alpha channel, and therefore can’t handle transparency. If you rework these images using a JPG on a white background, you can reduce the file size from 40.7 KB to 28.3 KB.
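
ImageMagick can do that flattening in one step. A sketch, assuming a transparent PNG named logo.png:

$ convert logo.png -background white -flatten -quality 90 logo.jpg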

Now there are four more images you can rework: the backgrounds. For the grey background, set the mode to greyscale again. With this bigger picture, the savings are also bigger. It shrinks from 216.2 KB to 51.0 KB, now barely 25% of its original size. All in all, you’ve shrunk 481.1 KB down to 191.5 KB, only 39.8% of the starting size.

Quality vs. Quantity

Another difference between PNG and JPG is quality. PNG is a losslessly compressed raster graphics format, while JPG loses size through compression that affects quality. That doesn’t mean you shouldn’t use JPG, though. But you have to find a balance between file size and quality.

Achievement

This is the end of Part 1. After following the techniques described above, here are the results:

You brought the image size down to 488.9 KB, versus 1.2 MB at the start. That’s only about a third of the size, just through optimizing with optipng. This page can probably be made to load faster still. On the scale from snail to hypersonic, it hasn’t reached racing car speed yet!

Finally you can check the results in Google Insights, for example:

In the Mobile area the page gained 10 points in scoring, but is still in the Medium sector. It looks totally different for the Desktop, which went from 62/100 to 91/100 and up to Good. As mentioned before, this test isn’t the be-all and end-all. Consider scores such as these to help you go in the right direction. Keep in mind you’re optimizing for the user experience, and not for a search engine.