CodeReady Containers: complex solutions on OpenShift + Fedora

Want to experiment with (complex) solutions on OpenShift 4.1+? CodeReady Containers (CRC) on a physical Fedora server is a great choice. It lets you:

  • Configure the RAM available to CRC / OpenShift (this is key as we’ll deploy Machine Learning, Change Data Capture, Process Automation and other solutions with significant memory requirements)
  • Avoid installing anything on your laptop
  • Standardize (on Fedora 30) so that you get the same results every time

Start by installing CRC and Ansible Agnostic Deployer (AgnosticD) on a Fedora 30 physical server. Then, you’ll use AgnosticD to deploy Open Data Hub on the OpenShift 4.1 environment created by CRC. Let’s get started!

Set up CodeReady Containers

$ sudo dnf config-manager --set-enabled fedora
$ su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq libselinux-python'
$ sudo systemctl enable --now libvirtd

Let’s also add a user.

$ sudo adduser demouser
$ sudo passwd demouser
$ sudo usermod -aG wheel demouser

Download and extract CodeReady Containers:

$ su demouser
$ cd /home/demouser
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/1.0.0-beta.3/crc-linux-amd64.tar.xz
$ tar -xvf crc-linux-amd64.tar.xz
$ cd crc-linux-1.0.0-beta.3-amd64/
$ sudo cp ./crc /usr/bin

Set the memory available to CRC according to what you have on your physical server. For example, on a physical server with around 100GB of RAM you can allocate 80GB (81920 MiB) to CRC as follows:

$ crc config set memory 81920
$ crc setup

You’ll need your pull secret from https://cloud.redhat.com/openshift/install/metal/user-provisioned.

$ crc start

That’s it — you can now log in to your OpenShift environment:

$ eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443

Set up Ansible Agnostic Deployer

github.com/redhat-cop/agnosticd is a fully automated two-phase deployer. Let’s deploy it!

$ su demouser
$ cd /home/demouser
$ git clone https://github.com/redhat-cop/agnosticd.git
$ cd agnosticd/ansible
$ python -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ python3 -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ pip3 install kubernetes
$ pip3 install openshift
$ pip install kubernetes
$ pip install openshift

Set up Open Data Hub on Code Ready Containers

Open Data Hub is a machine-learning-as-a-service platform built on OpenShift and Kafka/Strimzi. It integrates a collection of open source projects.

First, create an Ansible inventory file with the following content.

$ cat inventory
127.0.0.1 ansible_connection=local

Set up the WORKLOAD environment variable so that Ansible Agnostic Deployer knows that we want to deploy Open Data Hub.

$ export WORKLOAD="ocp4-workload-open-data-hub"
$ sudo cp /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook

We are only deploying one Open Data Hub project, so set user_count to 1. To run a workshop for multiple students, increase user_count accordingly.

An OpenShift project (with Open Data Hub in our case) will be created for each student.

$ eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
$ ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml -e"ocp_workload=${WORKLOAD}" -e"ACTION=create" -e"user_count=1" -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"
$ oc project open-data-hub-user1
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jupyterhub jupyterhub-open-data-hub-user1.apps-crc.testing jupyterhub 8080-tcp edge/Redirect None

On your laptop, add jupyterhub-open-data-hub-user1.apps-crc.testing to your /etc/hosts file. For example:

127.0.0.1 localhost fedora30 console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing mapit-app-management.apps-crc.testing mapit-spring-pipeline-demo.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing

On your laptop:

$ sudo ssh marc@fedora30 -L 443:jupyterhub-open-data-hub-user1.apps-crc.testing:443

You can now browse to https://jupyterhub-open-data-hub-user1.apps-crc.testing.

Now that we have Open Data Hub ready, you can deploy something interesting on it. For example, you could deploy IBM’s Qiskit open source framework for quantum computing. For more information, refer to video no. 9 in this YouTube playlist and the accompanying GitHub repo.

You could also deploy plenty of other useful tools for Process Automation, Change Data Capture, Camel Integration, and 3scale API Management. You don’t have to wait for articles on these, though. Step-by-step short videos are already available on YouTube.

The corresponding step-by-step instructions are also on YouTube. You can also follow along with this article using the GitHub repo.


Photo by Marta Markes on Unsplash.


GNOME 3.34 released — coming soon in Fedora 31

Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

GNOME 3.34 desktop environment at work

Notable features

The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now it’s easier to find sources for your virtual machines and to boot from CD or DVD (ISO) images. The Express Install feature now also supports Windows versions.

Now that you can save states when using GNOME Games, gaming is more fun. You can snapshot your progress without getting in the way of the fun. You can even move snapshots to other devices running GNOME.

More details

These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.


How to set up a TFTP server on Fedora

TFTP, or Trivial File Transfer Protocol, allows users to transfer files between systems using the UDP protocol. By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do Fedora installations, or other diskless operations.

TFTP can only read and write files to or from a remote system. It doesn’t have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN).

TFTP server installation

The first thing you will need to do is install the TFTP client and server packages:

dnf install tftp-server tftp -y

This creates a tftp service and socket file for systemd under /usr/lib/systemd/system.

/usr/lib/systemd/system/tftp.service
/usr/lib/systemd/system/tftp.socket

Next, copy and rename these files to /etc/systemd/system:

cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket

Making local changes

You need to edit these files from the new location after you’ve copied and renamed them, to add some additional parameters. Here is what the tftp-server.service file initially looks like:

[Unit]
Description=Tftp Server
Requires=tftp.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
StandardInput=socket

[Install]
Also=tftp.socket

Make the following changes to the [Unit] section:

Requires=tftp-server.socket

Make the following changes to the ExecStart line:

ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot

Here are what the options mean:

  • The -c option allows new files to be created.
  • The -p option prevents in.tftpd from performing additional permission checks on top of the normal system-provided access controls.
  • The -s option serves files relative to the given directory (here /var/lib/tftpboot); it is recommended for security as well as for compatibility with boot ROMs that cannot easily include a directory name in their requests.

The default upload/download location for transferring the files is /var/lib/tftpboot.

Next, make the following changes to the [Install] section:

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Don’t forget to save your changes!

Here is the completed /etc/systemd/system/tftp-server.service file:

[Unit]
Description=Tftp Server
Requires=tftp-server.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
StandardInput=socket

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Starting the TFTP server

Reload the systemd daemon:

systemctl daemon-reload

Now start and enable the server:

systemctl enable --now tftp-server

To change the permissions of the TFTP root directory so that files can be both uploaded and downloaded, use this command. Note that TFTP is an inherently insecure protocol, so this may not be advisable on a network you share with other people.

chmod 777 /var/lib/tftpboot

Configure your firewall to allow TFTP traffic:

firewall-cmd --add-service=tftp --permanent
firewall-cmd --reload

Client Configuration

Install the TFTP client:

dnf install tftp -y

Run the tftp command to connect to the TFTP server. Here is an example that enables the verbose option:

[client@thinclient:~ ]$ tftp 192.168.1.164
tftp> verbose
Verbose mode on.
tftp> get server.logs
getting from 192.168.1.164:server.logs to server.logs [netascii]
Received 7 bytes in 0.0 seconds [inf bits/sec]
tftp> quit
[client@thinclient:~ ]$ 

Remember, TFTP does not have the ability to list file names. So you’ll need to know the file name before running the get command to download any files.
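
Since the server was started with the -c option, uploads with the put command work too. Here’s a hypothetical session, assuming a file named notes.txt exists on the client (the transfer output shown is only illustrative):

[client@thinclient:~ ]$ tftp 192.168.1.164
tftp> verbose
Verbose mode on.
tftp> put notes.txt
putting notes.txt to 192.168.1.164:notes.txt [netascii]
Sent 120 bytes in 0.0 seconds
tftp> quit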


Photo by Laika Notebooks on Unsplash.


How RPM packages are made: the spec file

In the previous article on RPM package building, you saw that source RPMS include the source code of the software, along with a “spec” file. This post digs into the spec file, which contains instructions on how to build the RPM. Again, this article uses fpaste as an example.

Understanding the source code

Before you can start writing a spec file, you need to have some idea of the software that you’re looking to package. Here, you’re looking at fpaste, a very simple piece of software. It is written in Python, and is a single-file script. When a new version is released, it’s provided here on Pagure: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz

The current version, as the archive shows, is 0.3.9.2. Download it so you can see what’s in the archive:

$ wget https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
$ tar -tvf fpaste-0.3.9.2.tar.gz
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/
-rw-rw-r-- root/root 25 2018-07-25 02:58 fpaste-0.3.9.2/.gitignore
-rw-rw-r-- root/root 3672 2018-07-25 02:58 fpaste-0.3.9.2/CHANGELOG
-rw-rw-r-- root/root 35147 2018-07-25 02:58 fpaste-0.3.9.2/COPYING
-rw-rw-r-- root/root 444 2018-07-25 02:58 fpaste-0.3.9.2/Makefile
-rw-rw-r-- root/root 1656 2018-07-25 02:58 fpaste-0.3.9.2/README.rst
-rw-rw-r-- root/root 658 2018-07-25 02:58 fpaste-0.3.9.2/TODO
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/
drwxrwxr-x root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/
-rw-rw-r-- root/root 3867 2018-07-25 02:58 fpaste-0.3.9.2/docs/man/en/fpaste.1
-rwxrwxr-x root/root 24884 2018-07-25 02:58 fpaste-0.3.9.2/fpaste
lrwxrwxrwx root/root 0 2018-07-25 02:58 fpaste-0.3.9.2/fpaste.py -> fpaste

The files you want to install are:

  • fpaste.py: the script itself, which should be installed to /usr/bin/.
  • docs/man/en/fpaste.1: the manual, which should go to /usr/share/man/man1/.
  • COPYING: the license text, which should go to /usr/share/licenses/fpaste/.
  • README.rst, TODO: miscellaneous documentation that goes to /usr/share/doc/fpaste.

Where these files are installed depends on the Filesystem Hierarchy Standard. To learn more about it, you can either read here: http://www.pathname.com/fhs/ or look at the man page on your Fedora system:

$ man hier

Part 1: What are we building?

Now that we know what files we have in the source, and where they are to go, let’s look at the spec file. You can see the full file here: https://src.fedoraproject.org/rpms/fpaste/blob/master/f/fpaste.spec

Here is the first part of the spec file:

Name: fpaste
Version: 0.3.9.2
Release: 3%{?dist}
Summary: A simple tool for pasting info onto sticky notes instances
BuildArch: noarch
License: GPLv3+
URL: https://pagure.io/fpaste
Source0: https://pagure.io/releases/fpaste/fpaste-0.3.9.2.tar.gz
Requires: python3

%description
It is often useful to be able to easily paste text to the Fedora
Pastebin at http://paste.fedoraproject.org and this simple script
will do that and return the resulting URL so that people may
examine the output. This can hopefully help folks who are for
some reason stuck without X, working remotely, or any other
reason they may be unable to paste something into the pastebin

Name, Version, and so on are called tags, and are defined in RPM. This means you can’t just make up tags. RPM won’t understand them if you do! The tags to keep an eye out for are:

  • Source0: tells RPM where the source archive for this software is located.
  • Requires: lists run-time dependencies for the software. RPM can automatically detect quite a few of these, but in some cases they must be mentioned manually. A run-time dependency is a capability (often a package) that must be on the system for this package to function. This is how dnf detects whether it needs to pull in other packages when you install this package.
  • BuildRequires: lists the build-time dependencies for this software. These must generally be determined manually and added to the spec file.
  • BuildArch: the computer architectures that this software is being built for. If this tag is left out, the software will be built for all supported architectures. The value noarch means the software is architecture independent (like fpaste, which is written purely in Python).

This section provides general information about fpaste: what it is, which version is being made into an RPM, its license, and so on. If you have fpaste installed, and look at its metadata, you can see this information included in the RPM:

$ sudo dnf install fpaste
$ rpm -qi fpaste
Name : fpaste
Version : 0.3.9.2
Release : 2.fc30
...

RPM adds a few extra tags automatically that represent things that it knows.

At this point, we have the general information about the software that we’re building an RPM for. Next, we start telling RPM what to do.

Part 2: Preparing for the build

The next part of the spec is the preparation section, denoted by %prep:

%prep
%autosetup

For fpaste, the only command here is %autosetup. This simply extracts the tar archive into a new folder and keeps it ready for the next section where we build it. You can do more here, like apply patches, modify files for different purposes, and so on. If you did look at the contents of the source rpm for Python, you would have seen lots of patches there. These are all applied in this section.

Typically anything in a spec file with the % prefix is a macro or label that RPM interprets in a special way. Often these will appear with curly braces, such as %{example}.
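
If you are curious what a particular macro expands to on your system, you can ask rpm directly with its --eval option. On a Fedora 30 system the output looks something like this:

$ rpm --eval '%{_bindir}'
/usr/bin
$ rpm --eval '%{?dist}'
.fc30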

Part 3: Building the software

The next section is where the software is built, denoted by “%build”. Now, since fpaste is a simple, pure Python script, it doesn’t need to be built. So, here we get:

%build
#nothing required

Generally, though, you’d have build commands here, like:

configure; make

The build section is often the hardest section of the spec, because this is where the software is being built from source. This requires you to know what build system the tool is using, which could be one of many: Autotools, CMake, Meson, Setuptools (for Python) and so on. Each has its own commands and style. You need to know these well enough to get the software to build correctly.
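
As a rough sketch (not taken from the fpaste spec), a %build section for an Autotools-based project usually leans on the standard RPM helper macros, which run configure and make with the appropriate flags:

%build
%configure
%make_build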

Part 4: Installing the files

Once the software is built, it needs to be installed in the %install section:

%install
mkdir -p %{buildroot}%{_bindir}
make install BINDIR=%{buildroot}%{_bindir} MANDIR=%{buildroot}%{_mandir}

RPM doesn’t tinker with your system files when building RPMs. It’s far too risky to add, remove, or modify files to a working installation. What if something breaks? So, instead RPM creates an artificial file system and works there. This is referred to as the buildroot. So, here in the buildroot, we create /usr/bin, represented by the macro %{_bindir}, and then install the files to it using the provided Makefile.
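
If an upstream project did not ship a Makefile with an install target, you could place the files into the buildroot by hand with the install command. A hypothetical equivalent for fpaste might look like this (the real spec uses the Makefile, as shown above):

%install
install -Dpm 0755 fpaste %{buildroot}%{_bindir}/fpaste
install -Dpm 0644 docs/man/en/fpaste.1 %{buildroot}%{_mandir}/man1/fpaste.1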

At this point, we have a built version of fpaste installed in our artificial buildroot.

Part 5: Listing all files to be included in the RPM

The last section of the spec file is the files section, %files. This is where we tell RPM what files to include in the archive it creates from this spec file. The fpaste %files section is quite simple:

%files
%{_bindir}/%{name}
%doc README.rst TODO
%{_mandir}/man1/%{name}.1.gz
%license COPYING

Notice how, here, we do not specify the buildroot. All of these paths are relative to it. The %doc and %license directives do a little more: they mark these files as documentation and license text, and place them in the appropriate folders under /usr/share.

RPM is quite smart. For example, if you’ve installed files in the %install section but have not listed them here, it will warn you.

Part 6: Document all changes in the change log

Fedora is a community based project. Lots of contributors maintain and co-maintain packages. So it is imperative that there’s no confusion about what changes have been made to a package. To ensure this, the spec file contains the last section, the Changelog, %changelog:

%changelog
* Thu Jul 25 2019 Fedora Release Engineering  - 0.3.9.2-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_31_Mass_Rebuild

* Thu Jan 31 2019 Fedora Release Engineering  - 0.3.9.2-2
- Rebuilt for https://fedoraproject.org/wiki/Fedora_30_Mass_Rebuild

* Tue Jul 24 2018 Ankur Sinha  - 0.3.9.2-1
- Update to 0.3.9.2

* Fri Jul 13 2018 Fedora Release Engineering  - 0.3.9.1-4
- Rebuilt for https://fedoraproject.org/wiki/Fedora_29_Mass_Rebuild

* Wed Feb 07 2018 Fedora Release Engineering  - 0.3.9.1-3
- Rebuilt for https://fedoraproject.org/wiki/Fedora_28_Mass_Rebuild

* Sun Sep 10 2017 Vasiliy N. Glazov  - 0.3.9.1-2
- Cleanup spec

* Fri Sep 08 2017 Ankur Sinha  - 0.3.9.1-1
- Update to latest release
- fixes rhbz 1489605

...

There must be a changelog entry for every change to the spec file. As you see here, while I’ve updated the spec as the maintainer, others have too. Having the changes documented clearly helps everyone know what the current status of the spec is. For all packages installed on your system, you can use rpm to see their changelogs:

$ rpm -q --changelog fpaste

Building the RPM

Now we are ready to build the RPM. If you want to follow along and run the commands below, please ensure that you followed the steps in the previous post to set your system up for building RPMs.

We place the fpaste spec file in ~/rpmbuild/SPECS, the source code archive in ~/rpmbuild/SOURCES/ and can now create the source RPM:

$ cd ~/rpmbuild/SPECS
$ wget https://src.fedoraproject.org/rpms/fpaste/raw/master/f/fpaste.spec
$ cd ~/rpmbuild/SOURCES
$ wget https://pagure.io/fpaste/archive/0.3.9.2/fpaste-0.3.9.2.tar.gz
$ cd ~/rpmbuild/SPECS
$ rpmbuild -bs fpaste.spec
Wrote: /home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm

Let’s have a look at the results:

$ ls ~/rpmbuild/SRPMS/fpaste*
/home/asinha/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
$ rpm -qpl ~/rpmbuild/SRPMS/fpaste-0.3.9.2-3.fc30.src.rpm
fpaste-0.3.9.2.tar.gz
fpaste.spec

There we are — the source rpm has been built. Let’s build both the source and binary rpm together:

$ cd ~/rpmbuild/SPECS
$ rpmbuild -ba fpaste.spec
..
..
..

RPM will show you the complete build output, with details on what it is doing in each section that we saw before. This “build log” is extremely important. When builds do not go as expected, we packagers spend lots of time going through them, tracing the complete build path to see what went wrong.

That’s it really! Your ready-to-install RPMs are where they should be:

$ ls ~/rpmbuild/RPMS/noarch/
fpaste-0.3.9.2-3.fc30.noarch.rpm

Recap

We’ve covered the basics of how RPMs are built from a spec file. This is by no means an exhaustive document. In fact, it isn’t documentation at all, really. It only tries to explain how things work under the hood. Here’s a short recap:

  • RPMs are of two types: source and binary.
  • Binary RPMs contain the files to be installed to use the software.
  • Source RPMs contain the information needed to build the binary RPMs: the complete source code, and the instructions on how to build the RPM in the spec file.
  • The spec file has various sections, each with its own purpose.

Here, we’ve built RPMs locally, on our Fedora installations. While this is the basic process, the RPMs we get from repositories are built on dedicated servers with strict configurations and methods to ensure correctness and security. This Fedora packaging pipeline will be discussed in a future post.

Would you like to get started with building packages, and help the Fedora community maintain the massive amount of software we provide? You can start here by joining the package collection maintainers.

For any queries, post to the Fedora developers mailing list—we’re always happy to help!




Command line quick tips: Using pipes to connect tools

One of the most powerful concepts of Linux is carried on from its predecessor, UNIX. Your Fedora system has a bunch of useful, single-purpose utilities available for all sorts of simple operations. Like building blocks, you can attach them in creative and complex ways. Pipes are key to this concept.

Before you hear about pipes, though, it’s helpful to know the basic concept of input and output. Many utilities in your Fedora system can operate against files. But they can often take input not stored on a disk. You can think of input flowing freely into a process such as a utility as its standard input (also sometimes called stdin).

Similarly, a tool or process can display information to the screen by default. This is often because its default output is connected to the terminal. You can think of the free-flowing output of a process as its standard output (or stdout — go figure!).

Examples of standard input and output

Often when you run a tool, it outputs to the terminal. Take for instance this simple sequence command using the seq tool:

$ seq 1 6
1
2
3
4
5
6

The output, which is simply to count integers up from 1 to 6, one number per line, comes to the screen. But you could also send it to a file using the > character. The shell interpreter uses this character to mean “redirect standard output to a file whose name follows.” So as you can guess, this command puts the output into a file called six.txt:

$ seq 1 6 > six.txt

Notice nothing comes to the screen. You’ve sent the output into a file instead. If you run the command cat six.txt, you can verify that.

You probably remember the simple use of the grep command from a previous article. You could ask grep to search for a pattern in a file by simply declaring the file name. But that’s simply a convenience feature in grep. Technically it’s built to take standard input, and search that.

The shell uses the < character similarly to mean “redirect standard input from a file whose name follows.” So you could just as well search for the number 4 in the file six.txt this way:

$ grep 4 < six.txt
4

Of course the output here is, by default, the content of any line with a match. So grep finds the digit 4 in the file and outputs that line to standard output.

Introducing pipes

Now imagine: what if you took the standard output of one tool, and instead of sending it to the terminal, you sent it into another tool’s standard input? This is the essence of the pipe.

Your shell uses the vertical bar character | to represent a pipe between two commands. You can find it on most keyboards above the backslash (\) character. It’s used like this:

$ command1 | command2

For most simple utilities, you wouldn’t use an output filename option on command1, nor an input file option on command2. (You might use other options, though.) Instead of using files, you’re sending the output of command1 directly into command2. You can use as many pipes in a row as needed, creating complex pipelines of several commands in a row.

This (relatively useless) example combines the commands above:

$ seq 1 6 | grep 4
4

What happened here? The seq command outputs the integers 1 through 6, one line at a time. The grep command processes that output line by line, searching for a match on the digit 4, and outputs any matching line.
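
Pipelines aren’t limited to two commands, either. For instance, here’s a three-stage chain that shows the three largest items under /var/log (the 2>/dev/null part just hides permission errors; your paths and sizes will differ):

$ du -s /var/log/* 2>/dev/null | sort -rn | head -n 3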

Here’s a slightly more useful example. Let’s say you want to find out if TCP port 22, the ssh port, is open on your system. You could find this out using the ss command* by looking through its copious output. Or you could figure out its filter language and use that. Or you could use pipes. For example, pipe it through grep looking for the ssh port label:

$ ss -tl | grep ssh
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
LISTEN 0 128 [::]:ssh [::]:*

* Those readers familiar with the venerable netstat command may note it is mostly obsolete, as stated in its man page.

That’s a lot easier than reading through many lines of output. And of course, you can combine redirectors and pipes, for instance:

$ ss -tl | grep ssh > ssh-listening.txt

This is barely scratching the surface of pipes. Let your imagination run wild. Have fun piping!



Managing credentials with KeePassXC

A previous article discussed password management tools that use server-side technology. These tools are very interesting and suitable for a cloud installation.
In this article we will talk about KeePassXC, a simple multi-platform open source password manager that uses a local file as its database.
The main advantage of this type of password management is simplicity. No server-side expertise is required, so it can be used by any type of user.

Introducing KeePassXC

KeePassXC is an open source, cross-platform password manager: its development started as a fork of KeePassX, a good product whose development had become less active. It saves secrets in a database encrypted with the AES algorithm using a 256-bit key, which makes it reasonably safe to store the database in cloud storage such as pCloud or Dropbox.

In addition to passwords, KeePassXC allows you to save various information and attachments in the encrypted wallet. It also has a solid password generator that helps the user manage credentials correctly.

Installation

The program is available both in the standard Fedora repository and in the Flathub repository. Unfortunately, browser integration does not work when the application runs in a sandbox, so I suggest installing the program via dnf:

 
sudo dnf install keepassxc

Creating your wallet

To create a new database there are two important steps:

  • Choose the encryption settings: the default settings are reasonably safe; increasing the transform rounds also increases the decryption time.
  • Choose the master key and additional protections: the master key must be easy to remember (if you lose it, your wallet is lost!) but strong enough; a passphrase of at least 4 random words can be a good choice. As additional protection you can choose a key file (remember: you must always have it available, otherwise you cannot open the wallet) and/or a YubiKey hardware key.

The database file will be saved to the file system. If you want to share it with other computers or devices, you can save it on a USB key or in cloud storage like pCloud or Dropbox. Of course, if you choose cloud storage, a particularly strong master password is recommended, ideally accompanied by additional protection.

Creating your first entry

Once the database has been created, you can start creating your first entry. For a web login, specify a username, password and URL in the Entry tab. Optionally, you can set an expiration date for the credentials based on your personal policy. Also, pressing the button on the right downloads the site’s favicon and associates it with the entry as its icon, which is a nice feature.

KeePassXC also offers a good password/passphrase generator: you can choose the length and complexity, and check the degree of resistance to a brute-force attack.

Browser integration

KeePassXC has an extension available for all major browsers. The extension allows you to fill in the login information for all the entries whose URL is specified.

Browser integration must be enabled in KeePassXC (Tools menu -> Settings) by specifying which browsers you intend to use.

Once the extension is installed, you need to create a connection with the database. To do this, press the extension button and then the Connect button: if the database is open and unlocked, the extension will create an association key and save it in the database. The key is unique to the browser, so I suggest naming it appropriately.

When you reach the login page specified in the Url field and the database is unlocked, the extension will offer you all the credentials you have associated with that page:

In this way, with KeePassXC running while you browse, your internet credentials are available without having to save them in the browser.

SSH agent integration

Another interesting KeePassXC feature is SSH integration. If you have ssh-agent running, KeePassXC can interact with it and add the SSH keys that you have uploaded as attachments to your entries.

First of all, in the general settings (Tools menu -> Settings) you have to enable the SSH agent and restart the program.

At this point, upload your SSH key pair as an attachment to your entry. Then, in the “SSH agent” tab, select the private key in the attachment drop-down list; the public key will be populated automatically. Don’t forget to select the two checkboxes above so the key is added to the agent when the database is opened/unlocked and removed when the database is closed/locked.

Now, with the database open and unlocked, you can log in over SSH using the keys saved in your wallet.

The only limitation is the maximum number of keys that can be added to the agent: by default, SSH servers do not accept more than 5 login attempts, and for security reasons it is not recommended to increase this value.


Command line quick tips: Searching with grep

If you use your Fedora system for more than just browsing the web, you have probably needed to search for text in your files. For instance, you might be a developer that can’t remember where you left some code snippet. Or you might be looking for a setting stored in your system configuration files. Whatever the reason, there are plenty of ways to search for text on your Fedora system. This article will show you how, including using the built-in utility grep.

Introducing grep

The grep utility allows you to search for text, or more specifically text patterns, on your file system. The name grep comes from global regular expression print. Yikes, what a mouthful! This is because a regular expression (or regex) is a way of defining text patterns.

The grep utility lets you find and print out matches on these patterns — thus the name. It’s a powerful system, and you can even find it in modern code editors like Visual Studio Code or Atom.

Regular expressions

Harnessing all the power of regular expressions is a topic bigger than this article, for sure. The simplest kind of regex can be just a word, or a portion of a word. That pattern is simply “the following characters, in the same order.” The pattern is searched line by line. For example:

  • pciutil – matches any time the 7 characters pciutil appear together — including pciutil, pciutils, pciutil123, and foopciutil.
  • ^pciutil – matches any time the 7 characters pciutil appear together immediately at the beginning of a line (that’s what the ^ stands for)
  • pciutil$ – matches any time the 7 characters pciutil appear together immediately before the end of a line (that’s what the $ stands for)

More complicated expressions are also possible. Special characters are used in a regex as wildcards, or to change the way the regex works. If you want to match on one of these characters, use a \ (backslash) before the character.

For instance, the . (period or full stop) is a wildcard that matches any single character. If you use it in the expression pci.til, it matches pciutil, pci4til, or pci!til, but does not match pcitil. There must be a character to match the . in the regular expression.

The ? is a marker in a regex that marks the previous element as optional. So if you built on the previous example, the expression pci.?til would also match on pcitil because there need not be a character between i and t for a valid match.

The + and * are markers that stand for repetition. While + stands for one or more of the previous element, * stands for zero or more. So the regex pci.+til would match any of these: pciutil, pci4til, pci!til, pciuuuuuutil, pci423til. However, it wouldn’t match pcitil — but the regex pci.*til would.
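
To see the difference in practice, you can feed grep a few candidate strings on its standard input. Note the -E switch, which enables extended regular expressions so that + behaves as described:

$ printf 'pcitil\npciutil\npci4til\npciuuuutil\n' | grep -E 'pci.+til'
pciutil
pci4til
pciuuuutil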

Examples of grep

Now that you know a little about regex, let’s put it to work. Imagine that you’re trying to find a configuration file that mentions a user account jpublic. You tried a bunch of files already, but none were the correct one, and you’re sure it’s there. So, try searching the /etc folder (using sudo because some subfolders are not readable outside the root account):

$ sudo grep -r jpublic /etc/

The -r switch searches the folder recursively. The utility prints a list of matching files, and the line where the hit occurred. In most modern terminal environments, the hit is color highlighted for better readability.

Imagine you have a much larger selection of files in /home/shared and you need to establish which ones mention the name MacNulty. However, you’re not sure whether the capitalization will be consistent, and you’re just looking for names of files, not the context. Also, you believe someone may have misspelled the name as McNulty in some places.

Use the -l switch to output only filenames with a match, a \? marker to make the letter a in the name optional, and -i to make the search case-insensitive:

$ sudo grep -irl 'ma\?cnulty' /home/shared

This command will match on strings like Macnulty, McNulty, Mcnulty, and macNulty with no problem. You’ll get a simple list of filenames where the match was found in the contents.

These are only the simplest ways to use grep and regular expressions. You can learn a lot more about both using the info grep command.

But wait, there’s more…

The grep command is venerable but in some situations may not be as efficient as newer search utilities. For instance, the ripgrep utility is engineered to be a fast search tool that can take the place of grep. We covered ripgrep previously in the Magazine as part of an article on Rust and Rust applications.

It’s important to note that ripgrep has its own command line switches and syntax. For example, it has simple switches to print only filename matches, invert searches, and many other useful functions. It can also ignore based on .rgignore files placed in any subdirectories. (It’s also noteworthy that the -r switch is used differently for ripgrep, because it is automatically recursive.)

To install, use this command:

$ sudo dnf install ripgrep

To explore the options, use the manual page (man rg). You’ll find that many, but not all, options are the same as grep.
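
For example, the earlier case-insensitive filename search could look like this with rg; no -r is needed because recursion is automatic, and in ripgrep’s default regex syntax the ? needs no backslash:

$ rg -il 'ma?cnulty' /home/shared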

Have fun searching!



Use a drop-down terminal for fast commands in Fedora

A drop-down terminal lets you tap a key and quickly enter any command on your desktop. It usually slides into view smoothly, sometimes with visual effects. This article demonstrates how drop-down terminals like Yakuake, Tilda, Guake and a GNOME extension help improve and speed up daily tasks.

Yakuake

Yakuake is a drop-down terminal emulator based on KDE Konsole technology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:

  • Smoothly rolls down from the top of your screen
  • Tabbed interface
  • Configurable dimensions and animation speed
  • Skinnable
  • Sophisticated D-Bus interface

To install Yakuake, use the following command:

$ sudo dnf install -y yakuake

Startup and configuration

If you’re running KDE, open the System Settings and go to Startup and Shutdown. Add yakuake to the list of programs under Autostart, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:

$ yakuake &

The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select Configure Yakuake… to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful settings if you use two or more monitors: Open on screen: At mouse location.

Using Yakuake

The main shortcuts are:

  • F12 = Open/Retract Yakuake
  • Ctrl+F11 = Full Screen Mode
  • Ctrl+) = Split Top/Bottom
  • Ctrl+( = Split Left/Right
  • Ctrl+Shift+T = New Session
  • Shift+Right = Next Session
  • Shift+Left = Previous Session
  • Ctrl+Alt+S = Rename Session

Below is an example of Yakuake being used to split the session like a terminal multiplexer. Using this feature, you can run several shells in one session.

Tilda

Tilda is a drop-down terminal that compares with other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.

It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.

To install Tilda, run this command:

$ sudo dnf install -y tilda

Startup and configuration

Most users prefer to have a drop-down terminal available behind the scenes when they login. To set this option, first go to the app launcher in your desktop, search for Tilda, and open it.

Next, open up the Tilda Config window. Select Start Tilda hidden, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to System Settings > Startup and Shutdown > Autostart and use Add a Program.

If you’re using GNOME, you can run this command in a terminal:

$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/

When you run for the first time, a wizard shows up to set your preferences. If you need to change something, right click and go to Preferences in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:

$ tilda -C

Every time you use the above command, Tilda creates a new config file located in the ~/.config/tilda/ folder called config_0, config_1, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.

Using Tilda

The main shortcuts are:

  • F1 = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, with a different open/retract shortcut like F1, F2, F3, and so on)
  • F11 = Full Screen Mode
  • F12 = Toggle Transparency
  • Ctrl+Shift+T = Add Tab
  • Ctrl+Page Up = Go to Next Tab
  • Ctrl+Page Down = Go to Previous Tab

GNOME Extension

The Drop-down Terminal GNOME Extension lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.

Installation

Open a browser and go to the site for this GNOME extension. Switch the extension setting to On, as shown here:

Then select Install to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you login to GNOME!

Configuration

After install, the Drop Down Terminal configuration window opens to set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need to change some preferences in the future, run the gnome-shell-extension-prefs command and choose Drop Down Terminal.

Using the extension

The shortcuts are simple:

  • ` (usually the key above Tab) = Open/Retract Terminal
  • F12 (customize as you prefer) = Open/Retract Terminal


Trace code in Fedora with bpftrace

bpftrace is a new eBPF-based tracing tool that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.

This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.

eBPF (extended Berkeley Packet Filter)

eBPF is a tiny virtual machine, or a virtual CPU to be more precise, in the Linux kernel. It can load and run small programs in a safe and controlled way in kernel space. This makes it safer to use, even in production systems. The virtual machine has its own instruction set architecture (ISA) resembling a subset of modern processor architectures, which makes it easy to translate those programs to real hardware. The kernel performs just-in-time translation to native code on the main architectures to improve performance.

The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.

eBPF machine language is very powerful. But writing code directly in it is extremely painful, because it’s a low level language. This is where bpftrace comes in. It provides a high-level language to write eBPF tracing scripts. The tool then translates these scripts to eBPF with the help of clang/LLVM libraries, and attaches them to the specified code points.

Installation and quick start

To install bpftrace, run the following command in a terminal using sudo:

$ sudo dnf install bpftrace

Try it out with a “hello world” example:

$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'

Note that you must run bpftrace as root due to the privileges required. Use the -e option to specify a program, and to construct the so-called “one-liners.” This example only prints hello world, and then waits for you to press Ctrl+C.

BEGIN is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces { } fires whenever the probe is hit — in this case, it’s just a printf.

Let’s jump now to a more useful example:

$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'

This example prints the parent process name (comm) and the name of every new process being created in the system. t:syscalls:sys_enter_execve is a kernel tracepoint. It’s a shorthand for tracepoint:syscalls:sys_enter_execve, but both forms can be used. The next section shows you how to list all available tracepoints.

comm is a bpftrace builtin that represents the process name. filename is a field of the t:syscalls:sys_enter_execve tracepoint. You can access these fields through the args builtin.

All available fields of the tracepoint can be listed with this command:

$ sudo bpftrace -lv "t:syscalls:sys_enter_execve"

Example usage

Listing probes

A central concept in bpftrace is the probe point. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fit into the following categories:

  • kprobe – kernel function start
  • kretprobe – kernel function return
  • uprobe – user-level function start
  • uretprobe – user-level function return
  • tracepoint – kernel static tracepoints
  • usdt – user-level static tracepoints
  • profile – timed sampling
  • interval – timed output
  • software – kernel software events
  • hardware – processor-level events

All available kprobe/kretprobe, tracepoints, software and hardware probes can be listed with this command:

$ sudo bpftrace -l

The uprobe/uretprobe and usdt probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.

The profile and interval probes fire at fixed time intervals. Fixed time intervals are not covered in this article.

Counting system calls

Maps are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:

$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'

Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma separated list. In this example, the action block attaches to all tracepoints whose name starts with t:syscalls:sys_enter_, which means all available syscalls.

The bpftrace builtin function count() counts the number of times this function is called. @[] represents a map (an associative array). The key of this map is probe, which is another bpftrace builtin that represents the full probe name.

Here, the same action block is attached to every syscall. Then, each time a syscall is called the map will be updated, and the entry is incremented in the map relative to this same syscall. When the program terminates, it automatically prints out all declared maps.

This example counts syscalls globally. It’s also possible to filter for a specific process by PID using the bpftrace filter syntax:

$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'

Write bytes by process

Using these concepts, let’s analyze how many bytes each process is writing:

$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'

bpftrace attaches the action block to the write syscall return probe (t:syscalls:sys_exit_write). Then, it uses a filter to discard the negative values, which are error codes (/args->ret > 0/).

The map key comm represents the process name that called the syscall. The sum() builtin function accumulates the number of bytes written for each map entry or process. args is a bpftrace builtin to access tracepoint’s arguments and return values. Finally, if successful, the write syscall returns the number of written bytes. args->ret provides access to the bytes.

Read size distribution by process (histogram)

bpftrace supports the creation of histograms. Let’s analyze one example that creates a histogram of the read size distribution by process:

$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

Histograms are BPF maps, so they must always be attributed to a map (@). In this example, the map key is comm.

The example makes bpftrace generate one histogram for every process that calls the read syscall. To generate just one global histogram, attribute the hist() function just to ‘@’ (without any key).

bpftrace automatically prints out declared histograms when the program terminates. The value used as base for the histogram creation is the number of read bytes, found through args->ret.

Tracing userspace programs

You can also trace userspace programs with uprobes/uretprobes and USDT (User-level Statically Defined Tracing). The next example uses a uretprobe, which probes to the end of a user-level function. It gets the command lines issued in every bash running in the system:

$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'

To list all available uprobes/uretprobes of the bash executable, run this command:

$ sudo bpftrace -l "uprobe:/bin/bash"

uprobe instruments the beginning of a user-level function’s execution, and uretprobe instruments the end (its return). readline() is a function of /bin/bash, and it returns the typed command line. retval is the return value for the instrumented function, and can only be accessed on uretprobe.

When using uprobes, you can access arguments with arg0..argN. A str() call is necessary to turn the char * pointer to a string.
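
For instance, since readline() takes the prompt string as its first argument, a uprobe variant of the example above could print that prompt instead. This is just a sketch, assuming the usual readline() signature:

$ sudo bpftrace -e 'uprobe:/bin/bash:readline { printf("prompt: %s\n", str(arg0)); }'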

Shipped Scripts

There are many useful scripts shipped with the bpftrace package. You can find them in the /usr/share/bpftrace/tools/ directory.

Among them, you can find:

  • killsnoop.bt – Trace signals issued by the kill() syscall.
  • tcpconnect.bt – Trace all TCP network connections.
  • pidpersec.bt – Count new processes (via fork) per second.
  • opensnoop.bt – Trace open() syscalls.
  • vfsstat.bt – Count some VFS calls, with per-second summaries.

You can directly use the scripts. For example:

$ sudo /usr/share/bpftrace/tools/killsnoop.bt

You can also study these scripts as you create new tools.



Photo by Roman Romashov on Unsplash.


4 cool new projects to try in COPR for August 2019

COPR is a collection of personal repositories for software that isn’t carried in Fedora. Some software doesn’t conform to standards that allow easy packaging. Or it may not meet other Fedora standards, despite being free and open source. COPR can offer these projects outside the Fedora set of packages. Software in COPR isn’t supported by Fedora infrastructure or signed by the project. However, it can be a neat way to try new or experimental software.

Here’s a set of new and interesting projects in COPR.

Duc

Duc is a collection of tools for disk usage inspection and visualization. Duc uses an indexed database to store sizes of files on your system. Once the indexing is done, you can then quickly overview your disk usage either by its command-line interface or the GUI.

Installation instructions

The repo currently provides duc for EPEL 7, Fedora 29 and 30. To install duc, use these commands:

sudo dnf copr enable terrywang/duc
sudo dnf install duc

MuseScore

MuseScore is software for working with music notation. With MuseScore, you can create sheet music either by using a mouse, virtual keyboard or a MIDI controller. MuseScore can then play the created music or export it as a PDF, MIDI or MusicXML. Additionally, there’s an extensive database of sheet music created by MuseScore users.

Installation instructions

The repo currently provides MuseScore for Fedora 29 and 30. To install MuseScore, use these commands:

sudo dnf copr enable jjames/MuseScore
sudo dnf install musescore

Dynamic Wallpaper Editor

Dynamic Wallpaper Editor is a tool for creating and editing a collection of wallpapers in GNOME that change over time. This can be done by hand with XML files; however, Dynamic Wallpaper Editor makes it easy with its graphical interface, where you can simply add pictures, arrange them, and set the duration of each picture and the transitions between them.

Installation instructions

The repo currently provides dynamic-wallpaper-editor for Fedora 30 and Rawhide. To install dynamic-wallpaper-editor, use these commands:

sudo dnf copr enable atim/dynamic-wallpaper-editor
sudo dnf install dynamic-wallpaper-editor

Manuskript

Manuskript is a tool for writers, aimed at making large writing projects easier to create. It serves as an editor for writing the text itself, as well as a tool for organizing notes about the story, its characters, and individual plots.

Installation instructions

The repo currently provides Manuskript for Fedora 29, 30 and Rawhide. To install Manuskript, use these commands:

sudo dnf copr enable notsag/manuskript
sudo dnf install manuskript