
Use a drop-down terminal for fast commands in Fedora

A drop-down terminal lets you tap a key and quickly enter any command on your desktop. Typically the terminal slides down smoothly from the top of the screen, sometimes with visual effects. This article demonstrates how drop-down terminals like Yakuake, Tilda, Guake and a GNOME extension help improve and speed up daily tasks.

Yakuake

Yakuake is a drop-down terminal emulator based on KDE Konsole technology. It is distributed under the terms of the GNU GPL Version 2. It includes features such as:

  • Smoothly rolls down from the top of your screen
  • Tabbed interface
  • Configurable dimensions and animation speed
  • Skinnable
  • Sophisticated D-Bus interface

To install Yakuake, use the following command:

$ sudo dnf install -y yakuake

Startup and configuration

If you’re running KDE, open the System Settings and go to Startup and Shutdown. Add yakuake to the list of programs under Autostart, like this:

It’s easy to configure Yakuake while running the app. To begin, launch the program at the command line:

$ yakuake &

The following welcome dialog appears. You can set a new keyboard shortcut if the standard one conflicts with another keystroke you already use:

Now click the menu button, and the following help menu appears. Next, select Configure Yakuake… to access the configuration options.

You can customize the options for appearance, such as opacity; behavior, such as focusing terminals when the mouse pointer is moved over them; and window, such as size and animation. In the window options you’ll find one of the most useful options if you use two or more monitors: Open on screen: At mouse location.

Using Yakuake

The main shortcuts are:

  • F12 = Open/Retract Yakuake
  • Ctrl+F11 = Full Screen Mode
  • Ctrl+) = Split Top/Bottom
  • Ctrl+( = Split Left/Right
  • Ctrl+Shift+T = New Session
  • Shift+Right = Next Session
  • Shift+Left = Previous Session
  • Ctrl+Alt+S = Rename Session

Below is an example of Yakuake being used to split the session like a terminal multiplexer. Using this feature, you can run several shells in one session.
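Yakuake’s D-Bus interface, mentioned above, also lets you script it. For example, this one-liner toggles the terminal open or closed (a quick sketch, assuming Yakuake is running and the qdbus tool is installed):

$ qdbus org.kde.yakuake /yakuake/window toggleWindowState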

Tilda

Tilda is a drop-down terminal comparable to other popular terminal emulators such as GNOME Terminal, KDE’s Konsole, xterm, and many others.

It features a highly configurable interface. You can even change options such as the terminal size and animation speed. Tilda also lets you enable hotkeys you can bind to commands and operations.

To install Tilda, run this command:

$ sudo dnf install -y tilda

Startup and configuration

Most users prefer to have a drop-down terminal available behind the scenes when they log in. To set this option, first go to the app launcher on your desktop, search for Tilda, and open it.

Next, open up the Tilda Config window. Select Start Tilda hidden, which means it will not display a terminal immediately when started.

Next, you’ll set your desktop to start Tilda automatically. If you’re using KDE, go to System Settings > Startup and Shutdown > Autostart and use Add a Program.

If you’re using GNOME, you can run this command in a terminal:

$ ln -s /usr/share/applications/tilda.desktop ~/.config/autostart/

When you run Tilda for the first time, a wizard appears to set your preferences. If you need to change something later, right click and go to Preferences in the menu.

You can also create multiple configuration files, and bind other keys to open new terminals at different places on the screen. To do that, run this command:

$ tilda -C

Every time you use the above command, Tilda creates a new config file located in the ~/.config/tilda/ folder called config_0, config_1, and so on. You can then map a key combination to open a new Tilda terminal with a specific set of options.

Using Tilda

The main shortcuts are:

  • F1 = Pull Down Terminal Tilda (Note: If you have more than one config file, the shortcuts are the same, with a different open/retract shortcut like F1, F2, F3, and so on)
  • F11 = Full Screen Mode
  • F12 = Toggle Transparency
  • Ctrl+Shift+T = Add Tab
  • Ctrl+Page Up = Go to Next Tab
  • Ctrl+Page Down = Go to Previous Tab

GNOME Extension

The Drop-down Terminal GNOME Extension lets you use this useful tool in your GNOME Shell. It is easy to install and configure, and gives you fast access to a terminal session.

Installation

Open a browser and go to the site for this GNOME extension. Switch the extension toggle to On, as shown here:

Then select Install to install the extension on your system.

Once you do this, there’s no reason to set any autostart options. The extension will automatically run whenever you log in to GNOME!

Configuration

After installation, the Drop Down Terminal configuration window opens so you can set your preferences. For example, you can set the size of the terminal, animation, transparency, and scrollbar use.

If you need to change some preferences in the future, run the gnome-shell-extension-prefs command and choose Drop Down Terminal.

Using the extension

The shortcuts are simple:

  • ` (usually the key above Tab) = Open/Retract Terminal
  • F12 (customize as you prefer) = Open/Retract Terminal

Use Postfix to get email from your Fedora system

Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services are routed to your internet email account through the Postfix MTA.

Install packages

Use dnf to install the required packages (you configured sudo, right?):

$ sudo -i
# dnf install postfix mailx

If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the alternatives command to set your system default MTA:

$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix

Enter to keep the current selection[+], or type selection number: 2

Create a password_maps file

You will need to create a Postfix lookup table entry containing the email address and password of the account you want to use for sending email:

# MY_EMAIL_ADDRESS=glb@gmail.com
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c

If you are using a Gmail account, you’ll need to configure an “app password” for Postfix rather than using your Gmail password. See “Sign in using App Passwords” for instructions on configuring an app password.

Next, you must run the postmap command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:

# postmap /etc/postfix/password_maps

The hashed version will have the same file name but it will be suffixed with .db.
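For example, you can verify that both files are present after running postmap (sizes and timestamps will vary):

# ls -l /etc/postfix/password_maps*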

Update the main.cf file

Update Postfix’s main.cf configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.

relayhost = smtp.gmail.com:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps

The example assumes you’re using Gmail for the relayhost setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.
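If you prefer not to edit main.cf by hand, the same settings can also be applied with the postconf command. As a sketch, here are two of the options from above (repeat for the others):

# postconf -e 'relayhost = smtp.gmail.com:587'
# postconf -e 'smtp_sasl_password_maps = hash:/etc/postfix/password_maps'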

For the most up-to-date details about the above configuration options, see the man page:

$ man postconf.5

Enable, start, and test Postfix

After you have updated the main.cf file, enable and start the Postfix service:

# systemctl enable --now postfix.service

You can then exit your sudo session as root using the exit command or Ctrl+D. You should now be able to test your configuration with the mail command:

$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com

Update services

If you have services like logwatch, mdadm, fail2ban, apcupsd or certwatch installed, you can now update their configurations so that their email notifications will go to your internet email address.

Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the /etc/aliases file on your system (you’ll need to use sudo to edit this file, or switch to the root account first):

root: glb+root@gmail.com

Now run this command to re-read the aliases:

# newaliases

  • TIP: If you are using Gmail, you can add a plus sign and an alphanumeric tag between your username and the @ symbol, as demonstrated above, to make it easier to identify and filter the email you will receive from your computer(s).

Troubleshooting

View the mail queue:

$ mailq

Clear all email from the queues:

# postsuper -d ALL

Filter the configuration settings for interesting values:

$ postconf | grep "^relayhost\|^smtp_"

View the postfix/smtp logs:

$ journalctl --no-pager -t postfix/smtp

Reload postfix after making configuration changes:

$ systemctl reload postfix

Photo by Sharon McCutcheon on Unsplash.

Command line quick tips: More about permissions

A previous article covered some basics about file permissions on your Fedora system. This installment shows you additional ways to use permissions to manage file access and sharing. It also builds on the knowledge and examples in the previous article, so if you haven’t read that one, do check it out.

Symbolic and octal

In the previous article you saw how there are three distinct permission sets for a file. The user that owns the file has a set, members of the group that owns the file have a set, and a final set covers everyone else. These permissions are expressed on screen in a long listing (ls -l) using symbolic mode.

Each set has r, w, and x entries for whether a particular user (owner, group member, or other) can read, write, or execute that file. But there’s another way to express these permissions: in octal mode.

You’re used to the decimal numbering system, which has ten distinct values (0 through 9). The octal system, on the other hand, has eight distinct values (0 through 7). In the case of permissions, octal is used as a shorthand to show the value of the r, w, and x fields. Think of each field as having a value:

  • r = 4
  • w = 2
  • x = 1

Now you can express any combination with a single octal value. For instance, read and write permission, but no execute permission, would have a value of 6. Read and execute permission only would have a value of 5. A file’s rwxr-xr-x symbolic permission has an octal value of 755.

You can use octal values to set file permissions with the chmod command similarly to symbolic values. The following two commands set the same permissions on a file:

chmod u=rw,g=r,o=r myfile1
chmod 644 myfile1
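To confirm the result, the stat command can print both notations at once (assuming myfile1 exists):

stat -c '%A %a %n' myfile1
-rw-r--r-- 644 myfile1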

Special permission bits

There are several special permission bits also available on a file. These are called setuid (or suid), setgid (or sgid), and the sticky bit (or delete inhibit). Think of this as yet another set of octal values:

  • setuid = 4
  • setgid = 2
  • sticky = 1

The setuid bit is ignored unless the file is executable. If that’s the case, the file (presumably an app or a script) runs as if it were launched by the user who owns the file. A good example of setuid is the /bin/passwd utility, which allows a user to set or change passwords. This utility must be able to write to files no user should be allowed to change. Therefore it is carefully written, owned by the root user, and has a setuid bit so it can alter the password related files.

The setgid bit works similarly for executable files. The file will run with the permissions of the group that owns it. However, setgid also has an additional use for directories. If a file is created in a directory with setgid permission, the group owner for the file will be set to the group owner of the directory.

Finally, the sticky bit, while ignored for files, is useful for directories. The sticky bit set on a directory will prevent a user from deleting files in that directory owned by other users.

The way to set these bits with chmod in octal mode is to add a value prefix, such as 4755 to add setuid to an executable file. In symbolic mode, the u and g can be used to set or remove setuid and setgid, such as u+s,g+s. The sticky bit is set using o+t. (Other combinations, like o+s or u+t, are meaningless and ignored.)
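As an illustration, here is how a hypothetical executable file named myscript could be given the setuid bit. Note how s replaces x in the owner set of the listing (output details will differ):

chmod 4755 myscript
ls -l myscript
-rwsr-xr-x. 1 root root 128 Jul  6 15:35 myscript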

Sharing and special permissions

Recall the example from the previous article concerning a finance team that needs to share files. As you can imagine, the special permission bits help to solve their problem even more effectively. The original solution simply made a directory the whole group could write to:

drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance

One problem with this directory is that users dwayne and jill, who are both members of the finance group, can delete each other’s files. That’s not optimal for a shared space. It might be useful in some situations, but probably not when dealing with financial records!

Another problem is that files in this directory may not be truly shared, because they will be owned by the default groups of dwayne and jill — most likely the user private groups also named dwayne and jill.

A better way to solve this is to set both setgid and the sticky bit on the folder. This will do two things — cause files created in the folder to be owned by the finance group automatically, and prevent dwayne and jill from deleting each other’s files. Either of these commands will work:

sudo chmod 3770 finance
sudo chmod u+rwx,g+rwxs,o+t finance

The long listing for the file now shows the new special permissions applied. The sticky bit appears as T and not t because the folder is not searchable for users outside the finance group.

drwxrws--T. 2 root finance 4096 Jul 6 15:35 finance

Introducing Fedora CoreOS

The Fedora CoreOS team is excited to announce the first preview release of Fedora CoreOS, a new Fedora edition built specifically for running containerized workloads securely and at scale. It’s the successor to both Fedora Atomic Host and CoreOS Container Linux. Fedora CoreOS combines the provisioning tools, automatic update model, and philosophy of Container Linux with the packaging technology, OCI support, and SELinux security of Atomic Host.

Read on for more details about this exciting new release.

Why Fedora CoreOS?

Containers allow workloads to be reproducibly deployed to production and automatically scaled to meet demand. The isolation provided by a container means that the host OS can be small. It only needs a Linux kernel, systemd, a container runtime, and a few additional services such as an SSH server.

While containers can be run on a full-sized server OS, an operating system built specifically for containers can provide functionality that a general purpose OS cannot. Since the required software is minimal and uniform, the entire OS can be deployed as a unit with little customization. And, since containers are deployed across multiple nodes for redundancy, the OS can update itself automatically and then reboot without interrupting workloads.

Fedora CoreOS is built to be the secure and reliable host for your compute clusters. It’s designed specifically for running containerized workloads without regular maintenance, automatically updating itself with the latest OS improvements, bug fixes, and security updates. It provisions itself with Ignition, runs containers with Podman and Moby, and updates itself atomically and automatically with rpm-ostree.

Provisioning immutable infrastructure

Whether you run in the cloud, virtualized, or on bare metal, a Fedora CoreOS machine always begins from the same place: a generic OS image. Then, during the first boot, Fedora CoreOS uses Ignition to provision the system. Ignition reads an Ignition config from cloud user data or a remote URL, and uses it to create disk partitions and file systems, users, files and systemd units.

To provision a machine:

  1. Write a Fedora CoreOS Config (FCC), a YAML document that specifies the desired configuration of a machine. FCCs support all Ignition functionality, and also provide additional syntax (“sugar”) that makes it easier to specify typical configuration changes.
  2. Use the Fedora CoreOS Config Transpiler to validate your FCC and convert it to an Ignition config.
  3. Launch a Fedora CoreOS machine and pass it the Ignition config. If the machine boots successfully, provisioning has completed without errors.

Fedora CoreOS is designed to be managed as immutable infrastructure. After a machine is provisioned, you should not modify /etc or otherwise reconfigure the machine. Instead, modify the FCC and use it to provision a replacement machine.

This is similar to how you’d manage a container: container images are not updated in place, but rebuilt from scratch and redeployed. This approach makes it easy to scale out when load increases. Simply use the same Ignition config to launch additional machines.
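As a rough sketch of steps 1 and 2 above, the following writes a minimal FCC that adds an SSH public key for the default core user, then converts it to an Ignition config. The file names and the key value are placeholders, and the exact fcct invocation may differ between versions (check fcct --help):

$ cat > example.fcc <<'EOF'
variant: fcos
version: 1.0.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-rsa AAAA... # placeholder public key
EOF
$ fcct < example.fcc > example.ign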

Automatic updates

By default, Fedora CoreOS automatically downloads new OS releases, atomically installs them, and reboots into them. Releases roll out gradually over time. We can even stop a rollout if we discover a problem in a new release. Upgrades between Fedora releases are treated as any other update, and are automatically applied without user intervention.

The Linux ecosystem evolves quickly, and software updates can bring undesired behavior changes. However, for automatic updates to be trustworthy, they cannot break existing machines. To avoid this, Fedora CoreOS takes a two-pronged approach. First, we automatically test each change to the OS. However, automatic testing can’t catch all regressions, so Fedora CoreOS also ships multiple independent release streams:

  • The testing stream is a regular snapshot of the current Fedora release, plus updates.
  • After a testing release has been available for two weeks, it is sent to the stable stream. Bugs discovered in testing will be fixed before a release is sent to stable.
  • The next stream is a regular snapshot of the upcoming Fedora release, allowing additional time for testing larger changes.

All three streams receive security updates and critical bugfixes, and are intended to be safe for production use. Most machines should run the stable stream, since that receives the most testing. However, users should run a few percent of their nodes on the next and testing streams, and report problems to the issue tracker. This helps ensure that bugs that only affect certain workloads or certain hardware are fixed before they reach stable.

Telemetry

To help direct our development efforts, Fedora CoreOS performs some telemetry by default. A service called fedora-coreos-pinger periodically collects non-identifying information about the machine, such as the OS version, cloud platform, and instance type, and reports it to servers controlled by the Fedora project.

No unique identifiers are reported or collected, and the data is only used in aggregate to answer questions about how Fedora CoreOS is being used. We prominently document that this collection is occurring and how to disable it. We also tell you how to help the project by reporting additional detail, including information that might identify the machine.

Current status of Fedora CoreOS

Fedora CoreOS is still under active development, and some planned functionality is not available in the first preview release:

  • Only the testing stream currently exists; the next and stable streams are not yet available.
  • Several cloud and virtualization platforms are not yet available. Only x86_64 is currently supported.
  • Booting a live Fedora CoreOS system via network (PXE) or CD is not yet supported.
  • We are actively discussing plans for closer integration with Kubernetes distributions, including OKD.
  • Fedora CoreOS Config Transpiler will gain more sugar over time.
  • Telemetry is not yet active.
  • Documentation is still under development.

While Fedora CoreOS is intended for production use, preview releases should not be used in production. Fedora CoreOS may change in incompatible ways during the preview period. There is no guarantee that a preview release will successfully update to a later preview release, or to a stable release.

The future

We expect the preview period to continue for about six months. At the end of the preview, we will declare Fedora CoreOS stable and encourage its use in production.

CoreOS Container Linux will be maintained until about six months after Fedora CoreOS is declared stable. We’ll announce the exact timing later this year. During the preview period, we’ll publish tools and documentation to help Container Linux users migrate to Fedora CoreOS.

Fedora Atomic Host will be maintained until the end of life of Fedora 29, expected in late November. Before then, Fedora Atomic Host users should migrate to Fedora CoreOS.

Getting involved in Fedora CoreOS

To try out the new release, head over to the download page to get OS images or cloud image IDs. Then use the quick start guide to get a machine running quickly. Finally, get involved! You can report bugs and missing features to the issue tracker. You can also discuss Fedora CoreOS in Fedora Discourse, the development mailing list, or in #fedora-coreos on Freenode.

Welcome to Fedora CoreOS, and let us know what you think!

Bond WiFi and Ethernet for easier networking mobility

Sometimes one network interface isn’t enough. Network bonding allows multiple network connections to act together with a single logical interface. You might do this because you want more bandwidth than a single connection can handle. Or maybe you want to switch back and forth between your wired and wireless networks without losing your network connection.

The latter applies to me. One of the benefits of working from home is that when the weather is nice, it’s enjoyable to work from a sunny deck instead of inside. But every time I did that, I lost my network connections. IRC, SSH, VPN — everything went away, at least for a moment while some clients reconnected. This article describes how I set up network bonding on my Fedora 30 laptop to seamlessly move between the wired connection on my laptop dock and a WiFi connection.

In Linux, interface bonding is handled by the bonding kernel module. Fedora does not ship with this enabled by default, but it is included in the kernel-core package. This means that enabling interface bonding is only a command away:

sudo modprobe bonding

Note that this will only have effect until you reboot. To permanently enable interface bonding, create a file called bonding.conf in the /etc/modules-load.d directory that contains only the word “bonding”.
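For example, you can create that file with a single command:

echo "bonding" | sudo tee /etc/modules-load.d/bonding.conf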

Now that you have bonding enabled, it’s time to create the bonded interface. First, you must get the names of the interfaces you want to bond. To list the available interfaces, run:

sudo nmcli device status

You will see output that looks like this:

DEVICE          TYPE      STATE         CONNECTION         
enp12s0u1       ethernet  connected     Wired connection 1
tun0            tun       connected     tun0               
virbr0          bridge    connected     virbr0             
wlp2s0          wifi      disconnected  --      
p2p-dev-wlp2s0  wifi-p2p disconnected  --      
enp0s31f6       ethernet  unavailable   --      
lo              loopback  unmanaged     --                 
virbr0-nic      tun       unmanaged     --       

In this case, there are two (wired) Ethernet interfaces available. enp12s0u1 is on a laptop docking station, and you can tell that it’s connected from the STATE column. The other, enp0s31f6, is the built-in port in the laptop. There is also a WiFi connection called wlp2s0. enp12s0u1 and wlp2s0 are the two interfaces we’re interested in here. (Note that it’s not necessary for this exercise to understand how network devices are named, but if you’re interested you can see the systemd.net-naming-scheme man page.)

The first step is to create the bonded interface:

sudo nmcli connection add type bond ifname bond0 con-name bond0

In this example, the bonded interface is named bond0. The “con-name bond0” sets the connection name to bond0; leaving this off would result in a connection named bond-bond0. You can also set the connection name to something more human-friendly, like “Docking station bond” or “Ben”.

The next step is to add the interfaces to the bonded interface:

sudo nmcli connection add type ethernet ifname enp12s0u1 master bond0 con-name bond-ethernet
sudo nmcli connection add type wifi ifname wlp2s0 master bond0 ssid Cotton con-name bond-wifi

As above, the connection name is specified to be more descriptive. Be sure to replace enp12s0u1 and wlp2s0 with the appropriate interface names on your system. For the WiFi interface, use your own network name (SSID) where I use “Cotton”. If your WiFi connection has a password (and of course it does!), you’ll need to add that to the configuration too. The following assumes you’re using WPA2-PSK authentication:

sudo nmcli connection modify bond-wifi wifi-sec.key-mgmt wpa-psk
sudo nmcli connection edit bond-wifi

The second command brings you into the interactive editor, where you can enter your password without it being logged in your shell history. Enter the following, replacing password with your actual password:

set wifi-sec.psk password
save
quit

Now you’re ready to start your bonded interface and the secondary interfaces you created:

sudo nmcli connection up bond0
sudo nmcli connection up bond-ethernet
sudo nmcli connection up bond-wifi

You should now be able to disconnect your wired or wireless connections without losing your network connections.

A caveat: using other WiFi networks

This configuration works well when moving around on the specified WiFi network, but when away from this network, the SSID used in the bond is not available. Theoretically, one could add an interface to the bond for every WiFi connection used, but that doesn’t seem reasonable. Instead, you can disable the bonded interface:

sudo nmcli connection down bond0

When back on the defined WiFi network, simply start the bonded interface as above.

Fine-tuning your bond

By default, the bonded interface uses the “load balancing (round-robin)” mode. This spreads the load equally across the interfaces. But if you have a wired and a wireless connection, you may want to prefer the wired connection. The “active-backup” mode enables this. You can specify the mode and primary interface when you are creating the interface, or afterward using this command (the bonded interface should be down):

sudo nmcli connection modify bond0 +bond.options "mode=active-backup,primary=enp12s0u1"
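Once the bond is up again, you can check which interface is currently active by reading the bonding driver’s status file and looking for the “Currently Active Slave” line (this assumes the bond interface is named bond0, as above):

cat /proc/net/bonding/bond0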

The kernel documentation has much more information about bonding options.

Manage your shell environment

Some time ago, Fedora Magazine published an article introducing ZSH — an alternative shell to Fedora’s default, bash. This time, we’re going to look at customizing it so you can use it more effectively. All of the concepts shown in this article also work in other shells such as bash.

Alias

Aliases are shortcuts for commands. This is useful for creating short commands for actions that are performed often, but require a long command that would take too much time to type. The syntax is:

$ alias yourAlias='complex command with arguments'

They don’t always need to be used for shortening long commands. What’s important is that you use them for tasks you perform often. An example could be:

$ alias dnfUpgrade='dnf -y upgrade'

That way, to do a system upgrade, I just type dnfUpgrade instead of the whole dnf command.

The problem with setting aliases directly in the console is that once the terminal session is closed, the alias is lost. To set them permanently, use resource files.

Resource Files

Resource files (or rc files) are configuration files that are loaded per user in the beginning of a session or a process (when a new terminal window is opened, or a new program like vim is started). In the case of ZSH, the resource file is .zshrc, and for bash it’s .bashrc.

To make the aliases permanent, put them in your resource file. You can edit it with a text editor of your choice. This example uses vim:

$ vim $HOME/.zshrc

Or for bash:

$ vim $HOME/.bashrc

Note that the location of the resource file is specified relative to your home directory — and that’s where ZSH (or bash) looks for the file by default for each user.

Another option is to put your configuration in any other file, and then source it:

$ source /path/to/your/rc/file

Again, sourcing it right in your session only applies it to that session, so to make it permanent, add the source command to your resource file. The advantage of keeping your configuration in a different file is that you can source it at any time, from anywhere, which is especially useful in shared environments.
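For example, if you keep your aliases in a separate file (the path here is just a hypothetical example), one line in your resource file loads it in every new session:

$ echo 'source ~/dotfiles/aliases.zsh' >> $HOME/.zshrc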

Environment Variables

Environment variables are values assigned to a specific name which can then be called in scripts and commands. They are referenced with a leading dollar sign ($). One of the most common is $HOME, which references the home directory.

As the name suggests, environment variables are a part of your environment. Set a variable using the following syntax:

$ http_proxy="http://your.proxy"

And to make it an environment variable, export it with the following command:

$ export http_proxy

To see all the environment variables that are currently set, use the env command:

$ env

The command outputs all the variables available in your session. To demonstrate how to use them in a command, try running the following echo commands:

$ echo $PWD
/home/fedora
$ echo $USER
fedora

What happens here is variable expansion — the value stored in the variable is used in your command.

Another useful variable is $PATH, which defines directories that your shell uses to look for binaries.

The $PATH variable

There are many directories, or folders (as they are called in graphical environments), that are important to the OS. Some directories are set to hold binaries you can use directly in your shell. And these directories are defined in the $PATH variable.

$ echo $PATH
/usr/lib64/qt-3.3/bin:/usr/share/Modules/bin:/usr/lib64/ccache:/usr/local/bin:/usr/bin:/bin:/usr/local/sbin:/usr/sbin:/usr/libexec/sdcc:/usr/libexec/sdcc:/usr/bin:/bin:/sbin:/usr/sbin:/opt/FortiClient

This will help you when you want to have your own binaries (or scripts) accessible in the shell.
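For example, to make scripts in a hypothetical ~/bin directory callable by name, append that directory to $PATH (add the same line to your resource file to make it permanent):

$ export PATH=$PATH:$HOME/bin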

Securing telnet connections with stunnel

Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data, so it is considered insecure: passwords can easily be sniffed because data is sent in the clear. However, there are still legacy systems that need to use it. This is where stunnel comes to the rescue.

Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.

Server Installation

Install stunnel along with the telnet server and client using sudo:

sudo dnf -y install stunnel telnet-server telnet

Add a firewall rule, entering your password when prompted:

firewall-cmd --add-service=telnet --perm
firewall-cmd --reload

Next, generate an RSA private key and an SSL certificate:

openssl genrsa 2048 > stunnel.key
openssl req -new -key stunnel.key -x509 -days 90 -out stunnel.crt

You will be prompted for the following information one line at a time. When asked for Common Name you must enter the correct host name or IP address, but everything else you can skip through by hitting the Enter key.

You are about to be asked to enter information that will be
incorporated into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:

Merge the RSA key and SSL certificate into a single .pem file, and copy that to the SSL certificate directory:

cat stunnel.crt stunnel.key > stunnel.pem
sudo cp stunnel.pem /etc/pki/tls/certs/

Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
sslVersion = TLSv1
chroot = /var/run/stunnel
setuid = nobody
setgid = nobody
pid = /stunnel.pid
socket = l:TCP_NODELAY=1
socket = r:TCP_NODELAY=1
[telnet]
accept = 450
connect = 23

The accept option is the port the server will listen to for incoming telnet requests. The connect option is the internal port the telnet server listens to.

Next, make a copy of the systemd unit file that allows you to override the packaged version:

sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system

Edit the /etc/systemd/system/stunnel.service file to add two lines. These lines create a chroot jail for the service when it starts.

[Unit]
Description=TLS tunnel for network daemons
After=syslog.target network.target

[Service]
ExecStart=/usr/bin/stunnel
Type=forking
PrivateTmp=true
ExecStartPre=-/usr/bin/mkdir /var/run/stunnel
ExecStartPre=/usr/bin/chown -R nobody:nobody /var/run/stunnel

[Install]
WantedBy=multi-user.target

Next, configure SELinux to listen to telnet on the new port you just specified:

sudo semanage port -a -t telnetd_port_t -p tcp 450

Finally, add a new firewall rule:

firewall-cmd --add-port=450/tcp --perm
firewall-cmd --reload

Now you can enable and start telnet and stunnel.

systemctl enable telnet.socket stunnel@telnet.service --now

A note on the systemctl command is in order. Systemd and the stunnel package provide an additional template unit file by default. The template lets you drop multiple configuration files for stunnel into /etc/stunnel, and use the filename to start the service. For instance, if you had a foobar.conf file, you could start that instance of stunnel with systemctl start stunnel@foobar.service, without having to write any unit files yourself.

If you want, you can set this stunnel template service to start on boot:

systemctl enable stunnel@telnet.service
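To verify that stunnel is listening on the tunnel port, you can check with ss (port 450 matches the accept option configured above):

sudo ss -tlnp | grep 450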

Client Installation

This part of the article assumes you are logged in as a normal user (with sudo privileges) on the client system. Install stunnel and the telnet client:

dnf -y install stunnel telnet

Copy the stunnel.pem file from the remote server to your client /etc/pki/tls/certs directory. In this example, the IP address of the remote telnet server is 192.168.1.143.

sudo scp myuser@192.168.1.143:/etc/pki/tls/certs/stunnel.pem /etc/pki/tls/certs/

Create the /etc/stunnel/telnet.conf file:

cert = /etc/pki/tls/certs/stunnel.pem
client=yes
[telnet]
accept=450
connect=192.168.1.143:450

The accept option is the port that will be used for telnet sessions. The connect option is the IP address of your remote server and the port it’s listening on.

Next, enable and start stunnel:

systemctl enable stunnel@telnet.service --now

Test your connection. Since you have a connection established, you will telnet to localhost instead of the hostname or IP address of the remote telnet server:

[user@client ~]$ telnet localhost 450
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0)
server login: myuser
Password: XXXXXXX
Last login: Sun May  5 14:28:22 from localhost
[myuser@server ~]$

Manage business documents with OpenAS2 on Fedora

Business documents often require special handling. Enter Electronic Document Interchange, or EDI. EDI is more than simply transferring files using email or http (or ftp), because these are documents like orders and invoices. When you send an invoice, you want to be sure that:

1. It goes to the right destination, and is not intercepted by competitors.
2. Your invoice cannot be forged by a 3rd party.
3. Your customer can’t claim in court that they never got the invoice.

The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part.

This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup.

Centralized EDI

The traditional solution is to use a Value Added Network, or VAN. The VAN is a central hub that transfers documents between its customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers.

AS Protocols and MDN

The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or MDN. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party.

The AS2 protocol uses HTTP or HTTPS for transport. Other AS protocols target FTP and SMTP. AS2 is used by companies big and small to avoid depending on (and paying) a VAN.

OpenAS2

OpenAS2 is an open source Java implementation of the AS2 protocol. It has been available in Fedora since Fedora 28, and is installed with:

$ sudo dnf install openas2
$ cd /etc/openas2

Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords.

Edit /etc/openas2/config.xml and search for ChangeMe. Change those passwords. The default password on the certificate store is testas2, but that doesn’t matter much as anyone who can read the certificate store can read config.xml and get the password.

What to share with AS2 partners

There are 3 things you will exchange with an AS2 peer.

AS2 ID

Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. Using otherwise legal AS2 ID chars like ‘:’ that can appear as path separators on a proprietary OS is also a problem. Restrict your AS2 ID to upper and lower case alpha, digits, and ‘_’ with no more than 16 characters.

SSL certificate

For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with keytool which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using openssl to generate keys and certificates. Simply note that sudo keytool -list -keystore as2_certs.p12 will list the two factory practice certs.
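As a sketch only (the alias and validity period are arbitrary placeholders), a suitable RSA key pair with a SHA256 self-signed certificate could be generated directly in the keystore with keytool, which will prompt for the keystore password and the certificate details:

$ sudo keytool -genkeypair -alias myas2 -keyalg RSA -keysize 2048 \
    -sigalg SHA256withRSA -validity 365 \
    -keystore as2_certs.p12 -storetype PKCS12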

AS2 URL

This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant. To use it you have to uncomment the https module configuration in config.xml, and supply a certificate signed by a public CA. This requires another article and is entirely unnecessary here.

By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using http://localhost:10080 as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install Cjdns to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080.

Most businesses will also want a list of IPs to add to their firewall. This is actually bad practice. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner.

OpenAS2 Partners

With that in mind, open partnerships.xml in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document.

OpenAS2 Partnerships

Next is a list of “partnerships,” one for each direction. Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the as2_receipt_option for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application.

The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented.

If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the as2_url on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need:

# sudo firewall-cmd --zone=public --add-port=10080/tcp

Now start the openas2 service (on both machines if needed):

# sudo systemctl start openas2

Resetting the MDN password

Starting the service initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password:

$ sudo systemctl stop openas2
$ cat >h2passwd <<'DONE'
#!/bin/bash
AS2DIR="/var/lib/openas2"
java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \
-url jdbc:h2:"$AS2DIR"/db/openas2 \
-user sa -password "$1" <<EOF
alter user sa set password '$2';
exit
EOF
DONE
$ sudo sh h2passwd ChangeMe yournewpasswordsetabove
$ sudo systemctl start openas2

Testing the setup

With that out of the way, let’s send a document. Assuming you are on OpenAS2A machine:

$ cat >testdoc <<'DONE'
This is not a real EDI format, but is nevertheless a document.
DONE
$ sudo chown openas2 testdoc
$ sudo mv testdoc /var/spool/openas2/toOpenAS2B
$ sudo journalctl -f -u openas2
... log output of sending file, Control-C to stop following log
^C

OpenAS2 does not send a document until it is writable by the openas2 user or group. As a consequence, your actual business application will copy, or generate in place, the document. Then it changes the group or permissions to send it on its way, to avoid sending a partial document.

Now, on the OpenAS2B machine, /var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox shows the message received. That should get you started!


Photo by Beatriz Pérez Moya on Unsplash.

Check storage performance with dd

This article includes some example commands to show you how to get a rough estimate of hard drive and RAID array performance using the dd command. Accurate measurements would have to take into account things like write amplification and system call overhead, which this guide does not. For a tool that might give more accurate results, you might want to consider using hdparm.

To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. WARNING: The write tests will destroy any data on the block devices against which they are run. Do not run them against any device that contains data you want to keep!

Four tests

Below are four example dd commands that can be used to test the performance of a block device:

  1. One process reading from $MY_DISK:
    # dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
  2. One process writing to $MY_DISK:
    # dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
  3. Two processes reading concurrently from $MY_DISK:
    # (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
  4. Two processes writing concurrently to $MY_DISK:
    # (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)

– The iflag=nocache and oflag=direct parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from RAM rather than the hard drive.

– The values for the bs and count parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.

– The null and zero devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.

– The skip=200 parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive.

16 examples

Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:

  1. MY_DISK=/dev/sda2 (used in examples 1-X)
  2. MY_DISK=/dev/sdb2 (used in examples 2-X)
  3. MY_DISK=/dev/md/stripped (used in examples 3-X)
  4. MY_DISK=/dev/md/mirrored (used in examples 4-X)

A video demonstration of these tests being run on a PC is provided at the end of this guide.

Begin by putting your computer into rescue mode to reduce the chances that disk I/O from background services might randomly affect your test results. WARNING: This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your root password to get into rescue mode. The passwd command, when run as the root user, will prompt you to (re)set your root account password.

$ sudo -i
# passwd
# setenforce 0
# systemctl rescue

You might also want to temporarily disable logging to disk:

# sed -r -i.bak 's/^#?Storage=.*/Storage=none/' /etc/systemd/journald.conf
# systemctl restart systemd-journald.service

If you have a swap device, it can be temporarily disabled and used to perform the following tests:

# swapoff -a
# MY_DEVS=$(mdadm --detail /dev/md/swap | grep active | grep -o "/dev/sd.*")
# mdadm --stop /dev/md/swap
# mdadm --zero-superblock $MY_DEVS

Example 1-1 (reading from sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.7003 s, 123 MB/s

Example 1-2 (writing to sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67117 s, 125 MB/s

Example 1-3 (reading concurrently from sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s

Example 1-4 (writing concurrently to sda)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 1)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.2435 s, 64.7 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.60872 s, 58.1 MB/s

Example 2-1 (reading from sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67285 s, 125 MB/s

Example 2-2 (writing to sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67198 s, 125 MB/s

Example 2-3 (reading concurrently from sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s

Example 2-4 (writing concurrently to sdb)

# MY_DISK=$(echo $MY_DEVS | cut -d ' ' -f 2)
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s

Example 3-1 (reading from RAID0)

# mdadm --create /dev/md/stripped --homehost=any --metadata=1.0 --level=0 --raid-devices=2 $MY_DEVS
# MY_DISK=/dev/md/stripped
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.837419 s, 250 MB/s

Example 3-2 (writing to RAID0)

# MY_DISK=/dev/md/stripped
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 0.823648 s, 255 MB/s

Example 3-3 (reading concurrently from RAID0)

# MY_DISK=/dev/md/stripped
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s

Example 3-4 (writing concurrently to RAID0)

# MY_DISK=/dev/md/stripped
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s

Example 4-1 (reading from RAID1)

# mdadm --stop /dev/md/stripped
# mdadm --create /dev/md/mirrored --homehost=any --metadata=1.0 --level=1 --raid-devices=2 --assume-clean $MY_DEVS
# MY_DISK=/dev/md/mirrored
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74963 s, 120 MB/s

Example 4-2 (writing to RAID1)

# MY_DISK=/dev/md/mirrored
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.74625 s, 120 MB/s

Example 4-3 (reading concurrently from RAID1)

# MY_DISK=/dev/md/mirrored
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s

Example 4-4 (writing concurrently to RAID1)

# MY_DISK=/dev/md/mirrored
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s

Restore your swap device and journald configuration

# mdadm --stop /dev/md/stripped /dev/md/mirrored
# mdadm --create /dev/md/swap --homehost=any --metadata=1.0 --level=1 --raid-devices=2 $MY_DEVS
# mkswap /dev/md/swap
# swapon -a
# mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
# systemctl restart systemd-journald.service
# reboot

Interpreting the results

Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.

Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive’s bandwidth (60 MB/s).

The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive is able to perform on its own. The trade-off is that you are twice as likely to lose everything, because each drive only contains half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal), but it would be three times as likely to suffer a catastrophic failure.

The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk, except for the case where multiple processes are concurrently reading (example 4-3). In the case of multiple processes reading, the performance of the RAID1 array is similar to that of the RAID0 array. This means that you will see a performance benefit with RAID1, but only when processes are reading concurrently. For example, you might see a benefit if a process accesses a large number of files in the background while you are using a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost if a drive fails.

Video demo

Testing storage throughput using dd

Troubleshooting

If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology (SMART). If your drive supports it, the smartctl command can be used to query your hard drive for its internal statistics:

# smartctl --health /dev/sda
# smartctl --log=error /dev/sda
# smartctl -x /dev/sda

Another way that you might be able to tune your PC for better performance is by changing your I/O scheduler. Linux systems support several I/O schedulers and the current default for Fedora systems is the multiqueue variant of the deadline scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.

To view which I/O scheduler your drives are using, issue the following command:

$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done

You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:

# echo bfq > /sys/block/sda/queue/scheduler

You can make your changes permanent by creating a udev rule for your drive. The following example shows how to create a udev rule that will set all rotational drives to use the BFQ I/O scheduler:

# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END

Here is another example that sets all solid-state drives to use the none (no-op) I/O scheduler:

# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END
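These rules take effect the next time the devices are added or changed, for example after a reboot. To apply them immediately, you can reload the udev rules and trigger a change event for the block devices:

# udevadm control --reload
# udevadm trigger --subsystem-match=block --action=change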

Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.


Photo by James Donovan on Unsplash.

Use udica to build SELinux policy for containers

While modern IT environments move toward Linux containers, the need to secure those environments is as relevant as ever. Containers are a process-isolation technology, and while that isolation is itself a defense mechanism, it only excels when combined with SELinux.

Fedora SELinux engineering built a new standalone tool, udica, to generate SELinux policy profiles for containers by automatically inspecting them. This article focuses on why udica is needed in the container world, and how it makes SELinux and containers work better together. You’ll find examples of SELinux separation for containers that let you avoid turning protection off because the generic SELinux type container_t is too tight. With udica you can easily customize the policy with limited SELinux policy writing skills.

SELinux technology

SELinux is a security technology that brings proactive security to Linux systems. It’s a labeling system that assigns a label to all subjects (processes and users) and objects (files, directories, sockets, etc.). These labels are then used in a security policy that controls access throughout the system. It’s important to mention that what’s not allowed in an SELinux security policy is denied by default. The policy rules are enforced by the kernel. This security technology has been in use on Fedora for several years. A real example of such a rule is:

allow httpd_t httpd_log_t: file { append create getattr ioctl lock open read setattr };

The rule allows any process labeled as httpd_t to create, append, read and lock files labeled as httpd_log_t. Using the ps command, you can list all processes with their labels:

$ ps -efZ | grep httpd
system_u:system_r:httpd_t:s0 root 13911 1 0 Apr14 ? 00:05:14 /usr/sbin/httpd -DFOREGROUND
...

To see which objects are labeled as httpd_log_t, use semanage:

# semanage fcontext -l | grep httpd_log_t
/var/log/httpd(/.*)? all files system_u:object_r:httpd_log_t:s0
/var/log/nginx(/.*)? all files system_u:object_r:httpd_log_t:s0
...
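To check the labels on the actual files, you can use ls -Z. For example, assuming the httpd package is installed and has written logs:

# ls -Z /var/log/httpd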

The SELinux security policy for Fedora is shipped in the selinux-policy RPM package.

SELinux vs. containers

In Fedora, the container-selinux RPM package provides a generic SELinux policy for all containers started by engines like podman or docker. Its main purposes are to protect the host system against a container process, and to separate containers from each other. For instance, containers confined by SELinux with the process type container_t can only read and execute files in /usr, and can only write to files labeled container_file_t on the host file system. To prevent containers from attacking each other, Multi-Category Security (MCS) is used.
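You can see both the container_t type and the per-container MCS categories by listing the labels of running container processes. For example, with any container already running under podman:

$ ps -efZ | grep container_t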

Using only one generic policy for containers is problematic, because of the huge variety of container usage. On one hand, the default container type (container_t) is often too strict. For example:

  • Fedora Silverblue needs containers to read/write a user’s home directory
  • The Fluentd project needs containers to be able to read logs in the /var/log directory

On the other hand, the default container type could be too loose for certain use cases:

  • It has no SELinux network controls — all container processes can bind to any network port
  • It has no SELinux control on Linux capabilities — all container processes can use all capabilities

There is one solution to handle both use cases: write a custom SELinux security policy for the container. This can be tricky, because SELinux expertise is required. For this purpose, the udica tool was created.

Introducing udica

Udica generates SELinux security profiles for containers. Its concept is based on the “block inheritance” feature inside the common intermediate language (CIL) supported by SELinux userspace. The tool creates a policy that combines:

  • Rules inherited from specified CIL blocks (templates), and
  • Rules discovered by inspecting the container’s JSON file, which contains mount point and port definitions

You can load the final policy immediately, or move it to another system to load into the kernel. Here’s an example, using a container that:

  • Mounts /home as read only
  • Mounts /var/spool as read/write
  • Exposes port tcp/21

The container starts with this command:

# podman run -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The default container type (container_t) doesn’t allow any of these three actions. To prove it, you can use the sesearch tool to check whether the relevant allow rules are present on the system:

# sesearch -A -s container_t -t home_root_t -c dir -p read 

There’s no allow rule present that lets a process labeled as container_t access a directory labeled home_root_t (like the /home directory). The same situation occurs with /var/spool, which is labeled var_spool_t:

# sesearch -A -s container_t -t var_spool_t -c dir -p read

On the other hand, the default policy completely allows network access.

# sesearch -A -s container_t -t port_type -c tcp_socket
allow container_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };
allow sandbox_net_domain port_type:tcp_socket { name_bind name_connect recv_msg send_msg };

Securing the container

It would be better to restrict this access and allow the container to bind only to TCP port 21, or to ports with the same SELinux label. Imagine you find the example container with podman ps and its ID is 37a3635afb8f:

# podman ps -q
37a3635afb8f
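If udica isn’t installed yet, it’s available from the Fedora repositories; the package name udica is assumed here:

$ sudo dnf install -y udica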

You can now inspect the container and pass the inspection file to the udica tool. The name for the new policy is my_container.

# podman inspect 37a3635afb8f > container.json
# udica -j container.json my_container
Policy my_container with container id 37a3635afb8f created!

Please load these modules using:
# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}

Restart the container with: "--security-opt label=type:my_container.process" parameter

That’s it! You just created a custom SELinux security policy for the example container. Now you can load this policy into the kernel and make it active. The udica output above even tells you the command to use:

# semodule -i my_container.cil /usr/share/udica/templates/{base_container.cil,net_container.cil,home_container.cil}
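You can confirm the module is now loaded by listing the installed SELinux modules and looking for the new name:

# semodule -l | grep my_container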

Now you must restart the container to allow the container engine to use the new custom policy:

# podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash

The example container is now running in the newly created my_container.process SELinux process type:

# ps -efZ | grep my_container.process
unconfined_u:system_r:container_runtime_t:s0-s0:c0.c1023 root 2275 434 1 13:49 pts/1 00:00:00 podman run --security-opt label=type:my_container.process -v /home:/home:ro -v /var/spool:/var/spool:rw -p 21:21 -it fedora bash
system_u:system_r:my_container.process:s0:c270,c963 root 2317 2305 0 13:49 pts/0 00:00:00 bash

Seeing the results

The command sesearch now shows allow rules for accessing /home and /var/spool:

# sesearch -A -s my_container.process -t home_root_t -c dir -p read
allow my_container.process home_root_t:dir { getattr ioctl lock open read search };
# sesearch -A -s my_container.process -t var_spool_t -c dir -p read
allow my_container.process var_spool_t:dir { add_name getattr ioctl lock open read remove_name search write };

The new custom SELinux policy also allows my_container.process to bind only to TCP/UDP ports labeled the same as TCP port 21:

# semanage port -l | grep 21 | grep ftp
ftp_port_t tcp 21, 989, 990
# sesearch -A -s my_container.process -c tcp_socket -p name_bind
allow my_container.process ftp_port_t:tcp_socket name_bind;
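When you no longer need the custom policy, for example after the container has been removed, you can unload it again. Stop any containers still using the type first:

# semodule -r my_container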

Conclusion

The udica tool helps you create SELinux policies for containers based on an inspection file without any SELinux expertise required. Now you can increase the security of containerized environments. Sources are available on GitHub, and an RPM package is available in Fedora repositories for Fedora 28 and later.


Photo by Samuel Zeller on Unsplash.