
Set up an offline command line dictionary in Fedora

You don’t need an internet connection to have an easily searchable and extendable dictionary on your Fedora computer. You can use sdcv (StarDict Console Version) and the public StarDict dictionary files in the default repositories to keep a local record for offline use. This article shows you how.

What is sdcv?

sdcv is a command line variant of StarDict. StarDict is part of a long legacy of GUI offline dictionaries. The “dic” files it uses are formatted as colon delimited files, with the word in the first column and the definition in the second column. You can have multiple lines with the same word and different definitions. sdcv provides a search function and a formatted display of your results.

Installing sdcv

You can get started quickly with sdcv and the English dictionary by installing them from the default repos:

sudo dnf install sdcv stardict-dic-en

sdcv will be ready for use right away. If you want to see what other languages are available, use this command:

dnf search stardict

How to use sdcv

sdcv has an interactive and non-interactive mode. You can perform a quick search on a word or term using this command:

sdcv word

For example, you could run sdcv linux. Alternatively, you can run sdcv by itself to activate interactive mode.

Customizing sdcv

sdcv has a --color option that adds coloring to the words and the source of the definition. You can also use an alias to enable --color by default. Simply edit your shell resource file (the default on Fedora is ~/.bashrc) to add this line:

alias sdcv="sdcv --color"

You can also use a more friendly name like this: 

alias describe="sdcv --color"

sdcv references /usr/share/stardict/dic by default, or it uses the path in the environment variable STARDICT_DATA_DIR. You can also set up a personal dictionary in the directory $HOME/.stardict/dic.
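As a rough sketch, setting up a personal dictionary location might look like this (the exact lookup path can vary between sdcv versions, so treat the layout below as an assumption):

mkdir -p "$HOME/.stardict/dic"
# copy your dictionary files into the new directory, then point sdcv at it
export STARDICT_DATA_DIR="$HOME/.stardict"
sdcv word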

Fun facts

Believe it or not, the dict network protocol is still alive to this day. You can use it with the curl command to search for a word:

curl dict://dict.org/d:<word>
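For example, substituting a real word for the placeholder:

curl dict://dict.org/d:linux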

This pulls definitions straight from the internet via your command line. Enjoy using sdcv!


Photo by Pisit Heng on Unsplash.


Learning about Partitions and How to Create Them for Fedora

Operating system distributions try to craft a one-size-fits-all partition layout for their file systems. But distributions cannot know the details of how your hardware is configured or how you use your system. Do you have more than one storage drive? If so, you might get a performance benefit by putting the write-heavy partitions (var and swap, for example) on a separate drive from the others that tend to be more read-intensive, since most drives cannot read and write at the same time. Or maybe you are running a database and have a small solid-state drive that would improve the database’s performance if its files are stored on the SSD.

The following sections attempt to describe in brief some of the historical reasons for separating some parts of the file system out into separate partitions so that you can make a more informed decision when you install your Linux operating system.

If you know more (or contradictory) historical details about the partitioning decisions that shaped the Linux operating systems used today, contribute what you know below in the comments section!

Common partitions and why or why not to create them

The boot partition

One of the reasons for putting the /boot directory on a separate partition was to ensure that the boot loader and kernel were located within the first 1024 cylinders of the disk. Most modern computers do not have the 1024 cylinder restriction, so for most people this concern is no longer relevant. However, modern UEFI-based computers have a different restriction that makes it necessary to have a separate partition for the boot loader. UEFI-based computers require that the boot loader (which can be the Linux kernel directly) be on a FAT-formatted file system. The Linux operating system, however, requires a POSIX-compliant file system that can designate access permissions to individual files. Since FAT file systems do not support access permissions, the boot loader must be on a separate file system from the rest of the operating system on modern UEFI-based computers. A single partition cannot be formatted with more than one type of file system.

The var partition

One of the historical reasons for putting the /var directory on a separate partition was to prevent files that were frequently written to (/var/log/* for example) from filling up the entire drive. Since modern drives tend to be much larger and since other means like log rotation and disk quotas are available to manage storage utilization, putting /var on a separate partition may not be necessary. It is much easier to change a disk quota than it is to re-partition a drive.

Another reason for isolating /var was that file system corruption was much more common in the original version of the Linux Extended File System (EXT). The file systems that had more write activity were much more likely to be irreversibly corrupted by a power outage than those that did not. By partitioning the disk into separate file systems, one could limit the scope of the damage in the event of file system corruption. This concern is no longer as significant because modern file systems support journaling.

The home partition

Having /home on a separate partition makes it possible to re-format the other partitions without overwriting your home directories. However, because modern Linux distributions are much better at doing in-place operating system upgrades, re-formatting shouldn’t be needed as frequently as it might have been in the past.

It can still be useful to have /home on a separate partition if you have a dual-boot setup and want both operating systems to share the same home directories. Or if your operating system is installed on a file system that supports snapshots and rollbacks and you want to be able to rollback your operating system to an older snapshot without reverting the content in your user profiles. Even then, some file systems allow their descendant file systems to be rolled back independently, so it still may not be necessary to have a separate partition for /home. On ZFS, for example, one pool/partition can have multiple descendant file systems.

The swap partition

The swap partition reserves space for the contents of RAM to be written to permanent storage. There are pros and cons to having a swap partition. A pro of having swap memory is that it theoretically gives you time to gracefully shutdown unneeded applications before the OOM killer takes matters into its own hands. This might be important if the system is running mission-critical software that you don’t want abruptly terminated. A con might be that your system runs so slow when it starts swapping memory to disk that you’d rather the OOM killer take care of the problem for you.

Another use for swap memory is hibernation mode. This might be where the rule that the swap partition should be twice the size of your computer’s RAM originated. Ideally, you should be able to put a system into hibernation even if nearly all of its RAM is in use. Beware that Linux’s support for hibernation is not perfect. It is not uncommon that after a Linux system is resumed from hibernation some hardware devices are left in an inoperable state (for example, no video from the video card or no internet from the WiFi card).

In any case, having a swap partition is more a matter of taste. It is not required.

The root partition

The root partition (/) is the catch-all for all directories that have not been assigned to a separate partition. There is always at least one root partition. BIOS-based systems that are new enough to not have the 1024 cylinder limit can be configured with only a root partition and no others so that there is never a need to resize a partition or file system if space requirements change.

The EFI system partition

The EFI System Partition (ESP) serves the same purpose on UEFI-based computers as the boot partition did on the older BIOS-based computers. It contains the boot loader and kernel. Because the files on the ESP need to be accessible by the computer’s firmware, the ESP has a few restrictions that the older boot partition did not have. The restrictions are:

  1. The ESP must be formatted with a FAT file system (vfat in Anaconda)
  2. The ESP must have a special type-code (EF00 when using gdisk)

Because the older boot partition did not have file system or type-code restrictions, it is permissible to apply the above properties to the boot partition and use it as your ESP. Note, however, that the GRUB boot loader does not support combining the boot and ESP partitions. If you use GRUB, you will have to create a separate partition and mount it beneath the /boot directory.
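As an illustration of the two restrictions above, here is a rough sketch using sgdisk and mkfs.fat. The device name /dev/sda and the 512 MiB size are hypothetical, and the Anaconda installer can handle this for you:

sgdisk --new=1:0:+512MiB --typecode=1:EF00 --change-name=1:"EFI system partition" /dev/sda
mkfs.fat -F 32 /dev/sda1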

The Boot Loader Specification (BLS) lists several reasons why it is ideal to use the legacy boot partition as your ESP. The reasons include:

  1. The UEFI firmware should be able to load the kernel directly. Having a separate, non-ESP compliant boot partition for the kernel prevents the UEFI firmware from being able to directly load the kernel.
  2. Nesting the ESP mount point three mount levels deep increases the likelihood that an intermediate mount could fail or otherwise be unavailable when needed. That is, requiring root (/), then boot (/boot), then efi (/efi) to be consecutively mounted is unnecessarily complex and prone to error.
  3. Requiring the boot loader to be able to read other partitions/disks which may be formatted with arbitrary file systems is non-trivial. Even when the boot loader does contain such code, the code that works at installation time can become outdated and fail to access the kernel/initrd after a file system update. This is currently true of GRUB’s ZFS file system driver, for example. You must be careful not to update your ZFS file system if you use the GRUB boot loader or else your system may not come back up the next time you reboot.

Besides the concerns listed above, it is a good idea to have your startup environment — up to and including your initramfs — on a single self-contained file system for recovery purposes. Suppose, for example, that you need to rollback your root file system because it has become corrupted or it has become infected with malware. If your kernel and initramfs are on the root file system, you may be unable to perform the recovery. By having the boot loader, kernel, and initramfs all on a single file system that is rarely accessed or updated, you can increase your chances of being able to recover the rest of your system.

In summary, there are many ways you can lay out your partitions, and the type of hardware (BIOS or UEFI) and the brand of boot loader (GRUB, Syslinux or systemd-boot) are among the factors that will influence which layouts will work.

Other considerations

MBR vs. GPT

GUID Partition Table (GPT) is the newer partition format that supports larger disks. GPT was designed to work with the newer UEFI firmware. It is backward-compatible with the older Master Boot Record (MBR) partition format but not all boot loaders support the MBR boot method. GRUB and Syslinux support both MBR and UEFI, but systemd-boot only supports the newer UEFI boot method.

By using GPT now, you can increase the likelihood that your storage device, or an image of it, can be transferred over to a newer computer in the future should you wish to do so. If you have an older computer that natively supports only MBR-partitioned drives, you may need to add the inst.gpt parameter to Anaconda when starting the installer to get it to use the newer format. How to add the inst.gpt parameter is shown in the below video titled “Partitioning a BIOS Computer”.

If you use the GPT partition format on a BIOS-based computer, and you use the GRUB boot loader, you must additionally create a one megabyte biosboot partition at the start of your storage device. The biosboot partition is not needed by any other brand of boot loader. How to create the biosboot partition is demonstrated in the below video titled “Partitioning a BIOS Computer”.
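For reference, creating the biosboot partition manually with parted might look like this sketch (the device name /dev/sda is hypothetical):

parted /dev/sda mklabel gpt
parted /dev/sda mkpart biosboot 1MiB 2MiB
parted /dev/sda set 1 bios_grub on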

LVM

One last thing to consider when manually partitioning your Linux system is whether to use standard partitions or logical volumes. Logical volumes are managed by the Logical Volume Manager (LVM). You can set up LVM volumes directly on your disk without first creating standard partitions to hold them. However, most computers still require that the boot partition be a standard partition and not an LVM volume. Consequently, LVM only increases the complexity of the system, because the LVM volumes must be created within standard partitions.
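For context, here is a rough sketch of how LVM volumes stack on top of a standard partition (the device name and sizes are made up):

pvcreate /dev/sda2                 # turn a standard partition into an LVM physical volume
vgcreate fedora /dev/sda2          # group physical volumes into a volume group
lvcreate -n root -L 30G fedora     # carve a logical volume out of the group
mkfs.ext4 /dev/fedora/root         # format it like any other block device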

The main features of LVM — online storage resizing and clustering — are not really applicable to the typical end user. Most laptops do not have hot-swappable drive bays for adding or reconfiguring storage while the system is running. And not many laptop or desktop users have clvmd configured so they can access a centralized storage device concurrently from multiple client computers.

LVM is great for servers and clusters. But it adds extra complexity for the typical end user. Go with standard partitions unless you are a server admin who needs the more advanced features.

Video demonstrations

Now that you know which partitions you need, you can watch the short video demonstrations below to see how to manually partition a Fedora Linux computer from the Anaconda installer.

These videos demonstrate creating only the minimally required partitions. You can add more if you choose.

Because the GRUB boot loader requires a more complex partition layout on UEFI systems, the below video titled “Partitioning a UEFI Computer” additionally demonstrates how to install the systemd-boot boot loader. By using the systemd-boot boot loader, you can reduce the number of needed partitions to just two — boot and root. How to use a boot loader other than the default (GRUB) with Fedora’s Anaconda installer is officially documented here.

Partitioning a UEFI Computer
Partitioning a BIOS Computer

Develop GUI apps using Flutter on Fedora

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install

  • the Android SDK;
  • the Flutter SDK itself; and,
  • optionally, an IDE and its Flutter plugins.

Installing the Android SDK

Flutter requires the installation of the Android SDK with the entire Android Studio suite of tools. Google provides a tar.gz archive. The Android Studio executable can be found in the android-studio/bin directory and is called studio.sh. To run it, open a terminal, cd into the aforementioned directory, and then run:

$ ./studio.sh

Installing the Flutter SDK

Before you install Flutter you may want to consider what release channel you want to be on.

The stable channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the beta or even the dev channel.

Either way, you can switch between channels after you install using the flutter channel command explained later in the article.

Head over to the official SDK archive page and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply an xz-compressed tarball (.tar.xz extension). You can extract it wherever you want, as long as you add the flutter/bin subdirectory to the PATH environment variable.
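For example, assuming you extract the archive under ~/development (both the location and the archive file name below are placeholders):

$ mkdir -p ~/development
$ tar xf flutter_linux_stable.tar.xz -C ~/development
$ export PATH="$PATH:$HOME/development/flutter/bin"

Add the export line to your shell resource file (such as ~/.bashrc) to make the change permanent.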

Installing the IDE plugins

To install the plugin for Visual Studio Code, you need to search for Flutter in the Extensions tab. Installing it will also install the Dart plugin.

The same will happen when you install the plugin for Android Studio by opening the Settings, then the Plugins tab and installing the Flutter plugin.

Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

Upgrading and Maintaining Your Flutter Installations

The flutter doctor command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from flutter doctor right after installing on Fedora is:

Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
    ✗ Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 3.5)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[!] Connected device
    ! No devices available

! Doctor found issues in 3 categories.

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

$ flutter doctor --android-licenses

Use the flutter channel command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

$ flutter channel <channel_name>

…where you’d replace <channel_name> with the release channel you want to switch to.

After switching channels, or whenever you feel the need, update your installation. You might consider doing this every once in a while, or when a major update comes out if you follow Flutter news. Run this command:

$ flutter upgrade

Building for Mobile

You can build for Android very easily: the flutter build command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is create a project with flutter create, which will generate some code for an example app and the necessary android and ios folders.
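For example (the project name is arbitrary):

$ flutter create my_app
$ cd my_app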

When you’re done coding you can either run:

  • flutter build apk or flutter build appbundle to generate the necessary app files to distribute, or
  • flutter run to run the app on a connected device or emulator directly.

When you run the app on a phone or emulator with flutter run, you can use the R button on the keyboard to use stateful hot reload. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital R character to the debug console, you trigger a hot restart. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the bolt icon button and a hot restart with the typical refresh button.

Building for the Desktop

To build apps for the desktop on Fedora, use the flutter-desktop-embedding repository. The flutter create command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.

To build or run apps for Linux, you also need to be on the master release channel and enable Linux desktop app development. To do this, run:

$ flutter config --enable-linux-desktop

After that, you can use flutter run to run the app on your development workstation directly, or run flutter build linux to build a binary file in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the linux/ directory:

$ flutter create .

Building for the Web

Starting with Flutter 1.12, you can build Web apps using Flutter with the mainline codebase, without having to use the flutter_web forked libraries, but you have to be running on the beta channel.

If you are (you can switch to it using flutter channel beta and flutter upgrade as we’ve seen earlier), you need to enable web development by running flutter config --enable-web.

After doing that, you can run flutter run -d web and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run flutter build web to build the static website files in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the web/ directory:

$ flutter create .

Packages for Installing Flutter

Other distributions have packages or community repositories that make installing and updating Flutter more straightforward and intuitive. However, at the time of writing, no such thing exists for Fedora. If you have experience packaging RPMs for Fedora, consider contributing to the GitHub repository for this COPR package.

The next step is learning Flutter. You can do that in a number of ways:

  • Read the good API reference documentation on the official site
  • Watch some of the introductory video courses available online
  • Read one of the many books out there today. [Check out the author’s bio for a suggestion! — Ed.]

Photo by Randall Ruiz on Unsplash.


Using Ansible to organize your SSH keys in AWS

If you’ve worked with instances in Amazon Web Services (AWS) for a long time, you may run into this common issue. It’s not technical; it has more to do with the human nature of getting too comfortable. When you launch a new instance in a region you haven’t used recently, you may end up creating a new SSH key pair. This leads to having too many keys, which can become complicated and disordered.

This article shows you a way to have your public key in all regions. A recent Fedora Magazine article includes one solution. But the solution in this article is automated even further, and in a more concise and scalable way.

Say you have a Fedora 30 or 31 desktop system where your key is stored, and Ansible is installed as well. These two things together provide the solution to this problem and many more.

With Ansible’s ec2_key module, you can create a simple playbook that will maintain your SSH key pair in all regions. If you need to add or remove keys, it’s as simple as adding and removing lines from a file.

Setting up and running the playbook

To use the playbook, first install necessary dependencies for the ec2_key module:

$ sudo dnf install python3-boto python3-boto3

The playbook is simple: you only need to change your key and its name as in the example below. After that, run the playbook, and it iterates over all the public AWS regions listed. The example also includes the restricted regions, in case you have access. To include them, uncomment each line as needed, save the file, and then run the playbook again.

---
- name: Maintain an ssh key pair in ec2
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    ansible_python_interpreter: python
  tasks:
  - name: Make available your ssh public key in ec2 for new instances
    ec2_key:
      name: "YOUR KEY NAME GOES HERE"
      key_material: 'YOUR KEY GOES HERE'
      state: present
      region: "{{ item }}"
    with_items:
      - us-east-2        #US East (Ohio)
      - us-east-1        #US East (N. Virginia)
      - us-west-1        #US West (N. California)
      - us-west-2        #US West (Oregon)
      - ap-east-1        #Asia Pacific (Hong Kong)
      - ap-south-1       #Asia Pacific (Mumbai)
      - ap-northeast-2   #Asia Pacific (Seoul)
      - ap-southeast-1   #Asia Pacific (Singapore)
      - ap-southeast-2   #Asia Pacific (Sydney)
      - ap-northeast-1   #Asia Pacific (Tokyo)
      - ca-central-1     #Canada (Central)
      - eu-central-1     #EU (Frankfurt)
      - eu-west-1        #EU (Ireland)
      - eu-west-2        #EU (London)
      - eu-west-3        #EU (Paris)
      - eu-north-1       #EU (Stockholm)
      - me-south-1       #Middle East (Bahrain)
      - sa-east-1        #South America (Sao Paulo)
      # - us-gov-east-1    #AWS GovCloud (US-East)
      # - us-gov-west-1    #AWS GovCloud (US-West)
      # - ap-northeast-3   #Asia Pacific (Osaka-Local)
      # - cn-north-1       #China (Beijing)
      # - cn-northwest-1   #China (Ningxia)

This playbook requires AWS access via API, as well. To do this, use environment variables as follows:

$ AWS_ACCESS_KEY="aws-access-key-id" AWS_SECRET_KEY="aws-secret-key-id" ansible-playbook ec2-playbook.yml

Another option is to install the aws cli tools and add the credentials as explained in a previous Fedora Magazine article. It is not recommended to insert these values in the playbook if you store it anywhere online! You can find this playbook code on GitHub.

After the playbook finishes, confirm that your key is available on the AWS console. To do that:

  1. Log into your AWS console
  2. Go to EC2 > Key Pairs
  3. You should see your key listed. The only limitation is that you have to check region-by-region with this method.

Another way is to use a quick command in a shell to do this check for you.

First, create a shell variable containing all the regions from the playbook:

AWS_REGION="us-east-2 us-east-1 us-west-1 us-west-2 ap-east-1 ap-south-1 ap-northeast-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 ca-central-1 eu-central-1 eu-west-1 eu-west-2 eu-west-3 eu-north-1 me-south-1 sa-east-1"

Then run a for loop to query the AWS API for each region:

for each in ${AWS_REGION} ; do aws ec2 describe-key-pairs --key-name <YOUR KEY GOES HERE> --region ${each} ; done

Keep in mind that to do the above you need to have the aws cli installed.


How to setup an anonymous FTP download server

Sometimes you may not need to set up a full FTP server with authenticated users who have upload and download privileges. If you are simply looking for a quick way to allow users to grab a few files, an anonymous FTP server can fit the bill. This article shows you how to set it up.

This example uses the vsftpd server.

Installing and configuring the anonymous FTP server

Install the vsftpd server package:

$ sudo dnf install vsftpd

Enable the vsftpd service:

$ sudo systemctl enable vsftpd

Next, edit your /etc/vsftpd/vsftpd.conf file to allow anonymous downloads. Make sure you have the following entries.

anonymous_enable=YES

This option controls whether anonymous logins are permitted or not. If enabled, both the usernames ftp and anonymous are recognized as anonymous logins.

local_enable=NO

This option controls whether local logins are permitted.

write_enable=NO

This option controls whether any FTP commands which change the filesystem are allowed.

no_anon_password=YES

When enabled, this option prevents vsftpd from asking for an anonymous password. With this setting, the anonymous user will log straight in without one.

hide_ids=YES

Enable this option to display all user and group information in directory listings as ftp.

pasv_min_port=40000
pasv_max_port=40001

Finally, these options set the minimum and maximum ports to allocate for PASV style data connections. Use them to specify a narrow port range to assist firewalling. Choose a range of ports that aren’t currently in use. This example uses ports 40000-40001, limiting passive connections to just two ports.
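Taken together, the relevant portion of /etc/vsftpd/vsftpd.conf looks like this:

anonymous_enable=YES
local_enable=NO
write_enable=NO
no_anon_password=YES
hide_ids=YES
pasv_min_port=40000
pasv_max_port=40001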

Final steps

Now that you’ve set the options, add the appropriate firewall rules to allow FTP connections along with the passive port range you specified.

$ firewall-cmd --add-service=ftp --perm
$ firewall-cmd --add-port=40000-40001/tcp --perm
$ firewall-cmd --reload

Next, configure SELinux to allow passive FTP:

$ setsebool -P ftpd_use_passive_mode on

And finally, start the vsftp server:

$ systemctl start vsftpd

At this point you have a working FTP server. Place the content you want to offer in /var/ftp. (Typically, system administrators put publicly downloadable content under /var/ftp/pub.) Now you can connect to your server using an FTP client on another system.
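For a quick test, curl can fetch a file anonymously (the hostname and file name below are placeholders for your own):

$ curl ftp://ftp.example.com/pub/somefile.txt -o somefile.txt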


Image courtesy of Tom Woodward on Flickr, CC-BY-SA 2.0.


How to set up a TFTP server on Fedora

TFTP, or Trivial File Transfer Protocol, allows users to transfer files between systems using the UDP protocol. By default, it uses UDP port 69. The TFTP protocol is extensively used to support remote booting of diskless devices. So, setting up a TFTP server on your own local network can be an interesting way to do Fedora installations, or other diskless operations.

TFTP can only read and write files to or from a remote system. It doesn’t have the capability to list files or make any changes on the remote server. There are also no provisions for user authentication. Because of security implications and the lack of advanced features, TFTP is generally only used on a local area network (LAN).

TFTP server installation

The first thing you will need to do is install the TFTP client and server packages:

dnf install tftp-server tftp -y

This creates a tftp service and socket file for systemd under /usr/lib/systemd/system.

/usr/lib/systemd/system/tftp.service
/usr/lib/systemd/system/tftp.socket

Next, copy and rename these files to /etc/systemd/system:

cp /usr/lib/systemd/system/tftp.service /etc/systemd/system/tftp-server.service
cp /usr/lib/systemd/system/tftp.socket /etc/systemd/system/tftp-server.socket

Making local changes

You need to edit these files from the new location after you’ve copied and renamed them, to add some additional parameters. Here is what the tftp-server.service file initially looks like:

[Unit]
Description=Tftp Server
Requires=tftp.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -s /var/lib/tftpboot
StandardInput=socket

[Install]
Also=tftp.socket

Make the following changes to the [Unit] section:

Requires=tftp-server.socket

Make the following changes to the ExecStart line:

ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot

Here are what the options mean:

  • The -c option allows new files to be created.
  • The -p option performs no additional permission checks beyond the normal system-provided access controls.
  • The -s option is recommended for security, as well as for compatibility with some boot ROMs which cannot easily be made to include a directory name in their requests.

The default upload/download location for transferring the files is /var/lib/tftpboot.

Next, make the following changes to the [Install] section:

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Don’t forget to save your changes!

Here is the completed /etc/systemd/system/tftp-server.service file:

[Unit]
Description=Tftp Server
Requires=tftp-server.socket
Documentation=man:in.tftpd

[Service]
ExecStart=/usr/sbin/in.tftpd -c -p -s /var/lib/tftpboot
StandardInput=socket

[Install]
WantedBy=multi-user.target
Also=tftp-server.socket

Starting the TFTP server

Reload the systemd daemon:

systemctl daemon-reload

Now start and enable the server:

systemctl enable --now tftp-server

To change the permissions of the TFTP server to allow upload and download functionality, use this command. Note that TFTP is an inherently insecure protocol, so this may not be advisable on a network you share with other people.

chmod 777 /var/lib/tftpboot

Configure your firewall to allow TFTP traffic:

firewall-cmd --add-service=tftp --perm
firewall-cmd --reload

Client Configuration

Install the TFTP client:

dnf install tftp -y

Run the tftp command to connect to the TFTP server. Here is an example that enables the verbose option:

[client@thinclient:~ ]$ tftp 192.168.1.164
tftp> verbose
Verbose mode on.
tftp> get server.logs
getting from 192.168.1.164:server.logs to server.logs [netascii]
Received 7 bytes in 0.0 seconds [inf bits/sec]
tftp> quit
[client@thinclient:~ ]$ 

Remember, TFTP does not have the ability to list file names. So you’ll need to know the file name before running the get command to download any files.
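Because the server was started with the -c option, uploads also work from the same interactive session (the file name here is illustrative):

tftp> put mylocal.txt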


Photo by Laika Notebooks on Unsplash.


Command line quick tips: Using pipes to connect tools

One of the most powerful concepts of Linux is carried on from its predecessor, UNIX. Your Fedora system has a bunch of useful, single-purpose utilities available for all sorts of simple operations. Like building blocks, you can attach them in creative and complex ways. Pipes are key to this concept.

Before you hear about pipes, though, it’s helpful to know the basic concept of input and output. Many utilities in your Fedora system can operate against files. But they can often take input not stored on a disk. You can think of input flowing freely into a process such as a utility as its standard input (also sometimes called stdin).

Similarly, a tool or process can display information to the screen by default. This is often because its default output is connected to the terminal. You can think of the free-flowing output of a process as its standard output (or stdout — go figure!).

Examples of standard input and output

Often when you run a tool, it outputs to the terminal. Take for instance this simple sequence command using the seq tool:

$ seq 1 6
1
2
3
4
5
6

The output, which is simply to count integers up from 1 to 6, one number per line, comes to the screen. But you could also send it to a file using the > character. The shell interpreter uses this character to mean “redirect standard output to a file whose name follows.” So as you can guess, this command puts the output into a file called six.txt:

$ seq 1 6 > six.txt

Notice nothing comes to the screen. You’ve sent the output into a file instead. If you run the command cat six.txt you can verify that.

You probably remember the simple use of the grep command from a previous article. You could ask grep to search for a pattern in a file by simply declaring the file name. But that’s simply a convenience feature in grep. Technically it’s built to take standard input, and search that.

The shell uses the < character similarly to mean “redirect standard input from a file whose name follows.” So you could just as well search for the number 4 in the file six.txt this way:

$ grep 4 < six.txt
4

Of course the output here is, by default, the content of any line with a match. So grep finds the digit 4 in the file and outputs that line to standard output.

Introducing pipes

Now imagine: what if you took the standard output of one tool, and instead of sending it to the terminal, you sent it into another tool’s standard input? This is the essence of the pipe.

Your shell uses the vertical bar character | to represent a pipe between two commands. You can find it on most keyboards above the backslash \ character. It’s used like this:

$ command1 | command2

For most simple utilities, you wouldn’t use an output filename option on command1, nor an input file option on command2. (You might use other options, though.) Instead of using files, you’re sending the output of command1 directly into command2. You can use as many pipes in a row as needed, creating complex pipelines of several commands in a row.

This (relatively useless) example combines the commands above:

$ seq 1 6 | grep 4
4

What happened here? The seq command outputs the integers 1 through 6, one line at a time. The grep command processes that output line by line, searching for a match on the digit 4, and outputs any matching line.
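You can chain more than two commands, too. For instance, this pipeline counts how many of the first 100 integers contain the digit 4:

$ seq 1 100 | grep 4 | wc -l
19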

Here’s a slightly more useful example. Let’s say you want to find out if TCP port 22, the ssh port, is open on your system. You could find this out using the ss command* by looking through its copious output. Or you could figure out its filter language and use that. Or you could use pipes. For example, pipe it through grep looking for the ssh port label:

$ ss -tl | grep ssh
LISTEN 0 128 0.0.0.0:ssh 0.0.0.0:*
LISTEN 0 128 [::]:ssh [::]:*

* Those readers familiar with the venerable netstat command may note it is mostly obsolete, as stated in its man page.

That’s a lot easier than reading through many lines of output. And of course, you can combine redirectors and pipes, for instance:

$ ss -tl | grep ssh > ssh-listening.txt

This is barely scratching the surface of pipes. Let your imagination run wild. Have fun piping!



Trace code in Fedora with bpftrace

bpftrace is a new eBPF-based tracing tool that was first included in Fedora 28. It was developed by Brendan Gregg, Alastair Robertson and Matheus Marchini with the help of a loosely-knit team of hackers across the Net. A tracing tool lets you analyze what a system is doing behind the curtain. It tells you which functions in code are being called, with which arguments, how many times, and so on.

This article covers some basics about bpftrace, and how it works. Read on for more information and some useful examples.

eBPF (extended Berkeley Packet Filter)

eBPF is a tiny virtual machine, or a virtual CPU to be more precise, in the Linux kernel. It can load and run small programs in a safe and controlled way in kernel space. This makes it safer to use, even in production systems. This virtual machine has its own instruction set architecture (ISA) resembling a subset of modern processor architectures. The ISA makes it easy to translate those programs to the real hardware. The kernel performs just-in-time translation to native code for the main architectures to improve performance.

The eBPF virtual machine allows the kernel to be extended programmatically. Nowadays several kernel subsystems take advantage of this new powerful Linux Kernel capability. Examples include networking, seccomp, tracing, and more. The main idea is to attach eBPF programs into specific code points, and thereby extend the original kernel behavior.

eBPF machine language is very powerful. But writing code directly in it is extremely painful, because it’s a low-level language. This is where bpftrace comes in. It provides a high-level language to write eBPF tracing scripts. The tool then translates these scripts to eBPF with the help of clang/LLVM libraries, and attaches them to the specified code points.

Installation and quick start

To install bpftrace, run the following command in a terminal using sudo:

$ sudo dnf install bpftrace

Try it out with a “hello world” example:

$ sudo bpftrace -e 'BEGIN { printf("hello world\n"); }'

Note that you must run bpftrace as root due to the privileges required. Use the -e option to specify a program, and to construct the so-called “one-liners.” This example only prints hello world, and then waits for you to press Ctrl+C.

BEGIN is a special probe name that fires only once at the beginning of execution. Every action inside the curly braces { } fires whenever the probe is hit — in this case, it’s just a printf.

Let’s jump now to a more useful example:

$ sudo bpftrace -e 't:syscalls:sys_enter_execve { printf("%s called %s\n", comm, str(args->filename)); }'

This example prints the parent process name (comm) and the name of every new process being created in the system. t:syscalls:sys_enter_execve is a kernel tracepoint. It’s a shorthand for tracepoint:syscalls:sys_enter_execve, but both forms can be used. The next section shows you how to list all available tracepoints.

comm is a bpftrace builtin that represents the process name. filename is a field of the t:syscalls:sys_enter_execve tracepoint. You can access these fields through the args builtin.

All available fields of the tracepoint can be listed with this command:

$ sudo bpftrace -lv "t:syscalls:sys_enter_execve"
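On a recent kernel, the output looks something like this (the exact fields can vary between kernel versions):

tracepoint:syscalls:sys_enter_execve
    int __syscall_nr
    const char * filename
    const char *const * argv
    const char *const * envp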

Example usage

Listing probes

A central concept in bpftrace is the probe point. Probe points are instrumentation points in code (kernel or userspace) where eBPF programs can be attached. They fit into the following categories:

  • kprobe – kernel function start
  • kretprobe – kernel function return
  • uprobe – user-level function start
  • uretprobe – user-level function return
  • tracepoint – kernel static tracepoints
  • usdt – user-level static tracepoints
  • profile – timed sampling
  • interval – timed output
  • software – kernel software events
  • hardware – processor-level events

All available kprobe/kretprobe, tracepoints, software and hardware probes can be listed with this command:

$ sudo bpftrace -l

The uprobe/uretprobe and usdt probes are userspace probes specific to a given executable. To use them, use the special syntax shown later in this article.

The profile and interval probes fire at fixed time intervals. Fixed time intervals are not covered in this article.

Counting system calls

Maps are special BPF data types that store counts, statistics, and histograms. You can use maps to summarize how many times each syscall is being called:

$ sudo bpftrace -e 't:syscalls:sys_enter_* { @[probe] = count(); }'

Some probe types allow wildcards to match multiple probes. You can also specify multiple attach points for an action block using a comma separated list. In this example, the action block attaches to all tracepoints whose name starts with t:syscalls:sys_enter_, which means all available syscalls.

The bpftrace builtin function count() counts the number of times this function is called. @[] represents a map (an associative array). The key of this map is probe, which is another bpftrace builtin that represents the full probe name.

Here, the same action block is attached to every syscall. Then, each time a syscall is called the map will be updated, and the entry is incremented in the map relative to this same syscall. When the program terminates, it automatically prints out all declared maps.

This example counts the syscalls called globally. It’s also possible to filter for a specific process by PID using the bpftrace filter syntax:

$ sudo bpftrace -e 't:syscalls:sys_enter_* / pid == 1234 / { @[probe] = count(); }'

Write bytes by process

Using these concepts, let’s analyze how many bytes each process is writing:

$ sudo bpftrace -e 't:syscalls:sys_exit_write /args->ret > 0/ { @[comm] = sum(args->ret); }'

bpftrace attaches the action block to the write syscall return probe (t:syscalls:sys_exit_write). Then, it uses a filter to discard the negative values, which are error codes (/args->ret > 0/).

The map key comm represents the process name that called the syscall. The sum() builtin function accumulates the number of bytes written for each map entry or process. args is a bpftrace builtin to access tracepoint’s arguments and return values. Finally, if successful, the write syscall returns the number of written bytes. args->ret provides access to the bytes.

Read size distribution by process (histogram)

bpftrace supports the creation of histograms. Let’s analyze one example that creates a histogram of the read size distribution by process:

$ sudo bpftrace -e 't:syscalls:sys_exit_read { @[comm] = hist(args->ret); }'

Histograms are BPF maps, so they must always be attributed to a map (@). In this example, the map key is comm.

The example makes bpftrace generate one histogram for every process that calls the read syscall. To generate just one global histogram, attribute the hist() function just to ‘@’ (without any key).

bpftrace automatically prints out declared histograms when the program terminates. The value used as base for the histogram creation is the number of read bytes, found through args->ret.

Tracing userspace programs

You can also trace userspace programs with uprobes/uretprobes and USDT (User-level Statically Defined Tracing). The next example uses a uretprobe, which probes the end of a user-level function. It gets the command lines issued in every bash shell running on the system:

$ sudo bpftrace -e 'uretprobe:/bin/bash:readline { printf("readline: \"%s\"\n", str(retval)); }'

To list all available uprobes/uretprobes of the bash executable, run this command:

$ sudo bpftrace -l "uprobe:/bin/bash"

uprobe instruments the beginning of a user-level function’s execution, and uretprobe instruments the end (its return). readline() is a function of /bin/bash, and it returns the typed command line. retval is the return value for the instrumented function, and can only be accessed on uretprobe.

When using uprobes, you can access arguments with arg0..argN. A str() call is necessary to turn the char * pointer to a string.

Shipped Scripts

There are many useful scripts shipped with the bpftrace package. You can find them in the /usr/share/bpftrace/tools/ directory.

Among them, you can find:

  • killsnoop.bt – Trace signals issued by the kill() syscall.
  • tcpconnect.bt – Trace all TCP network connections.
  • pidpersec.bt – Count new processes (via fork) per second.
  • opensnoop.bt – Trace open() syscalls.
  • vfsstat.bt – Count some VFS calls, with per-second summaries.

You can directly use the scripts. For example:

$ sudo /usr/share/bpftrace/tools/killsnoop.bt

You can also study these scripts as you create new tools.


Photo by Roman Romashov on Unsplash.


Use Postfix to get email from your Fedora system

Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.

Install packages

Use dnf to install the required packages (you configured sudo, right?):

$ sudo -i
# dnf install postfix mailx

If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the alternatives command to set your system default MTA:

$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix

Enter to keep the current selection[+], or type selection number: 2

Create a password_maps file

You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:

# MY_EMAIL_ADDRESS=glb@gmail.com
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c

If you are using a Gmail account, you’ll need to configure an “app password” for Postfix rather than using your Gmail password. See “Sign in using App Passwords” for instructions on configuring an app password.

Next, you must run the postmap command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:

# postmap /etc/postfix/password_maps

The hashed version will have the same file name but it will be suffixed with .db.

Update the main.cf file

Update Postfix’s main.cf configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.

relayhost = smtp.gmail.com:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps

The example assumes you’re using Gmail for the relayhost setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.

For the most up-to-date details about the above configuration options, see the man page:

$ man postconf.5

Enable, start, and test Postfix

After you have updated the main.cf file, enable and start the Postfix service:

# systemctl enable --now postfix.service

You can then exit your sudo session as root using the exit command or Ctrl+D. You should now be able to test your configuration with the mail command:

$ echo 'It worked!' | mail -s "Test: $(date)" glb@gmail.com

Update services

If you have services like logwatch, mdadm, fail2ban, apcupsd or certwatch installed, you can now update their configurations so that their email notifications will go to your internet email address.

Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the /etc/aliases file on your system (you’ll need to use sudo to edit this file, or switch to the root account first):

root: glb+root@gmail.com

Now run this command to re-read the aliases:

# newaliases
  • TIP: If you are using Gmail, you can add an alpha-numeric mark between your username and the @ symbol as demonstrated above to make it easier to identify and filter the email that you will receive from your computer(s).

Troubleshooting

View the mail queue:

$ mailq

Clear all email from the queues:

# postsuper -d ALL

Filter the configuration settings for interesting values:

$ postconf | grep "^relayhost\|^smtp_"

View the postfix/smtp logs:

$ journalctl --no-pager -t postfix/smtp

Reload postfix after making configuration changes:

$ systemctl reload postfix

Photo by Sharon McCutcheon on Unsplash.


Command line quick tips: More about permissions

A previous article covered some basics about file permissions on your Fedora system. This installment shows you additional ways to use permissions to manage file access and sharing. It also builds on the knowledge and examples in the previous article, so if you haven’t read that one, do check it out.

Symbolic and octal

In the previous article you saw how there are three distinct permission sets for a file. The user that owns the file has a set, members of the group that owns the file have a set, and then a final set is for everyone else. These permissions are expressed on screen in a long listing (ls -l) using symbolic mode.

Each set has r, w, and x entries for whether a particular user (owner, group member, or other) can read, write, or execute that file. But there’s another way to express these permissions: in octal mode.

You’re used to the decimal numbering system, which has ten distinct values (0 through 9). The octal system, on the other hand, has eight distinct values (0 through 7). In the case of permissions, octal is used as a shorthand to show the value of the r, w, and x fields. Think of each field as having a value:

  • r = 4
  • w = 2
  • x = 1

Now you can express any combination with a single octal value. For instance, read and write permission, but no execute permission, would have a value of 6. Read and execute permission only would have a value of 5. A file’s rwxr-xr-x symbolic permission has an octal value of 755.

You can use octal values to set file permissions with the chmod command similarly to symbolic values. The following two commands set the same permissions on a file:

chmod u=rw,g=r,o=r myfile1
chmod 644 myfile1
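You can verify the result and see both notations at once with the stat command:

$ stat -c '%a %A' myfile1
644 -rw-r--r--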

Special permission bits

There are also several special permission bits available on a file. These are called setuid (or suid), setgid (or sgid), and the sticky bit (or delete inhibit). Think of this as yet another set of octal values:

  • setuid = 4
  • setgid = 2
  • sticky = 1

The setuid bit is ignored unless the file is executable. If that’s the case, the file (presumably an app or a script) runs as if it were launched by the user who owns the file. A good example of setuid is the /bin/passwd utility, which allows a user to set or change passwords. This utility must be able to write to files no user should be allowed to change. Therefore it is carefully written, owned by the root user, and has a setuid bit so it can alter the password related files.

The setgid bit works similarly for executable files. The file will run with the permissions of the group that owns it. However, setgid also has an additional use for directories. If a file is created in a directory with setgid permission, the group owner for the file will be set to the group owner of the directory.

Finally, the sticky bit, while ignored for files, is useful for directories. The sticky bit set on a directory will prevent a user from deleting files in that directory owned by other users.

The way to set these bits with chmod in octal mode is to add a value prefix, such as 4755 to add setuid to an executable file. In symbolic mode, the u and g can be used to set or remove setuid and setgid, such as u+s,g+s. The sticky bit is set using o+t. (Other combinations, like o+s or u+t, are meaningless and ignored.)

Sharing and special permissions

Recall the example from the previous article concerning a finance team that needs to share files. As you can imagine, the special permission bits help to solve their problem even more effectively. The original solution simply made a directory the whole group could write to:

drwxrwx---. 2 root finance 4096 Jul 6 15:35 finance

One problem with this directory is that users dwayne and jill, who are both members of the finance group, can delete each other’s files. That’s not optimal for a shared space. It might be useful in some situations, but probably not when dealing with financial records!

Another problem is that files in this directory may not be truly shared, because they will be owned by the default groups of dwayne and jill — most likely the user private groups also named dwayne and jill.

A better way to solve this is to set both setgid and the sticky bit on the folder. This will do two things — cause files created in the folder to be owned by the finance group automatically, and prevent dwayne and jill from deleting each other’s files. Either of these commands will work:

sudo chmod 3770 finance
sudo chmod u+rwx,g+rwxs,o+t finance

The long listing for the file now shows the new special permissions applied. The sticky bit appears as T and not t because the execute bit is not set for everyone else, so the folder is not searchable by users outside the finance group.

drwxrws--T. 2 root finance 4096 Jul 6 15:35 finance