
Using the Quarkus Framework on Fedora Silverblue – Just a Quick Look

Quarkus is a framework for Java development that is described on their web site as:

A Kubernetes Native Java stack tailored for OpenJDK HotSpot and GraalVM, crafted from the best of breed Java libraries and standards

https://quarkus.io/ – Feb. 5, 2020

Silverblue — a Fedora Workstation variant with a container based workflow central to its functionality — should be an ideal host system for the Quarkus framework.

There are currently two ways to use Quarkus with Silverblue. It can be run in a pet container such as Toolbox/Coretoolbox. Or it can be run directly in a terminal emulator. This article will focus on the latter method.

Why Quarkus

According to Quarkus.io: “Quarkus has been designed around a containers first philosophy. What this means in real terms is that Quarkus is optimized for low memory usage and fast startup times.” To achieve this, they employ first-class support for GraalVM/SubstrateVM, build-time metadata processing, reduction in reflection usage, and native image preboot. For details about why this matters, read Container First at Quarkus.

Prerequisites

A few prerequisites will need to be configured before you can start using Quarkus. First, you need an IDE of your choice. Any of the popular ones will do; Vim or Emacs will work as well. The Quarkus site provides full details on how to set up the three major Java IDEs (Eclipse, IntelliJ IDEA, and Apache NetBeans). You will need a JDK installed; JDK 8, JDK 11, or any distribution of OpenJDK is fine. GraalVM 19.2.1 or 19.3.1 is needed for compiling down to a native image. You will also need Apache Maven 3.5.3+ or Gradle. This article will use Maven because that is what the author is more familiar with. Use the following command to layer the Java 11 OpenJDK and Maven onto Silverblue:

$ rpm-ostree install java-11-openjdk* maven

Alternatively, you can download your favorite version of Java and install it directly in your home directory.

After rebooting, configure your JAVA_HOME and PATH environment variables to reference the new applications. Next, go to the GraalVM download page and get GraalVM version 19.2.1 or 19.3.1 for Java 11 OpenJDK. Install GraalVM per the instructions provided: basically, copy and decompress the archive into a directory under your home directory, then modify the PATH environment variable to include GraalVM. You can use it as you would any JDK, so you can set it up as a platform in the IDE of your choice. Now is the time to set up native image support if you are going to use it. For more details on setting up your system to use Quarkus and the Quarkus native image, check out their Getting Started tutorial. With these parts installed and the environment set up, you can now try out Quarkus.
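For example, after layering the RPMs and unpacking GraalVM, the environment setup in ~/.bashrc might look something like the following sketch (the GraalVM directory name is illustrative; adjust both paths to match your installation, and note that Fedora typically places the layered JDK under /usr/lib/jvm):

export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export GRAALVM_HOME=$HOME/graalvm-ce-java11-19.3.1
export PATH=$JAVA_HOME/bin:$GRAALVM_HOME/bin:$PATH

If you intend to build native executables, also install GraalVM’s native-image component with the GraalVM updater:

$ gu install native-image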

Bootstrapping

Quarkus recommends you create a project using the bootstrapping method. Below are some example commands entered into a terminal emulator in the Gnome shell on Silverblue.

$ mvn io.quarkus:quarkus-maven-plugin:1.2.1.Final:create \
    -DprojectGroupId=org.jakfrost \
    -DprojectArtifactId=silverblue-logo \
    -DclassName="org.jakfrost.quickstart.GreetingResource" \
    -Dpath="/hello"
$ cd silverblue-logo

The bootstrapping process shown above will create a project under the current directory with the name silverblue-logo. After this completes, start the application in development mode:

$ ./mvnw compile quarkus:dev

With the application running, check whether it responds as expected by issuing the following command:

$ curl -w '\n' http://localhost:8080/hello

The above command should print hello on the next line. Alternatively, test the application by browsing to http://localhost:8080/hello with your web browser. You should see the same lonely hello on an otherwise empty page. Leave the application running for the next section.

Injection

Open the project in your favorite IDE. If you are using NetBeans, simply open the project directory where the pom.xml file resides. Now would be a good time to have a look at the pom.xml file.

Quarkus uses ArC for its dependency injection. ArC is a dependency of quarkus-resteasy, so it is already part of the core Quarkus installation. Add a companion bean to the project by creating a Java class in your IDE called GreetingService.java. Then put the following code into it:

import javax.enterprise.context.ApplicationScoped;

@ApplicationScoped
public class GreetingService {

    public String greeting(String name) {
        return "hello " + name;
    }

}

The above code is a verbatim copy of what is used in the injection example in the Quarkus Getting Started tutorial. Modify GreetingResource.java by adding the following lines of code:

import javax.inject.Inject;
import org.jboss.resteasy.annotations.jaxrs.PathParam;

@Inject
GreetingService service; // inject the service

@GET // add a getter to use the injected service
@Produces(MediaType.TEXT_PLAIN)
@Path("/greeting/{name}")
public String greeting(@PathParam String name) {
    return service.greeting(name);
}

If you haven’t stopped the application, it will be easy to see the effect of your changes. Just enter the following curl command:

$ curl -w '\n' http://localhost:8080/hello/greeting/Silverblue

The above command should print hello Silverblue on the following line. The URL should work similarly in a web browser. There are two important things to note:

  1. The application was running and Quarkus detected the file changes on the fly.
  2. The injection of code into the app was very easy to perform.

The native image

Next, package your application as a native image that will work in a podman container. Exit the application by pressing CTRL-C. Then use the following command to package it:

$ ./mvnw package -Pnative -Dquarkus.native.container-runtime=podman

Now, build the container:

$ podman build -f src/main/docker/Dockerfile.native -t silverblue-logo/silverblue-logo

Now run it with the following:

$ podman run -i --rm -p 8080:8080 localhost/silverblue-logo/silverblue-logo

To get the container build to complete successfully, it was necessary to copy the target directory and its contents into the src/main/docker/ directory. Why this was necessary still requires investigation; the workaround was quick and easy, but it is not an acceptable way to solve the problem.
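In this case the workaround amounted to something like the following, run from the project root before building the container (shown only to illustrate the stop-gap, not as a recommended practice):

$ cp -r target src/main/docker/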

Now that you have the container running with the application inside, you can use the same methods as before to verify that it is working.

Point your browser to the URL http://localhost:8080/ and you should get an index.html that is automatically generated by Quarkus every time you create or modify an application. It resides in the src/main/resources/META-INF/resources/ directory. Drop other HTML files in this resources directory to have Quarkus serve them on request.

For example, create a file named logo.html in the resources directory containing the below markup:

<!DOCTYPE html>
<!--
To change this license header, choose License Headers in Project Properties.
To change this template file, choose Tools | Templates
and open the template in the editor.
-->
<html>
    <head>
        <title>Silverblue</title>
        <meta charset="UTF-8">
        <meta name="viewport" content="width=device-width, initial-scale=1.0">
    </head>
    <body>
        <div>
            <img src="fedora-silverblue-logo.png" alt="Fedora Silverblue"/>
        </div>
    </body>
</html>

Next, save the below image alongside the logo.html file with the name fedora-silverblue-logo.png:

Now view the results at http://localhost:8080/logo.html.

Testing your application

Quarkus supports JUnit 5 tests. Look at your project’s pom.xml file; in it you should see two test dependencies. The generated project contains a simple test named GreetingResourceTest.java. Testing the native executable is only supported in prod mode, but you can test the jar file in dev mode. The generated tests use RestAssured, but you can use whatever test library you wish with Quarkus. Use Maven to run the tests:

$ ./mvnw test

More details can be found in the Quarkus Getting Started tutorial.

Further reading and tutorials

Quarkus has an extensive collection of tutorials and guides. They are well worth the time to delve into the breadth of this microservices framework.

Quarkus also maintains a publications page that lists some very interesting articles on actual use cases of Quarkus. This article has only just scratched the surface of the topic. If what was presented here has piqued your interest, then follow the above links for more information.


Demonstrating PERL with Tic-Tac-Toe, Part 2

The astute observer may have noticed that PERL is misspelled. In a March 1, 1999 interview with Linux Journal, Larry Wall explained that he originally intended to include the letter “A” from the word “And” in the title “Practical Extraction And Report Language” such that the acronym would correctly spell the word PEARL. However, before he released PERL, Larry heard that another programming language had already taken that name. To resolve the name collision, he dropped the “A”. The acronym is still valid because title case and acronyms allow articles, short prepositions and conjunctions to be omitted (compare for example the acronym LASER).

Name collisions happen when distinct commands or variables with the same name are merged into a single namespace. Because Unix commands share a common namespace, two commands cannot have the same name.

The same problem exists for the names of global variables and subroutines within programs written in languages like PERL. This is an especially significant problem when programmers try to collaborate on large software projects or otherwise incorporate code written by other programmers into their own code base.

Starting with version 5, PERL supports packages. Packages allow PERL code to be modularized with unique namespaces so that the global variables and functions of the modularized code will not collide with the variables and functions of another script or module.

Shortly after the release of PERL 5, software developers all over the world began writing modules to extend PERL’s core functionality. Because many of those developers (currently about 15,000) have made their work freely available on the Comprehensive Perl Archive Network (CPAN), you can easily extend the functionality of PERL on your PC so that you can perform very advanced and complex tasks with just a few commands.

The remainder of this article builds on the previous article in this series by demonstrating how to install, use and create PERL modules on Fedora Linux.

An example PERL program

See the example program from the previous article below, with a few lines of code added to import and use some modules named chip1, chip2 and chip3. It is written in such a way that the program should work even if the chip modules cannot be found. Future articles in this series will build on the below script by adding the additional modules named chip2 and chip3.

You should be able to copy and paste the below code into a plain text file and use the same one-liner that was provided in the previous article to strip the leading numbers.

00 #!/usr/bin/perl
01 
02 use strict;
03 use warnings;
04 
05 use feature 'state';
06 
07 use constant MARKS=>[ 'X', 'O' ];
08 use constant HAL9K=>'O';
09 use constant BOARD=>'
10 ┌───┬───┬───┐
11 │ 1 │ 2 │ 3 │
12 ├───┼───┼───┤
13 │ 4 │ 5 │ 6 │
14 ├───┼───┼───┤
15 │ 7 │ 8 │ 9 │
16 └───┴───┴───┘
17 ';
18 
19 use lib 'hal';
20 use if -e 'hal/chip1.pm', 'chip1';
21 use if -e 'hal/chip2.pm', 'chip2';
22 use if -e 'hal/chip3.pm', 'chip3';
23 
24 sub get_mark {
25    my $game = shift;
26    my @nums = $game =~ /[1-9]/g;
27    my $indx = (@nums+1) % 2;
28 
29    return MARKS->[$indx];
30 }
31 
32 sub put_mark {
33    my $game = shift;
34    my $mark = shift;
35    my $move = shift;
36 
37    $game =~ s/$move/$mark/;
38 
39    return $game;
40 }
41 
42 sub get_move {
43    return (<> =~ /^[1-9]$/) ? $& : '0';
44 }
45 
46 PROMPT: {
47    no strict;
48    no warnings;
49 
50    state $game = BOARD;
51 
52    my $mark;
53    my $move;
54 
55    print $game;
56 
57    if (defined &get_victor) {
58       my $victor = get_victor $game, MARKS;
59       if (defined $victor) {
60          print "$victor wins!\n";
61          complain if ($victor ne HAL9K);
62          last PROMPT;
63       }
64    }
65 
66    last PROMPT if ($game !~ /[1-9]/);
67 
68    $mark = get_mark $game;
69    print "$mark\'s move?: ";
70 
71    if ($mark eq HAL9K and defined &hal_move) {
72       $move = hal_move $game, $mark, MARKS;
73       print "$move\n";
74    } else {
75       $move = get_move;
76    }
77    $game = put_mark $game, $mark, $move;
78 
79    redo PROMPT;
80 }

Once you have the above code downloaded and working, create a subdirectory named hal under the same directory that you put the above program. Then copy and paste the below code into a plain text file and use the same procedure to strip the leading numbers. Name the version without the line numbers chip1.pm and move it into the hal subdirectory.

00 # basic operations chip
01 
02 package chip1;
03 
04 use strict;
05 use warnings;
06 
07 use constant MAGIC=>'
08 ┌───┬───┬───┐
09 │ 2 │ 9 │ 4 │
10 ├───┼───┼───┤
11 │ 7 │ 5 │ 3 │
12 ├───┼───┼───┤
13 │ 6 │ 1 │ 8 │
14 └───┴───┴───┘
15 ';
16 
17 use List::Util 'sum';
18 use Algorithm::Combinatorics 'combinations';
19 
20 sub get_moves {
21    my $game = shift;
22    my $mark = shift;
23    my @nums;
24 
25    while ($game =~ /$mark/g) {
26       push @nums, substr(MAGIC, $-[0], 1);
27    }
28 
29    return @nums;
30 }
31 
32 sub get_victor {
33    my $game = shift;
34    my $marks = shift;
35    my $victor;
36 
37    TEST: for (@$marks) {
38       my $mark = $_;
39       my @nums = get_moves $game, $mark;
40 
41       next unless @nums >= 3;
42       for (combinations(\@nums, 3)) {
43          my @comb = @$_;
44          if (sum(@comb) == 15) {
45             $victor = $mark;
46             last TEST;
47          }
48       }
49    }
50 
51    return $victor;
52 }
53 
54 sub hal_move {
55    my $game = shift;
56    my @nums = $game =~ /[1-9]/g;
57    my $rand = int rand @nums;
58 
59    return $nums[$rand];
60 }
61 
62 sub complain {
63    print "Daisy, Daisy, give me your answer do.\n";
64 }
65 
66 sub import {
67    no strict;
68    no warnings;
69 
70    my $p = __PACKAGE__;
71    my $c = caller;
72 
73    *{ $c . '::get_victor' } = \&{ $p . '::get_victor' };
74    *{ $c . '::hal_move' } = \&{ $p . '::hal_move' };
75    *{ $c . '::complain' } = \&{ $p . '::complain' };
76 }
77 
78 1;

The first thing that you will probably notice when you try to run the program with chip1.pm in place is an error message like the following (emphasis added):

$ Can't locate Algorithm/Combinatorics.pm in @INC (you may need to install the Algorithm::Combinatorics module) (@INC contains: hal /usr/local/lib64/perl5/5.30 /usr/local/share/perl5/5.30 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at hal/chip1.pm line 17.
BEGIN failed--compilation aborted at hal/chip1.pm line 17.
Compilation failed in require at /usr/share/perl5/if.pm line 15.
BEGIN failed--compilation aborted at game line 18.

When you see an error like the one above, just use the dnf command to search Fedora’s package repository for the name of the system package that provides the needed PERL module as shown below. Note that the module name and path from the above error message have been prefixed with */ and then surrounded with single quotes.

$ dnf provides '*/Algorithm/Combinatorics.pm'
...
perl-Algorithm-Combinatorics-0.27-17.fc31.x86_64 : Efficient generation of combinatorial sequences
Repo : fedora
Matched from:
Filename : /usr/lib64/perl5/vendor_perl/Algorithm/Combinatorics.pm

Hopefully it will find the needed package which you can then install:

$ sudo dnf install perl-Algorithm-Combinatorics

Once you have all the needed modules installed, the program should work.

How it works

This example is admittedly quite contrived. Nothing about Tic-Tac-Toe is complex enough to need a CPAN module. To demonstrate installing and using a non-standard module, the above program uses the combinations library routine from the Algorithm::Combinatorics module to generate a list of the possible combinations of three numbers from the provided set. Because the board numbers have been mapped to a 3×3 magic square, any set of three numbers that sum to 15 will be aligned on a column, row or diagonal and will therefore be a winning combination.
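For instance, you can watch the combinations routine at work from the command line. The one-liner below assumes the same Algorithm::Combinatorics and List::Util modules are installed and uses a hypothetical set of magic-square numbers already held by one player; only the first combination (the top row of the magic square) sums to 15:

$ perl -MAlgorithm::Combinatorics=combinations -MList::Util=sum \
    -E 'say "@$_ sums to ", sum(@$_) for combinations([2,9,4,7], 3)'
2 9 4 sums to 15
2 9 7 sums to 18
2 4 7 sums to 13
9 4 7 sums to 20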

Modules are imported into a program with the use and require commands. The only difference between them is that the use command automatically calls the import subroutine (if one exists) in the module being imported. The require command does not automatically call any subroutines.
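A quick way to see the difference from the command line (both one-liners rely on the core List::Util module): with use, the imported sum routine can be called directly, while with require it must be called by its fully qualified name.

$ perl -E 'use List::Util "sum"; say sum(1..4)'
10
$ perl -E 'require List::Util; say List::Util::sum(1..4)'
10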

Modules are just files with a .pm extension that contain PERL subroutines and variables. They begin with the package command and end with 1;. But otherwise, they look like any other PERL script. The file name should match the package name. Package and file names are case sensitive.

Beware that when you are reading online documentation about PERL modules, the documentation often veers off into topics about classes. Classes are built on modules, but a simple module does not have to adhere to all the restrictions that apply to classes. When you start seeing words like method, inheritance and polymorphism, you are reading about classes, not modules.

There are two subroutine names that are reserved for special use in modules. They are import and unimport and they are called by the use and no directives respectively.

The purpose of the import and unimport subroutines is typically to alias and unalias the module’s subroutines in and out of the calling namespace respectively. For example, line 17 of chip1.pm shows the sum subroutine being imported from the List::Util module.

The constant module, as used on line 07 of chip1.pm, is also altering the caller’s namespace (chip1), but rather than importing a predefined subroutine, it is creating a special type of variable.

All the identifiers immediately following the use keywords in the above examples are modules. On my system, many of them can be found under the /usr/share/perl5 directory.

Notice that the above error message states “@INC contains:” followed by a list of directories. @INC is a special PERL variable that lists, in order, the directories from which modules should be loaded. The first file found with a matching name will be used.

As demonstrated on line 19 of the Tic-Tac-Toe game, the lib module can be used to update the list of directories in the @INC variable.
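You can inspect the search path yourself from the command line; note how the second command shows hal prepended to the front of the list:

$ perl -E 'say for @INC'
$ perl -E 'use lib "hal"; say for @INC'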

The chip1 module above provides an example of a very simple import subroutine. In most cases you will want to use the import subroutine that is provided by the Exporter module rather than implementing your own. A custom import subroutine is used in the above example to demonstrate the basics of what it does. Also, the custom implementation makes it easy to override the subroutine definitions in later examples.

The import subroutine shown above reveals some of the hidden magic that makes packages work. All variables that are both globally scoped (that is, created outside of any pair of curly brackets) and dynamically scoped (that is, not prefixed with the keywords my or state) and all global subroutines are automatically prefixed with a package name. The default package name if no package command has been issued is main.

By default, the current package is assumed when an unqualified variable or subroutine is used. When get_move is called from the PROMPT block in the above example, main::get_move is assumed because the PROMPT block exists in the main package. Likewise, when get_moves is called from the get_victor subroutine, chip1::get_moves is assumed because get_victor exists in the chip1 package.
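A couple of one-liners illustrate the point: an unqualified global created in the default package is really a main:: variable, and a variable created under another package name must be qualified with that name.

$ perl -E 'our $greeting = "hello"; say $main::greeting'
hello
$ perl -E 'package hal; our $greeting = "hello"; say $hal::greeting'
hello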

If you want to access a variable or subroutine that exists in a different package, you either have to use its fully qualified name or create a local alias that refers to the desired subroutine.

The import subroutine shown above demonstrates how to create subroutine aliases that refer to subroutines in other packages. On lines 73-75, the fully qualified names for the subroutines are being constructed and then the symbol table name for the subroutine in the calling namespace (the package in which the use statement is being executed) is being assigned the reference of the subroutine in the local package (the package in which the import subroutine is defined).

Notice that subroutines, like variables, have sigils. The sigil for subroutines is the ampersand symbol (&). In most contexts, the sigil for subroutines is optional. When working with references (as shown on lines 73-75 of the import subroutine) and when checking if a subroutine is defined (as shown on lines 57 and 71 of the PROMPT block), the sigil for subroutines is required.

The import subroutine shown above is just a bare minimum example. There is a lot that it doesn’t do. In particular, a proper import subroutine would not automatically import any subroutines or variables. Normally, the user would be expected to provide a list of the routines to be imported on the use line and that list is available to the import subroutine in the @_ array.

Final notes

Lines 25-27 of chip1.pm provide a good example of PERL’s dense notation problem. With just a couple of lines of code, the board numbers on which a given mark has been placed can be determined. But does the statement within the conditional clause of the while loop perform the search from the beginning of the game variable on each iteration? Or does it continue from where it left off each time? PERL correctly guesses that I want it to provide the position ($-[0]) of the next mark, if any exists, on each iteration. But exactly what it will do can be very difficult to determine just by looking at the code.

The last things of note in the above examples are the strict and warnings directives. They enable extra compile-time and runtime debugging messages respectively. Many PERL programmers recommend always including these directives so that programming errors are more likely to be spotted. The downside of having them enabled is that some complex code will sometimes cause the debugger to erroneously generate unwanted output. Consequently, the strict and/or warnings directives may need to be disabled in some code blocks to get your program to run correctly as demonstrated on lines 67 and 68 of the example chip1 module. The strict and warnings directives have nothing to do with the program and they can be omitted. Their only purpose is to provide feedback to the program developer.


Make free encrypted backups to the cloud on Fedora

Most free cloud storage is limited to 5GB or less; even Google Drive is limited to 15GB. While not heavily advertised, IBM offers free accounts with a whopping 25GB of cloud storage. This is not a limited-time offer, and you don’t have to provide a credit card. It’s absolutely free! Better yet, since it’s S3 compatible, most of the S3 tools available for backups should work fine.

This article will show you how to use restic for encrypted backups onto this free storage. Please also refer to this previous Magazine article about installing and configuring restic. Let’s get started!

Creating your free IBM account and storage

Head over to the IBM cloud services site and follow the steps to sign up for a free account here: https://cloud.ibm.com/registration. You’ll need to verify your account from the email confirmation that IBM sends to you.

Then log in to your account to bring up your dashboard, at https://cloud.ibm.com/.

Click on the Create resource button.

Click on Storage and then Object Storage.

Next click on the Create Bucket button.

This brings up the Configure your resource section.

Next, click on the Create button to use the default settings.

Under Predefined buckets click on the Standard box:

A unique bucket name is automatically created, but it’s suggested that you change this.

In this example, the bucket name is changed to freecloudstorage.

Click on the Next button after choosing a bucket name:

Continue to click on the Next button until you get to the Summary page:

Scroll down to the Endpoints section.

The information in the Public section is the location of your bucket. This is what you need to specify in restic when you create your backups. In this example, the location is s3.us-south.cloud-object-storage.appdomain.cloud.

Making your credentials

The last thing that you need to do is create an access ID and secret key. To start, click on Service credentials.

Click on the New credential button.

Choose a name for your credential, make sure you check the Include HMAC Credential box and then click on the Add button. In this example I’m using the name resticbackup.

Click on View credentials.

The access_key_id and secret_access_key is what you are looking for. (For obvious reasons, the author’s details here are obscured.)

You will need to export these as environment variables with the shell’s export command, or put them into a backup script.

Preparing a new repository

Restic refers to your backup as a repository, and can make backups to any bucket on your IBM cloud account. First, set up the following environment variables using the access_key_id and secret_access_key that you retrieved from your IBM cloud bucket. These can also be set in any backup script you may create.

$ export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
$ export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>

Even though you are using IBM Cloud and not AWS, as previously mentioned, IBM Cloud storage is S3 compatible, and restic uses its internal S3 (AWS) support for any S3-compatible storage. So these AWS keys really refer to the keys from your IBM bucket.

Create the repository by initializing it. A prompt appears for you to type a password for the repository. Do not lose this password because your data is irrecoverable without it!

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET init

The PUBLIC_ENDPOINT_LOCATION was specified in the Endpoint section of your Bucket summary.

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage init

Creating backups

Now it’s time to backup some data. Backups are called snapshots. Run the following command and enter the repository password when prompted.

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET backup files_to_backup

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage backup Documents/
Enter password for repository:
repository 106a2eb4 opened successfully, password is correct

Files:          51 new, 0 changed, 0 unmodified
Dirs:            0 new, 0 changed, 0 unmodified
Added to the repo: 11.451 MiB

processed 51 files, 11.451 MiB in 0:06
snapshot 611e9577 saved
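If you plan to run backups regularly, you can collect the environment variables and the backup command into a small script. Below is a minimal sketch; the keys, password and backup path are placeholders, and restic reads the repository password from the RESTIC_PASSWORD environment variable so you are not prompted interactively:

#!/bin/bash
# example-backup.sh -- illustrative only; substitute your own values
export AWS_ACCESS_KEY_ID=<MY_ACCESS_KEY>
export AWS_SECRET_ACCESS_KEY=<MY_SECRET_ACCESS_KEY>
export RESTIC_PASSWORD=<MY_REPOSITORY_PASSWORD>

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage backup ~/Documents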

Restoring from backups

Now that you’ve backed up some files, it’s time to make sure you know how to restore them. To get a list of all of your backup snapshots, use this command:

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET snapshots

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage snapshots
Enter password for repository:
ID        Date                 Host    Tags  Directory
-------------------------------------------------------------------
106a2eb4  2020-01-15 15:20:42  client        /home/curt/Documents

To restore an entire snapshot, run a command like this:

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore snapshotID --target restoreDirectory

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target ~
Enter password for repository: repository 106a2eb4 opened successfully, password is correct
restoring <Snapshot 106a2eb4 of [/home/curt/Documents]

If the directory still exists on your system, be sure to specify a different location for the restoreDirectory. For example:

restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target /tmp

To restore an individual file, run a command like this:

restic -r s3:http://PUBLIC_ENDPOINT_LOCATION/BUCKET restore snapshotID --target restoreDirectory --include filename

For example:

$ restic -r s3:http://s3.us-south.cloud-object-storage.appdomain.cloud/freecloudstorage restore 106a2eb4 --target /tmp --include file1.txt
Enter password for repository:
restoring <Snapshot 106a2eb4 of [/home/curt/Documents] at 2020-01-16 15:20:42.833131988 -0400 EDT by curt@client> to /tmp

Photo by Alex Machado on Unsplash.

[EDITORS NOTE: The Fedora Project is sponsored by Red Hat, which is owned by IBM.]

[EDITORS NOTE: Updated at 1647 UTC on 24 February 2020 to correct a broken link.]


Demonstrating PERL with Tic-Tac-Toe, Part 1

Larry Wall’s Practical Extraction and Reporting Language (PERL) was originally developed in 1987 as a general-purpose Unix scripting language that borrowed features from C, sh, awk, sed, BASIC, and LISP. In the late 1990s, before PHP became more popular, PERL was commonly used for CGI scripting. PERL is still the go-to tool for many sysadmins who need something more powerful than sed or awk when writing complex parsing and automation scripts. It has a somewhat high learning curve due to its dense notation. But a recent survey indicates that PERL developers earn 54 per cent more than the average developer. So it may still be a worthwhile language to learn.

PERL is far too complex to cover in any significant detail in this magazine. But this short series of articles will attempt to demonstrate a few of the most basic features of the language so that you can get a sense of what the language is like and the kind of things it can do.

An example PERL program

PERL was originally a language optimized for scanning arbitrary text files, extracting information from those text files, and printing reports based on that information. To demonstrate how this core feature of PERL works, a very simple Tic-Tac-Toe game is provided below. The below program scans a textual representation of a Tic-Tac-Toe board, extracts and manipulates the numbers on the board, and prints the modified result to the console.

00 #!/usr/bin/perl
01 
02 use feature 'state';
03 
04 use constant MARKS=>[ 'X', 'O' ];
05 use constant BOARD=>'
06 ┌───┬───┬───┐
07 │ 1 │ 2 │ 3 │
08 ├───┼───┼───┤
09 │ 4 │ 5 │ 6 │
10 ├───┼───┼───┤
11 │ 7 │ 8 │ 9 │
12 └───┴───┴───┘
13 ';
14 
15 sub get_mark {
16    my $game = shift;
17    my @nums = $game =~ /[1-9]/g;
18    my $indx = (@nums+1) % 2;
19 
20    return MARKS->[$indx];
21 }
22 
23 sub put_mark {
24    my $game = shift;
25    my $mark = shift;
26    my $move = shift;
27 
28    $game =~ s/$move/$mark/;
29 
30    return $game;
31 }
32 
33 sub get_move {
34    return (<> =~ /^[1-9]$/) ? $& : '0';
35 }
36 
37 PROMPT: {
38    state $game = BOARD;
39 
40    my $mark;
41    my $move;
42 
43    print $game;
44 
45    last PROMPT if ($game !~ /[1-9]/);
46 
47    $mark = get_mark $game;
48    print "$mark\'s move?: ";
49 
50    $move = get_move;
51    $game = put_mark $game, $mark, $move;
52 
53    redo PROMPT;
54 }

To try out the above program on your PC, you can copy-and-paste the above text into a plain text file and save and run it. The line numbers will have to be removed before the program will work. Of course, the command that one uses to perform that sort of textual extraction and reporting is perl.

Assuming that you have saved the above text to a file named game.txt, the following command can be used to strip the leading numbers from all the lines and write the modified version to a new file named game:

$ cat game.txt | perl -npe 's/...//' > game

The above command is a very small PERL script and it is an example of what is called a one-liner.

Now that the line numbers have been removed, the program can be run by entering the following command:

$ perl game

How it works

PERL is a procedural programming language. A program written in PERL consists of a series of commands that are executed sequentially. With few exceptions, most commands alter the state of the computer’s memory in some way.

Line 00 in the Tic-Tac-Toe program isn’t technically part of the PERL program and it can be omitted. It is called a shebang (the letter e is pronounced soft as it is in the word shell). The purpose of the shebang line is to tell the operating system what interpreter the remaining text should be processed with if one isn’t specified on the command line.

Line 02 isn’t strictly necessary for this program either. It makes available an advanced command named state. The state command creates a variable that can retain its value after it has gone out of scope. I’m using it here as a way to avoid declaring a global variable. It is considered good practice in computer programming to avoid using global variables where possible because they allow for action at a distance. If you didn’t follow all of that, don’t worry about it. It’s not important at this point.

PERL scopes, blocks and subroutines

Scope is a very important concept that one needs to be familiar with when reading and writing procedural programs. In PERL, scope is often delineated by a pair of curly brackets. Within the global scope, the above Tic-Tac-Toe program defines four sub-scopes on lines 15-21, 23-31, 33-35 and 37-54. The first three scopes are prefixed with subroutine declarations and the last scope is prefixed with the label PROMPT.

Scopes serve multiple purposes in programming languages. One purpose of a scope is to group a set of commands together as a unit so that they can be called repeatedly with a single command rather than having to repeat several lines of code each time in the program. Another purpose is to enhance the readability of the program by denoting a restricted area where the value of a variable can be updated.

Within the scope that is labeled PROMPT and defined on lines 37-54 of the above Tic-Tac-Toe program, a variable named mark is created using the my keyword (line 40). After it is created, it is assigned a value by calling the get_mark subroutine (line 47). Later, the put_mark subroutine is called (line 51) to change the value in the square that was chosen by the get_move subroutine on line 50.

Hopefully it is obvious that the mark that put_mark is setting is meant to be the same mark that get_mark retrieved earlier. As a programmer though, how do I know that the value of mark wasn’t changed when the get_move subroutine was called? This example program is small enough that every line can be examined to make that determination. But most programs are much larger than this example and having to know exactly what is going on at all points in the program’s execution can be overwhelming and error-prone. Because mark was created with the my keyword, its value can only be accessed and modified within the scope that it was created (or a sub-scope). It doesn’t matter what subroutines at parallel or higher scopes do; even if they change variables with the same name in their own scopes. This property of scopes — restricting the range of lines on which the value of a variable can be updated — improves the readability of the code by allowing the programmer to focus on a smaller section of the program without having to be concerned about what is happening elsewhere in the program.

Lines 04 and 05 define the MARKS and BOARD variables, respectively. Because they are not within any curly bracket pairing, they exist in the global scope. It is permissible to create constant variables in the global scope because they are read-only and therefore not subject to the action at a distance concern. In PERL, it is traditional to name constants in all upper case letters.

Notice that scopes can be nested such that variables defined in outer scopes can be accessed and modified from within inner scopes. This is why the MARKS and BOARD variables can be accessed within the get_mark subroutine and PROMPT block respectively — they are sub-scopes of the global scope.

The statements in the program are executed in order from top to bottom and left to right. Each statement is terminated with a semi-colon (;). The semi-colon can be omitted after the last statement in any scope and after the closing curly bracket of many statements that define the flow of the program, such as sub, if and while.

In PERL nomenclature, scopes are called blocks. Scope is the more general term that is typically used in online references like Wikipedia, but the remainder of this article will use the more perlish term blocks.

The statements within the first three blocks are not immediately executed as the program is evaluated from top to bottom. Rather, they are associated with the subroutine name preceding the block. This is the function of the sub keyword — it associates a subroutine name with a block of statements so that they can be called as a unit elsewhere in the program. The three subroutines get_mark (lines 15-21), put_mark (lines 23-31), and get_move (lines 33-35) are called on lines 47, 51 and 50 respectively.

The PROMPT block is not associated with a subroutine definition or other flow-control statement, so the statements within it are immediately executed in sequence when the program is run.

PERL regular expressions

If there is one feature that is more central to PERL than any other it is regular expressions. Notice that in the example Tic-Tac-Toe program every block contains a =~ (or !~) operator followed by some text surrounded with forward slashes (/). The text within the forward slashes is called a regular expression and the operator binds the regular expression to a variable or data stream.

It is important to note that there are different regular expression syntaxes. Some editors and command-line tools (for example, grep) allow the user to select which regular expression syntax they prefer to use. PERL-Compatible Regular Expressions (PCRE) are by far the most powerful.

Regular expressions used in matching operations

The result of applying the regular expression to a variable or data stream is usually a value that, when used in a flow-control statement such as if or while, will evaluate to true or false depending on whether or not the match succeeded. There are modifiers that can be appended to the closing slash of a regular expression to change its return value.

Line 45 of the Tic-Tac-Toe program provides a typical example of how a regular expression is used in a PERL program. The regular expression [1-9] is being applied to the variable game which holds the in-memory representation of the Tic-Tac-Toe game board. The expression is a character class that matches any character in the range from 1 to 9 (inclusive). The result of the regular expression will be true only if a character from 1 to 9 is present in what is being evaluated. On line 45, the !~ operator applies the regular expression to the game variable and negates its sense such that the result will be true only if none of the characters from 1 to 9 are present. Because the regular expression is embedded within the conditional clause of the if statement modifier, the statement last PROMPT is only executed if there are no characters in the range from 1 to 9 left on the game board.
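The same negated match can be tried directly on a hypothetical board string that contains no remaining digits:

$ perl -E 'say "game over" if "XOXOXOXOX" !~ /[1-9]/'
game over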

The last statement is one of a few flow-control statements in PERL that allow the program execution sequence to jump from the current line to another line somewhere else in the program. Other flow-control statements that work in a similar fashion include next, continue, redo and goto (the goto statement should be avoided whenever possible because it allows for spaghetti code).

In the example Tic-Tac-Toe program, the last PROMPT statement on line 45 causes program execution to resume just after the PROMPT block. Because there are no more statements in the program, the program will terminate.

The label PROMPT was chosen arbitrarily. Any label (or none at all) could have been used.

The redo PROMPT statement at the end of the PROMPT block causes program execution to jump back to the beginning of the PROMPT block.

Notice that the state keyword like the my keyword creates a variable that can only be accessed or modified within the block that it is created (or a nested sub-block if any exist). Unlike the my keyword, variables created with the state keyword keep their former value when the blocks they are in are called repeatedly. This is the behavior that is needed for the game variable because it is being updated incrementally each time the PROMPT block is run. The mark and move variables are meant to be different on each iteration of the PROMPT block, so they do not need to be created with the state keyword.

Regular expressions used for input validation

Another common use of regular expressions is for input validation. Line 34 of the example Tic-Tac-Toe program provides an example of a regular expression being used for input validation. The expression on line 34 is similar to the one on line 45. It is also checking for characters from 1 to 9. However, it is performing the check against the null filehandle (<>); it is using the =~ operator; and it is prefixed and suffixed with the zero-width assertions ^ and $ respectively.

The null filehandle, when accessed as it is on line 34, will cause the program to pause until one line of input is provided. The regular expression will evaluate to true only if the line contains one character in the range from 1 to 9. The assertions ^ and $ do not match any characters. Rather, they match the beginning and end positions, respectively, on the line. The regular expression effectively reads: “Begin (^) with one character in the range from 1 to 9 ([1-9]) and end ($)”.

Because it is embedded in the conditional clause of the ternary operator, line 34 will return either what was matched ($&) if the match succeeded or the character zero (0) if it failed. If the input were not validated in this way, then the user could submit their opponent’s mark rather than a number on the board.
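You can try the same validation pattern on its own; only a single digit from 1 to 9 passes through, and anything else becomes 0:

$ echo 5 | perl -E 'say((<> =~ /^[1-9]$/) ? $& : "0")'
5
$ echo 55 | perl -E 'say((<> =~ /^[1-9]$/) ? $& : "0")'
0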

Regular expressions used for filtering data

Line 17 demonstrates using the global modifier (g) on a regular expression. With the global modifier, the regular expression will return the number of matches instead of true or false. In list context, it returns a list of all the matched substrings.

Line 17 uses a regular expression to copy all the numbers in the range from 1 to 9 from the game variable into the array named nums. Line 18 then uses the modulo operator with the integer 2 as its second argument to determine whether the length of the nums array is even or odd. The formula on line 18 will result in 0 if the length of nums is odd and 1 if the length of nums is even. Finally, the computed index (indx) is used to access an element of the MARKS array and return it. Using this formula, the get_mark function will alternately return X or O depending on whether there are an odd or even number of positions left on the board.
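The effect of the global modifier in list context is easy to see in a one-liner; the string below is a stand-in for a partially played board:

$ perl -E '@nums = "X 2 O 4 X 6" =~ /[1-9]/g; say scalar @nums; say "@nums"'
3
2 4 6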

Regular expressions used for substituting data

Line 28 demonstrates yet another common use of regular expressions in PERL. Rather than being used in a match operator (m), the regular expression on line 28 is being used in a substitution operator (s). If the value in the move variable is found in the game variable, it will be substituted with the value of the mark variable.

PERL sigils and data types

The last things of note that are used in the example Tic-Tac-Toe program are the sigils ($ and @) that are placed before the variable names. When creating a variable, the sigil indicates the type of variable being created. It is important to note that a different sigil can be prefixed to the variable name when it is accessed to indicate whether one or many items should be returned from the variable.

There are three built-in data types in PERL: scalars ($), arrays (@) and associative arrays (%). Scalars hold a single data item such as a number, character or string of characters. Arrays are numerically indexed sets of scalars. Associative arrays are arrays that are indexed by scalars rather than numbers.


How to get MongoDB Server on Fedora

Mongo (from “humongous”) is a high-performance, open source, schema-free, document-oriented database and one of the most popular so-called NoSQL databases. It uses JSON as a document format, and it is designed to be scalable and replicable across multiple server nodes.

Story about license change

It’s been more than a year since upstream MongoDB decided to change the license of the server code. The previous license was the GNU Affero General Public License v3 (AGPLv3). However, upstream wrote a new license designed to make companies running MongoDB as a service contribute back to the community. The new license is called the Server Side Public License (SSPLv1), and more about this step and its rationale can be found in the MongoDB SSPL FAQ.

Fedora has always included only free (as in “freedom”) software. When SSPL was released, Fedora determined that it is not a free software license in that sense. All versions of MongoDB released before the license change (October 2018) could potentially have been kept in Fedora, but never updating the packages would eventually bring security issues. Hence the Fedora community decided to remove the MongoDB server entirely, starting with Fedora 30.

What options are left to developers?

Well, alternatives exist; for example, PostgreSQL also supports JSON in recent versions, and it can be used in cases where MongoDB can no longer be used. With its JSONB type, indexing works very well in PostgreSQL, with performance comparable to MongoDB, and without compromising ACID guarantees.

The technical reasons that a developer may have chosen MongoDB did not change with the license, so many still want to use it. What is important to realize is that the SSPL license was only applied to the MongoDB server. There are other projects that MongoDB upstream develops, like MongoDB tools, C and C++ client libraries and connectors for various dynamic languages, that are used on the client side (in applications that want to communicate with the server over the network). Since the license is kept free (Apache License mostly) for those packages, they are staying in Fedora repositories, so users can use them for application development.
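For example, the Python client library can be installed straight from the Fedora repositories (the package name below is the one used in current Fedora releases; other language bindings have similarly named packages):

$ sudo dnf install python3-pymongo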

The only change is really the server package itself, which was removed entirely from Fedora repos. Let’s see what a Fedora user can do to get the non-free packages.

How to install MongoDB server from the upstream

When Fedora users want to install a MongoDB server, they need to approach MongoDB upstream directly. However, upstream does not ship RPM packages for Fedora itself. Instead, the MongoDB server is available either as a source tarball that users need to compile themselves (which requires some developer knowledge), or Fedora users can use some compatible packages. Of the compatible options, the best choice at this point is the RHEL-8 RPMs. The following steps describe how to install them and how to start the daemon.

1. Create a repository with upstream RPMs (RHEL-8 builds)

$ cat << EOF | sudo tee /etc/yum.repos.d/mongodb.repo
[mongodb-upstream]
name=MongoDB Upstream Repository
baseurl=https://repo.mongodb.org/yum/redhat/8Server/mongodb-org/4.2/x86_64/
gpgcheck=1
enabled=1
gpgkey=https://www.mongodb.org/static/pgp/server-4.2.asc
EOF

2. Install the meta-package, which pulls in the server and tools packages

$ sudo dnf install mongodb-org
<snipped>
Installed:
  mongodb-org-4.2.3-1.el8.x86_64           mongodb-org-mongos-4.2.3-1.el8.x86_64  
  mongodb-org-server-4.2.3-1.el8.x86_64    mongodb-org-shell-4.2.3-1.el8.x86_64
  mongodb-org-tools-4.2.3-1.el8.x86_64          

Complete!

3. Start the MongoDB daemon

$ sudo systemctl start mongod
$ sudo systemctl status mongod
● mongod.service - MongoDB Database Server
   Loaded: loaded (/usr/lib/systemd/system/mongod.service; enabled; vendor preset: disabled)
   Active: active (running) since Sat 2020-02-08 12:33:45 EST; 2s ago
     Docs: https://docs.mongodb.org/manual
  Process: 15768 ExecStartPre=/usr/bin/mkdir -p /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15769 ExecStartPre=/usr/bin/chown mongod:mongod /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15770 ExecStartPre=/usr/bin/chmod 0755 /var/run/mongodb (code=exited, status=0/SUCCESS)
  Process: 15771 ExecStart=/usr/bin/mongod $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 15773 (mongod)
   Memory: 70.4M
      CPU: 611ms
   CGroup: /system.slice/mongod.service
           └─15773 /usr/bin/mongod -f /etc/mongod.conf

4. Verify that the server runs by connecting to it from the mongo shell

$ mongo
MongoDB shell version v4.2.3
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("20b6e61f-c7cc-4e9b-a25e-5e306d60482f") }
MongoDB server version: 4.2.3
Welcome to the MongoDB shell.
For interactive help, type "help".
For more comprehensive documentation, see
    http://docs.mongodb.org/
---

> _

That’s all. As you can see, the RHEL-8 packages are pretty compatible, and it should stay that way as long as the Fedora packages remain compatible with what’s in RHEL-8. Just be careful to comply with the SSPLv1 license in your use.


Enable remote collaboration with tmate.io on Fedora

Being able to collaborate on tasks remotely is an increasing need in today’s world. Contributing to an open source project? Working remotely? tmate is a tmux fork that makes it easy to share a terminal session with others. It can save you hours of lonely debugging or programming.

tmate, being a tmux fork, supports all of tmux’s features and configuration. Also, tmux and tmate can co-exist on the same system. To learn more about tmux, you can read the following article.

Installing tmate on Fedora

tmate is available in the Fedora repository, making it really easy to install.

$ sudo dnf install tmate
$ tmate
Connecting to ssh.tmate.io…
Note: clear your terminal before sharing readonly access
web session read only: https://tmate.io/t/ro-F2aK7T
ssh session read only: ssh [email protected]
web session: https://tmate.io/t/H5rPw
ssh session: ssh [email protected]

After starting tmate, different ways to share your session will be available. You have the choice between ssh (read-only, read-write) or web (read-only, read-write).

The web client is known to have a few issues and is still work in progress, for example the tmux key bindings are not yet supported.

On the host running tmate, you start a new pane by hitting “Ctrl+b, c”. The new pane will then be available with anyone connected to your session.

You can easily keep track of how many clients are connected to your session using the tmate control pane. To access it, hit “Ctrl+b, 0 (zero)”. You will then see something like this.

A mate has joined (109.95.145.251) -- 1 client currently connected
A mate has left (109.95.145.251) -- 0 client currently connected
A mate has joined (109.95.145.251) -- 1 client currently connected

To close a session, simply exit tmate with “Ctrl+c, Ctrl+d“.

Running your own server

By default, tmate uses a remote server hosted at tmate.io. If you prefer, you have the option to run your own server. For convenience, a container image is provided and instructions are available on tmate.io.

It is important to remember that sharing your terminal session in read-write mode gives the connected client full access to your system. So make sure you trust the people you share your session with, or use the read-only mode.


Develop GUI apps using Flutter on Fedora

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install

  • the Android SDK;
  • the Flutter SDK itself; and,
  • optionally, an IDE and its Flutter plugins.

Installing the Android SDK

Flutter requires the installation of the Android SDK with the entire Android Studio suite of tools. Google provides a tar.gz archive. The Android Studio executable can be found in the android-studio/bin directory and is called studio.sh. To run it, open a terminal, cd into the aforementioned directory, and then run:

$ ./studio.sh

Installing the Flutter SDK

Before you install Flutter you may want to consider what release channel you want to be on.

The stable channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the beta or even the dev channel.

Either way, you can switch between channels after you install using the flutter channel command explained later in the article.

Head over to the official SDK archive page and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply an xz-compressed tarball (.tar.xz extension). You can extract it wherever you want, given that you add the flutter/bin subdirectory to the PATH environment variable.
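For example, assuming the bundle was downloaded to ~/Downloads and you want the SDK to live under ~/development (both locations are arbitrary), the steps might look like this:

$ mkdir -p ~/development
$ tar xf ~/Downloads/flutter_linux_*-stable.tar.xz -C ~/development
$ echo 'export PATH="$HOME/development/flutter/bin:$PATH"' >> ~/.bash_profile

Open a new terminal, or source the profile file, for the PATH change to take effect.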

Installing the IDE plugins

To install the plugin for Visual Studio Code, you need to search for Flutter in the Extensions tab. Installing it will also install the Dart plugin.

The same will happen when you install the plugin for Android Studio by opening the Settings, then the Plugins tab and installing the Flutter plugin.

Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

Upgrading and Maintaining Your Flutter Installations

The flutter doctor command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from flutter doctor right after installing on Fedora is:

Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
    ✗ Android licenses not accepted. To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 3.5)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[!] Connected device
    ! No devices available

! Doctor found issues in 3 categories.

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

$ flutter doctor --android-licenses

Use the flutter channel command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

$ flutter channel <channel_name>

…where you’d replace <channel_name> with the release channel you want to switch to.

After doing that, or whenever you feel the need to do it, you need to update your installation. You might consider running this every once in a while or when a major update comes out if you follow Flutter news. Run this command:

$ flutter upgrade

Building for Mobile

You can build for Android very easily: the flutter build command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is to create a project with flutter create, which will generate some code for an example app and the necessary android and ios folders.

When you’re done coding you can either run:

  • flutter build apk or flutter build appbundle to generate the necessary app files to distribute, or
  • flutter run to run the app on a connected device or emulator directly.
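For example, a first project might look like this (the project name is arbitrary):

$ flutter create hello_flutter
$ cd hello_flutter
$ flutter run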

When you run the app on a phone or emulator with flutter run, you can press the r key on the keyboard to trigger a stateful hot reload. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital R character to the debug console, you trigger a hot restart. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the bolt icon button and a hot restart with the typical refresh button.

Building for the Desktop

To build apps for the desktop on Fedora, use the flutter-desktop-embedding repository. The flutter create command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.

To build or run apps for Linux, you also need to be on the master release channel and enable Linux desktop app development. To do this, run:

$ flutter config --enable-linux-desktop

After that, you can use flutter run to run the app on your development workstation directly, or run flutter build linux to build a binary file in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the linux/ directory:

$ flutter create .

Building for the Web

Starting with Flutter 1.12, you can build Web apps using Flutter with the mainline codebase, without having to use the flutter_web forked libraries, but you have to be running on the beta channel.

If you are (you can switch to it using flutter channel beta and flutter upgrade as we’ve seen earlier), you need to enable web development by running flutter config --enable-web.

After doing that, you can run flutter run -d web and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run flutter build web to build the static website files in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the web/ directory:

$ flutter create .
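To preview the compiled site locally, you could serve the generated files with any static web server; python3 -m http.server here is just one convenient option:

$ flutter build web
$ cd build/web
$ python3 -m http.server 8000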

Packages for Installing Flutter

Other distributions have packages or community repositories that make installing and updating Flutter more straightforward. However, at the time of writing, no such option exists on Fedora. If you have experience packaging RPMs for Fedora, consider contributing to the GitHub repository for this COPR package.

The next step is learning Flutter. You can do that in a number of ways:

  • Read the good API reference documentation on the official site
  • Watch some of the introductory video courses available online
  • Read one of the many books out there today. [Check out the author’s bio for a suggestion! — Ed.]

Photo by Randall Ruiz on Unsplash.


A quick introduction to Toolbox on Fedora

Toolbox allows you to sort and manage your development environments in containers without requiring root privileges or manually attaching volumes. It creates a container where you can install your own CLI tools, without installing them on the base system itself. You can also utilize it when you do not have root access or cannot install programs directly. This article gives you an introduction to toolbox and what it does.

Installing Toolbox

Silverblue includes Toolbox by default. For the Workstation and Server editions, you can grab it from the default repositories using dnf install toolbox.
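For example, on Workstation:

$ sudo dnf install toolbox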

Creating Toolboxes

Open your terminal and run toolbox enter. The utility will automatically request permission to download the latest image, create your first container, and place your shell inside this container.

$ toolbox enter
No toolbox containers found. Create now? [y/N] y
Image required to create toolbox container.
Download registry.fedoraproject.org/f30/fedora-toolbox:30 (500MB)? [y/N]: y

At first, there seems to be no difference between the toolbox and your base system: your filesystems and files appear unchanged. Here is an example using a repository that contains documentation source for a resume under a ~/src/resume folder. The resume is built using the pandoc tool.

$ pwd
/home/rwaltr
$ cd src/resume/
$ head -n 5 Makefile
all: pdf html rtf text docx

pdf: init
	pandoc -s -o BUILDS/resume.pdf markdown/*
$ make pdf
bash: make: command not found
$ pandoc -v
bash: pandoc: command not found

This toolbox does not have the programs required to build the resume. You can remedy this by installing the tools with dnf. You will not be prompted for the root password, because you are running in a container.

$ sudo dnf groupinstall "Authoring and Publishing" -y && sudo dnf install pandoc make -y
...
$ make all # Successful builds
mkdir -p BUILDS
pandoc -s -o BUILDS/resume.pdf markdown/*
pandoc -s -o BUILDS/resume.html markdown/*
pandoc -s -o BUILDS/resume.rtf markdown/*
pandoc -s -o BUILDS/resume.txt markdown/*
pandoc -s -o BUILDS/resume.docx markdown/*
$ ls BUILDS/
resume.docx resume.html resume.pdf resume.rtf resume.txt

Run exit at any time to exit the toolbox.

$ cd BUILDS/
$ pandoc --version || ls
pandoc 2.2.1
Compiled with pandoc-types 1.17.5.4, texmath 0.11.1.2, skylighting 0.7.5
...
for a particular purpose.
resume.docx resume.html resume.pdf resume.rtf resume.txt
$ exit
logout
$ pandoc --version || ls
bash: pandoc: command not found...
resume.docx resume.html resume.pdf resume.rtf resume.txt

You retain the files created by your toolbox in your home directory. None of the programs installed in your toolbox will be available outside of it.

Tips and tricks

This introduction to toolbox only scratches the surface. Here are some additional tips, but you can also check out the official documentation.

  • toolbox --help will show you the man page for Toolbox
  • You can have multiple toolboxes at once. Use toolbox create -c Toolboxname and toolbox enter -c Toolboxname
  • Toolbox uses Podman to do the heavy lifting. Use toolbox list to find the IDs of the containers Toolbox creates. Podman can use these IDs to perform actions such as rm and stop, as shown in the sketch after this list. (You can also read more about Podman in this Magazine article.)
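A short sketch of that workflow (python-dev is just an example toolbox name, and the container ID is whatever toolbox list reports):

$ toolbox create -c python-dev
$ toolbox enter -c python-dev
$ toolbox list
$ podman stop <container ID from the list above>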

Photo courtesy of Florian Richter from Flickr.


Using SSH port forwarding on Fedora

You may already be familiar with using the ssh command to access a remote system. The protocol behind ssh allows terminal input and output to flow through a secure channel. But did you know that you can also use ssh to send and receive other data securely as well? One way is to use port forwarding, which allows you to connect network ports securely while conducting your ssh session. This article shows you how it works.

About ports

A standard Linux system has a set of network ports already assigned, numbered from 0 to 65535. Your system reserves ports up to 1023 for system use, and on most systems ordinary (non-root) users can’t bind to these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system’s /etc/services file.

You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as httpd). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also “wiring” your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.

So why use port forwarding when you have so many ports available? Here are a couple common cases in the life of a web developer.

Local port forwarding

Imagine that you are doing web development on a remote system called remote.example.com. You usually reach this system via ssh but it’s behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it’s helpful to be able to use your web browser to point to the remote system. But you can’t reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.

Local forwarding allows you to tunnel a port available via the remote system through your ssh connection. The port appears as a local port on your system (thus “local forwarding.”)

Let’s say your web app is running on port 8000 on the remote.example.com box. To locally forward that system’s port 8000 to your system’s port 8000, use the -L option with ssh when you start your session:

$ ssh -L 8000:localhost:8000 remote.example.com

Wait, why did we use localhost as the target for forwarding? It’s because from the perspective of remote.example.com, you’re asking the host to use its own port 8000. (Recall that any host usually can refer to itself as localhost to connect to itself via a network connection.) That port now connects to your system’s port 8000. Once the ssh session is ready, keep it open, and you can type http://localhost:8000 in your browser to see your web app. The traffic between systems now travels securely over an ssh tunnel!
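With the session open, a quick sanity check from another terminal on your machine might look like this (assuming the app answers plain HTTP):

$ curl http://localhost:8000/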

If you have a sharp eye, you may have noticed something. What if we used a different hostname than localhost for the remote.example.com to forward? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the db.example.com box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can’t ssh to the actual db.example.com host:

$ ssh -L 3306:db.example.com:3306 remote.example.com

Now you can run MariaDB commands against your localhost and you’re actually using the db.example.com box.
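For example, assuming the mariadb client is installed locally and dbuser is a valid account on the database host, you could connect through the tunnel like this (127.0.0.1 forces a TCP connection to the forwarded port):

$ mysql -h 127.0.0.1 -P 3306 -u dbuser -p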

Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you’re designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you’re working in a coffee shop, and because of the network setup, they can’t reach your laptop via a network connection. However, you both use the remote.example.com system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your ssh connection, and make it available on the remote system. Just use the -R option when you start your ssh session:

$ ssh -R 6000:localhost:5000 remote.example.com

Now when your friend inside the corporate firewall runs their browser, they can point it at http://remote.example.com:6000 and see your work. And as in the local port forwarding example, the communications travel securely over your ssh session.

By default the sshd daemon running on a host is set so that only that host can connect to its remote forwarded ports. Let’s say your friend wanted to be able to let people on other example.com corporate hosts see your work, and they weren’t on remote.example.com itself. You’d need the owner of the remote.example.com host to add one of these options to /etc/ssh/sshd_config on that box:

GatewayPorts yes # OR
GatewayPorts clientspecified

The first option means remote forwarded ports are available on all the network interfaces on remote.example.com. The second means that the client who sets up the tunnel gets to choose the address. This option is set to no by default.
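After editing /etc/ssh/sshd_config, the owner of remote.example.com would also need to reload the daemon for the change to take effect; on Fedora that is typically:

$ sudo systemctl reload sshd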

With this option, you as the ssh client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:

$ ssh -R *:6000:localhost:5000 remote.example.com                   # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000 remote.example.com             # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000 remote.example.com        # single network
$ ssh -R remote.example.com:6000:localhost:5000 remote.example.com  # single network

Other notes

Notice that the port numbers need not be the same on local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system port in a default setup.

In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The PermitOpen option for the sshd daemon controls whether, and which, ports are available for TCP forwarding. The default setting is any, which allows all the examples above to work. To disallow any port forwarding, choose none, or choose only a specific host:port setting to permit. For more information, search for PermitOpen in the manual page for sshd daemon configuration:

$ man sshd_config
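As an illustrative sketch only (the host and port are hypothetical), a restrictive entry in /etc/ssh/sshd_config might look like this:

PermitOpen 192.168.1.15:3306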

Finally, remember port forwarding only happens as long as the controlling ssh session is open. If you need to keep the forwarding active for a long period, try running the session in the background with the -f option, combined with -N so that no remote command is run, as in the example below. Make sure your console is locked to prevent tampering while you’re away from it.
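For example, a background-only tunnel reusing the earlier local forwarding scenario might be started like this:

$ ssh -f -N -L 8000:localhost:8000 remote.example.com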


Make your Python code look good with Black on Fedora

The Python programming language is often praised for its simple syntax. In fact, the language recognizes that code is read much more often than it is written. Black is a tool that automatically formats your Python source code, making it uniform and compliant with the PEP 8 style guide.

How to install Black on Fedora

Installing Black on Fedora is quite simple. Black is maintained in the official repositories.

$ sudo dnf install python3-black

Black is a command line tool, so you run it from the terminal.

$ black --help

Format your Python code with Black

Using Black to format a Python code base is straightforward.

$ black myfile.py
All done! ✨ 🍰 ✨ 1 file left unchanged.
$ black path_to_my_python_project/
All done! ✨ 🍰 ✨
165 files reformatted, 24 files left unchanged.

By default, Black allows 88 characters per line and reformats code to fit within that limit. You can change this to a custom value. For example:

$ black --line-length 100 my_python_file.py

This will set the line length to allow 100 characters.

Run Black as part of a CI pipeline

Black really shines when it is integrated with other tools, like a continuous integration pipeline.

The --check option lets you verify whether any files need to be reformatted. This is useful as a CI test to ensure all your code is formatted in a consistent manner.

$ black --check myfile.py
would reformat myfile.py
All done! 💥 💔 💥
1 file would be reformatted.
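A minimal sketch of such a CI step (the src/ directory is an assumption about your project layout; the non-zero exit status from --check is what fails the pipeline):

$ black --check --diff src/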

Integrate Black with your code editor

Running Black during the continuous integration tests is a great way to keep the code base correctly formatted. But developers really want to forget about formatting and have the tool manage it for them.

Most of the popular code editors support Black. This lets developers run the formatting tool every time a file is saved. The official documentation details the configuration needed for each editor.

Black is a must-have tool in the Python developer toolbox and is easily available on Fedora.