
How to configure an SSH proxy server with Squid

Sometimes you can’t connect to an SSH server from your current location. Other times, you may want to add an extra layer of security to your SSH connection. In these cases connecting to another SSH server via a proxy server is one way to get through.

Squid is a full-featured proxy server application that provides caching and proxy services. It’s normally used to help improve response times and reduce network bandwidth by reusing and caching previously requested web pages during browsing.

However, for this setup you’ll configure Squid as an SSH proxy, since it’s a robust, trusted proxy server that is easy to configure.

Installation and configuration

Install the squid package with dnf:

$ sudo dnf install squid -y

The squid configuration file is quite extensive but there are only a few things we need to configure. Squid uses access control lists to manage connections.

Edit the /etc/squid/squid.conf file to make sure you have the two lines explained below.

First, specify your local IP network. The default configuration file already has a list of the most common ones but you will need to add yours if it’s not there. For example, if your local IP network range is 192.168.1.X, this is how the line would look:

acl localnet src 192.168.1.0/24

Next, add the SSH port as a safe port by adding the following line:

acl Safe_ports port 22

Save that file. Now enable and restart the squid proxy service:

$ sudo systemctl enable squid
$ sudo systemctl restart squid

By default, the squid proxy listens on port 3128. Configure firewalld to allow this:

$ sudo firewall-cmd --add-service=squid --permanent
$ sudo firewall-cmd --reload
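
To verify that the service was added (an optional check, not strictly required), list the allowed services; squid should appear in the output:

$ sudo firewall-cmd --list-services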

Testing the ssh proxy connection

To connect to a server via ssh through a proxy server we’ll be using netcat.

Install nmap-ncat if it’s not already installed:

$ sudo dnf install nmap-ncat -y

Here is an example of a standard ssh connection:

$ ssh [email protected]

Here is how you would connect to that same server using the squid proxy server as a gateway.

This example assumes the squid proxy server’s IP address is 192.168.1.63. You can also use the host-name or the FQDN of the squid proxy server:

$ ssh [email protected] -o "ProxyCommand nc --proxy 192.168.1.63:3128 %h %p"

Here are the meanings of the options:

  • ProxyCommand – Tells ssh that a proxy command will be used to connect to the server.
  • nc – The command used to establish the connection to the proxy server. This is the netcat command provided by nmap-ncat.
  • %h – The placeholder for the host name or IP address of the remote server you are connecting to.
  • %p – The placeholder for the remote server’s port number (22 by default).

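If you plan to use the proxy regularly, you can put the same proxy command into your SSH client configuration instead of typing it each time. Below is a minimal sketch of a ~/.ssh/config entry; the host alias, user name, and addresses are placeholders, so substitute your own values:

Host proxied-server
    HostName remote-server.example.com
    User user
    ProxyCommand nc --proxy 192.168.1.63:3128 %h %p

With an entry like that in place, running ssh proxied-server routes the connection through the Squid proxy automatically.
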
There are many ways to configure an SSH proxy server but this is a simple way to get started.


Demonstrating Perl with Tic-Tac-Toe, Part 3

The articles in this series have mainly focused on Perl’s ability to manipulate text. Perl was designed to manipulate and analyze text. But Perl is capable of much more. More complex problems often require working with sets of data objects and indexing and comparing them in elaborate ways to compute some desired result.

For working with sets of data objects, Perl provides arrays and hashes. Hashes are also known as associative arrays or dictionaries. This article will prefer the term hash because it is shorter.

The remainder of this article builds on the previous articles in this series by demonstrating basic use of arrays and hashes in Perl.

An example Perl program

Copy and paste the below code into a plain text file and use the same one-liner that was provided in the first article of this series to strip the leading numbers. Name the version without the line numbers chip2.pm and move it into the hal subdirectory. Use the version of the game that was provided in the second article so that the below chip will automatically load when placed in the hal subdirectory.

00 # advanced operations chip
01
02 package chip2;
03 require chip1;
04
05 use strict;
06 use warnings;
07
08 use constant SCORE=>'
09 ┌───┬───┬───┐
10 │ 3 │ 2 │ 3 │
11 ├───┼───┼───┤
12 │ 2 │ 4 │ 2 │
13 ├───┼───┼───┤
14 │ 3 │ 2 │ 3 │
15 └───┴───┴───┘
16 ';
17
18 sub get_prob {
19    my $game = shift;
20    my @nums;
21    my %odds;
22
23    while ($game =~ /[1-9]/g) {
24       $odds{$&} = substr(SCORE, $-[0], 1);
25    }
26
27    @nums = sort { $odds{$b} <=> $odds{$a} } keys %odds;
28
29    return $nums[0];
30 }
31
32 sub win_move {
33    my $game = shift;
34    my $mark = shift;
35    my $tkns = shift;
36    my @nums = $game =~ /[1-9]/g;
37    my $move;
38
39    TRY: for (@nums) {
40       my $num = $_;
41       my $try = $game =~ s/$num/$mark/r;
42       my $vic = chip1::get_victor $try, $tkns;
43
44       if (defined $vic) {
45          $move = $num;
46          last TRY;
47       }
48    }
49
50    return $move;
51 }
52
53 sub hal_move {
54    my $game = shift;
55    my $mark = shift;
56    my @mark = @{ shift; };
57    my $move;
58
59    $move = win_move $game, $mark, \@mark;
60
61    if (not defined $move) {
62       $mark = ($mark eq $mark[0]) ? $mark[1] : $mark[0];
63       $move = win_move $game, $mark, \@mark;
64    }
65
66    if (not defined $move) {
67       $move = get_prob $game;
68    }
69
70    return $move;
71 }
72
73 sub complain {
74    print "My mind is going. I can feel it.\n";
75 }
76
77 sub import {
78    no strict;
79    no warnings;
80
81    my $p = __PACKAGE__;
82    my $c = caller;
83
84    *{ $c . '::hal_move' } = \&{ $p . '::hal_move' };
85    *{ $c . '::complain' } = \&{ $p . '::complain' };
86 }
87
88 1;

How it works

In the above example Perl module, each position on the Tic-Tac-Toe board is assigned a score based on the number of winning combinations that intersect it. The center square is crossed by four winning combinations – one horizontal, one vertical, and two diagonal. The corner squares each intersect one horizontal, one vertical, and one diagonal combination. The side squares each intersect one horizontal and one vertical combination.

The get_prob subroutine creates a hash named odds (line 21) and uses it to map the numbers on the current game board to their score (line 24). The keys of the hash are then sorted by their score and the resulting list is copied to the nums array (line 27). The get_prob subroutine then returns the first element of the nums array ($nums[0]) which is the number from the original game board that has the highest score.

The algorithm described above is an example of what is called a heuristic in artificial intelligence programming. With the addition of this module, the Tic-Tac-Toe game can be considered a very rudimentary artificial intelligence program. It is really just playing the odds though and it is quite beatable. The next module (chip3.pm) will provide an algorithm that actually calculates the best possible move based on the opponent’s counter moves.

The win_move subroutine simply tries placing the provided mark in each available position and passing the resulting game board to chip1’s get_victor subroutine to see if it contains a winning combination. Notice that the r flag is being passed to the substitution operation (s/$num/$mark/r) on line 41 so that, rather than modifying the original game board, a new copy of the board containing the substitution is created and returned.

Arrays

It was mentioned in part one that arrays are variables whose names are prefixed with an at symbol (@) when they are created. In Perl, these prefixed symbols are called sigils.

Context

In Perl, many things return a different value depending on the context in which they are accessed. The two contexts to be aware of are called scalar context and list context. In the following example, $value1 and $value2 are different because @nums is accessed first in scalar context and then in list context.

$value1 = @nums;
($value2) = @nums;

In the above example, it might seem like @nums should return the same value each time it is accessed, but it doesn’t because what is accessing it (the context) is different. $value1 is a scalar, so it receives the scalar value of @nums which is its length. ($value2) is a list, so it receives the list value of @nums. In the above example, $value2 will receive the value of the first element of the nums array.

In part one, the below statement from the get_mark subroutine copied the numbers from the current Tic-Tac-Toe board into an array named nums.

@nums = $game =~ /[1-9]/g

Since the nums array in the above statement receives one copy of each board number in each of its elements, the count of the board numbers is equal to the length of the array. In Perl, the length of an array is obtained by accessing it in scalar context.

Next, the following formula was used to compute which mark should be placed on the Tic-Tac-Toe board in the next turn.

$indx = (@nums+1) % 2;

Because the plus operator requires a single value (a scalar) on its left hand side, not a list of values, the nums array evaluates to its length, not the list of its values. The parentheses in the above example are just being used to set the order of operations so that the addition (+) happens before the modulo (%).
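
As a quick check of the arithmetic (a worked example, not part of the original code), the index alternates between 0 and 1 as the board fills up:

# (@nums+1) % 2 for different board states:
#   9 numbers left -> (9 + 1) % 2 = 0
#   8 numbers left -> (8 + 1) % 2 = 1
#   7 numbers left -> (7 + 1) % 2 = 0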

Copying

In Perl you can create a list for immediate use by surrounding the list values with parentheses and separating them with commas. The following example creates a three-element list and copies its values to an array.

@nums = (4, 5, 6);

As long as the elements of the list are variables and not constants, you can also copy the elements of an array to a list:

($four, $five, $six) = @nums;

If there were more elements in the array than the list in the above example, the extra elements would simply be discarded.
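
For example (a small sketch extending the snippets above), the extra elements are silently dropped when the receiving list is shorter than the array:

my @nums = (4, 5, 6, 7);
my ($four, $five) = @nums;   # $four is 4, $five is 5; 6 and 7 are discarded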

Different from lists in scalar context

Be aware that lists and arrays are different things in Perl. A list accessed in scalar context returns its last value, not its length. In the following example, $value3 receives 3 (the length of @nums) while $value4 receives 6 (the last element of the list).

$value3 = @nums;
$value4 = (4, 5, 6);

Indexing

To access an individual element of an array or list, suffix it with the desired index in square brackets as shown on line 29 of the above example Perl module.

Notice that the nums array on line 29 is prefixed with the dollar sigil ($) rather than the at sigil (@). This is done because the get_prob subroutine is supposed to return a single value, not a list. If @nums[0] were used instead of $nums[0], the subroutine would return a one-element list. Since a list evaluates to its last element in scalar context, this program would probably work if I had used @nums[0], but if you mean to retrieve a single element from an array, be sure to use the dollar sigil ($), not the at sigil (@).

It is possible to retrieve a subset from an array (or a list) rather than just one value in which case you would use the at sigil and you would provide a series of indexes or a range instead of a single index. This is what is known in Perl as a list slice.
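
A short sketch of slicing (the values are arbitrary):

my @nums  = (4, 5, 6, 7);
my @pair  = @nums[0, 2];    # a series of indexes: (4, 6)
my @range = @nums[1..3];    # a range: (5, 6, 7)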

Hashes

Hashes are variables whose names are prefixed with the percent sigil (%) when they are created. They are subscripted with curly brackets ({}) when accessing individual elements or subsets of elements (hash slices). Like arrays, hashes are variables that can hold multiple discrete data elements. They differ from arrays in the following ways:

  1. Hashes are indexed by strings (or anything that can be converted to a string), not numbers.
  2. Hashes are unordered. If you retrieve a list of their keys, values or key-value pairs, the order of the listing will be random.
  3. The number of elements in the hash will be equal to the number of keys that have been assigned values. If a value is assigned to index 99 of an array that has only three elements (indexes 0-2), the array will grow to a length of 100 elements (indexes 0-99). If a value is assigned to a new key in a hash that has only three elements, the hash will grow by only one element.
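
The following sketch (with values chosen only for illustration) shows those differences in miniature:

my %odds;
$odds{'5'} = 4;              # indexed by a string, not a number
$odds{'1'} = 3;
$odds{'9'} = 3;

my @keys  = keys %odds;      # ('5', '1', '9') in no guaranteed order
my $count = keys %odds;      # 3 -- only assigned keys are counted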

As with arrays, if you mean to access (or assign to) a single element of a hash, you should prefix it with the dollar sigil ($). When accessing a single element, Perl will go by the type of the subscript to determine the type of variable being accessed – curly brackets ({}) for hashes or square brackets ([]) for arrays. The get_prob subroutine in the above Perl module demonstrates assigning to and accessing individual elements of a hash.

Perl has two special built-in functions for working with hashes – keys and values. The keys function, when provided a hash, returns a list of all the hash’s keys (indexes). Similarly, the values function will return a list of all the hash’s values. Remember though that the order in which the list is returned is random. This randomness can be seen when playing the Tic-Tac-Toe game. If there is more than one move available with the highest score, the computer will choose one at random because the keys function returns the available moves from the odds hash in random order.

On line 27 of the above example Perl module, the keys function is being used to retrieve the list of keys from the odds hash. The keys of the odds hash are the numbers that were found on the current game board. The values of the odds hash are the corresponding probabilities that were retrieved from the SCORE constant on line 24.

Admittedly, this example could have used an array instead of a string to store and retrieve the scores. I chose to use a string simply because I think it presents the layout of the board a little nicer. An array would likely perform better, but with such a small data set, the difference is probably too small to measure.

Sort

On line 27, the list of keys from the odds hash is being fed to Perl’s built-in sort function. Beware that Perl’s sort function sorts lexicographically by default, not numerically. For example, provided the list (10, 9, 8, 1), Perl’s sort function will return the list (1, 10, 8, 9).

The behavior of Perl’s sort function can be modified by providing it a code block as its first parameter as demonstrated on line 27. The result of the last statement in the code block should be a number less-than, equal-to, or greater-than zero depending on whether element $a should be placed before, concurrent-with, or after element $b in the resulting list respectively. $a and $b are pairs of elements from the provided list. The code in the block is executed repeatedly with $a and $b set to different pairs of elements from the original list until all the pairs have been compared and sorted.

The <=> operator is a special Perl operator that returns -1, 0, or 1 depending on whether the left argument is numerically less-than, equal-to, or greater-than the right argument respectively. By using the <=> operator in the code block of the sort function, Perl’s sort function can be made to sort numerically rather than lexicographically.
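
Putting the two together, here is a brief sketch of the default sort versus a numeric sort in both directions:

my @nums = (10, 9, 8, 1);
my @lex  = sort @nums;                 # (1, 10, 8, 9)  -- lexicographic default
my @asc  = sort { $a <=> $b } @nums;   # (1, 8, 9, 10)  -- numeric, ascending
my @desc = sort { $b <=> $a } @nums;   # (10, 9, 8, 1)  -- numeric, descending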

Notice that rather than comparing $a and $b directly, they are first being passed through the odds hash. Since the values of the odds hash are the probabilities that were retrieved from the SCORE constant, what is being compared is actually the score of $a versus the score of $b. Consequently, the numbers from the original game board are being sorted by their score, not their value. Numbers with an equal score are left in the same random order that the keys function returned them.

Notice also that I have reversed the typical order of the parameters to <=> in the code block of the sort function ($b on the left and $a on the right). By switching their order in this way, I have caused the sort function to return the elements in reverse order – from greatest to least – so that the number(s) with the highest score will be first in the list.

References

References provide an indirect means of accessing a variable. They are often used when making copies of the variable is either undesirable or impractical. References are a sort of short cut that allows you to skip performing the copy and instead provide access to the original variable.

Why to use references

There is a cost in time and memory associated with making copies of variables. References are sometimes used as a means of reducing that cost. Be aware, however, that recent versions of Perl implement a technology called copy-on-write that greatly reduces the cost of copying variables. This new optimization should work transparently. You don’t have to do anything special to enable the copy-on-write optimization.

Why not to use references

References invite the sort of action-at-a-distance problems that were mentioned in part one of this series. They are just as bad as global variables in terms of their tendency to trip up programmers by allowing data to be modified outside the local scope. You should generally try to avoid using references, but there are times when they are necessary.

How to create references

An example of passing a reference is provided on line 59 of the above Perl module. Rather than placing the mark array directly in the list of parameters to the win_move subroutine, a reference to the array is provided instead by prefixing the variable’s sigil with a backslash (\).

It is necessary to use a reference (\@mark) on line 59 because if the array were placed directly on the list, it would expand such that the first element of the mark array would become the third parameter to the win_move function, the second element of the mark array would become the fourth parameter to the win_move function, and so on for as many elements as the mark array has. Whereas an array will expand in list context, a reference will not. If the array were passed in expanded form, the receiving subroutine would need to call shift once for each element of the array. Also, the receiving function would not be able to tell how long the original array was.

Three ways to dereference references

In the receiving subroutine, the reference has to be dereferenced to get at the original values. An example of dereferencing an array reference is provided on line 56. On line 56, the shift statement has been enclosed in curly brackets and the opening bracket has been prefixed with the array sigil (@).

There is also a shorter form for dereferencing an array reference that is demonstrated on line 43 of the chip1.pm module. The short form allows you to omit the curly brackets and instead place the array sigil directly in front of the sigil of the scalar that holds the array reference. The short form only works when you have an array reference stored in a scalar. When the array reference is coming from a function, as it is on line 56 of the above Perl module, the long form must be used.

There is yet a third way of dereferencing an array reference that is demonstrated on line 29 of the game script. Line 29 shows the MARKS array reference being dereferenced with the arrow operator (->) and an index enclosed in square brackets. The MARKS array reference is missing its sigil because it is a constant. You can tell that what is being dereferenced is an array reference because the arrow operator is followed by square brackets ([]). Had the MARKS constant been a hash reference, the arrow operator would have been followed by curly brackets ({}).

There are also corresponding long and short forms for dereferencing hash references that use the hash sigil (%) instead of the array sigil. Note also that hashes, just like arrays, need to be passed by reference to subroutines unless you want them to expand into their constituent elements. The latter is sometimes done in Perl as a clever way of emulating named parameters.
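
A compact sketch of the three forms side by side (the variable names echo the module above):

my @mark = ('X', 'O');
my $tkns = \@mark;             # create an array reference with a backslash

my @long  = @{ $tkns };        # long form: curly brackets around the expression
my @short = @$tkns;            # short form: array sigil directly on the scalar
my $first = $tkns->[0];        # arrow form: a single element, 'X'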

A word of caution about references

It was stated earlier that references allow data to be modified outside of their declared scope and, just as with global variables, this non-local manipulation of the data can be confusing to the programmer(s) and thereby lead to unintended bugs. This is an important point to emphasize and explain.

On line 35 of the win_move subroutine, you can see that I did not dereference the provided array reference (\@mark) but rather I chose to store the reference in a scalar named tkns. I did this because I do not need to access the individual elements of the provided array in the win_move subroutine. I only need to pass the reference on to the get_victor subroutine. Not making a local copy of the array is a short cut, but it is dangerous. Because $tkns is only a copy of the reference, not a copy of the original data being referred to, if I or a later program developer were to write something like $tkns->[0] = ‘Y’ in the win_move subroutine, it would actually modify the value of the mark array in the hal_move subroutine. By passing a reference to its mark array (\@mark) to the win_move subroutine, the hal_move subroutine has granted access to modify its local copy of @mark. In this case, it would probably be better to make a local copy of the mark array in the win_move subroutine using syntax similar to what is shown on line 56 rather than preserving the reference as I have done for the purpose of demonstration on line 35.

Aliases

In addition to references, there is another way that a local variable created with the my or state keyword can leak into the scope of a called subroutine. The list of parameters that you provide to a subroutine is directly accessible in the @_ array.

To demonstrate, the following example script prints b, not a, because the inc subroutine accesses the first element of @_ directly rather than first making a local copy of the parameter.

#!/usr/bin/perl

sub inc {
   $_[0]++;
}

MAIN: {
   my $var = 'a';
   inc $var;
   print "$var\n";
}

Aliases are different from references in that you don’t have to dereference them to get at their values. They really are just alternative names for the same variable. Be aware that aliases occur in a few other places as well. One such place is the list returned from the sort function – if you were to modify an element of the returned list directly, without first copying it to another variable, you would actually be modifying the element in the original list that was provided to the sort function. Other places where aliases occur include the code blocks of functions like grep and map. The grep and map functions are not covered in this series of articles. See the provided links if you want to know more about them.

Final notes

Many of Perl’s built-in functions will operate on the default scalar ($_) or default array (@_) if they are not explicitly provided a variable to read from or write to. Line 40 of the above Perl module provides an example. The numbers from the nums array are sequentially aliased to $_ by the for keyword. If you choose to use these variables, in most cases you will probably want to retrieve your data from $_ or @_ fairly quickly to prevent it being accidentally overwritten by a subsequent command.

The substitution command (s/…/…/), for example, will manipulate the data stored in $_ if it is not explicitly bound to another variable by one of the =~ or !~ operators. Likewise, the shift function operates on @_ (or @ARGV if called in the global scope) if it is not explicitly provided an array to operate on. There is no obvious rule to which functions support this shortcut. You will have to consult the documentation for the command you are interested in to see if it will operate on a default variable when not provided one explicitly.
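
A small sketch of that behavior follows (the strings are arbitrary). The first substitution is explicitly bound with =~ and so edits the copy, while the second has no binding and so edits $_ itself (and, through the alias, the original array):

my @items = ('a1b', 'c2d');

for (@items) {
   my $copy = $_;         # grab the value out of $_ right away
   $copy =~ s/[0-9]//;    # bound with =~ : edits $copy only
   s/[a-z]//g;            # no binding    : edits $_ itself
   print "$copy $_\n";    # prints "ab 1" then "cd 2"
}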

As demonstrated on lines 55 and 56, the same name can be reused for variables of different types. Reusing variable names generally makes the code harder to follow. It is probably better for the sake of readability to avoid variable name reuse.

Beware that making copies of arrays or hashes in Perl (as demonstrated on line 56) is shallow by default. If any of the elements of the array or hash are references, the corresponding elements in the duplicated array or hash will be references to the same original data. To make deep copies of data structures, use one of the Clone or Storable Perl modules. An alternative workaround that may work in the case of multi-dimensional arrays is to emulate them with a one-dimensional hash.

Similar in form to Perl’s syntax for creating lists – (1, 2, 3) – unnamed array references and unnamed hash references can be constructed on the fly by bounding a comma-separated set of elements in square brackets ([]) or curly brackets ({}) respectively. Line 07 of the game script demonstrates an unnamed (anonymous) array reference being constructed and assigned to the MARKS constant.

Notice that the import subroutine at the end of the above Perl module (chip2.pm) is assigning to some of the same names in the calling namespace as the previous module (chip1.pm). This is intentional. The hal_move and complain aliases created by chip1’s import subroutine will simply be overridden by the identically named aliases created by chip2’s import subroutine (assuming chip2.pm is loaded after chip1.pm in the calling namespace). Only the aliases are updated/overridden. The original subroutines from chip1 will still exist and can still be called with their full names – chip1::hal_move and chip1::complain.


Docker and Fedora 32

With the release of Fedora 32, regular users of Docker have been confronted by a small challenge. At the time of writing, Docker is not supported on Fedora 32. There are alternatives, like Podman and Buildah, but for many existing users, switching now might not be the best time. As such, this article can help you set up your Docker environment on Fedora 32.

Step 0: Removing conflicts

This step is for any user upgrading from Fedora 30 or 31. If this is a fresh installation of Fedora 32, you can move on to step 1.

To remove docker and all its related components:

sudo dnf remove docker-*
sudo dnf config-manager --disable docker-*

Step 1: System preparation

With the last two versions of Fedora, the operating system has moved to two new technologies: cgroups v2 and nftables for the firewall. While the details of these new technologies are beyond the scope of this tutorial, the sad fact is that Docker doesn’t support them yet. As such, you’ll have to make some changes to facilitate Docker on Fedora.

Enable old CGroups

The previous implementation of CGroups is still supported and it can be enabled using the following command.

sudo grubby --update-kernel=ALL --args="systemd.unified_cgroup_hierarchy=0"
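
To double-check that the argument was added (an optional verification step, not part of the original instructions), inspect the configured kernel arguments, or the running kernel's command line after the next reboot:

sudo grubby --info=ALL | grep args
cat /proc/cmdline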

Whitelist docker in firewall

To allow Docker to have network access, two commands are needed.

sudo firewall-cmd --permanent --zone=trusted --add-interface=docker0
sudo firewall-cmd --permanent --zone=FedoraWorkstation --add-masquerade
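
Both rules are added with the permanent flag, so they are written to the permanent configuration and take effect after the firewall is reloaded (or after a reboot). To apply them immediately, reload firewalld:

sudo firewall-cmd --reload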

The first command adds the docker interface to the trusted zone, which allows Docker to make remote connections. The second command allows Docker to make local connections. This is particularly useful when multiple Docker containers are used together as a development environment.

Step 2: Installing Moby

Moby is the open-source, white label version of Docker. It’s based on the same code but it does not carry the trademark. It’s included in the main Fedora repository, which makes it easy to install.

sudo dnf install moby-engine docker-compose

This installs moby-engine, docker-compose, containerd and some other related libraries. Once installed, you’ll have to enable the system-wide daemon to run docker.

sudo systemctl enable docker

Step 3: Restart and test

To ensure that all systems and settings are properly processed, you’ll now have to reboot your machine.

sudo systemctl reboot

After that, you can validate your installation using the Docker hello-world package.

sudo docker run hello-world

You are then greeted by the Hello from Docker! message, unless something went wrong.

Running as admin

Optionally, you can now also add your user to the docker group, so that you can start docker images without typing sudo.

sudo groupadd docker
sudo usermod -aG docker $USER

Logout and login for the change to take effect. If the thought of running containers with administrator privileges concerns you, then you should look into Podman.
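
After logging back in, you can confirm the group membership and that sudo is no longer needed (a quick check, not part of the original steps):

id $USER
docker run hello-world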

In summary

From this point on, Docker will work how you’re used to, including docker-compose and all docker-related tools. Don’t forget to check out the official documentation which can help you in many cases where something isn’t quite right.

The current state of Docker on Fedora 32 is not ideal. The lack of an official package might bother some, and there is an issue upstream where this is discussed. The missing support for both CGroups and NFTables is more technical, but you can check their progress in their public issues.

These instructions should allow you to continue working as if nothing has happened. If this has not satisfied your needs, don’t forget to address your technical issues at the Moby or Docker GitHub pages, or take a look at Podman, which might prove more robust in the long term.


Fedora 32: Simple Local File-Sharing with Samba

Sharing files with Fedora 32 using Samba is cross-platform, convenient, reliable, and performant.

What is ‘Samba’?

Samba is a high-quality implementation of the Server Message Block (SMB) protocol. Originally developed by Microsoft for connecting Windows computers over local area networks, it is now extensively used for internal network communications.

Apple used to maintain its own independent file sharing protocol called the “Apple Filing Protocol (AFP)”; however, in recent times, it has also switched to SMB.

In this guide we provide the minimal instructions to enable:

  • Public Folder Sharing (Both Read Only and Read Write)
  • User Home Folder Access

Note about this guide: The convention '~]$' for a local user command prompt, and '~]#' for a super user prompt will be used.

Public Sharing Folder

Having a shared public place where authenticated users on an internal network can access files, or even modify and change files if they are given permission, can be very convenient. This part of the guide walks through the process of setting up a shared folder, ready for sharing with Samba.

Please Note: This guide assumes the public sharing folder is on a modern Linux filesystem; other filesystems such as NTFS or FAT32 will not work. Samba uses POSIX Access Control Lists (ACLs). For those who wish to learn more about Access Control Lists, please consider reading the documentation: "Red Hat Enterprise Linux 7: System Administrator's Guide: Chapter 5. Access Control Lists", as it likewise applies to Fedora 32. In general, this is only an issue for anyone who wishes to share a drive or filesystem that was created outside of the normal Fedora installation process (such as an external hard drive). It is possible for Samba to share filesystem paths that do not support POSIX ACLs, however that is out of the scope of this guide.

Create Folder

For this guide the /srv/public/ folder for sharing will be used.

The /srv/ directory contains site-specific data served by a Red Hat Enterprise Linux system. This directory gives users the location of data files for a particular service, such as FTP, WWW, or CVS. Data that only pertains to a specific user should go in the /home/ directory.

Red Hat Enterprise Linux 7, Storage Administration Guide: Chapter 2. File System Structure and Maintenance: 2.1.1.8. The /srv/ Directory

Make the Folder (will provide an error if the folder already exists):
~]# mkdir --verbose /srv/public

Verify folder exists:
~]$ ls --directory /srv/public

Expected Output:
/srv/public

Set Filesystem Security Context

To have read and write access to the public folder the public_content_rw_t security context will be used for this guide. Those wanting read only may use: public_content_t.

Label files and directories that have been created with the public_content_rw_t type to share them with read and write permissions through vsftpd. Other services, such as Apache HTTP Server, Samba, and NFS, also have access to files labeled with this type. Remember that booleans for each service must be enabled before they can write to files labeled with this type.

Red Hat Enterprise Linux 7, SELinux User’s and Administrator’s Guide: Chapter 16. File Transfer Protocol: 16.1. Types: public_content_rw_t

Add /srv/public as “public_content_rw_t” in the system’s local filesystem security context customization registry:

Add the new filesystem security context:
~]# semanage fcontext --add --type public_content_rw_t "/srv/public(/.*)?"

Verify the new filesystem security context:
~]# semanage fcontext --locallist --list

Expected Output: (should include)
/srv/public(/.*)? all files system_u:object_r:public_content_rw_t:s0

Now that the folder has been added to the local system’s filesystem security context registry; The restorecon command can be used to ‘restore’ the context to the folder:

Restore security context to the /srv/public folder:
~]# restorecon -Rv /srv/public

Verify security context was correctly applied:
~]$ ls --directory --context /srv/public/

Expected Output:
unconfined_u:object_r:public_content_rw_t:s0 /srv/public/

User Permissions

Creating the Sharing Groups

To allow a user to have either read-only or read-and-write access to the public share folder, create two new groups that govern these privileges: public_readonly and public_readwrite.

User accounts can be granted read-only or read-and-write access by adding them to the respective group (and allowing login via Samba by creating an smb password). This process is demonstrated in the section: “Test Public Sharing (localhost)”.

Create the public_readonly and public_readwrite groups:
~]# groupadd public_readonly
~]# groupadd public_readwrite

Verify successful creation of the groups:
~]$ getent group public_readonly public_readwrite

Expected Output: (Note: the x:1...: numbers will probably differ on your system)
public_readonly:x:1009:
public_readwrite:x:1010:

Set Permissions

Now set the appropriate user permissions to the public shared folder:

Set User and Group Permissions for Folder:
~]# chmod --verbose 2700 /srv/public
~]# setfacl -m group:public_readonly:r-x /srv/public
~]# setfacl -m default:group:public_readonly:r-x /srv/public
~]# setfacl -m group:public_readwrite:rwx /srv/public
~]# setfacl -m default:group:public_readwrite:rwx /srv/public

Verify user permissions have been correctly applied:
~]$ getfacl --absolute-names /srv/public

Expected Output:
file: /srv/public
owner: root
group: root
flags: -s-
user::rwx
group::---
group:public_readonly:r-x
group:public_readwrite:rwx
mask::rwx
other::---
default:user::rwx
default:group::---
default:group:public_readonly:r-x
default:group:public_readwrite:rwx
default:mask::rwx
default:other::---

Samba

Installation

~]# dnf install samba

Hostname (systemwide)

Samba will use the name of the computer when sharing files; it is good to set a hostname so that the computer can be found easily on the local network.

View Your Current Hostname:
~]$ hostnamectl status

If you wish to change your hostname to something more descriptive, use the command:

Modify your system's hostname (example):
~]# hostnamectl set-hostname "simple-samba-server"

For a more complete overview of the hostnamectl command, please read the previous Fedora Magazine article: "How to set the hostname on Fedora".

Firewall

Configuring your firewall is a complex and involved task. This guide will just have the most minimal manipulation of the firewall to enable Samba to pass through.

For those who are interested in learning more about configuring firewalls; please consider reading the documentation: "Red Hat Enterprise Linux 8: Securing networks: Chapter 5. Using and configuring firewall", as it generally applies to Fedora 32 as well.

Allow Samba access through the firewall:
~]# firewall-cmd --add-service=samba --permanent
~]# firewall-cmd --reload

Verify Samba is included in your active firewall:
~]$ firewall-cmd --list-services

Output (should include):
samba

Configuration

Remove Default Configuration

The stock configuration that is included with Fedora 32 is not required for this simple guide. In particular it includes support for sharing printers with Samba.

For this guide make a backup of the default configuration and create a new configuration file from scratch.

Create a backup copy of the existing Samba configuration:
~]# cp --verbose --no-clobber /etc/samba/smb.conf /etc/samba/smb.conf.fedora0

Empty the configuration file:
~]# > /etc/samba/smb.conf

Samba Configuration

Please Note: This configuration file does not contain any global definitions; the defaults provided by Samba are good for the purposes of this guide.

Edit the Samba configuration file with Vim:
~]# vim /etc/samba/smb.conf

Add the following to /etc/samba/smb.conf file:

# smb.conf - Samba Configuration File

# The name of the share is in square brackets [],
# this will be shared as //hostname/sharename

# There are three exceptions:
# the [global] section;
# the [homes] section, that is dynamically set to the username;
# the [printers] section, same as [homes], but for printers.

# path: the physical filesystem path (or device)
# comment: a label on the share, seen on the network.
# read only: disable writing, defaults to true.

# For a full list of configuration options,
# please read the manual: "man smb.conf".

[global]

[public]
path = /srv/public
comment = Public Folder
read only = No

Write Permission

By default, Samba is not granted permission to modify any file of the system. Modify the system's security configuration to allow Samba to modify any filesystem path that has the security context of public_content_rw_t.

For convenience, Fedora has a built-in SELinux Boolean for this purpose called: smbd_anon_write, setting this to true will enable Samba to write in any filesystem path that has been set to the security context of public_content_rw_t.

Those who wish Samba to have only read-only access to their public sharing folder may skip this step and not set this boolean.

There are many more SELinux booleans available for Samba. For those who are interested, please read the documentation: "Red Hat Enterprise Linux 7: SELinux User's and Administrator's Guide: 15.3. Samba Booleans"; it also applies to Fedora 32 without any adaptation.

Set the SELinux boolean allowing Samba to write to filesystem paths set with the security context public_content_rw_t:
~]# setsebool -P smbd_anon_write=1

Verify the boolean has been correctly set:
~]$ getsebool smbd_anon_write

Expected Output:
smbd_anon_write --> on

Samba Services

The Samba service is divided into two parts: the smb service and the nmb service. For this simple guide, only the smb service needs to be started.

Samba ‘smb’ Service

The Samba “Server Message Block” (SMB) service is for sharing files and printers over the local network.

Manual: “smbd – server to provide SMB/CIFS services to clients”

Enable and Start Services

For those who are interested in learning more about configuring, enabling, disabling, and managing services, please consider studying the documentation: "Red Hat Enterprise Linux 7: System Administrator's Guide: 10.2. Managing System Services".
Enable and start the smb service:
~]# systemctl enable smb.service
~]# systemctl start smb.service

Verify the smb service:
~]# systemctl status smb.service

Test Public Sharing (localhost)

To demonstrate allowing and removing access to the public shared folder, create a new user called samba_test_user. This user will be granted permission first to read the public folder, and then to read and write the public folder.

The same process demonstrated here can be used to grant access to your public shared folder to other users of your computer.

The samba_test_user will be created as a locked user account, disallowing normal login to the computer.

Create 'samba_test_user', and lock the account:
~]# useradd samba_test_user
~]# passwd --lock samba_test_user

Set a Samba password for this test user (such as 'test'):
~]# smbpasswd -a samba_test_user

Test Read Only access to the Public Share:

Add samba_test_user to the public_readonly group:
~]# gpasswd --add samba_test_user public_readonly

Login to the local Samba service (public folder):
~]$ smbclient --user=samba_test_user //localhost/public

First, the ls command should succeed; second, the mkdir command should not work; and finally, exit:
smb: \> ls
smb: \> mkdir error
smb: \> exit

Remove samba_test_user from the public_readonly group:
~]# gpasswd --delete samba_test_user public_readonly

Test Read and Write access to the Public Share:

Add samba_test_user to the public_readwrite group:
~]# gpasswd --add samba_test_user public_readwrite

Login to the local Samba service (public folder):
~]$ smbclient --user=samba_test_user //localhost/public

First, the ls command should succeed; second, the mkdir command should work; third, the rmdir command should work; and finally, exit:
smb: \> ls
smb: \> mkdir success
smb: \> rmdir success
smb: \> exit

Remove samba_test_user from the public_readwrite group:
~]# gpasswd --delete samba_test_user public_readwrite

After testing is completed, for security, disable the samba_test_user’s ability to log in via Samba.

Disable samba_test_user login via samba:
~]# smbpasswd -d samba_test_user
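
The same test works from another computer on the local network: when connecting with an enabled account, simply replace localhost with the server's hostname (for example, the simple-samba-server name set earlier in this guide) or its IP address:

~]$ smbclient --user=samba_test_user //simple-samba-server/public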

Home Folder Sharing

In this last section of the guide, Samba will be configured to share a user home folder.

For example: if the user bob has been registered with smbpasswd, bob’s home directory, /home/bob, would become the share //server-name/bob.

This share will only be available for bob, and no other users.

This is a very convenient way of accessing your own local files; however, it naturally carries a security risk.

Setup Home Folder Sharing

Give Samba Permission for Home Folder Sharing

Set the SELinux boolean allowing Samba to read and write to home folders:
~]# setsebool -P samba_enable_home_dirs=1

Verify the boolean has been correctly set:
~]$ getsebool samba_enable_home_dirs

Expected Output:
samba_enable_home_dirs --> on

Add Home Sharing to the Samba Configuration

Append the following to the system's smb.conf file:

# The home folder dynamically links to the user home.

# If 'bob' user uses Samba:
# The [homes] section is used as the template for a new virtual share:
#
# [homes]
#    ... (various options)
#
# A virtual section for 'bob' is made:
#    Share is modified: [homes] -> [bob]
#    Path is added: path = /home/bob
#    Any option within the [homes] section is appended.
#
# [bob]
#    path = /home/bob
#    ... (copy of various options)

# here is our share,
# same as is included in the Fedora default configuration.

[homes]
comment = Home Directories
valid users = %S, %D%w%S
browseable = No
read only = No
inherit acls = Yes

Reload Samba Configuration

Tell Samba to reload its configuration:
~]# smbcontrol all reload-config

Test Home Directory Sharing

Switch to samba_test_user and create a folder in its home directory:
~]# su samba_test_user
samba_test_user:~]$ cd ~
samba_test_user:~]$ mkdir --verbose test_folder
samba_test_user:~]$ exit

Enable samba_test_user to login via Samba:
~]# smbpasswd -e samba_test_user

Login to the local Samba service (samba_test_user home folder):
~]$ smbclient --user=samba_test_user //localhost/samba_test_user

Test (all commands should complete without error):
smb: \> ls
smb: \> ls test_folder
smb: \> rmdir test_folder
smb: \> mkdir home_success
smb: \> rmdir home_success
smb: \> exit

Disable samba_test_user from logging in via Samba:
~]# smbpasswd -d samba_test_user

Using Fedora to implement REST API in JavaScript: part 2

In part 1, you saw how to quickly create a simple API service using Fedora Workstation, Express, and JavaScript. This part shows you how to:

  • Install a DB server
  • Build a new route
  • Connect a new datasource
  • Use Fedora terminal to send and receive data

Generating an app

Please refer to the previous article for more details. But to make things simple, change to your work directory and generate an app skeleton.

$ cd our-work-directory
$ npx express-generator --no-view --git myApp
$ cd myApp
$ npm i

Installing a database server

In this part, we’ll install the MariaDB database. MariaDB is Fedora’s default database.

$ dnf module list mariadb | sort -u ## lists the streams available
$ sudo dnf module install mariadb:10.3 ##10.4 is the latest

Note: the default profile is mariadb/server.

For those who need to spin up a Docker container, a ready-made container image with Fedora 31 is available.

$ docker pull registry.fedoraproject.org/f31/mariadb
$ docker run -d --name mariadb_database -e MYSQL_USER=user -e MYSQL_PASSWORD=pass -e MYSQL_DATABASE=db -p 3306:3306 registry.fedoraproject.org/f31/mariadb

Now start the MariaDB service.

$ sudo systemctl start mariadb

If you’d like the service to start at boot, you can also enable it in systemd:

$ sudo systemctl enable mariadb ## start at boot

Next, setup the database as needed:

$ mysql -u root -p ## root password is blank
MariaDB> CREATE DATABASE users;
MariaDB> create user dbuser identified by '123456';
MariaDB> grant select, insert, update, create, drop on users.* to dbuser;
MariaDB> show grants for dbuser;
MariaDB> \q

A database connector is needed to use the database with Node.js.

$ npm install mariadb ## installs MariaDB Node.js connector

We’ll leverage Sequelize in this sample API. Sequelize is a promise-based Node.js ORM (Object Relational Mapper) for Postgres, MySQL, MariaDB, SQLite and Microsoft SQL Server.

$ npm install sequelize ## installs Sequelize

Connecting a new datasource

Now, create a new db folder and create a new file sequelize.js there:

const Sequelize = require('sequelize'),
  sequelize = new Sequelize(
    process.env.db_name || 'users',
    process.env.db_user || 'dbuser',
    process.env.db_pass || '123456',
    {
      host: 'localhost',
      dialect: 'mariadb',
      ssl: true
    }
  )

module.exports = sequelize

Note: For the sake of completeness, I’m including a link to the related GitHub repo: https://github.com/vaclav18/express-api-mariadb

Let’s create a new file, models/user.js. A nice feature of a Sequelize model is that it helps us to create the necessary tables and columns automatically. The code snippet responsible for doing this is seen below:

sequelize.sync({
force: false
})

Note: never switch to true with a production database – it would drop your tables at app start!

We will refer to the earlier created sequelize.js this way:

const sequelize = require('../db/sequelize')
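
The full model file isn't reproduced in the article; as a rough sketch (the user model's name field is an assumption, chosen to match the curl example later on; check the linked repository for the actual definition), models/user.js could look something like this:

const Sequelize = require('sequelize')
const sequelize = require('../db/sequelize')

// Define a simple User model; sequelize.sync() creates the table if it is missing.
const User = sequelize.define('user', {
  name: {
    type: Sequelize.STRING,
    allowNull: false
  }
})

sequelize.sync({
  force: false
})

module.exports = User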

Building new routes

Next, you’ll create a new file routes/user.js. You already have routes/users.js from the previous article. You can copy and paste the code in and proceed with editing it.

You’ll also need a reference to the previously created model.

const User = require('../models/user')

Change the route path to /users and also create a new post method route.

Mind the async and await keywords there. An interaction with a database takes some time, and this pair does the trick: an async function returns a promise, and await makes promises easy to use.

Note: This code is not production ready, since it would also need to include an authentication feature.
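
The complete route file isn't shown in the article; a minimal sketch of what routes/user.js might contain (error handling and authentication omitted, and the User model above assumed) is:

const express = require('express')
const router = express.Router()
const User = require('../models/user')

// GET /users - return all records from the users table
router.get('/users', async function (req, res, next) {
  const users = await User.findAll()
  res.json(users)
})

// POST /users - create a new record from the submitted form data
router.post('/users', async function (req, res, next) {
  const user = await User.create({ name: req.body.name })
  res.json(user)
})

module.exports = router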

We’ll make the new route work this way:

const userRouter = require('./routes/user')
app.use(userRouter)

Let’s also remove the existing usersRouter. The routes/users.js file can be deleted too.

$ npm start

With the above command, you can launch your new app.

Using the terminal to send and retrieve data

Let’s create a new database record through the post method:

$ curl -d 'name=Adam' http://localhost:3000/users

To retrieve the data created through the API, do an HTTP GET request:

$ curl http://localhost:3000/users

The console output of the curl command is a JSON array containing data of all the records in the Users table.

Note: This is not really the usual end result — an application consumes the API finally. The API will usually also have endpoints to update and remove data.

More automation

Let’s assume we want to create an API serving many tables. It’s possible, and very handy, to automatically generate the Sequelize models from our database. Sequelize-auto will do the heavy lifting for us. The resulting model files would be placed in, and imported from, the /models directory.

$ npm install sequelize-auto

A Node.js connector is needed for this, and the MariaDB one is already installed.

Conclusion

It’s possible to develop and run an API using Fedora, the Fedora default MariaDB, and JavaScript, and to do so about as efficiently as with a NoSQL database. For those used to working with MongoDB or a similar NoSQL database, Fedora and MariaDB are important open-source enablers.


Photo by Mazhar Zandsalimi on Unsplash.


How to manage network services with firewall-cmd

In a previous article, you explored how to control the firewall at the command line in Fedora.

Now you are going to see how to add, remove, and list services, protocols, and ports in order to block or allow them.

A short recap

First, it’s a good idea to check the status of your firewall to see if it’s running or not. You do this, as we previously learned, by using the state option (firewall-cmd --state).

The next step is to get the zone for the desired network interface. For example, I use a desktop that has two network interfaces: a physical interface (enp0s3), representing my actual network card, and a virtual interface (virbr0) used by virtualization software like KVM. To see what zones are active, run firewall-cmd --get-active-zones.

Now that you know what zone you’re interested in, you can list the rules for the zone with firewall-cmd --info-zone=FedoraWorkstation.

Reading zone information

To display information for a particular zone, run firewall-cmd --zone=ZoneName --list-all, or simply display information for the default zone with:

[dan@localhost ~]$ firewall-cmd --list-all
FedoraWorkstation (active)
target: default
icmp-block-inversion: no
interfaces: enp0s3
sources:
services: dhcpv6-client mdns samba-client ssh
ports: 1025-65535/udp 1025-65535/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

Now, let’s explore the output. The first line shows which zone the following information applies to and whether that zone is currently in use.

The target: default simply tells us this is the default zone. This can be set or retrieved via the --set-default-zone=ZoneName and --get-default-zone options.

icmp-block-inversion indicates whether ICMP requests are blocked: for example, whether the machine responds to ping requests from other machines on the network. The interfaces field shows all interfaces that adopt this zone.

Handling services, ports, and protocols

Now focus on the services, ports, and protocols rows. By default, the firewall will block all ports, services and protocols. Only the listed ones will be allowed.

You can see the allowed services are very basic client services in this case: for example, accessing a shared folder on the network (samba-client), talking to a DNS server, or connecting to a machine via SSH (the ssh service). You can think of a service as a protocol in combination with a port; for instance, the ssh service uses the SSH protocol and, by convention, port 22. By allowing the ssh service, what you’re really doing is allowing incoming connections that use the ssh protocol on the default port 22.

Notice that services with the word client in their name, as a rule of thumb, refer to outgoing connections, i.e. connections that you make with your IP as the source going to the outside, as opposed to the ssh service, for example, which will accept incoming connections (listening for connections coming from outside to you).

You can look up services in the file /etc/services. For example, if you wish to know what port and protocol the ssh service uses:

[dan@localhost ~]$ cat /etc/services | grep ssh
ssh 22/tcp # The Secure Shell (SSH) Protocol
ssh 22/udp # The Secure Shell (SSH) Protocol

You can see SSH uses both TCP and UDP port 22. Also, if you wish to see all available services, just use firewall-cmd --get-services.

Opening a port

If you want to block a port, service, or protocol, all you have to do is make sure it’s not listed here. By extension, if you want to allow a service, you need to add it to your list.

Let’s say you want to open the port 5000 for TCP connection. To do this, run:

sudo firewall-cmd --zone=FedoraWorkstation --permanent --add-port=5000/tcp

Notice that you need to specify the zone to which the rule applies. When you add the rule, you also need to specify whether it is a TCP or UDP port, as indicated above. The permanent parameter sets the rule to persist even after a system reboot.
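
Keep in mind that a rule added with the permanent parameter is written to the permanent configuration and only becomes active in the running firewall after a reload (an extra step not shown above):

sudo firewall-cmd --reload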

Look at the information for your zone again:

[dan@localhost ~]$ firewall-cmd --list-all
FedoraWorkstation (active)
target: default
icmp-block-inversion: no
interfaces: enp0s3
sources:
services: dhcpv6-client mdns samba-client ssh
ports: 1025-65535/udp 1025-65535/tcp 5000/tcp
protocols:
masquerade: no
forward-ports:
source-ports:
icmp-blocks:
rich rules:

Similarly, if you wish to remove this port from the list, run:

sudo firewall-cmd --zone=FedoraWorkstation --permanent --remove-port=5000/tcp

The very same remove (--remove-protocol, --remove-service) and add (--add-protocol, --add-service) options are also available for services and protocols.
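
For example, allowing and later removing the http service (used here purely as an illustration) works the same way as the port example above:

sudo firewall-cmd --zone=FedoraWorkstation --permanent --add-service=http
sudo firewall-cmd --zone=FedoraWorkstation --permanent --remove-service=http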


Photo by T. Kaiser on Unsplash.


Using Fedora to quickly implement REST API with JavaScript

Fedora Workstation uses GNOME Shell by default, which was mainly written in JavaScript. JavaScript is famous as a language for front-end development, but this time we’ll show its usage on the back end.

We’ll implement a new API using the following technologies: JavaScript, Express, and Fedora Workstation. A web browser will be used to call the service (e.g. Firefox, included in the default Fedora Workstation installation).

Installing the necessary packages

Check: What’s already installed?

$ npm -v
$ node -v

You may already have both the necessary packages installed and can skip the next step. If not, install nodejs:

$ sudo dnf install nodejs

A new simple service (low-code style)

Let’s navigate to our working directory (work) and create a new directory for our new sample back-end app.

$ cd work
$ mkdir newApp
$ cd newApp
$ npx express-generator

The above command generates an application skeleton for us.

$ npm i

The above command installs dependencies. Please mind the security warnings – never use this one for production.

Crack open the routes/users.js file.

Modify line #6 to:

res.send(data);

Insert this code block below var router:

let data = { '1':'Ann', '2': 'Bruno', '3': 'Celine' }

Save the modified file.

We modified a route and added a new variable, data. This one could be declared as a const, as we didn’t modify it anywhere.
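
For reference, here is roughly what the modified routes/users.js should look like after both edits (a sketch based on the stock express-generator scaffold):

var express = require('express');
var router = express.Router();

let data = { '1': 'Ann', '2': 'Bruno', '3': 'Celine' }

/* GET users listing. */
router.get('/', function(req, res, next) {
  res.send(data);
});

module.exports = router;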

Running the service on your local Fedora workstation machine

$ npm start

Note: The application entry point is bin/www. You may want to change the port number there.

Calling our new service

Let’s launch our Firefox browser and type in:

http://localhost:3000/users

Output
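
No screenshot is reproduced here, but given the data object defined above, the browser should display its JSON representation, roughly:

{"1":"Ann","2":"Bruno","3":"Celine"}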

It’s also possible to leverage the Developer Tools. Hit F12 and, in the Network tab, select the related GET request; then look at the Response tab in the sidebar to check the data.

Conclusion

Now we have a service and an unnecessary index page accessible through localhost:3000. To quickly get rid of the index:

  1. Remove the views directory
  2. Remove the public directory
  3. Remove the routes/index.js file
  4. Inside the app.js file, modify the line 37 to:
    res.status(err.status || 500).end();
  5. Remove the next line: res.render('error')

Then restart the service:

$ npm start


Announcing the release of Fedora 32 Beta

The Fedora Project is pleased to announce the immediate availability of Fedora 32 Beta, the next step towards our planned Fedora 32 release at the end of April.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

Fedora Workstation

New in Fedora 32 Workstation Beta is EarlyOOM enabled by default. EarlyOOM enables users to more quickly recover and regain control over their system in low-memory situations with heavy swap usage. Fedora 32 Workstation Beta also enables the fstrim timer by default, which improves performance and wear leveling for solid state drives.

Fedora 32 Workstation Beta includes GNOME 3.36, the newest release of the GNOME desktop environment. It is full of performance enhancements and improvements. GNOME 3.36 adds a Do Not Disturb button in the notifications, improved setup for parental controls and virtualization, and tweaks to Settings. For a full list of GNOME 3.36 highlights, see the release notes.

Other updates

Fedora 32 Beta includes updated versions of many popular packages like Ruby, Python, and Perl. It also includes version 10 of the popular GNU Compiler Collection (GCC). We also have the customary updates to underlying infrastructure software, like the GNU C Library. For a full list, see the Change set on the Fedora Wiki.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in the #fedora-qa channel on IRC Freenode. As testing progresses, common issues are tracked on the Common F32 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new in the Fedora 32 Beta release, you can consult the Fedora 32 Change set. It contains more technical information about the new packages and improvements shipped with this release.


Photo by Josh Calabrese on Unsplash.

Posted on Leave a comment

Demonstrating PERL with Tic-Tac-Toe, Part 2

The astute observer may have noticed that PERL is misspelled. In a March 1, 1999 interview with Linux Journal, Larry Wall explained that he originally intended to include the letter “A” from the word “And” in the title “Practical Extraction And Report Language” such that the acronym would correctly spell the word PEARL. However, before he released PERL, Larry heard that another programming language had already taken that name. To resolve the name collision, he dropped the “A”. The acronym is still valid because title case and acronyms allow articles, short prepositions and conjunctions to be omitted (compare for example the acronym LASER).

Name collisions happen when distinct commands or variables with the same name are merged into a single namespace. Because Unix commands share a common namespace, two commands cannot have the same name.

The same problem exists for the names of global variables and subroutines within programs written in languages like PERL. This is an especially significant problem when programmers try to collaborate on large software projects or otherwise incorporate code written by other programmers into their own code base.

Starting with version 5, PERL supports packages. Packages allow PERL code to be modularized with unique namespaces so that the global variables and functions of the modularized code will not collide with the variables and functions of another script or module.

Shortly after its release, PERL5 software developers all over the world began writing software modules to extend PERL’s core functionality. Because many of those developers (currently about 15,000) have made their work freely available on the Comprehensive Perl Archive Network (CPAN), you can easily extend the functionality of PERL on your PC so that you can perform very advanced and complex tasks with just a few commands.

The remainder of this article builds on the previous article in this series by demonstrating how to install, use and create PERL modules on Fedora Linux.

An example PERL program

See the example program from the previous article below, with a few lines of code added to import and use some modules named chip1, chip2 and chip3. It is written in such a way that the program should work even if the chip modules cannot be found. Future articles in this series will build on the below script by adding the additional modules named chip2 and chip3.

You should be able to copy and paste the below code into a plain text file and use the same one-liner that was provided in the previous article to strip the leading numbers.

00 #!/usr/bin/perl
01
02 use strict;
03 use warnings;
04
05 use feature 'state';
06
07 use constant MARKS=>[ 'X', 'O' ];
08 use constant HAL9K=>'O';
09 use constant BOARD=>'
10 ┌───┬───┬───┐
11 │ 1 │ 2 │ 3 │
12 ├───┼───┼───┤
13 │ 4 │ 5 │ 6 │
14 ├───┼───┼───┤
15 │ 7 │ 8 │ 9 │
16 └───┴───┴───┘
17 ';
18
19 use lib 'hal';
20 use if -e 'hal/chip1.pm', 'chip1';
21 use if -e 'hal/chip2.pm', 'chip2';
22 use if -e 'hal/chip3.pm', 'chip3';
23
24 sub get_mark {
25    my $game = shift;
26    my @nums = $game =~ /[1-9]/g;
27    my $indx = (@nums+1) % 2;
28
29    return MARKS->[$indx];
30 }
31
32 sub put_mark {
33    my $game = shift;
34    my $mark = shift;
35    my $move = shift;
36
37    $game =~ s/$move/$mark/;
38
39    return $game;
40 }
41
42 sub get_move {
43    return (<> =~ /^[1-9]$/) ? $& : '0';
44 }
45
46 PROMPT: {
47    no strict;
48    no warnings;
49
50    state $game = BOARD;
51
52    my $mark;
53    my $move;
54
55    print $game;
56
57    if (defined &get_victor) {
58       my $victor = get_victor $game, MARKS;
59       if (defined $victor) {
60          print "$victor wins!\n";
61          complain if ($victor ne HAL9K);
62          last PROMPT;
63       }
64    }
65
66    last PROMPT if ($game !~ /[1-9]/);
67
68    $mark = get_mark $game;
69    print "$mark\'s move?: ";
70
71    if ($mark eq HAL9K and defined &hal_move) {
72       $move = hal_move $game, $mark, MARKS;
73       print "$move\n";
74    } else {
75       $move = get_move;
76    }
77    $game = put_mark $game, $mark, $move;
78
79    redo PROMPT;
80 }

Once you have the above code downloaded and working, create a subdirectory named hal under the same directory that you put the above program. Then copy and paste the below code into a plain text file and use the same procedure to strip the leading numbers. Name the version without the line numbers chip1.pm and move it into the hal subdirectory.

00 # basic operations chip
01
02 package chip1;
03
04 use strict;
05 use warnings;
06
07 use constant MAGIC=>'
08 ┌───┬───┬───┐
09 │ 2 │ 9 │ 4 │
10 ├───┼───┼───┤
11 │ 7 │ 5 │ 3 │
12 ├───┼───┼───┤
13 │ 6 │ 1 │ 8 │
14 └───┴───┴───┘
15 ';
16
17 use List::Util 'sum';
18 use Algorithm::Combinatorics 'combinations';
19
20 sub get_moves {
21    my $game = shift;
22    my $mark = shift;
23    my @nums;
24
25    while ($game =~ /$mark/g) {
26       push @nums, substr(MAGIC, $-[0], 1);
27    }
28
29    return @nums;
30 }
31
32 sub get_victor {
33    my $game = shift;
34    my $marks = shift;
35    my $victor;
36
37    TEST: for (@$marks) {
38       my $mark = $_;
39       my @nums = get_moves $game, $mark;
40
41       next unless @nums >= 3;
42       for (combinations(\@nums, 3)) {
43          my @comb = @$_;
44          if (sum(@comb) == 15) {
45             $victor = $mark;
46             last TEST;
47          }
48       }
49    }
50
51    return $victor;
52 }
53
54 sub hal_move {
55    my $game = shift;
56    my @nums = $game =~ /[1-9]/g;
57    my $rand = int rand @nums;
58
59    return $nums[$rand];
60 }
61
62 sub complain {
63    print "Daisy, Daisy, give me your answer do.\n";
64 }
65
66 sub import {
67    no strict;
68    no warnings;
69
70    my $p = __PACKAGE__;
71    my $c = caller;
72
73    *{ $c . '::get_victor' } = \&{ $p . '::get_victor' };
74    *{ $c . '::hal_move' } = \&{ $p . '::hal_move' };
75    *{ $c . '::complain' } = \&{ $p . '::complain' };
76 }
77
78 1;

The first thing that you will probably notice when you try to run the program with chip1.pm in place is an error message like the following:

$ Can't locate Algorithm/Combinatorics.pm in @INC (you may need to install the Algorithm::Combinatorics module) (@INC contains: hal /usr/local/lib64/perl5/5.30 /usr/local/share/perl5/5.30 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5) at hal/chip1.pm line 17.
BEGIN failed--compilation aborted at hal/chip1.pm line 17.
Compilation failed in require at /usr/share/perl5/if.pm line 15.
BEGIN failed--compilation aborted at game line 18.

When you see an error like the one above, just use the dnf command to search Fedora’s package repository for the name of the system package that provides the needed PERL module as shown below. Note that the module name and path from the above error message have been prefixed with */ and then surrounded with single quotes.

$ dnf provides '*/Algorithm/Combinatorics.pm'
...
perl-Algorithm-Combinatorics-0.27-17.fc31.x86_64 : Efficient generation of combinatorial sequences
Repo : fedora
Matched from:
Filename : /usr/lib64/perl5/vendor_perl/Algorithm/Combinatorics.pm

Hopefully it will find the needed package, which you can then install:

$ sudo dnf install perl-Algorithm-Combinatorics

Once you have all the needed modules installed, the program should work.

How it works

This example is admittedly quite contrived. Nothing about Tic-Tac-Toe is complex enough to need a CPAN module. To demonstrate installing and using a non-standard module, the above program uses the combinations library routine from the Algorithm::Combinatorics module to generate a list of the possible combinations of three numbers from the provided set. Because the board numbers have been mapped to a 3×3 magic square, any set of three numbers that sum to 15 will be aligned on a column, row or diagonal and will therefore be a winning combination.
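
To see that check in isolation, here is a small stand-alone sketch (not part of the game itself) that feeds a few magic-square values through the same sum and combinations routines. It assumes the perl-Algorithm-Combinatorics package installed above is available, and the numbers are arbitrary examples:

#!/usr/bin/perl
use strict;
use warnings;

use List::Util 'sum';
use Algorithm::Combinatorics 'combinations';

# magic-square values of the squares one player has marked so far
my @nums = (2, 9, 4, 7);

for my $combo (combinations(\@nums, 3)) {
   # 2 + 9 + 4 == 15, so those three squares form a winning line
   print "winning line: @$combo\n" if sum(@$combo) == 15;
}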

Modules are imported into a program with the use and require commands. The main differences between them are that the use command loads the module at compile time and automatically calls the import subroutine (if one exists) in the module being imported, while the require command loads the module at run time and does not automatically call any subroutines.
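
In fact, a use statement behaves like a require wrapped in a BEGIN block plus an explicit call to import. The two forms below both load List::Util and pull in its sum routine:

use List::Util 'sum';          # loads at compile time and calls List::Util->import('sum')

BEGIN {
   require List::Util;         # load the module
   List::Util->import('sum');  # call its import subroutine by hand
}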

Modules are just files with a .pm extension that contain PERL subroutines and variables. They begin with the package command and end with 1; (or any other true value, which tells require that the file loaded successfully). But otherwise, they look like any other PERL script. The file name should match the package name. Package and file names are case sensitive.
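
For example, a bare-bones module needs nothing more than the following. The file name greeter.pm and the hello subroutine are made up purely for illustration:

# greeter.pm -- a minimal example module
package greeter;

use strict;
use warnings;

sub hello {
   print "Hello from the greeter package!\n";
}

1;  # the file must end by returning a true value

A script in the same directory could then load it with use lib '.'; followed by use greeter; and call the subroutine as greeter::hello();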

Beware that when you are reading online documentation about PERL modules, the documentation often veers off into topics about classes. Classes are built on modules, but a simple module does not have to adhere to all the restrictions that apply to classes. When you start seeing words like method, inheritance and polymorphism, you are reading about classes, not modules.

There are two subroutine names that are reserved for special use in modules. They are import and unimport and they are called by the use and no directives respectively.

The purpose of the import and unimport subroutines is typically to alias and unalias the module’s subroutines in and out of the calling namespace respectively. For example, line 17 of chip1.pm shows the sum subroutine being imported from the List::Util module.

The constant module, as used on line 07 of chip1.pm, is also altering the caller’s namespace (chip1), but rather than importing a predefined subroutine, it is creating a new one: PERL implements each constant as a small subroutine that always returns the same value.
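
You can verify this with a throwaway constant; it behaves exactly like a subroutine that takes no arguments and always returns the same value:

use constant ANSWER => 42;

print ANSWER, "\n";      # prints 42
print &ANSWER, "\n";     # also prints 42 -- the constant is really a subroutine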

All the identifiers immediately following the use keywords in the above examples are modules. On my system, many of them can be found under the /usr/share/perl5 directory.

Notice that the above error message states “@INC contains:” followed by a list of directories. @INC is a special PERL array that lists, in order, the directories that are searched when a module is loaded. The first file found with a matching name will be used.

As demonstrated on line 19 of the Tic-Tac-Toe game, the lib module can be used to add directories to the @INC array.
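
You can watch this happen from the command line. The one-liner below (run from the game’s directory) prints the search path with the hal directory added to the front:

$ perl -e 'use lib "hal"; print join("\n", @INC), "\n";'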

The chip1 module above provides an example of a very simple import subroutine. In most cases you will want to use the import subroutine that is provided by the Exporter module rather than implementing your own. A custom import subroutine is used in the above example to demonstrate the basics of what it does. Also, the custom implementation makes it easy to override the subroutine definitions in later examples.
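
For reference, here is a sketch of how chip1.pm could hand the same job off to Exporter instead of defining its own import subroutine. This is not how the example module is written; it only shows the more conventional approach:

package chip1;

use strict;
use warnings;

use Exporter 'import';                            # borrow Exporter's import subroutine
our @EXPORT = qw(get_victor hal_move complain);   # exported automatically on "use chip1;"

The subroutines themselves would stay exactly as they are; only the hand-written import subroutine goes away.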

The import subroutine shown above reveals some of the hidden magic that makes packages work. All variables that are both globally scoped (that is, created outside of any pair of curly brackets) and dynamically scoped (that is, not prefixed with the keywords my or state) and all global subroutines are automatically prefixed with a package name. The default package name if no package command has been issued is main.

By default, the current package is assumed when an unqualified variable or subroutine is used. When get_move is called from the PROMPT block in the above example, main::get_move is assumed because the PROMPT block exists in the main package. Likewise, when get_moves is called from the get_victor subroutine, chip1::get_moves is assumed because get_victor exists in the chip1 package.

If you want to access a variable or subroutine that exists in a different package, you either have to use its fully qualified name or create a local alias that refers to the desired subroutine.
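
Both approaches look like this in practice. The sketch below assumes chip1.pm (and the modules it needs) can be loaded, loads it with require so that its import subroutine is never called, and uses a made-up board string purely for illustration:

use strict;
use warnings;
use lib 'hal';

require chip1;                        # loaded with require, so nothing is aliased for us

my $game = 'X2O456789';               # a hypothetical board state

my $move = chip1::hal_move($game);    # option 1: the fully qualified name

*main::hal_move = \&chip1::hal_move;  # option 2: alias the subroutine into this package
$move = hal_move($game);              # now the short, unqualified name works

print "hal suggests square $move\n";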

The import subroutine shown above demonstrates how to create subroutine aliases that refer to subroutines in other packages. On lines 73-75, the fully qualified names for the subroutines are being constructed and then the symbol table name for the subroutine in the calling namespace (the package in which the use statement is being executed) is being assigned the reference of the subroutine in the local package (the package in which the import subroutine is defined).

Notice that subroutines, like variables, have sigils. The sigil for subroutines is the ampersand symbol (&). In most contexts, the sigil for subroutines is optional. When working with references (as shown on lines 73-75 of the import subroutine) and when checking if a subroutine is defined (as shown on lines 57 and 71 of the PROMPT block), the sigil for subroutines is required.
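
Continuing the sketch above (with chip1 loaded and a $game string defined), the ampersand appears only where PERL needs to be told explicitly that a name refers to a subroutine:

print "hal chip loaded\n" if defined &chip1::hal_move;   # the & sigil is required here

my $code_ref = \&chip1::hal_move;                        # ...and when taking a reference
my $next = $code_ref->($game);                           # calling through the reference

$next = chip1::hal_move($game);                          # in a plain call the sigil is optional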

The import subroutine shown above is just a bare minimum example. There is a lot that it doesn’t do. In particular, a proper import subroutine would not automatically import any subroutines or variables. Normally, the user would be expected to provide a list of the routines to be imported on the use line, and that list is available to the import subroutine in the @_ array (after the package name, which PERL passes as the first argument).
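
A slightly more complete version, closer in spirit to what Exporter provides, would walk through that list and alias only the names that were asked for. A hypothetical sketch, not part of chip1.pm:

sub import {
   my $package = shift;                # PERL passes the package name as the first argument
   my $caller  = caller;

   no strict 'refs';                   # symbolic references are needed to build the names
   for my $name (@_) {                 # the names listed on the "use" line
      *{ $caller . '::' . $name } = \&{ $package . '::' . $name };
   }
}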

Final notes

Lines 25-27 of chip1.pm provide a good example of PERL’s dense notation problem. With just a couple of lines of code, the board numbers on which a given mark has been placed can be determined. But does the statement within the conditional clause of the while loop perform the search from the beginning of the game variable on each iteration? Or does it continue from where it left off each time? PERL correctly guesses that I want it to provide the position ($-[0]) of the next mark, if any exists, on each iteration. But exactly what it will do can be very difficult to determine just by looking at the code.
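
A tiny stand-alone example makes the behavior easier to see; in scalar context a //g match remembers its position in the string and resumes from there on the next iteration:

my $board = 'X-O-X';

while ($board =~ /X/g) {
   print "X found at offset $-[0]\n";   # prints offset 0, then offset 4
}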

The last things of note in the above examples are the strict and warnings directives. They enable extra compile-time and runtime checks respectively. Many PERL programmers recommend always including these directives so that programming errors are more likely to be spotted. The downside is that intentionally unconventional code, such as the symbol-table manipulation in chip1’s import subroutine, can be rejected or flooded with warnings, so the strict and/or warnings directives may need to be disabled in some code blocks to get your program to run correctly, as demonstrated on lines 67 and 68 of the example chip1 module. These directives do not change what the program does and they can be omitted. Their only purpose is to provide feedback to the program developer.