
How to Install and Update Fedora Linux on Android using Termux

If you’re interested in running Linux on your Android device, you’re in luck! It’s possible to install Fedora Linux on Android using Termux. Termux is a terminal emulator for Android that allows you to run Linux commands and utilities on your phone or tablet. It does not replace Android. In this article, we’ll walk you through the process of installing Fedora Linux on Android using Termux and show you how to keep it up to date with the latest versions.

Step by step process

Step 1: Install Termux

To get started, you need to install Termux from the Google Play Store. Once you have Termux installed, open it up and type the following command to update the package list:

pkg update

Note: Termux requires Android >= 7 to run. Support for Android 5 and 6 was dropped at v0.83 on 2020-01-01, but you can find old builds on archive.org ( https://archive.org/details/termux-repositories-legacy/ ) if needed.

Step 2: Install Proot-Distro

Next, you’ll need to install Proot-Distro. Proot-Distro is a tool that allows you to install and run Linux distributions in a chroot environment. To install Proot-Distro, run the following command:

pkg install proot-distro

Step 3: Install Fedora

With Proot-Distro installed, you can now use it to install Fedora. To install Fedora, run the following command:

proot-distro install fedora

This will download and install the latest version of Fedora.
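
If you would like to see which other distributions are available before installing, proot-distro can list them:

proot-distro list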

Step 4: Configure dnf

Now that you have Fedora installed, you’ll need to configure dnf, Fedora’s package manager. By default, dnf may try to install SELinux packages, which won’t work properly in a chroot environment. To prevent this, exclude SELinux packages by editing the dnf configuration file. From the Termux home directory, run the following commands to open the dnf configuration file for editing:

cd ../usr/var/lib/proot-distro/installed-rootfs/fedora/etc/dnf
vi dnf.conf

You may substitute the nano editor for vi, if it is more to your liking. Once you’re in the file, find the line that says excludepkgs= and add *selinux* to the end of the line, like so:

excludepkgs=*selinux*

If the excludepkgs line does not exist, add it. Save these changes and exit the editor.
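
For reference, a minimal dnf.conf with the exclusion in place might look like this, alongside whatever options are already present in your file:

[main]
excludepkgs=*selinux*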

Step 5: Install a Desktop Environment (Optional)

Fedora comes with a number of desktop environments to choose from. If you’d like to install a desktop environment, you can do so with the following commands:

proot-distro login fedora
dnf groupinstall "Fedora Workstation" --skip-broken

This will switch from Termux into the chroot Fedora installation and install the GNOME desktop environment, along with a number of other packages. If you prefer a different desktop environment, you can replace Fedora Workstation with the name of the group for your preferred environment.
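
If you are not sure which groups exist, you can list them from inside the Fedora environment first:

dnf group list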

Step 6: Install VNC Server (Optional)

If you plan on using your Fedora installation with a graphical interface, you’ll need to install a VNC server. This will allow you to connect to the Fedora desktop from another computer or device. To install the TigerVNC server, run the following command:

dnf install tigervnc-server.aarch64 -y

This will install the VNC server, along with any necessary dependencies.
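
Once installed, you can start a VNC session from inside the Fedora environment. A minimal sketch — the display number and geometry are up to you:

vncserver :1 -geometry 1280x720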

Step 7: Upgrading Fedora

Now that you have Fedora installed, you’ll want to keep it up to date with the latest versions. To upgrade Fedora, run the following commands:

sudo dnf upgrade --refresh
sudo dnf install dnf-plugin-system-upgrade
sudo dnf system-upgrade download --releasever=37
export DNF_SYSTEM_UPGRADE_NO_REBOOT=1
sudo -E dnf system-upgrade reboot
sudo -E dnf system-upgrade upgrade
sudo dnf upgrade --refresh

The first command, sudo dnf upgrade --refresh, refreshes the package cache and updates any installed packages.

The second command, sudo dnf install dnf-plugin-system-upgrade, installs the dnf-plugin-system-upgrade package, which is needed for the upgrade process.

The third command, sudo dnf system-upgrade download --releasever=37, downloads the packages necessary for the upgrade to version 37 of Fedora. Replace 37 with the desired release version.

The fourth command, export DNF_SYSTEM_UPGRADE_NO_REBOOT=1, sets an environment variable that prevents dnf from actually rebooting the system, which is not possible inside the proot environment.

The fifth command, sudo -E dnf system-upgrade reboot, would normally reboot the system to start the upgrade; with the variable set, it instead starts the upgrade process without rebooting. Make sure to save any important work before running this command.

The sixth command, sudo -E dnf system-upgrade upgrade, performs the upgrade process.

Finally, the seventh command, sudo dnf upgrade --refresh, updates any remaining packages and ensures that your system is fully up to date.

Errors Encountered

During the installation and upgrade process, you may encounter errors. Two common errors are described below, along with their solutions.

Error 1: sudo: /etc/sudo.conf is owned by uid 1001, should be 0
Solution: This error occurs when the ownership of the sudo.conf file is incorrect. To fix this, run the following command:

chmod +s /usr/bin/sudo

This sets the setuid bit on the sudo command, which allows it to run with root privileges.
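
If the setuid bit alone does not resolve the error, you can also inspect and correct the ownership reported in the message. A hedged sketch, run inside the Fedora environment:

ls -l /etc/sudo.conf
chown 0:0 /etc/sudo.conf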

Error 2: filesystem package didn’t get upgraded post OS upgrade
Solution: This error occurs when the filesystem package is not upgraded during the upgrade process. To fix this, run the following commands:

sudo rpm -e --nodeps filesystem
dnf download filesystem

The first command removes the filesystem package without removing its dependents, and the second command downloads the latest version of the package. If you encounter errors during this process, you can use rpmrebuild to rebuild the package with any necessary modifications.
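
Once the download completes, you can reinstall the package. This is a sketch, since the exact file name depends on the version dnf downloaded:

sudo rpm -ivh --nodeps filesystem-*.rpm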

Conclusion

In this article, we’ve shown you how to install Fedora Linux on Android using Termux and how to keep it up to date with the latest versions. While there may be some errors to overcome during the installation and upgrade process, following the steps outlined in this article should help you get Fedora up and running on your Android device in no time.


Docker and Fedora 37: Migrating to Podman

In previous installments (Fedora 32, Fedora 35), there was a strong focus on making things work with Docker on Fedora Linux. This article will focus on the final stage of this long journey. It will focus on migrating a cross-platform production set-up from Docker to Podman.

Background

Docker and Podman use the same open standard for containers. On top of this container standard, there are multiple ways of organizing containers together. Docker-Compose and Kubernetes are the two main technologies for this, although tools like Ansible are also popular.

On the business side, though, there are strong differences. Docker is distributed with a non-free application called Docker Desktop, while Podman historically never had a UI. Docker started life in 2013 and rose to prominence around 2016. Podman started in 2018 and has only become more popular in the last two years.

Podman was certainly not the first on the scene, and it has been fighting an uphill battle. Still, in many ways, this has been an opportunity. Podman can avoid some of the architectural errors that Docker made, and it can integrate with other tools that didn’t exist yet when Docker started.

Personal background

The previous articles about Docker and Fedora are based on the author’s professional life. At the company where I work, we relied heavily on Docker when I came on board. This meant that I needed Docker, and I started to document my struggles, which ultimately led to the first article. The second article was a follow-up to inform readers that most hurdles from the past were no longer a problem.

Podman Desktop

The game-changer in this whole story is Podman Desktop. It is a cross-platform UI that allows teams on Linux, macOS and Windows to collaborate. It works the same way as Docker Desktop, including a bundled VM and WSL support. This also means that Podman now offers a complete package for software companies. While software developers on Linux could use Podman in the past, it’s now possible to migrate an entire team across environments!

Migrating Docker

So, let’s start migrating from Docker to Podman. First, you’ll need to make sure that you have podman and podman-compose installed. You can easily download Podman Desktop from Flathub.

Image files

Image files are good as they are! They are identical because of the open standards behind containers.

One thing that you will see now is that there are a plethora of companies and groups that offer their own image-repositories.

  • hub.docker.com (alias, docker.io) is the offering from Docker, which their tooling conveniently defaults to.
  • registry.gitlab.com is the registry of GitLab’s commercial offering. Community editions follow this same syntax resulting in, for example: registry.gitlab.gnome.org
  • registry.fedoraproject.org is Fedora’s Registry. This registry is also used for flatpaks from the Fedora repository.
  • Quay.io is the offering from Red Hat, which contains all of Podman’s tooling, but also CentOS images.

The biggest change you’ll have to adapt to when switching from Docker to Podman is that you’ll be encouraged to write full image addresses instead of just stubs: `postgres:14-alpine` becomes `docker.io/library/postgres:14-alpine`.
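
In practice, this just means being explicit when pulling. For example:

$ podman pull docker.io/library/postgres:14-alpine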

Docker-Compose files

Compose files are Docker-specific and can’t be used with Podman directly. What you can use, though, is podman-compose. Better yet, you can start your Docker-based platform and then use Podman Desktop to export your current configuration to a Kubernetes file.

$ podman-compose -f ./docker-compose-platform.yaml up --detach

Once you start podman-compose with your old docker-compose .yaml file, you’ll see that you have a number of containers running in one ‘compose’ group. This is how things translate into the world of Podman. From here, you can select the containers and create a Pod. A Pod is a collection of containers that share resources, such as a network.

Once you inspect the Pod, you have a Kube file that represents this container collection. Save it somewhere and give it another critical look. You can likely remove some stuff without impacting the functioning of the system. After all, auto-generated documents will have some artifacts.
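
You can also generate the Kubernetes YAML from the command line. A minimal sketch, assuming your Pod is named platform:

$ podman generate kube platform > podman-kube-platform.yaml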


That’s it. You have now migrated from Docker to Podman. To start up Podman with the Kubernetes file simply do:

$ podman play kube podman-kube-platform-cleanup.yaml --replace

GitLab CI/CD

GitLab has a great set of open source and commercial offerings that allow you to automatically deploy and test your system. In the past, people working with Docker inside GitLab had to resort to a Docker-in-Docker solution, which gave many engineers headaches. A migration from Docker to Podman will resolve that problem.

For example, you can use Podman’s official image to easily build your own product image:

runner-setup:
  image: quay.io/podman/stable:latest
  stage: setup
  script:
    - podman login registry.gitlab.com -u ${COMPANY_CI_USERNAME} -p ${COMPANY_CI_PASSWORD}
    - podman build --pull --no-cache -t registry.gitlab.com/company/platform:latest -f ./distribute/image .
    - podman push registry.gitlab.com/company/platform:latest

In this example we use the official Podman stable image based on Fedora Linux 37. We use that to build the latest version of our platform based on the ./distribute/image file. We can do this all without ever having to set up Docker.

Tooling and integrations

Finally, we have to talk about tooling. Not all tooling will work equally well from the start. For example, the login that Amazon’s AWS CLI provides is hardcoded for Docker. Still, you can easily log in to AWS by doing this:

$ aws ecr get-login-password --region $REGION | podman login --username AWS --password-stdin $AWS_REPO_NAME

Similarly, you can cache your registry credentials for both Podman and Docker. Do this with a single command like:

$ podman login registry.gitlab.com --authfile=${HOME}/.docker/config.json

Alternatives/Workarounds

Perhaps all of this sounds good, but you need more time to convince your team and company that embracing open source tools is great. In that case, you can add the following snippet to .bashrc and use Podman without changing the tooling of your team.

#Ensure that these aliases also affect other scripts
shopt -s expand_aliases
alias docker=podman
alias docker-compose=podman-compose

This also offers you a chance to test the set-up that you have, in case of technical incompatibilities. You can also use the package podman-docker (available via dnf) to automatically convert Docker commands into Podman commands.
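
Installing podman-docker is a one-liner:

$ sudo dnf install podman-docker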

Company experience

The migration from Docker to Podman has been well received within my development team. The desktop experience for macOS and Windows users has improved since they no longer have to struggle with a tool that is closed source. The improvements to the CI system also help in maintaining the pipeline and it makes the integration tests run faster.

In day-to-day work, the team is really enthusiastic about the ease with which they can inspect running containers, manage images, and clean temporary volumes.

In the big picture, the migration from Docker to Podman further aids the company in limiting financial liabilities. Developers on macOS and Windows are no longer dependent on a closed-source product. Finally, it also means that the team gets some experience in Kubernetes, which will certainly pay off in the future.

Summary

The gains from switching to Podman really outweigh the bit of time it takes to set up and to migrate. The future is bright for Podman and Podman Desktop, and it offers a great solution to the problems that come with Docker.

Finally, for us Fedora Linux users, there is another great benefit. There is some beautiful tooling in development that can make our lives so much easier. The following screenshots are of the application Pods. This is currently in active development but will certainly prove to be a useful tool in the future.

This article has been made possible by my employer, Bold Security Technologies. Got your own migration stories to share? Let us know in the comments.


Anaconda Web UI preview image now public!

We are excited to announce the first public preview image of the new Anaconda web interface! Our vision is to reimagine and modernize our installer’s user experience (see our blog post “Anaconda is getting a new suit”). We are doing this by redesigning the user experience on all fronts to make it easier and more approachable for everyone to use.

Today, we would like to introduce our plans for the public preview release, as the project has reached a point where the core functionality is developed and the new interface can be used for real installations.

So, we’re giving you something to play with! 🙂

Installation progress of the preview image

Why a public preview image?

By giving you a working ISO as soon as we can, you have the opportunity to help us define this new UI. This allows us to rethink what we have and find new ways to overcome the challenges of the UI instead of re-creating what we already had. Please take this opportunity and reach out to us with your feedback to help us create the best OS installer ever!

Please let us know what you require from Anaconda. What features are important to you and why are these important? That will allow us to prioritize our focus on development and design. See below for how to contact us.

How to get the public preview image?

Download the Anaconda preview image here

Thanks a lot to the Image Builder team for providing us with a way to build an ISO with the Fedora 37 Workstation GA content. We plan to provide additional images with an updated installer, available at the link above, to give you the newest features and fixes. There are no updates to the installation payload (installed system data) yet. We will announce important updates of the ISO image by sending mail to [email protected] with CC to [email protected]. Please subscribe to either of these lists to stay informed. This way we will be able to iterate on your feedback.

What you will get with the preview ISO

The ISO will allow you to install the system and get a taste of the new UI, so you can provide us early feedback. However, it is pretty early in the development cycle. We advise you not to use this ISO to install critical infrastructure or machines holding important data.

Let’s go to the more interesting part of what you can do with the ISO:

  • Choose installation language
  • Select your disks
  • Automatically partition the disks. BEWARE! This will erase everything on the selected disks.
  • Automatically install Fedora 37 GA Workstation system
  • Basic review screen of your selections
  • Installation progress screen
  • Built-in help (on Installation destination screen only)

Known issues:

  • In the bootloader menu you’ll see “Install Fedora 38”. This is expected because the installation environment is from Rawhide; the content installed will still be Fedora 37 GA, so don’t worry.
  • VirtualBox on Mac might have resolution issues. We are working on resolving this.
  • Aspect ratio and window handling. We know we need to handle this better; feedback is welcome.

How to provide feedback?

Your feedback is critical to have a project which you and we can be proud of, so please share it with us. Take your time to play with the UI and tell us what you think: what works great, what is not working, and what you would like to have. Ideally, follow future updates and tell us whether the situation gets better or worse.

We are really counting on your feedback and we are thankful to have you all supporting us in this journey!


Five common mistakes when using automation

As automation expands to cover more aspects of IT, more administrators are learning automation skills and applying them to ease their workload. Automation can ease the burden of repetitive tasks and add a level of conformity to infrastructure. But when IT workers deploy automation, there are common mistakes that can wreak havoc on infrastructures large and small. Here are five common mistakes typically seen in automation deployments.

Lack of testing

A common beginner’s mistake is failing to thoroughly test automation scripts. A simple shell script can have adverse effects on a server due to typos or logic errors. Multiply that mistake by the number of servers in your infrastructure, and you can have a big mess to clean up. Always test your automation scripts before deploying at large scale.
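
Static analysis tools catch many problems before a script ever runs. For example, ShellCheck (packaged in Fedora) flags common shell pitfalls; the script name here is just a placeholder:

sudo dnf install ShellCheck
shellcheck deploy.sh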

Unexpected server load

The second mistake that frequently occurs is not predicting the load the script may put on other resources. Running a script that downloads a file or installs a package from a repository may be fine when the target is a dozen servers. But scripts are often run against hundreds or thousands of servers, and that load can bring supporting services to a standstill or crash them entirely. Don’t forget to consider endpoint impact and to set a reasonable concurrency rate.
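
Most automation tools let you cap concurrency. As one illustration, Ansible’s --forks option limits how many hosts are acted on in parallel (the playbook name is a placeholder):

ansible-playbook site.yml --forks 10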

Runaway scripts

One use of automation tools is to ensure compliance to standard settings. Automation can make it easy to ensure that every server in a group has exactly the same settings. Problems may arise if a server in that group needs to be altered from that baseline, and the administrator is not aware of the compliance standard. Unneeded and unwanted services can be installed and enabled leading to possible security concerns.

Lack of documentation

A constant duty for administrators should be documenting their work. Companies can have frequent new employees in IT departments due to contracts ending, promotions, or regular employee turnover. It is also not uncommon for work groups within a company to be siloed from each other. For these reasons it is important to document what automation is in place. Unlike user-run scripts, automation may continue long after the person who created it leaves the group. Administrators can find themselves facing strange behaviors in their infrastructure from automation left unchecked.

Lack of experience

The last mistake on the list is administrators not knowing enough about the systems they are automating. Too often, admins are hired into positions where they do not have adequate training and have no one to learn from. This has been especially relevant since COVID, as companies struggle to fill vacancies. Admins are then forced to deal with infrastructure they didn’t set up and may not fully understand. This can lead to very inefficient scripts that waste resources, or to misconfigured servers.

Conclusion

More and more admins are learning automation to help them in their everyday tasks. As a result, automation is being applied to more areas of technology. Hopefully this list will help prevent new users from making these mistakes and urge seasoned admins to re-evaluate their IT strategies. Automation is meant to ease the burden of repetitive tasks, not cause more work for the end user.


MLCube and Podman

MLCube is a new open source, container-based infrastructure specification introduced to enable reproducibility in Python-based machine learning workflows. It can utilize tools such as Podman, Singularity, and Docker. Execution on remote platforms is also supported. One of the chairs of the MLCommons Best Practices working group that is developing MLCube is Diane Feddema from Red Hat. This introductory article explains how to run the hello world MLCube example using Podman on Fedora Linux.

Yazan Monshed has written a very helpful introduction to Podman on Fedora which gives more details on some of the steps used here.

First install the necessary dependencies.

sudo dnf -y update
sudo dnf -y install podman git virtualenv \
    policycoreutils-python-utils

Then, following the documentation, set up a virtual environment and get the example code. To ensure reproducibility, use a specific commit, as the project is being actively improved.

virtualenv -p python3 ./env_mlcube
source ./env_mlcube/bin/activate
git clone https://github.com/mlcommons/mlcube_examples.git
cd ./mlcube_examples/hello_world
git checkout 5fe69bd
pip install mlcube mlcube-docker
mlcube describe

Now change the runner command from docker to podman by editing the file $HOME/mlcube.yaml so that the line

docker: docker

becomes

docker: podman
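
If you prefer, sed can make the same change in one line (assuming the file path above):

sed -i 's/docker: docker/docker: podman/' $HOME/mlcube.yaml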

If you are on a computer with x86_64 architecture, you can get the container using

mlcube configure --mlcube=. --platform=docker

You will see a number of options:

? Please select an image:
▸ registry.fedoraproject.org/mlcommons/hello_world:0.0.1
  registry.access.redhat.com/mlcommons/hello_world:0.0.1
  docker.io/mlcommons/hello_world:0.0.1
  quay.io/mlcommons/hello_world:0.0.1

Choose docker.io/mlcommons/hello_world:0.0.1 to obtain the container.

If you are not on a computer with x86_64 architecture, you will need to build the container. Change the file $HOME/mlcube.yaml so that the line

build_strategy: pull

becomes

build_strategy: auto

and then build the container using

mlcube configure --mlcube=. --platform=docker

To run the tests, you may need to set SELinux permissions in the directories appropriately. You can check that SELinux is enabled by typing

sudo sestatus

which should give you output similar to

SELinux status: enabled
...

Josphat Mutai, Christopher Smart and Daniel Walsh explain that you need to be careful in setting appropriate SELinux policies for files used by containers. Here, you will allow the container to read and write to the workspace directory.

sudo semanage fcontext -a -t container_file_t "$PWD/workspace(/.*)?"
sudo restorecon -Rv $PWD/workspace

Now verify the directory context by checking that

ls -Z

gives output similar to

unconfined_u:object_r:user_home_t:s0 Dockerfile
unconfined_u:object_r:user_home_t:s0 README.md
unconfined_u:object_r:user_home_t:s0 mlcube.yaml
unconfined_u:object_r:user_home_t:s0 requirements.txt
unconfined_u:object_r:container_file_t:s0 workspace

Now run the example

mlcube run --mlcube=. --task=hello --platform=docker
mlcube run --mlcube=. --task=bye --platform=docker

Finally, check that the output

cat workspace/chats/chat_with_alice.txt

has text similar to

Hi, Alice! Nice to meet you.
Bye, Alice! It was great talking to you.

You can create your own MLCube as described here. Contributions to the MLCube examples repository are welcome. Udica is a new project that promises more fine grained SELinux policy controls for containers that are easy for system administrators to apply. Active development of these projects is ongoing. Testing and providing feedback on them would help make secure data management on systems with SELinux easier and more effective.


Samba as AD and Domain Controller

Having a server with Samba providing AD and Domain Controller functionality gives you a very mature and professional way to keep all user and group information in a centralized place. It frees you from the burden of managing users and groups on each server. This solution is useful for authenticating applications such as WordPress, FTP servers, HTTP servers, you name it.

This step-by-step tutorial about setting up Samba as an AD and Domain Controller will demonstrate how you can achieve this solution for your network, servers, and applications.

Pre-requisites

A fresh Fedora Linux 35 server installation.

Definitions

Hostname: dc1
Domain: onda.org
IP: 10.1.1.10/24

Considerations

  • Once the domain is chosen, you can’t change it, so choose wisely;
  • In the /etc/hosts file, the server name can’t be on the 127.0.0.1 line; it must be on its IP address line;
  • Use a fixed IP address for the server so that the server’s IP won’t change;
  • Once you provision the DC server, do not provision another one; join other ones to the domain instead;
  • For the DNS server, we will choose SAMBA_INTERNAL, so we can have the DNS forwarding feature;
  • It is necessary to have a time synchronization service, like chrony or ntp, running on the server, to avoid the numerous problems caused by the server and clients not being synchronized to the same time.

Samba installation

Let’s install the required software to get through this guide. It will provide all the applications you will need.

sudo dnf install samba samba-dc samba-client heimdal-workstation
Samba installation process

Configurations

For setting up Samba as an AD and Domain Controller, you will have to prepare the environment with a functional configuration before you start using it.

Firewall

You will need to allow some UDP and TCP ports through the firewall so that clients will be able to connect to the Domain Controller.

I will show you two methods to add them. Choose the one that suits you best.

First method

This is the most straightforward method: firewalld comes with a service, called samba-dc, that includes all the ports needed for a Samba DC. Add it to the firewall rules.

Add the service:

sudo firewall-cmd --permanent --add-service samba-dc

Second method

Alternatively, you can add the rules from the command line:

sudo firewall-cmd --permanent --add-port={53/udp,53/tcp,88/udp,88/tcp,123/udp,135/tcp,137/udp,138/udp,139/tcp,389/udp,389/tcp,445/tcp,464/udp,464/tcp,636/tcp,3268/tcp,3269/tcp,49152-65535/tcp}

Reload firewalld:

sudo firewall-cmd --reload
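
You can verify that the rules are active:

sudo firewall-cmd --list-all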

For more information about firewalld, check the following article: Control the firewall at the command line

SELinux

To run a Samba DC with SELinux in enforcing mode, it is necessary to turn on some Samba-related SELinux booleans. After these booleans are set, there is no need to disable SELinux.

sudo setsebool -P samba_create_home_dirs=on samba_domain_controller=on samba_enable_home_dirs=on samba_portmapper=on use_samba_home_dirs=on

Restore the default SELinux security contexts for files:

sudo restorecon -Rv /

Samba

First, remove the /etc/samba/smb.conf file if it exists:

sudo rm /etc/samba/smb.conf

Samba uses its own DNS service, and for that reason the service won’t start while systemd-resolved is listening on port 53. It is therefore necessary to edit the systemd-resolved configuration so that it stops listening on port 53 and uses Samba’s DNS instead.

Create the directory /etc/systemd/resolved.conf.d/ if it does not exist:

sudo mkdir /etc/systemd/resolved.conf.d/

Create the file /etc/systemd/resolved.conf.d/custom.conf that contains the custom config:

[Resolve]
DNSStubListener=no
Domains=onda.org
DNS=10.1.1.10

Remember to change the DNS and Domains entries to match your Samba DC server.

Restart the systemd-resolved service:

sudo systemctl restart systemd-resolved

Finally, provision the Samba configuration. samba-tool provides every step needed to make Samba an AD server.

Using the samba-tool, provision the Samba configuration:

sudo samba-tool domain provision --server-role=dc --use-rfc2307 --dns-backend=SAMBA_INTERNAL --realm=ONDA.ORG --domain=ONDA --adminpass=sVbOQ66iCD3hHShg
Samba domain provisioning

The --use-rfc2307 argument provides POSIX attributes to Active Directory, which stores Unix user and group information in LDAP (rfc2307.txt).

Make sure that you have the correct dns forwarder address set in /etc/samba/smb.conf. For this tutorial, it should be different from the server’s own IP address (10.1.1.10); in my case I set it to 8.8.8.8, but your mileage may vary:

Changing the dns forwarder value in the /etc/samba/smb.conf file

After changing the dns forwarder value, restart samba service:

sudo systemctl restart samba

Kerberos

The Samba installation provides a krb5.conf file that we will use:

sudo cp /usr/share/samba/setup/krb5.conf /etc/krb5.conf.d/samba-dc

Edit /etc/krb5.conf.d/samba-dc content to match your organization information:

[libdefaults]
default_realm = ONDA.ORG
dns_lookup_realm = false
dns_lookup_kdc = true

[realms]
ONDA.ORG = {
default_domain = ONDA
}

[domain_realm]
dc1.onda.org = ONDA.ORG

Starting and enabling Samba at boot time

To make sure that Samba will start on system initialization, enable and start it:

sudo systemctl enable samba
sudo systemctl start samba

Testing

Connectivity

$ smbclient -L localhost -N

The following output of the smbclient command shows that the connection was successful.

Anonymous login successful
        Sharename       Type      Comment
        ---------       ----      -------
        sysvol          Disk
        netlogon        Disk
        IPC$            IPC       IPC Service (Samba 4.15.6)
SMB1 disabled -- no workgroup available
smbclient connection test

Now, test the Administrator login to netlogon share:

$ smbclient //localhost/netlogon -UAdministrator -c 'ls'
Password for [ONDA\Administrator]:
  .                              D        0  Sat Mar 26 05:45:13 2022
  ..                             D        0  Sat Mar 26 05:45:18 2022

                8154588 blocks of size 1024. 7307736 blocks available
smbclient Administrator connection test

DNS test

To test if the name resolution is working, execute the following commands:

$ host -t SRV _ldap._tcp.onda.org.
_ldap._tcp.onda.org has SRV record 0 100 389 dc1.onda.org.
$ host -t SRV _kerberos._udp.onda.org.
_kerberos._udp.onda.org has SRV record 0 100 88 dc1.onda.org.
$ host -t A dc1.onda.org.
dc1.onda.org has address 10.1.1.10

If you get the error:

-bash: host: command not found 

Install the bind-utils package:

sudo dnf install bind-utils

Kerberos test

Testing Kerberos is important because it generates the required tickets to let clients authenticate with encryption. It heavily relies on correct time.

It can’t be stressed enough that the date and time must be set correctly, which is why it is so important to have a time synchronization service running on both clients and servers.

$ /usr/lib/heimdal/bin/kinit administrator
$ /usr/lib/heimdal/bin/klist
Kerberos ticket validation

Adding a user to the Domain

samba-tool provides an interface for executing Domain administration tasks, so we can easily add a user to the Domain.

The samba-tool help is very comprehensive:

$ samba-tool user add --help

Adding user danielk to the domain:

sudo samba-tool user add danielk --unix-home=/home/danielk --login-shell=/bin/bash --gecos 'Daniel K.' --given-name=Daniel --surname='Kühl' --mail-address='[email protected]'
Adding a user to the Domain using samba-tool

To list the users on Domain:

sudo samba-tool user list

Wrap up and conclusion

We started by installing Samba and the required applications on a fresh Fedora Linux 35 installation, and we explained the problems that this solution solves. Thereafter, we did an initial configuration that prepared the environment for Samba to operate as an AD and Domain Controller.

Then, we covered how to have Samba up and running alongside Fedora Linux security features, like having it work with firewalld and SELinux enabled. We did some important testing to make sure everything was fine, and ended by showing a bit of how to administer users using samba-tool.

To summarize, if you want to establish a robust solution for centralizing authentication across your network, servers, and services, consider using this approach as part of your infrastructure. (If you wanted to, you could even join a Windows 10 client to this Samba domain; this was tested with Windows 10 Professional version 20H2.)

Now that you know how to have a Samba as AD and Domain Controller solution, what would you like to see covered next? Share your thoughts in the comments below.


Choose between Btrfs and LVM-ext4

Fedora 33 introduced a new default filesystem in desktop variants, Btrfs. After years of Fedora using ext4 on top of Logical Volume Manager (LVM) volumes, this is a big shift. Changing the default file system requires compelling reasons. While Btrfs is an exciting next-generation file system, ext4 on LVM is well established and stable. This guide aims to explore the high-level features of each and make it easier to choose between Btrfs and LVM-ext4.

In summary

The simplest advice is to stick with the defaults. A fresh Fedora 33 install defaults to Btrfs and upgrading a previous Fedora release continues to use whatever was initially installed, typically LVM-ext4. For an existing Fedora user, the cleanest way to get Btrfs is with a fresh install. However, a fresh install is much more disruptive than a simple upgrade. Unless there is a specific need, this disruption could be unnecessary. The Fedora development team carefully considered both defaults, so be confident with either choice.

What about all the other file systems?

There are a large number of file systems for Linux systems. The number explodes after adding in combinations of volume managers, encryption methods, and storage mechanisms. So why focus on Btrfs and LVM-ext4? For the Fedora audience these two setups are likely to be the most common. Ext4 on top of LVM became the default disk layout in Fedora 11, and ext3 on top of LVM came before that.

Now that Btrfs is the default for Fedora 33, the vast majority of existing users will be looking at whether they should stay where they are or make the jump forward. Faced with a fresh Fedora 33 install, experienced Linux users may wonder whether to use this new file system or fall back to what they are familiar with. So out of the wide field of possible storage options, many Fedora users will wonder how to choose between Btrfs and LVM-ext4.

Commonalities

Despite core differences between the two setups, Btrfs and LVM-ext4 actually have a lot in common. Both are mature and well-tested storage technologies. LVM has been in continuous use since the early days of Fedora Core and ext4 became the default in 2009 with Fedora 11. Btrfs merged into the mainline Linux kernel in 2009 and Facebook uses it widely. SUSE Linux Enterprise 12 made it the default in 2014. So there is plenty of production run time there as well.

Both systems do a great job preventing file system corruption due to unexpected power outages, even though the way they accomplish it is different. Supported configurations include single drive setups as well as spanning multiple devices, and both are capable of creating nearly instant snapshots. A variety of tools exist to help manage either system, both with the command line and graphical interfaces. Either solution works equally well on home desktops and on high-end servers.

Advantages of LVM-ext4

Structure of ext4 on LVM

The ext4 file system focuses on high performance and scalability, without a lot of extra frills. It is effective at preventing fragmentation over extended periods of time and provides nice tools for when it does happen. Ext4 is rock solid because it is built on the previous ext3 file system, bringing with it years of in-system testing and bug fixes.

Most of the advanced capabilities in the LVM-ext4 setup come from LVM itself. LVM sits “below” the file system, which means it supports any file system. Logical volumes (LV) are generic block devices so virtual machines can use them directly. This flexibility allows each logical volume to use the right file system, with the right options, for a variety of situations. This layered approach also honors the Unix philosophy of small tools working together.

The volume group (VG) abstraction from the hardware allows LVM to create flexible logical volumes. Each LV pulls from the same storage pool but has its own configuration. Resizing volumes is a lot easier than resizing physical partitions as there are no limitation of ordered placement of the data. LVM physical volumes (PV) can be any number of partitions and can even move between devices while the system is running.
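
For example, growing a logical volume and its file system in one step is a single command. A sketch, assuming a volume group named vg0 and using -r to resize the file system along with the volume:

sudo lvextend -L +10G -r /dev/vg0/home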

LVM supports read-only and read-write snapshots, which make it easy to create consistent backups from active systems. Each snapshot has a defined size, and changes to the source or snapshot volume use space from there. Alternatively, logical volumes can also be part of a thinly provisioned pool. This allows snapshots to automatically use data from a pool instead of consuming fixed-size chunks defined at volume creation.
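
Creating a snapshot is equally brief. A sketch, again assuming a vg0 volume group:

sudo lvcreate --size 1G --snapshot --name root_snap /dev/vg0/root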

Multiple devices with LVM

LVM really shines when there are multiple devices. It has native support for most RAID levels and each logical volume can have a different RAID level. LVM will automatically choose appropriate physical devices for the RAID configuration or the user can specify it directly. Basic RAID support includes data striping for performance (RAID0) and mirroring for redundancy (RAID1). Logical volumes can also use advanced setups like RAID5, RAID6, and RAID10. LVM RAID support is mature because under the hood LVM uses the same device-mapper (dm) and multiple-device (md) kernel support used by mdadm.

Logical volumes can also be cached volumes for systems with both fast and slow drives. A classic example is a combination of SSD and spinning-disk drives. Cached volumes use faster drives for more frequently accessed data (or as a write cache), and the slower drive for bulk data.

The large number of stable features in LVM and the reliable performance of ext4 are a testament to how long they have been in use. Of course, with more features comes complexity. It can be challenging to find the right options for the right feature when configuring LVM. For single drive desktop systems, features of LVM like RAID and cache volumes don’t apply. However, logical volumes are more flexible than physical partitions and snapshots are useful. For normal desktop use, the complexity of LVM can also be a barrier to recovering from issues a typical user might encounter.

Advantages of Btrfs

Btrfs Structure

Lessons learned from previous generations guided the features built into Btrfs. Unlike ext4, it can directly span multiple devices, so it brings along features typically found only in volume managers. It also has features that are unique in the Linux file system space (ZFS has a similar feature set, but don’t expect it in the Linux kernel).

Key Btrfs features

Perhaps the most important feature is the checksumming of all data. Checksumming, along with copy-on-write, provides the key method of ensuring file system integrity after unexpected power loss. More uniquely, checksumming can detect errors in the data itself. Silent data corruption, sometimes referred to as bitrot, is more common than most people realize. Without active validation, corruption can end up propagating to all available backups. This leaves the user with no valid copies. By transparently checksumming all data, Btrfs is able to immediately detect any such corruption. Enabling the right dup or raid option allows the file system to transparently fix the corruption as well.

Copy-on-write (COW) is also a fundamental feature of Btrfs, as it is critical in providing file system integrity and instant subvolume snapshots. Snapshots automatically share underlying data when created from common subvolumes. Additionally, after-the-fact deduplication uses the same technology to eliminate identical data blocks. Individual files can use COW features by calling cp with the reflink option. Reflink copies are especially useful for copying large files, such as virtual machine images, that tend to have mostly identical data over time.
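
For example (the file names are illustrative):

cp --reflink=always vm-image.qcow2 vm-image-clone.qcow2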

Btrfs supports spanning multiple devices with no volume manager required. Multiple device support unlocks data mirroring for redundancy and striping for performance. There is also experimental support for more advanced RAID levels, such as RAID5 and RAID6. Unlike standard RAID setups, the Btrfs raid1 option actually allows an odd number of devices. For example, it can use 3 devices, even if they are different sizes.

All RAID and dup options are specified at the file system level. As a consequence, individual subvolumes cannot use different options. Note that using the RAID1 option with multiple devices means that all data in the volume is available even if one device fails and the checksum feature maintains the integrity of the data itself. That is beyond what current typical RAID setups can provide.

Additional features

Btrfs also enables quick and easy remote backups. Subvolume snapshots can be sent to a remote system for storage. By leveraging the inherent COW meta-data in the file system, these transfers are efficient by only sending incremental changes from previously sent snapshots. User applications such as snapper make it easy to manage these snapshots.
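
At the command line, an incremental transfer looks roughly like this (the host name and snapshot paths are assumptions; snapshots must be created read-only to be sent):

sudo btrfs subvolume snapshot -r /home /home/.snapshots/home-today
sudo btrfs send -p /home/.snapshots/home-yesterday /home/.snapshots/home-today | \
    ssh backup-host "btrfs receive /backups/home"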

Additionally, a Btrfs volume can have transparent compression and chattr +c will mark individual files or directories for compression. Not only does compression reduce the space consumed by data, but it helps extend the life of SSDs by reducing the volume of write operations. Compression certainly introduces additional CPU overhead, but a lot of options are available to dial in the right trade-offs.
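
Compression is typically enabled with a mount option. For example (the zstd level is a matter of taste):

sudo mount -o compress=zstd:1 /dev/sda2 /mnt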

The integration of file system and volume manager functions by Btrfs means that overall maintenance is simpler than LVM-ext4. Certainly this integration comes with less flexibility, but for most desktop, and even server, setups it is more than sufficient.

Btrfs on LVM

Btrfs can convert an ext3/ext4 file system in place. In-place conversion means no data to copy out and then back in. The data blocks themselves are not even modified. As a result, one option for existing LVM-ext4 systems is to leave LVM in place and simply convert ext4 over to Btrfs. While doable and supported, there are reasons why this isn’t the best option.

Some of the appeal of Btrfs is the easier management that comes with a file system integrated with a volume manager. By running on top of LVM, there is still some other volume manager in play for any system maintenance. Also, LVM setups typically have multiple fixed sized logical volumes with independent file systems. While Btrfs supports multiple volumes in a given computer, many of the nice features expect a single volume with multiple subvolumes. The user is still stuck manually managing fixed sized LVM volumes if each one has an independent Btrfs volume. Though, the ability to shrink mounted Btrfs filesystems does make working with fixed sized volumes less painful. With online shrink there is no need to boot a live image.

The physical locations of logical volumes must be carefully considered when using the multiple device support of Btrfs. To Btrfs, each LV is a separate physical device and if that is not actually the case, then certain data availability features might make the wrong decision. For example, using raid1 for data typically provides protection if a single drive fails. If the actual logical volumes are on the same physical device, then there is no redundancy.

If there is a strong need for some particular LVM feature, such as raw block devices or cached logical volumes, then running Btrfs on top of LVM makes sense. In this configuration, Btrfs still provides most of its advantages such as checksumming and easy sending of incremental snapshots. While LVM has some operational overhead when used, it is no more so with Btrfs than with any other file system.

Wrap up

When trying to choose between Btrfs and LVM-ext4 there is no single right answer. Each user has unique requirements, and the same user may have different systems with different needs. Take a look at the feature set of each configuration, and decide if there is something compelling about one over the other. If not, there is nothing wrong with sticking with the defaults. There are excellent reasons to choose either setup.


Deploy Fedora CoreOS servers with Terraform

Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.

This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.

Getting started

Before you start, decide whether you need to review the basics of Fedora CoreOS. Check out this previous article on the Fedora Magazine:

Terraform is an open source tool for defining and provisioning infrastructure. Terraform defines infrastructure as code in files. It provisions infrastructure by calculating the difference between the desired state in code and observed state and applying changes to remove the difference.

HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.

sudo dnf config-manager --add-repo \
    https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install terraform

To get yourself familiar with the tools, start with a simple example. You’re going to create a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command.

sudo dnf install -y awscli
aws configure

Please note, AWS is a paid service. If executed correctly, participants should expect less than $1 USD in charges, but mistakes may lead to unexpected charges.

Configuring Terraform

In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the authorized_ssh_key section to use your own.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: core
      authorized_ssh_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.

terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler. This provider will transpile Fedora CoreOS configurations into Ignition configurations. As a result, you do not need to use fcct to transpile your configurations. Now that your provider versions are specified, initialize them.

provider "aws" {
  region = "us-west-2"
}

provider "ct" {}

The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.

data "ct_config" "config" {
  content = file("config.yaml")
  strict  = true
}

With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.

resource "aws_instance" "server" {
  ami           = "ami-0699a4456969d8650"
  instance_type = "t3.micro"
  user_data     = data.ct_config.config.rendered
}

This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.

Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.

output "instance_ip_addr" {
  value = aws_instance.server.public_ip
}

Alright! You’re ready to create some infrastructure. To deploy the server simply run:

terraform init # Installs the provider dependencies
terraform apply # Displays the proposed changes and applies them

Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations — you’ve provisioned your first Fedora CoreOS server using Terraform!

Updates and immutability

At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.

The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.

Using variables

You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.

As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:

variable "username" {
  type        = string
  description = "Fedora CoreOS user"
  default     = "core"
}

Next, modify the config.yaml file to turn it into a template. When rendered, the ${username} will be replaced.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: ${username}
      authorized_ssh_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Finally, modify the data block to render the template using the templatefile function.

data "ct_config" "config" {
  content = templatefile(
    "config.yaml",
    { username = var.username }
  )
  strict = true
}

To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.

Leveraging the dependency graph

Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.

Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g., the public IP address of a server) is passed as the input of another service (e.g., the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, it will participate in the dependency graph. Updates to the inputs will trigger creation of a new server with the new configuration.

Consider a system of one load balancer and three web servers as an example.

The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.

Web server configuration

First, create a file web.yaml and add a simple Nginx configuration with a templated message.

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Nginx Web Server
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill nginx
        ExecStartPre=-/bin/podman rm nginx
        ExecStartPre=/bin/podman pull nginx
        ExecStart=/bin/podman run --name nginx -p 80:80 -v /etc/nginx/index.html:/usr/share/nginx/html/index.html:z nginx
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/nginx
  files:
    - path: /etc/nginx/index.html
      mode: 0444
      contents:
        inline: |
          <html>
            <h1>Hello from Server ${count}</h1>
          </html>

In main.tf, you can create three web servers using this template with the following blocks:

data "ct_config" "web" {
  count   = 3
  content = templatefile(
    "web.yaml",
    { count = count.index }
  )
  strict = true
}

resource "aws_instance" "web" {
  count         = 3
  ami           = "ami-0699a4456969d8650"
  instance_type = "t3.micro"
  user_data     = data.ct_config.web[count.index].rendered
}

Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.

Load balancer configuration

The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=Haproxy Load Balancer
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill haproxy
        ExecStartPre=-/bin/podman rm haproxy
        ExecStartPre=/bin/podman pull haproxy
        ExecStart=/bin/podman run --name haproxy -p 80:8080 -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/haproxy
  files:
    - path: /etc/haproxy/haproxy.cfg
      mode: 0444
      contents:
        inline: |
          global
            log stdout format raw local0
          defaults
            mode tcp
            log global
            option tcplog
          frontend http
            bind *:8080
            default_backend http
          backend http
            balance roundrobin
          %{ for name, addr in servers ~}
            server ${name} ${addr}:80 check
          %{ endfor ~}

The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.

data "ct_config" "lb" {
  content = templatefile(
    "lb.yaml",
    {
      servers = zipmap(
        aws_instance.web.*.id,
        aws_instance.web.*.public_ip
      )
    }
  )
  strict = true
}

resource "aws_instance" "lb" {
  ami           = "ami-0699a4456969d8650"
  instance_type = "t3.micro"
  user_data     = data.ct_config.lb.rendered
}

Finally, add an output block to display the IP address of the load balancer.

output "load_balancer_ip" {
  value = aws_instance.lb.public_ip
}

All right! Run terraform apply and, on completion, the IP address of the load balancer will be displayed. You should be able to make requests to the load balancer and get responses from each web server.

$ export LB={{load balancer IP here}}
$ curl $LB
<html>
<h1>Hello from Server 0</h1>
</html>
$ curl $LB
<html>
<h1>Hello from Server 1</h1>
</html>
$ curl $LB
<html>
<h1>Hello from Server 2</h1>
</html>

Now you can modify the configuration of the web servers or the load balancer, and any change can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple way to test this). This emphasizes that the load balancer configuration is indeed part of the Terraform dependency graph.
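As a hedged illustration, bumping the count from 3 to 4 in both the ct_config and aws_instance blocks should produce a plan along these lines, with one web server added and the load balancer replaced (exact wording varies by Terraform version):

$ terraform apply
...
Plan: 2 to add, 0 to change, 1 to destroy.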

Clean up

You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.
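Terraform lists everything it is about to delete and asks for confirmation before touching anything (prompt wording may differ slightly between versions):

$ terraform destroy
...
Do you really want to destroy all resources?
  Enter a value: yes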

Where next?

Code for this tutorial can be found at this GitHub repository. Feel free to play with the examples and contribute more if you find something you’d love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on freenode, or contribute on GitHub.

Posted on Leave a comment

Using pods with Podman on Fedora

This article shows the reader how easy it is to get started using pods with Podman on Fedora. But what is Podman? Well, to start: Podman is a container engine developed by Red Hat, and yes, if you thought of Docker when you read container engine, you are on the right track. Docker started a whole new revolution in containerization, and Kubernetes added the concept of pods to container orchestration for containers that share some common resources. But do you really think Docker is the only effective way to do containerization? Podman can also manage pods on Fedora, as well as the containers used in those pods.

Podman is a daemonless, open source, Linux native tool designed to make it easy to find, run, build, share and deploy applications using Open Containers Initiative (OCI) Containers and Container Images.

From the official Podman documentation at http://docs.podman.io/en/latest/

Why should we switch to Podman?

Podman is a daemonless container engine for developing, managing, and running OCI containers on your Linux system. Containers can be run either as root or in rootless mode. Podman interacts directly with the image registry, and with container and image storage.
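Because Podman is daemonless, a regular user can run a container without root privileges and without any background service. As a quick smoke test once Podman is installed (assuming the Fedora registry is reachable from your machine):

$ podman run --rm registry.fedoraproject.org/fedora:latest echo "Hello from a rootless container"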

Install Podman:

sudo dnf -y install podman

Creating a Pod:

To start using a pod, we first need to create one. The basic command structure is:

$ podman pod create

The command above takes no arguments, so it will create a pod with a randomly generated name. You might, however, want to give your pod a relevant name. For that, just add the --name option:

$ podman pod create --name climoiselle

When the pod is created, Podman reports back its ID. In the example shown, the pod was given the name ‘climoiselle’. Viewing the newly created pod is easy using the command shown below:

$ podman pod list
Newly created pods have been deployed

As you can see, there are two pods listed here: one named darshna, and the one created in the example, named climoiselle. No doubt you notice that both pods already include one container, yet we didn’t deploy any containers to the pods.
What is that extra container inside each pod? This randomly generated container is an infra container. Every Podman pod includes an infra container, and in practice these containers do nothing but go to sleep. Their purpose is to hold the namespaces associated with the pod and to allow Podman to connect other containers to the pod. The infra container also allows the pod to keep running when all of its associated containers have been stopped.

You can also view the individual containers within a pod with the command:

 
$ podman ps -a --pod

Add a container

The cool thing is, you can add more containers to your newly deployed pod. Just remember the name of your pod; you’ll need it in order to deploy a container into that pod. We’ll deploy a container that runs the top command, using the official Ubuntu image:

$ podman run -dt --pod climoiselle ubuntu top

Everything in a Single Command:

Podman is agile when it comes to deploying a container in a pod you have yet to create: you can create a pod and deploy a container to it with a single command. Let’s say you want to deploy an NGINX container, exposing external port 8080 to internal port 80, in a new pod named test_server.

 
$ podman run -dt --pod new:test_server -p 8080:80 nginx
Created a new pod and deployed a container together

Let’s check all the pods that have been created and the number of containers running in each of them:

 
$ podman pod list
List of the pods, their state, and the number of containers running in them

Do you want to see the detailed configuration of a running pod? Just type the command shown below:

$ podman pod inspect [pod's name/id]
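The output is a JSON document. If you have jq installed, you can pull out individual fields; for example, listing the names of the containers in the pod created earlier (this assumes the JSON layout of recent Podman releases, where containers appear under a Containers key):

$ podman pod inspect climoiselle | jq '.Containers[].Name'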

Make it stop!

To stop a pod, we need the name or ID of the pod. The podman pod list command shows each pod along with its infra container ID. Simply use podman pod stop with the particular name or ID of the pod:

$ podman pod stop climoiselle

Hey, take a look! Running podman pod list again shows that the pod climoiselle has stopped.
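A stopped pod is not gone. It can be started again, or removed entirely along with its containers, using the standard podman pod subcommands:

$ podman pod start climoiselle
$ podman pod rm -f climoiselle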

After following this short tutorial, you can see how quickly you can start using pods with Podman on Fedora. It’s an easy and convenient way to run containers that share resources and interact with each other.

Further reading

The Fedora Classroom article on containers: https://fedoramagazine.org/fedora-classroom-containers-101-podman/
A good starting point for beginners: https://developers.redhat.com/blog/2018/02/22/container-terminology-practical-introduction/
An article on capabilities and Podman: https://fedoramagazine.org/podman-with-capabilities-on-fedora/
Podman’s documentation site: http://docs.podman.io/en/latest/

Posted on Leave a comment

Getting started with Stratis – up and running

When adding storage to a Linux server, system administrators often use commands like pvcreate, vgcreate, lvcreate, and mkfs to integrate the new storage into the system. Stratis is a command-line tool designed to make managing storage much simpler. It creates, modifies, and destroys pools of storage. It also allocates and deallocates filesystems from the storage pools.

Instead of an entirely in-kernel approach like ZFS or Btrfs, Stratis uses a hybrid approach with components in both user space and kernel space. It builds on existing block device managers like device mapper and existing filesystems like XFS. Monitoring and control are performed by a user space daemon.

Stratis tries to avoid some ZFS characteristics like restrictions on adding new hard drives or replacing existing drives with bigger ones. One of its main design goals is to achieve a positive command-line experience.

Install Stratis

Begin by installing the required packages. Several Python-related dependencies will be automatically pulled in. The stratisd package provides the stratisd daemon which creates, manages, and monitors local storage pools. The stratis-cli package provides the stratis command along with several Python libraries.

# yum install -y stratisd stratis-cli

Next, enable the stratisd service.

# systemctl enable --now stratisd

Note that the "enable --now" syntax shown above both permanently enables and immediately starts the service.
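To confirm that the command-line tool can talk to the daemon, you can query the daemon's version:

# stratis daemon version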

After determining what disks/block devices are present and available, the three basic steps to using Stratis are:

  1. Create a pool of the desired disks.
  2. Create a filesystem in the pool.
  3. Mount the filesystem.

In the following example, four virtual disks are available in a virtual machine. Be sure not to use the root/system disk (/dev/vda in this example)!

# sfdisk -s
/dev/vda: 31457280
/dev/vdb:   5242880
/dev/vdc:   5242880
/dev/vdd:   5242880
/dev/vde:   5242880
total: 52428800 blocks
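Note that sfdisk -s is deprecated in recent util-linux releases; lsblk gives a similar overview of device names and sizes if you prefer:

# lsblk --nodeps -o NAME,SIZE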

Create a storage pool using Stratis

# stratis pool create testpool /dev/vdb /dev/vdc
# stratis pool list
Name      Total Physical Size  Total Physical Used
testpool               10 GiB               56 MiB

After creating the pool, check the status of its block devices:

# stratis blockdev list
Pool Name   Device Node Physical Size   State  Tier
testpool  /dev/vdb            5 GiB  In-use  Data
testpool  /dev/vdc            5 GiB  In-use  Data

Create a filesystem using Stratis

Next, create a filesystem. As mentioned earlier, Stratis uses the existing DM (device mapper) and XFS filesystem technologies to create thinly-provisioned filesystems. By building on these existing technologies, large filesystems can be created and it is possible to add physical storage as storage needs grow.

# stratis fs create testpool testfs
# stratis fs list
Pool Name  Name  Used Created        Device            UUID
testpool  testfs 546 MiB  Apr 18 2020 09:15 /stratis/testpool/testfs  095fb4891a5743d0a589217071ff71dc

Note that “fs” in the example above can optionally be written out as “filesystem”.

Mount the filesystem

Next, create a mount point and mount the filesystem.

# mkdir /testdir
# mount /stratis/testpool/testfs /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

The actual space used by a filesystem is shown by the stratis fs list command demonstrated previously. Notice how the testfs filesystem, mounted on /testdir, has a virtual size of 1.0T. If the data in a filesystem approaches its virtual size, and there is available space in the storage pool, Stratis will automatically grow the filesystem. Note that beginning with Fedora 34, the device path takes the form /dev/stratis/<pool-name>/<filesystem-name>.

Add the filesystem to fstab

To configure automatic mounting of the filesystem at boot time, run the following commands:

# UUID=`lsblk -n -o uuid /stratis/testpool/testfs`
# echo "UUID=${UUID} /testdir xfs defaults 0 0" >> /etc/fstab
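One hedge worth considering: if the mount is attempted at boot before stratisd has set up the pool, the system can drop into emergency mode. The Stratis documentation suggests adding a systemd dependency on stratisd to the mount options, so the fstab entry would instead look like:

UUID=<uuid> /testdir xfs defaults,x-systemd.requires=stratisd.service 0 0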

After updating fstab, verify that the entry is correct by unmounting and mounting the filesystem:

# umount /testdir
# mount /testdir
# df -h | egrep 'stratis|Filesystem'
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/stratis-1-3e8e[truncated]71dc  1.0T  7.2G 1017G   1% /testdir

Adding cache devices with Stratis

Suppose /dev/vdd is an available SSD (solid-state drive). To configure it as a cache device and check its status, use the following commands:

# stratis pool add-cache testpool  /dev/vdd
# stratis blockdev
Pool Name   Device Node Physical Size  State   Tier
testpool   /dev/vdb            5 GiB  In-use   Data
testpool   /dev/vdc            5 GiB  In-use   Data
testpool   /dev/vdd            5 GiB  In-use  Cache

Growing the storage pool

Suppose the testfs filesystem is close to using all the storage capacity of testpool. You could add an additional disk/block device to the pool with commands similar to the following:

# stratis pool add-data testpool /dev/vde
# stratis blockdev
Pool Name Device Node Physical Size   State   Tier
testpool   /dev/vdb           5 GiB  In-use   Data
testpool   /dev/vdc           5 GiB  In-use   Data
testpool   /dev/vdd           5 GiB  In-use  Cache
testpool   /dev/vde           5 GiB  In-use   Data

After adding the device, verify that the pool shows the added capacity:

# stratis pool
Name      Total Physical Size   Total Physical Used
testpool             15 GiB           606 MiB

Conclusion

Stratis is a tool designed to make managing storage much simpler. Creating a filesystem with enterprise functionalities like thin-provisioning, snapshots, volume management, and caching can be accomplished quickly and easily with just a few basic commands.
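As a parting sketch, the snapshots mentioned above take a single command. This example reuses the pool and filesystem names from this article, with a hypothetical snapshot name:

# stratis fs snapshot testpool testfs testfs-snap
# stratis fs list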

See also Getting Started with Stratis Encryption.