
Use FastAPI to build web services in Python

FastAPI is a modern Python web framework that leverages the latest Python improvements, such as asyncio. In this article, you will see how to set up a container-based development environment and implement a small web service with FastAPI.

Getting Started

The development environment can be set up using the Fedora container image. The following Dockerfile prepares the container image with FastAPI, Uvicorn and aiofiles.

FROM fedora:32
RUN dnf install -y python-pip \
    && dnf clean all \
    && pip install fastapi uvicorn aiofiles
WORKDIR /srv
CMD ["uvicorn", "main:app", "--reload"]

After saving this Dockerfile in your working directory, build the container image using podman.

$ podman build -t fastapi .
$ podman images
REPOSITORY          TAG      IMAGE ID       CREATED          SIZE
localhost/fastapi   latest   01e974cabe8b   18 seconds ago   326 MB

Now let’s create a basic FastAPI program and run it using that container image.

from fastapi import FastAPI

app = FastAPI()


@app.get("/")
async def root():
    return {"message": "Hello Fedora Magazine!"}

Save that source code in a main.py file and then run the following command to execute it:

$ podman run --rm -v $PWD:/srv:z -p 8000:8000 --name fastapi -d fastapi
$ curl http://127.0.0.1:8000
{"message":"Hello Fedora Magazine!"}

You now have a running web service using FastAPI. Any changes to main.py will be automatically reloaded. For example, try changing the “Hello Fedora Magazine!” message.

To stop the application, run the following command.

$ podman stop fastapi

Building a small web service

To really see the benefits of FastAPI and the performance improvements it brings (see the comparison with other Python web frameworks), let's build an application that performs some I/O. You can use the output of the dnf history command as data for that application.

First, save the output of that command in a file.

$ dnf history | tail --lines=+3 > history.txt

The command uses tail to remove the headers of dnf history, which are not needed by the application. Each dnf transaction can be represented with the following information (a sample line is shown after the list):

  • id: the number of the transaction (incremented every time a new transaction is run)
  • command: the dnf command run during the transaction
  • date: the date and time the transaction happened
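
For reference, a line in history.txt looks roughly like this (illustrative values only; your IDs, commands, and dates will differ):

   103 | update                   | 2020-05-25 08:35 | Upgrade        |   32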

Next, modify the main.py file to add that data structure to the application.

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class DnfTransaction(BaseModel):
    id: int
    command: str
    date: str

FastAPI comes with the pydantic library, which allows you to easily build data classes and benefit from type annotations to validate your data.
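
For example, here is a minimal sketch you can try alongside the DnfTransaction model defined above; it shows pydantic coercing and validating the annotated fields:

from pydantic import ValidationError

# The string "103" is coerced to the int 103 thanks to the type annotation.
transaction = DnfTransaction(id="103", command="update", date="2020-05-25 08:35")
print(transaction.id)  # 103

# Invalid data raises a ValidationError instead of passing through silently.
try:
    DnfTransaction(id="not a number", command="update", date="2020-05-25 08:35")
except ValidationError as error:
    print(error)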

Now, continue building the application by adding a function that will read the data from the history.txt file.

import aiofiles
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class DnfTransaction(BaseModel):
    id: int
    command: str
    date: str


async def read_history():
    transactions = []
    async with aiofiles.open("history.txt") as f:
        async for line in f:
            transactions.append(DnfTransaction(
                id=line.split("|")[0].strip(" "),
                command=line.split("|")[1].strip(" "),
                date=line.split("|")[2].strip(" ")))
    return transactions

This function makes use of the aiofiles library, which provides an asyncio API to manipulate files in Python. This means that opening and reading the file will not block other requests made to the server.

Finally, change the root function to return the data stored in the transactions list.

@app.get("/")
async def read_root():
    return await read_history()

To see the output of the application, run the following command:

$ curl http://127.0.0.1:8000 | python -m json.tool
[
    {
        "id": 103,
        "command": "update",
        "date": "2020-05-25 08:35"
    },
    {
        "id": 102,
        "command": "update",
        "date": "2020-05-23 15:46"
    },
    {
        "id": 101,
        "command": "update",
        "date": "2020-05-22 11:32"
    },
    ...
]

Conclusion

FastAPI is gaining a lot of popularity in the Python web framework ecosystem because it offers a simple way to build web services using asyncio. You can find more information about FastAPI in the documentation.

The code of this article is available in this GitHub repository.


Photo by Jan Kubita on Unsplash.


Develop GUI apps using Flutter on Fedora

When it comes to app development frameworks, Flutter is the latest and greatest. Google seems to be planning to take over the entire GUI app development world with Flutter, starting with mobile devices, which are already perfectly supported. Flutter allows you to develop cross-platform GUI apps for multiple targets — mobile, web, and desktop — from a single codebase.

This post will go through how to install the Flutter SDK and tools on Fedora, as well as how to use them both for mobile development and web/desktop development.

Installing Flutter and Android SDKs on Fedora

To get started building apps with Flutter, you need to install

  • the Android SDK;
  • the Flutter SDK itself; and,
  • optionally, an IDE and its Flutter plugins.

Installing the Android SDK

Flutter requires the installation of the Android SDK along with the entire Android Studio suite of tools. Google provides Android Studio as a tar.gz archive. The Android Studio executable can be found in the android-studio/bin directory and is called studio.sh. To run it, open a terminal, cd into that directory, and then run:

$ ./studio.sh

Installing the Flutter SDK

Before you install Flutter you may want to consider what release channel you want to be on.

The stable channel is least likely to give you a headache if you just want to build a mobile app using mainstream Flutter features.

On the other hand, you may want to use the latest features, especially for desktop and web app development. In that case, you might be better off installing either the latest version of the beta or even the dev channel.

Either way, you can switch between channels after you install using the flutter channel command explained later in the article.

Head over to the official SDK archive page and download the latest installation bundle for the release channel most appropriate for your use case.

The installation bundle is simply a xz-compressed tarball (.tar.xz extension). You can extract it wherever you want, given that you add the flutter/bin subdirectory to the PATH environment variable.
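
For example, assuming you extract the archive into your home directory (a sketch; replace the placeholder with the actual name of the file you downloaded):

$ tar xf flutter_linux_<version>-<channel>.tar.xz -C ~/
$ export PATH="$PATH:$HOME/flutter/bin"

Add the export line to your ~/.bash_profile or ~/.bashrc to make the change permanent.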

Installing the IDE plugins

To install the plugin for Visual Studio Code, you need to search for Flutter in the Extensions tab. Installing it will also install the Dart plugin.

The same thing happens when you install the plugin in Android Studio: open Settings, then the Plugins tab, and install the Flutter plugin.

Using the Flutter and Android CLI Tools on Fedora

Now that you’ve installed Flutter, here’s how to use the CLI tool.

Upgrading and Maintaining Your Flutter Installations

The flutter doctor command is used to check whether your installation and related tools are complete and don’t require any further action.

For example, the output you may get from flutter doctor right after installing on Fedora is:

Doctor summary (to see all details, run flutter doctor -v):
[✓] Flutter (Channel stable, v1.12.13+hotfix.5, on Linux, locale it_IT.UTF-8)
[!] Android toolchain - develop for Android devices (Android SDK version 29.0.2)
    ✗ Android licenses not accepted.  To resolve this, run: flutter doctor --android-licenses
[!] Android Studio (version 3.5)
    ✗ Flutter plugin not installed; this adds Flutter specific functionality.
    ✗ Dart plugin not installed; this adds Dart specific functionality.
[!] Connected device
    ! No devices available

! Doctor found issues in 3 categories.

Of course the issue with the Android toolchain has to be resolved in order to build for Android. Run this command to accept the licenses:

$ flutter doctor --android-licenses

Use the flutter channel command to switch channels after installation. It’s just like switching branches on Git (and that’s actually what it does). You use it in the following way:

$ flutter channel <channel_name>

…where you’d replace <channel_name> with the release channel you want to switch to.

After switching channels, or whenever you feel the need, update your installation. You might consider doing this every once in a while, or whenever a major update comes out if you follow Flutter news. Run this command:

$ flutter upgrade

Building for Mobile

You can build for Android very easily: the flutter build command supports it by default, and it allows you to build both APKs and newfangled app bundles.

All you need to do is to create a project with flutter create, which will generate some code for an example app and the necessary android and ios folders.
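
For example (my_app is just a placeholder project name):

$ flutter create my_app
$ cd my_app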

When you’re done coding you can either run:

  • flutter build apk or flutter build appbundle to generate the necessary app files to distribute, or
  • flutter run to run the app on a connected device or emulator directly.

When you run the app on a phone or emulator with flutter run, you can press the r key on the keyboard to use stateful hot reload. This feature updates what’s displayed on the phone or emulator to reflect the changes you’ve made to the code without requiring a full rebuild.

If you input a capital R character to the debug console, you trigger a hot restart. This restart doesn’t preserve state and is necessary for bigger changes to the app.

If you’re using a GUI IDE, you can trigger a hot reload using the bolt icon button and a hot restart with the typical refresh button.

Building for the Desktop

To build apps for the desktop on Fedora, use the flutter-desktop-embedding repository. The flutter create command doesn’t have templates for desktop Linux apps yet. That repository contains examples of desktop apps and files required to build on desktop, as well as examples of plugins for desktop apps.

To build or run apps for Linux, you also need to be on the master release channel and enable Linux desktop app development. To do this, run:

$ flutter config --enable-linux-desktop

After that, you can use flutter run to run the app on your development workstation directly, or run flutter build linux to build a binary file in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the linux/ directory:

$ flutter create .

Building for the Web

Starting with Flutter 1.12, you can build Web apps using Flutter with the mainline codebase, without having to use the flutter_web forked libraries, but you have to be running on the beta channel.

If you are (you can switch to it using flutter channel beta and flutter upgrade as we’ve seen earlier), you need to enable web development by running flutter config --enable-web.

After doing that, you can run flutter run -d web and a local web server will be started from which you can access your app. The command returns the URL at which the server is listening, including the port number.

You can also run flutter build web to build the static website files in the build/ directory.

If those commands don’t work, run this command in the project directory to generate the required files to build in the web/ directory:

$ flutter create .

Packages for Installing Flutter

Other distributions have packages or community repositories that let you install and update Flutter in a more straightforward and intuitive way. However, at the time of writing, no such package exists for Fedora. If you have experience packaging RPMs for Fedora, consider contributing to this GitHub repository for this COPR package.

The next step is learning Flutter. You can do that in a number of ways:

  • Read the good API reference documentation on the official site
  • Watch some of the introductory video courses available online
  • Read one of the many books out there today. [Check out the author’s bio for a suggestion! — Ed.]

Photo by Randall Ruiz on Unsplash.


A quick introduction to Toolbox on Fedora

Toolbox allows you to sort and manage your development environments in containers without requiring root privileges or manually attaching volumes. It creates a container where you can install your own CLI tools, without installing them on the base system itself. You can also utilize it when you do not have root access or cannot install programs directly. This article gives you an introduction to toolbox and what it does.

Installing Toolbox

Silverblue includes Toolbox by default. For the Workstation and Server editions, you can grab it from the default repositories using dnf install toolbox.

Creating Toolboxes

Open your terminal and run toolbox enter. The utility will automatically request permission to download the latest image, create your first container, and place your shell inside this container.

$ toolbox enter
No toolbox containers found. Create now? [y/N] y
Image required to create toolbox container.
Download registry.fedoraproject.org/f30/fedora-toolbox:30 (500MB)? [y/N]: y

Currently there is no difference between the toolbox and your base system. Your filesystems and packages appear unchanged. Here is an example using a repository that contains documentation source for a resume under a ~/src/resume folder. The resume is built using the pandoc tool.

$ pwd
/home/rwaltr
$ cd src/resume/
$ head -n 5 Makefile
all: pdf html rtf text docx

pdf: init
	pandoc -s -o BUILDS/resume.pdf markdown/*
$ make pdf
bash: make: command not found
$ pandoc -v
bash: pandoc: command not found

This toolbox does not have the programs required to build the resume. You can remedy this by installing the tools with dnf. You will not be prompted for the root password, because you are running in a container.

$ sudo dnf groupinstall "Authoring and Publishing" -y && sudo dnf install pandoc make -y
...
$ make all #Successful builds
mkdir -p BUILDS
pandoc -s -o BUILDS/resume.pdf markdown/*
pandoc -s -o BUILDS/resume.html markdown/*
pandoc -s -o BUILDS/resume.rtf markdown/*
pandoc -s -o BUILDS/resume.txt markdown/*
pandoc -s -o BUILDS/resume.docx markdown/*
$ ls BUILDS/
resume.docx resume.html resume.pdf resume.rtf resume.txt

Run exit at any time to exit the toolbox.

$ cd BUILDS/
$ pandoc --version || ls
pandoc 2.2.1
Compiled with pandoc-types 1.17.5.4, texmath 0.11.1.2, skylighting 0.7.5
...
for a particular purpose.
resume.docx resume.html resume.pdf resume.rtf resume.txt
$ exit
logout
$ pandoc --version || ls
bash: pandoc: command not found...
resume.docx resume.html resume.pdf resume.rtf resume.txt

You retain the files created by your toolbox in your home directory. None of the programs installed in your toolbox will be available outside of it.

Tips and tricks

This introduction to toolbox only scratches the surface. Here are some additional tips, but you can also check out the official documentation.

  • toolbox --help will show you the man page for Toolbox
  • You can have multiple toolboxes at once. Use toolbox create -c Toolboxname and toolbox enter -c Toolboxname
  • Toolbox uses Podman to do the heavy lifting. Use toolbox list to find the IDs of the containers Toolbox creates. Podman can use these IDs to perform actions such as rm and stop, as shown in the example after this list. (You can also read more about Podman in this Magazine article.)
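
For example (the container ID is a placeholder; use an ID reported by toolbox list on your system):

$ toolbox list
$ podman stop <container ID>
$ podman rm <container ID>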

Photo courtesy of Florian Richter from Flickr.


Using SSH port forwarding on Fedora

You may already be familiar with using the ssh command to access a remote system. The protocol behind ssh allows terminal input and output to flow through a secure channel. But did you know that you can also use ssh to send and receive other data securely as well? One way is to use port forwarding, which allows you to connect network ports securely while conducting your ssh session. This article shows you how it works.

About ports

A standard Linux system has a set of network ports already assigned, from 0 to 65535. Your system reserves ports up to 1023 for system use. On many systems, you can’t simply choose to use one of these low-numbered ports. Quite a few ports are commonly expected to run specific services. You can find these defined in your system’s /etc/services file.
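
For example, you can look up the conventional port for a well-known service:

$ grep -w 80/tcp /etc/services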

You can think of a network port like a physical port or jack to which you can connect a cable. That port may connect to some sort of service on the system, like wiring behind that physical jack. An example is the Apache web server (also known as httpd). The web server usually claims port 80 on the host system for HTTP non-secure connections, and 443 for HTTPS secure connections.

When you connect to a remote system, such as with a web browser, you are also “wiring” your browser to a port on your host. This is usually a random high port number, such as 54001. The port on your host connects to the port on the remote host, such as 443 to reach its secure web server.
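
You can see such connections on your own system with the ss utility, for example:

$ ss -tn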

So why use port forwarding when you have so many ports available? Here are a couple common cases in the life of a web developer.

Local port forwarding

Imagine that you are doing web development on a remote system called remote.example.com. You usually reach this system via ssh but it’s behind a firewall that allows very little additional access, and blocks most other ports. To try out your web app, it’s helpful to be able to use your web browser to point to the remote system. But you can’t reach it via the normal method of typing the URL in your browser, thanks to that pesky firewall.

Local forwarding allows you to tunnel a port available via the remote system through your ssh connection. The port appears as a local port on your system (thus “local forwarding.”)

Let’s say your web app is running on port 8000 on the remote.example.com box. To locally forward that system’s port 8000 to your system’s port 8000, use the -L option with ssh when you start your session:

$ ssh -L 8000:localhost:8000 remote.example.com

Wait, why did we use localhost as the target for forwarding? It’s because from the perspective of remote.example.com, you’re asking the host to use its own port 8000. (Recall that any host usually can refer to itself as localhost to connect to itself via a network connection.) That port now connects to your system’s port 8000. Once the ssh session is ready, keep it open, and you can type http://localhost:8000 in your browser to see your web app. The traffic between systems now travels securely over an ssh tunnel!
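
You can also check the tunnel from a terminal on your local system:

$ curl http://localhost:8000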

If you have a sharp eye, you may have noticed something. What if we used a hostname other than localhost for remote.example.com to forward to? If it can reach a port on another system on its network, it usually can forward that port just as easily. For example, say you wanted to reach a MariaDB or MySQL service on the db.example.com box also on the remote network. This service typically runs on port 3306. So you could forward it with this command, even if you can’t ssh to the actual db.example.com host:

$ ssh -L 3306:db.example.com:3306 remote.example.com

Now you can run MariaDB commands against your localhost and you’re actually using the db.example.com box.
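
For example (a sketch; the user name is a placeholder, and the mariadb or mysql client must be installed locally):

$ mysql -h 127.0.0.1 -P 3306 -u someuser -p

Using 127.0.0.1 rather than localhost here makes sure the client connects through the forwarded TCP port instead of a local socket.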

Remote port forwarding

Remote forwarding lets you do things the opposite way. Imagine you’re designing a web app for a friend at the office, and want to show them your work. Unfortunately, though, you’re working in a coffee shop, and because of the network setup, they can’t reach your laptop via a network connection. However, you both use the remote.example.com system at the office and you can still log in there. Your web app seems to be running well on port 5000 locally.

Remote port forwarding lets you tunnel a port from your local system through your ssh connection, and make it available on the remote system. Just use the -R option when you start your ssh session:

$ ssh -R 6000:localhost:5000 remote.example.com

Now when your friend inside the corporate firewall runs their browser, they can point it at http://remote.example.com:6000 and see your work. And as in the local port forwarding example, the communications travel securely over your ssh session.

By default the sshd daemon running on a host is set so that only that host can connect to its remote forwarded ports. Let’s say your friend wanted to be able to let people on other example.com corporate hosts see your work, and they weren’t on remote.example.com itself. You’d need the owner of the remote.example.com host to add one of these options to /etc/ssh/sshd_config on that box:

GatewayPorts yes
# OR
GatewayPorts clientspecified

The first option means remote forwarded ports are available on all the network interfaces on remote.example.com. The second means that the client who sets up the tunnel gets to choose the address. This option is set to no by default.

With this option, you as the ssh client must still specify the interfaces on which the forwarded port on your side can be shared. Do this by adding a network specification before the local port. There are several ways to do this, including the following:

$ ssh -R *:6000:localhost:5000 # all networks
$ ssh -R 0.0.0.0:6000:localhost:5000 # all networks
$ ssh -R 192.168.1.15:6000:localhost:5000 # single network
$ ssh -R remote.example.com:6000:localhost:5000 # single network

Other notes

Notice that the port numbers need not be the same on the local and remote systems. In fact, at times you may not even be able to use the same port. For instance, normal users may not be able to forward onto a system port in a default setup.

In addition, it’s possible to restrict forwarding on a host. This might be important to you if you need tighter security on a network-connected host. The PermitOpen option for the sshd daemon controls whether, and which, ports are available for TCP forwarding. The default setting is any, which allows all the examples above to work. To disallow any port forwarding, choose none, or choose only a specific host:port setting to permit. For more information, search for PermitOpen in the manual page for sshd daemon configuration:

$ man sshd_config
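
For example, to permit forwarding only to the database host and port used earlier, the host’s /etc/ssh/sshd_config could contain a line like this:

PermitOpen db.example.com:3306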

Finally, remember that port forwarding only happens as long as the controlling ssh session is open. If you need to keep the forwarding active for a long period, try sending the session to the background with the -f option, combined with -N so that no remote command is run, as shown below. Make sure your console is locked to prevent tampering while you’re away from it.
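
For example, reusing the local forwarding example from earlier:

$ ssh -f -N -L 8000:localhost:8000 remote.example.com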


Design faster web pages, part 1: Image compression

Lots of web developers want to achieve fast-loading web pages. As more page views come from mobile devices, making websites look better on smaller screens using responsive design is just one side of the coin. Reducing a page’s Browser Calories, its total download weight, can make the difference in loading times, which satisfies not just users but also search engines that rank pages by loading speed. This article series covers how to slim down your web pages with the tools Fedora offers.

Preparation

Before you start to slim down your web pages, you need to identify the core issues. For this, you can use Browserdiet. It’s a browser add-on available for Firefox, Opera, Chrome, and other browsers. It analyzes the performance values of the currently open web page, so you know where to start slimming down.

Next you’ll need some pages to work on. The example screenshot shows a test of getfedora.org. At first it looks very simple and responsive.

Browser Diet – values of getfedora.org

However, Browserdiet’s page analysis shows that 1.8 MB of files are downloaded. Therefore, there’s some work to do!

Web optimization

There are over 281 KB of JavaScript files, 203 KB more in CSS files, and 1.2 MB in images. Start with the biggest issue: the images. The tool set you need for this is GIMP, ImageMagick, and optipng. You can easily install them using the following command:

sudo dnf install gimp ImageMagick optipng

For example, take the following file which is 6.4 KB:

First, use the file command to get some basic information about this image:

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit/color RGBA, non-interlaced

The image — which is only in grey and white — is saved in 8-bit/color RGBA mode. That’s not as efficient as it could be.

Start GIMP so you can set a more appropriate color mode. Open cinnamon.png in GIMP. Then go to Image > Mode and set it to Grayscale. Export the image as a PNG with compression factor 9. All other settings in the export dialog should be left at the default.

$ file cinnamon.png
cinnamon.png: PNG image data, 60 x 60, 8-bit gray+alpha, non-interlaced

The output shows the file is now in 8-bit gray+alpha mode. The file size has shrunk from 6.4 KB to 2.8 KB, which is only 43.75% of the original size. But there’s more you can do!

You can also use the ImageMagick tool identify to provide more information about the image.

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2831B 0.000u 0:00.000

This tells you the file is 2831 bytes. Jump back into GIMP and export the file again. In the export dialog, disable the storing of the time stamp and the alpha channel color values to reduce this a little more. Now identify shows:

$ identify cinnamon.png
cinnamon.png PNG 60x60 60x60+0+0 8-bit Grayscale Gray 2798B 0.000u 0:00.000

Next, use optipng to losslessly optimize your PNG images. There are other tools that do similar things, including advdef (which is part of advancecomp), pngquant and pngcrush.

Run optipng on your file. Note that this will replace your original:

$ optipng -o7 cinnamon.png
** Processing: cinnamon.png
60x60 pixels, 2x8 bits/pixel, grayscale+alpha
Reducing image to 8 bits/pixel, grayscale
Input IDAT size = 2720 bytes
Input file size = 2812 bytes

Trying:
  zc = 9  zm = 8  zs = 0  f = 0   IDAT size = 1922
  zc = 9  zm = 8  zs = 1  f = 0   IDAT size = 1920

Selecting parameters:
  zc = 9  zm = 8  zs = 1  f = 0   IDAT size = 1920

Output IDAT size = 1920 bytes (800 bytes decrease)
Output file size = 2012 bytes (800 bytes = 28.45% decrease)

The option -o7 is the slowest to process, but provides the best end results. You’ve knocked 800 more bytes off the file size, which is now 2012 bytes.

To optimize all of the PNGs in a directory, use this command:

$ optipng -o7 -dir=<directory> *.png

The -dir option lets you give a target directory for the output. If this option is not used, optipng overwrites the original images.

Choosing the right file format

When it comes to pictures for use on the internet, you can choose between several file formats, such as PNG, JPG, aPNG, JPG-LS, and JPG 2000.

JPG-LS and JPG 2000 are not widely used. Only a few digital cameras support these formats, so they can be ignored. aPNG is an animated PNG, and not widely used either.

You could save a few more bytes by changing the compression rate or choosing another file format. The first isn’t possible in GIMP, as it’s already using the highest compression rate. Since there are no alpha channels left in the picture, you can choose JPG as the file format instead. For now, use the default value of 90% quality. You could lower it to 85%, but then aliasing effects become visible. This saves a few bytes more:

$ identify cinnamon.jpg
cinnamon.jpg JPEG 60x60 60x60+0+0 8-bit sRGB 2676B 0.000u 0:00.000

This conversion to the right color space and the switch to JPG alone brought the file size down from 23 KB to 12.3 KB, a reduction of nearly 50%.
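
If you prefer to script this conversion instead of using GIMP’s export dialog, ImageMagick can produce a comparable JPG (a sketch; the -background and -flatten options replace any remaining transparency with white, since JPG has no alpha channel):

$ convert cinnamon.png -background white -flatten -quality 90 cinnamon.jpg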

PNG vs. JPG: quality and compression rate

So what about the rest of the images? This method would work for all the other pictures, except the Fedora “flavor” logos and the logos for the four foundations. Those are presented on a white background.

One of the main differences between PNG and JPG is that JPG has no alpha channel, so it can’t handle transparency. If you rework these images by placing them on a white background and saving them as JPG, you can reduce the file size from 40.7 KB to 28.3 KB.

Now there are four more images you can rework: the backgrounds. For the grey background, set the mode to greyscale again. With this bigger picture, the savings are also bigger. It shrinks from 216.2 KB to 51.0 KB, now barely 25% of its original size. All in all, you’ve shrunk 481.1 KB down to 191.5 KB, only 39.8% of the starting size.

Quality vs. Quantity

Another difference between PNG and JPG is the quality. PNG is a losslessly compressed raster graphics format, while JPG loses image detail through compression, which affects quality. That doesn’t mean you shouldn’t use JPG, though. But you have to find a balance between file size and quality.

Achievement

This is the end of Part 1. After following the techniques described above, here are the results:

You brought the image size down to 488.9 KB, versus 1.2 MB at the start. That’s only about a third of the size, just through optimizing the images. This page can probably be made to load faster still. On the scale from snail to hypersonic, it hasn’t reached racing car speed yet!

Finally you can check the results in Google Insights, for example:

In the Mobile area, the page gained 10 points in scoring, but is still in the Medium sector. It looks totally different for Desktop, which went from 62/100 up to 91/100, into the Good range. As mentioned before, this test isn’t the be-all and end-all. Consider scores such as these to help you go in the right direction. Keep in mind you’re optimizing for the user experience, and not for a search engine.