5 Screen Recorders for the Linux Desktop

There are so many reasons why you might need to record your Linux desktop. The two most important are for training and for support. If you are training users, a video recording of the desktop can go a long way to help them understand what you are trying to impart. Conversely, if you’re having trouble with one aspect of your Linux desktop, recording a video of the shenanigans could mean the difference between solving the problem and not. But what tools are available for the task? Fortunately, for every Linux user (regardless of desktop), there are options available. I want to highlight five of my favorite screen recorders for the Linux desktop. Among these five, you are certain to find one that perfectly meets your needs. I will only be focusing on those screen recorders that save as video. What video format you prefer may or may not dictate which tool you select.

And, without further ado, let’s get on with the list.

Simple Screen Recorder

I’m starting out with my go-to screen recorder. I use Simple Screen Recorder on a daily basis, and it never lets me down. This particular take on the screen recorder is available for nearly every flavor of Linux and is, as the name implies, very simple to use. With Simple Screen Recorder you can select a single window, a portion of the screen, or the entire screen to record. One of the best features of Simple Screen Recorder is the ability to save profiles (Figure 1), which allows you to configure the input for a recording (including scaling, frame rate, width, height, left edge and top edge spacing, and more). By saving profiles, you can easily use a specific profile to meet a unique need, without having to go through the customization every time. This is handy for those who do a lot of screen recording, with different input variables for specific jobs.

Simple Screen Recorder also:

  • Records audio input

  • Allows you to pause and resume recording

  • Offers a preview during recording

  • Allows for the selection of video containers and codecs

  • Adds timestamp to file name (optional)

  • Includes hotkey recording and sound notifications

  • Works well on slower machines

  • And much more

Simple Screen Recorder is one of the most reliable screen recording tools I have found for the Linux desktop. It can be installed from the standard repositories on many distributions, or via the easy-to-follow instructions on the application download page.


gtk-recordmydesktop

The next entry, gtk-recordmydesktop, doesn’t give you nearly the options found in Simple Screen Recorder, but it does offer a command line component (for those who prefer not working with a GUI). The simplicity that comes along with this tool also means you are limited to a specific video output format (.ogv). That doesn’t mean gtk-recordmydesktop is without appeal. In fact, a few features make this option in the genre fairly appealing. First and foremost, it’s very simple to use. Second, the record window automatically gets out of your way while you record (as opposed to Simple Screen Recorder, where you need to minimize the recording window when recording full screen). Another feature found in gtk-recordmydesktop is the ability to have the recording follow the mouse (Figure 2).

Unfortunately, the follow-the-mouse feature doesn’t always work as expected, so chances are you’ll be using the tool without this interesting option. In fact, if you opt to go the gtk-recordmydesktop route, you should understand that the GUI frontend isn’t nearly as reliable as the command line version of the tool. From the command line, you could record a specific portion of the screen like so:

recordmydesktop -x X_POS -y Y_POS --width WIDTH --height HEIGHT -o FILENAME.ogv


  • X_POS is the offset on the X axis

  • Y_POS is the offset on the Y axis

  • WIDTH is the width of the screen to be recorded

  • HEIGHT is the height of the screen to be recorded

  • FILENAME is the name of the file to be saved

To find out more about the command line options, issue the command man recordmydesktop and read through the manual page.
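As a quick, hypothetical illustration of the options above, a tiny wrapper script can assemble the command for you. The region and filename here are made-up example values, and the script only prints the command, so nothing is recorded until you run it yourself:

```shell
#!/bin/bash
# Sketch: assemble a recordmydesktop invocation for an arbitrary region.
# All the values below are made-up examples, not recommendations.
x_pos=0; y_pos=0
width=1280; height=720
outfile=demo.ogv

cmd="recordmydesktop -x $x_pos -y $y_pos --width $width --height $height -o $outfile"
echo "$cmd"   # print the command; drop the echo to actually record
```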


Kazam

If you’re looking for a bit more than just a recorded screencast, you might want to give Kazam a go. Not only can you record a standard screen video (with the usual, albeit limited, bells and whistles), you can also take screenshots and even broadcast video to YouTube Live (Figure 3).

Kazam falls in line with gtk-recordmydesktop when it comes to features. In other words, it’s slightly limited in what it can do. However, that doesn’t mean you shouldn’t give Kazam a go. In fact, Kazam might be one of the best screen recorders out there for new Linux users, as this app is pretty much point-and-click all the way. But if you’re looking for serious bells and whistles, look elsewhere.

The version of Kazam with broadcast support can be found in the ppa:sylvain-pineau/kazam repository. For Ubuntu (and Ubuntu-based distributions), install it with the following commands:

sudo apt-add-repository ppa:sylvain-pineau/kazam
sudo apt-get update
sudo apt-get install kazam -y


Vokoscreen

The Vokoscreen recording app is for new-ish users who need more options. Not only can you configure the output format and the video/audio codecs, you can also configure it to work with a webcam (Figure 4).

As with almost every screen recording tool, Vokoscreen allows you to specify what on your screen to record. You can record the full screen (even selecting which display on multi-display setups), a window, or an area. Vokoscreen also allows you to select a magnification level (200×200, 400×200, or 600×200). The magnification window follows your mouse, which makes it a great tool for highlighting a specific section of the screen.

Like all the other tools, Vokoscreen can be installed from the standard repositories or cloned from its GitHub repository.

OBS Studio

For many, OBS Studio will be considered the mack daddy of all screen recording tools. Why? Because OBS Studio is as much a broadcasting tool as it is a desktop recording tool. With OBS Studio, you can broadcast to YouTube, Smashcast, DailyMotion, Facebook Live, Twitter, and more. In fact, OBS Studio should seriously be considered the de facto standard for live broadcasting the Linux desktop.

Upon installation (the software is only officially supported for Ubuntu Linux 14.04 and newer), you will be asked to walk through an auto-configuration wizard, where you set up your streaming service (Figure 5). This is, of course, optional; however, if you’re using OBS Studio, chances are streaming is exactly why, so you won’t want to skip configuring your default stream.

I will warn you: OBS Studio isn’t exactly for the faint of heart. Plan on spending a good amount of time getting the streaming service up and running and getting up to speed with the tool. But for anyone needing such a solution for the Linux desktop, OBS Studio is what you want. Oh … it can also record your desktop screencast and save it locally.

There’s More Where That Came From

This is a short list of screen recording solutions for Linux. Although there are plenty more where these came from, you should be able to fill all your desktop recording needs with one of these five apps.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Create a Fully Automated Light and Music Show for the Holidays: Part 1

This tutorial series from our archives explains how to build a fully automated holiday display with Raspberry Pi. 

Christmas has always been one of my favorite festivals, and this year it’s special because I’m planning a massive project to decorate my house using open source projects. There will be WiFi-controlled lights and a music show, there will be a talking moose singing Christmas carols (powered by Raspberry Pi, Arduino, and some servo motors), there will be a magical musical Christmas tree, and much more.

I built a music-light show for Halloween, but I improved it and added more features as I worked on the Christmas project. In this series, I’ll provide comprehensive instructions to build a fully automated Christmas music/light show that turns on automatically at a given time or that you can plug and play.

Caveat: This project involves working with 110V AC, so take on this project only if you have experience with high voltage and understand the necessary safety precautions.

I spent weeks finding just the right parts below to create my setup. You can use your own creativity when selecting the parts that you need.

What you need:

  1. Raspberry Pi 3

  2. Micro SD card 8GB

  3. 5v charger (2 units)

  4. Male to female breadboard jumper wires

  5. 1 power cable

  6. 8-channel solid state relay

  7. Four gang device box

  8. Duplex receptacle (4 pack)

  9. Receptacle wall plate

  10. Single core wire (red, white & black)

  11. Wood board

  12. Push switch

  13. Soldering iron

Get started with Pi

We need to install an operating system on our Pi, and we will be using Raspbian. First, let’s prepare the Micro SD card for Raspbian. Plug the card into your PC and open the Terminal; we are going to format the Micro SD card as FAT32.

Run the lsblk command to list the block devices so you can get the block device name of the Micro SD card:

lsblk
In my case, it was mmcblk0. Once you have the block device name, run the parted command as sudo:

sudo parted /dev/mmcblk0

Once you are inside the parted utility, you will notice the (parted) prompt on the command line. Now create the partition table:

mklabel msdos

Then, create one partition:

mkpart primary fat32 1MiB 100%

And exit the parted utility:

quit
Again run the lsblk command to find the name of the partition that you just created:

lsblk
In my case, the partition on the ‘mmcblk0’ block device was ‘mmcblk0p1’. We are going to format this partition with the FAT32 file system:

sudo mkfs.vfat /dev/mmcblk0p1
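The steps above can also be collected into a single dry-run plan using parted’s -s (script) flag. This is only a sketch: /dev/mmcblk0 is the device name from my machine, so always confirm yours with lsblk first, because pointing these commands at the wrong disk will destroy its contents. The snippet just prints the plan rather than executing it:

```shell
#!/bin/bash
# Dry-run sketch of the three partitioning steps above, collected in one place.
# /dev/mmcblk0 is an example device name -- always confirm yours with lsblk first,
# because running these against the wrong disk will destroy its contents.
device=/dev/mmcblk0

plan="sudo parted -s $device mklabel msdos
sudo parted -s $device mkpart primary fat32 1MiB 100%
sudo mkfs.vfat ${device}p1"

echo "$plan"   # review the plan; run the lines by hand when you are sure
```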

Our Micro SD card is ready. Let’s download the zip file of the official image of NOOBS from this page. Then unzip the contents of the downloaded file into the root of the Micro SD card. First, change directory to the Micro SD card, then unzip:

cd path_of_microsd_card
unzip path_of_noobs_zip

Open the Micro SD card in a file manager to make sure that all files are in the root folder of the card.

Prepare your Pi

Connect an HDMI monitor, keyboard and mouse to the Pi. Plug in the Micro SD card and then connect the 5V power supply. NOOBS will boot up and you will see this screen:


Select Raspbian from the list and click the Install button. The system will reboot after a successful installation. Once you boot into Raspbian, open the wireless settings from the top bar and connect to your wireless network.

We will be using our Raspberry Pi in headless mode so that we can manage the Christmas music show remotely from a PC, laptop, or mobile device. Before enabling SSH, however, we need to know the IP address of our Pi so that we can log into it remotely. Open the Terminal app and run the following command:

ifconfig
Note down the IP address listed under ‘wlan0’.

Once you have the IP address, open the configuration file of Raspbian by running the following command in the Terminal:

sudo raspi-config

Go to Advanced > SSH and select ‘Yes’ to enable SSH server.

(Note: use the arrow and enter keys to navigate and select; the mouse won’t work here)


We will also change audio settings to get the audio output through the 3.5mm audio jack instead of HDMI. In Advanced Options, go to Audio and select the second option ‘Force 3.5mm (‘headphone’) jack’, then select ‘Ok’.


Select ‘Finish’ in the main window and then reboot the system.

sudo reboot

You can now unplug the HDMI monitor as we will do the rest of the installation and configuration over SSH. Open the terminal app on your PC or laptop and then ssh into the Pi:

ssh pi@<IP address of Pi>
In my case it was:

ssh pi@

Then enter the password for the Pi: ‘raspberry’.

This is the default password for the pi user; if you want to change it, you can do so from the raspi-config tool.

Now it’s time to update your system:

sudo apt-get update
sudo apt-get dist-upgrade

It’s always a good idea to reboot your system if there are any kernel updates:

sudo reboot

In the next article, I’ll show how to set up the light show portion of our project, and in part 3, we’ll wrap it all up with some sound.

For 5 more fun projects for the Raspberry Pi 3, including a holiday light display and Minecraft Server, download the free E-book today!

Read about other Raspberry Pi projects:

5 Fun Raspberry Pi Projects: Getting Started

How to Build a Minecraft Server with Raspberry Pi 3

Build Your Own Netflix and Pandora With Raspberry Pi 3

Turn Raspberry Pi 3 Into a Powerful Media Player With RasPlex


Open Source Compliance Projects Unite Under New ACT Group

As open source software releases and customer adoption continue to increase, many companies underestimate what’s involved in going open source. It’s not only a matter of making the encouraged, but optional, upstream contributions to FOSS projects, but also of complying with the legal requirements of open source licenses. Software increasingly includes a diverse assortment of open source code under a variety of licenses, as well as a mix of proprietary code. Sorting it all out can be a major hassle, but the alternative is potential legal action and damaged relations with the open source community.

The Linux Foundation has just launched an Automated Compliance Tooling (ACT) project to help companies comply with open source licensing requirements. The new group consolidates its existing FOSSology and Software Package Data Exchange (SPDX) projects and adds two new projects: Endocode’s QMSTR for integrating open source compliance toolchain within build systems and VMware’s Tern, an inspection tool for identifying open source components within containers.

Announced at this week’s Open Compliance Summit in Yokohama, Japan, the ACT umbrella organization aims to “consolidate investment in, and increase interoperability and usability of, open source compliance tooling,” says the project.

“There are numerous open source compliance tooling projects but the majority are unfunded and have limited scope to build out robust usability or advanced features,” stated Kate Stewart, Senior Director of Strategic Programs at The Linux Foundation. “We have also heard from many organizations that the tools that do exist do not meet their current needs. Forming a neutral body under The Linux Foundation to work on these issues will allow us to increase funding and support for the compliance tooling development community.” 

The four ACT projects, with links to their websites, include:

  • FOSSology — This early project for improving open source compliance was adopted by the Linux Foundation in 2015. The FOSSology project maintains and updates a FOSSology open source license compliance software system and toolkit. The software lets users quickly run license and copyright scans from the command line and generate an SPDX file — a format used to share data about software licenses and copyrights. FOSSology includes a database and web UI for easing compliance workflow, as well as license, copyright, and export scanning tools. Users include Arm, HP, HP Enterprise, Siemens, Toshiba, Wind River, and others.

  • SPDX — The Software Package Data Exchange project maintains the SPDX file format for communicating software Bill of Material (BoM) information including components, licenses, copyrights, and security references. The SPDX project was spun off from FOSSology as a Linux Foundation project in 2011 and is now reunited under ACT. In 2015, SPDX 2.0 added improved tracking of complex open source license dependencies. In 2016, SPDX 2.1 standardized the inclusion of additional data in generated files and added a syntax for accurate tagging of source files with license list identifiers. The latest 2.1.15 release offers support for deprecated license exceptions. The SPDX spec will “remain separate from, yet complementary to, ACT, while the SPDX tools that meet the spec and help users and producers of SPDX documents will become part of ACT,” says the project.

  • QMSTR — Also known as Quartermaster, QMSTR was developed by Endocode and is now hosted by ACT. QMSTR creates an open source toolchain that integrates into build systems to implement best practices for license compliance management. QMSTR identifies software products, sources, and dependencies, and can be used to verify outcomes, review problems and produce compliance reports. “By integrating into DevOps CI/CD cycles, license compliance can become a quality metric for software development,” says ACT.

  • Tern — This VMware hosted project for ensuring compliance in container technology is now part of the ACT family. Tern is an inspection tool for discovering the metadata of packages installed in container images. Tern “provides a deeper understanding of a container’s bill of materials so better decisions can be made about container based infrastructure, integration and deployment strategies,” says ACT.
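As a concrete sketch of the per-file tagging that SPDX 2.1 standardized, the tag is a single comment near the top of a source file, which compliance tools can then scan for. The file path and MIT license below are made-up examples:

```shell
#!/bin/bash
# Create a hypothetical source file carrying an SPDX tag, then extract the tag,
# the way a license-scanning tool might.
printf '#!/bin/bash\n# SPDX-License-Identifier: MIT\necho hello\n' > /tmp/spdx_demo.sh
spdx_tag=$(grep -o 'SPDX-License-Identifier: .*' /tmp/spdx_demo.sh)
echo "$spdx_tag"
```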

The ACT project aligns with two related Linux Foundation projects: OpenChain, which just welcomed Google, Facebook, and Uber as platinum members, and the Open Compliance Program. In 2016, the OpenChain project released OpenChain 1.0 with a focus on tracking open source compliance along supply chains. The project also offers other services including OpenChain Curriculum for teaching best practices.

The Open Compliance Program hosts the Open Compliance Summit. It also offers best practices information, legal guidance, and training courses for developers. The program helps companies understand their license requirements and “how to build efficient, frictionless and often automated processes to support compliance,” says the project.

ACT has yet to launch a separate website but has listed an email address for more information.


Cloud Foundry, Cloud Native, and Entering a Multi-Platform World with Abby Kearns

2018 has been an amazing year for Cloud Foundry, with Alibaba joining as a Gold member, and Pivotal going public with its IPO, among some of the highlights. I recently talked with Abby Kearns, Executive Director of Cloud Foundry Foundation, to reflect on these milestones and more.

Kearns has been part of the Cloud Foundry ecosystem for the past five years and, under her leadership, Cloud Foundry has grown and evolved and found its way into half of the Fortune 500 companies, with those numbers increasing daily.

All of the major public cloud vendors want to be part of the ecosystem. “This year, we saw Alibaba join as a Gold member, and Cloud Foundry is now natively available on Alibaba Cloud,” said Kearns.

In 2017, Cloud Foundry embraced Kubernetes, the hottest open source project, and created CFCR (Cloud Foundry Container Runtime). “Kubernetes is a great technology that brings tons of capabilities to containers, which are the fundamental building blocks for a lot of portability for cloud native apps,” Kearns said.

Watch the video interview at The Linux Foundation


Convincing Your Manager That Upstreaming Is In Their Best Interest

In an ideal world, everyone would implicitly understand that it just makes good business sense to upstream some of the modifications made when creating your Linux powered devices. Unfortunately, this is a long way from being common knowledge.

By Martyn Welch, Senior Software Engineer at Collabora.

Many managers still need convincing that upstreaming is, in fact, in their best interests, so let’s look at why it makes good business sense to upstream some of the modifications made when creating your Linux-powered devices.

Just so that we are clear, I’m not suggesting here that your next Linux-powered device should be an entirely open design. We live in the real world, and unless your explicit aim is to produce a completely open platform, doing so is unlikely to be good for your company’s profitability. What does make sense, however, is to protect the parts of your product that drive your value proposition, while looking for ways to reduce costs in places that don’t drive the value add or unique selling point. This is where upstreaming and open source can offer you a massive advantage, if done right.

Say you have a new product in development, with a number of cool features to implement that you hope will drive customers to your door. You also have a new hardware design, thanks to the hardware guys who have discovered some funky new devices that optimise and improve this new design. You’ve also picked up the SoC vendor’s slightly outdated kernel tree and discovered that a number of these devices already have some support in the kernel. Awesome. For others, there is no support, either in the vendor’s tree or in the mainline tree, so backporting isn’t an option, and you’re looking to write some drivers. You’ve heard something about upstreaming and would like to give it a go, but you’re wondering if this is a good idea. Is it going to help your company? Well, the answer is generally “yes”.

Upstreaming is the process of submitting the changes you have made, typically to existing open source projects, so that they become part of the main (or upstream) codebase. These may be changes to support specific hardware (usually kernel-level changes), changes to fix bugs that you’ve exposed via your specific use case, or additional features that extend existing libraries you use in your project.

Upstreaming provides you with a number of tangible advantages which can be used as rationale to help convince your management:

  • You gain at least one 3rd party review, by a domain expert, giving you confidence in the quality of your changes.
  • You decrease your delta with the upstream codebase, reducing the maintenance burden of your product (you do security updates, right?), both when providing product updates and potentially when creating the next version of your product.
  • You receive community-suggested improvements, providing you with ways to reduce your code size whilst simultaneously increasing available features.

Let’s use the Linux kernel as an example (one which many product developers will likely need to modify) of how these benefits manifest, as this is the project that I am familiar with.

Changes submitted to open source projects are not blindly accepted. Projects need to take some care that the changes are not going to negatively impact other users of the project that may have other use cases, and must also ensure that the changes are both sensible and done in a way that safeguards how the project can be maintained in the future. As a result, changes may need to be altered before being accepted, but such changes are likely to have a positive impact on your modifications.

The reviewer (who is very likely to be an expert in the area in which you are making changes) may be able to point out existing infrastructure that can be used to reduce code length and increase code reuse, or recommend changes that remove potential race conditions or fix bugs that were not triggered during your testing. As the kernel (like most projects) expects a specific code style, there may be requests to change code to meet these requirements, as a consistent code style makes maintenance of the code easier. Once merged, the maintainer will be taking on the burden of maintaining this code, so they will want to ensure this can be done efficiently.

The upstream Linux kernel codebase is modified at a very fast pace, with a change merged at a rate of roughly one every seven minutes. Different parts of the kernel develop at different rates, however; some see a higher rate of change while others undergo little to no change at all. Should you carry local patches, there is an increasing likelihood over time that these will become incompatible with the ever-evolving kernel.

This means your developers will need to spend time making modifications to the local patches when updating the software stack on an existing product, or when attempting to re-use these patches on a new product. Conversely, when local patches are applied upstream, existing code will be changed when APIs change, generally resulting in the modifications continuing to function as required in your use case without any engineering effort on your behalf.

Once a driver is added to the kernel, for example, others may add features to the driver that weren’t of immediate use to you. As your requirements change and grow for updates and subsequent revisions however, such changes may prove very useful to you and would be available with minimal work. A well documented example of this is Microsoft’s submission of hyper-V support. This large body of work was initially added to the “staging” area, an area where drivers that aren’t ready for full inclusion in the kernel can be put to enable them to be modified and improved with the help of the community. Whilst in the staging area the drivers were greatly improved, the drivers were modified to comply with the Linux Driver Model, reducing the code line count by 60% whilst simultaneously significantly improving performance and stability.

Of course, there are also less tangible reasons for contributing upstream. As a company, if you are planning to utilise Linux and other free software in your products, it is likely that you will want to hire talented, experienced developers to help you create your products. Contributions made to open source projects relevant to you are likely to be noticed by the very same developers that you hope to attract to your company, and will also reflect well on your company should they be looking for new opportunities or should these developers be asked if they have any recommendations for good places to work.

Submitting upstream, and contributing to an open source project, can also be a very rewarding experience for existing employees. By actively participating in a project on which your products are based, they not only gain direct access to the community behind the project, but also get a better understanding of the project’s inner workings, enabling them to build future products more efficiently and confidently.


Bash Variables: Environmental and Otherwise

Bash variables, including those pesky environment variables, have popped up several times in previous articles, and it’s high time you got to know them better and learned how they can help you.

So, open your terminal window and let’s get started.

Environment Variables

Consider HOME. Apart from the cozy place where you lay down your hat, in Linux it is a variable that contains the path to the current user’s home directory. Try this:

echo $HOME

This will show the path to your home directory, usually /home/<your username>.

As the name indicates, variables can change according to the context. Indeed, each user on a Linux system will have a HOME variable containing a different value. You can also change the value of a variable by hand:

HOME=/home/<your username>/Documents

will make HOME point to your Documents/ folder.

There are three things to notice here:

  1. There are no spaces between the name of the variable and the = or between the = and the value you are putting into the variable. Spaces have their own meaning in the shell and cannot be used any old way you want.
  2. If you want to put a value into a variable or manipulate it in any way, you just have to write the name of the variable. If you want to see or use the contents of a variable, you put a $ in front of it.
  3. Changing HOME is risky! A lot of programs rely on HOME to do stuff, and changing it can have unforeseeable consequences. For example, just for laughs, change HOME as shown above and try typing cd and then [Enter]. As we have seen elsewhere in this series, you use cd to change to another directory. Without any parameters, cd takes you to your home directory. If you change the HOME variable, cd will take you to the new directory HOME points to.
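A minimal sketch of points 1 and 2 above (the path and the username “alice” are made-up examples):

```shell
#!/bin/bash
# Point 1: no spaces around = when assigning.
# Point 2: put $ in front of the name when reading the value.
place="/home/alice/Documents"   # "alice" is a hypothetical username
echo "$place"
```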

Changes to environment variables like the one described in point 3 above are not permanent. If you close your terminal and open it back up, or even open a new tab in your terminal window and move there, echo $HOME will show its original value.

Before we go on to how you make changes permanent, let’s look at another environment variable that it does make sense to change.


PATH

The PATH variable lists directories that contain executable programs. If you ever wondered where your applications go when they are installed and how come the shell seems to magically know which programs it can run without you having to tell it where to look for them, PATH is the reason.

Have a look inside PATH and you will see something like this:

$ echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Each directory is separated by a colon (:) and if you want to run an application installed in any directory other than the ones listed in PATH, you will have to tell the shell where to find it:

/home/<user name>/bin/<program name>

This will run a program called <program name> that you have copied into a bin/ directory in your home directory.

This is a common problem: you don’t want to clutter up your system’s bin/ directories, or you don’t want other users running your own personal scripts, but you don’t want to have to type out the complete path every time you need to run a script you use often. The solution is to create your own bin/ directory in your home directory:

mkdir $HOME/bin

And then tell PATH all about it:

PATH=$PATH:$HOME/bin
After that, your /home/<your username>/bin will show up in your PATH variable. But… Wait! We said that the changes you make in a given shell will not last and will lose effect when that shell is closed.

To make changes permanent for your user, instead of running them directly in the shell, put them into a file that gets run every time a shell is started. That file already exists and lives in your home directory. It is called .bashrc and the dot in front of the name makes it a hidden file — a regular ls won’t show it, but ls -a will.

You can open it with a text editor like kate, gedit, nano, or vim (NOT LibreOffice Writer; that’s a word processor, a different beast entirely). You will see that .bashrc is full of shell commands whose purpose is to set up the environment for your user.

Scroll to the bottom and add the following on a new, empty line:

export PATH=$PATH:$HOME/bin

Save and close the file. You’ll be seeing what export does presently. In the meantime, to make sure the changes take effect immediately, you need to source .bashrc:

source .bashrc

What source does is execute .bashrc for the current open shell, and all the ones that come after it. The alternative would be to log out and log back in again for the changes to take effect, and who has the time for that?
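You can watch source do its thing with a throwaway file (a hypothetical example, not something you need in your real setup):

```shell
#!/bin/bash
# Variables set in a sourced file land in the current shell.
echo 'greeting="hello from the sourced file"' > /tmp/demo_rc
source /tmp/demo_rc
echo "$greeting"
```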

From now on, your shell will find every program you dump in /home/<your username>/bin without you having to specify the whole path to the file.
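Putting the whole recipe together, here is a sketch that creates the bin/ directory, drops in a hypothetical script called hello_demo, adds the directory to PATH for the current shell, and then runs the script by name alone:

```shell
#!/bin/bash
mkdir -p "$HOME/bin"                       # -p: no error if it already exists

# hello_demo is a made-up example script
printf '#!/bin/bash\necho "it works"\n' > "$HOME/bin/hello_demo"
chmod +x "$HOME/bin/hello_demo"

PATH=$PATH:$HOME/bin                       # for this shell; .bashrc makes it stick
hello_demo
```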

DIY Variables

You can, of course, make your own variables. All the ones we have seen have been written with ALL CAPS, but you can call a variable more or less whatever you want.

Creating a new variable is straightforward: just set a value within it:

new_variable="Hello"
And you already know how to recover a value contained within a variable:

echo $new_variable

You often have a program that will require you to set up a variable for things to work properly. The variable may set an option to “on”, or help the program find a library it needs, and so on. When you run a program in Bash, the shell spawns a daughter process. This means it is not exactly the same shell that executes your program, but a related mini-shell that inherits some of the mother’s characteristics. Unfortunately, variables, by default, are not one of them. This is because, by default again, variables are local. This means that, for security reasons, a variable set in one shell cannot be read in another, even if it is a daughter shell.

To see what I mean, set a variable:

robots="R2D2 & C3PO"

… and run:

bash
You just ran a Bash shell program within a Bash shell program.

Now see if you can read the contents of your variable with:

echo $robots

You should draw a blank.

Still inside your bash-within-bash shell, set robots to something different:

robots="These aren't the ones you are looking for"

Check robots' value:

$ echo $robots
These aren't the ones you are looking for

Exit the bash-within-bash shell:

exit
And re-check the value of robots:

$ echo $robots
R2D2 & C3PO

This is very useful for avoiding all sorts of messed-up configurations, but it also presents a problem: if a program requires you to set a variable, but the program can’t access it because Bash executes it in a daughter process, what can you do? That is exactly what export is for.

Try doing the prior experiment, but, instead of just starting off by setting robots="R2D2 & C3PO", export it at the same time:

export robots="R2D2 & C3PO"

You’ll notice that, when you enter the bash-within-bash shell, robots still retains the same value it had at the outset.

Interesting fact: While the daughter process will “inherit” the value of an exported variable, if the variable is changed within the daughter process, changes will not flow upwards to the mother process. In other words, changing the value of an exported variable in a daughter process does not change the value of the original variable in the mother process.
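This one-way inheritance can be sketched in a couple of lines, using bash -c to spawn a throwaway daughter shell:

```shell
# An exported variable is copied into the daughter shell's environment...
export robots="R2D2 & C3PO"
bash -c 'echo "daughter sees: $robots"; robots="changed"'

# ...but the daughter's modification never flows back up to the mother.
echo "mother still has: $robots"
```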

You can see all exported variables by running:

export -p

The variables you create should be at the end of the list. You will also notice some other interesting variables in the list: USER, for example, contains the current user’s user name; PWD points to the current directory; and OLDPWD contains the path to the last directory you visited and since left. That’s because, if you run:

cd -

you will go back to the last directory you visited; cd gets the information from OLDPWD.
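A quick sketch of PWD and OLDPWD in action (the demo directories here are made up for the example):

```shell
# Create two scratch directories and hop between them:
mkdir -p /tmp/demo_a /tmp/demo_b
cd /tmp/demo_a
cd /tmp/demo_b

echo "$PWD"      # where we are now: /tmp/demo_b
echo "$OLDPWD"   # where we just came from: /tmp/demo_a

cd -             # jump back to $OLDPWD; cd prints the destination
```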

You can also see all the environment variables using the env command.

To un-export a variable, use the -n option:

export -n robots
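A short sketch of what -n does: the variable stays set in the current shell, but daughter shells no longer inherit it:

```shell
# Exported, so daughter shells can see it:
export robots="R2D2 & C3PO"

# Un-export it; robots survives, but only in the current shell:
export -n robots
echo "locally: $robots"
echo "daughter sees: [$(bash -c 'printf %s "$robots"')]"   # empty brackets
```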

Next Time

You have now reached a level in which you are dangerous to yourself and others. It is time you learned how to protect yourself from yourself by making your environment safer and friendlier through the use of aliases, and that is exactly what we’ll be tackling in the next episode. See you then.


Libcamera Aims to Make Embedded Cameras Easier

The V4L2 (Video for Linux 2) API has long offered an open source alternative to proprietary camera/computer interfaces, but it’s beginning to show its age. At the Embedded Linux Conference Europe in October, the V4L2 project unveiled a successor called libcamera. V4L2 co-creator and prolific Linux kernel contributor Laurent Pinchart outlined the early-stage libcamera project in a presentation called “Why Embedded Cameras are Difficult, and How to Make Them Easy.”

V4L and V4L2 were developed when camera-enabled embedded systems were far simpler. “Maybe you had a camera sensor connected to a SoC, with maybe a scaler, and everything was exposed via the API,” said Pinchart, who runs an embedded Linux firm called Ideas on Board and is currently working for Renesas. “But when hardware became more complex, we disposed of the traditional model. Instead of exposing a camera as a single device with a single API, we let userspace dive into the device and expose the technology to offer more fine-grained control.”

These improvements were extensively documented, enabling experienced developers to implement more use cases than before. Yet the spec placed much of the burden of controlling the complex API on developers, with few resources available to ease the learning curve. In other words, “V4L2 became more complex for userspace,” explained Pinchart.

The project planned to add a layer called libv4l to address this. The libv4l userspace library was designed to mimic the V4L2 kernel API and expose it to apps “so it could be completely transparent in tracking the code to libc,” said Pinchart. “The plan was to have device specific plugins provided by the vendor and it would all be part of the libv4l file, but it never happened. Even if it had, it would not have been enough.”

Libcamera, which Pinchart describes as “not only a camera library but a full camera stack in user space,” aims to ease embedded camera application development, improving both on V4L2 and libv4l. The core piece is a libcamera framework, written in C++, that exposes kernel driver APIs to userspace. On top of the framework are optional language bindings for languages such as C.

The next layer up is a libcamera application layer that translates to existing camera APIs, including V4L2, Gstreamer, and the Android Camera Framework, which Pinchart said would not contain the usual vendor specific Android HAL code. As for V4L2, “we will attempt to maintain compatibility as a best effort, but we won’t implement every feature,” said Pinchart. There will also be a native libcamera app format, as well as plans to support Chrome OS.

Libcamera keeps the kernel level hidden from the upper layers. The framework is built around the concept of a camera device, “which is what you would expect from a camera as an end user,” said Pinchart. “We will want to implement each camera’s capabilities, and we’ll also have a concept of profiles, which is a higher view of features. For example, you could choose a video or point-and-shoot profile.”

Libcamera will support multiple video streams from a single camera. “In videoconferencing, for example, you might want a different resolution and stream than what you encode over the network,” said Pinchart. “You may want to display the live stream on the screen and, at the same time, capture stills or record video, perhaps at different resolutions.”

Per-frame controls and a 3A API

One major new feature is per-frame controls. “Cameras provide controls for things like video stabilization, flash, or exposure time which may change under different lighting conditions,” said Pinchart. “V4L2 supports most of these controls but with one big limitation. Because you’re capturing a video stream with one frame after another, if you want to increase exposure time you never know precisely at what frame that will take effect. If you want to take a still image capture with flash, you don’t want to activate a flash and receive an image that is either before or after the flash.”

With libcamera’s per-frame controls, you can be more precise. “If you want to ensure you always have the right brightness and exposure time, you need to control those features in a way that is tied to the video stream,” explained Pinchart. “With per-frame controls you can modify all the frames that are being captured in a way that is synchronized with the stream.”

Libcamera also offers a novel approach to a given camera’s 3A controls, such as auto exposure, autofocus, and auto white balance. To provide a 3A control loop, “you can have a simple implementation with 100 lines of code that will give you barely usable results or an implementation based on two or three years of development by device vendors where they really try to optimize the image quality,” said Pinchart. Because most SoC vendors refuse to release the 3A algorithms that run in their ISPs with an open source license, “we want to create a framework and ecosystem in which open source re-implementations of proprietary 3A algorithms will be possible,” said Pinchart.

Libcamera will provide a 3A API that will translate between standard camera code and a vendor specific component. “The camera needs to communicate with kernel drivers, which is a security risk if the image processing code is closed source,” said Pinchart. “You’re running untrusted 3A vendor code, and even if they’re not doing something behind your back, it can be hacked. So we want to be able to isolate the closed source component and make it operate within a sandbox. The API can be marshaled and unmarshaled over IPC. We can limit the system calls that are available and prevent the sandboxed component from directly accessing the kernel driver. Sandboxing will ensure that all the controls will have to go through our API.”

The 3A API, combined with libcamera’s sandboxing approach, may encourage more SoC vendors to further expose their ISPs, just as some have begun to open up their GPUs. “We want the vendors to publish open source camera drivers that expose and document every control on the device,” he said. “When you are interacting with a camera, a large part of that code is device agnostic. Vendors implement a completely closed source camera HAL and supply their own buffer management and memory location and other tasks that don’t add any value. It’s a waste of resources. We want as much code as possible that can be reused and shared with vendors.”

Pinchart went on to describe libcamera’s cam device manager, which will support hot plugging and unplugging of cameras. He also explained libcamera’s pipeline handler, which controls memory buffering and communications between MIPI-CSI or other camera receiver interfaces and the camera’s ISP.

“Our pipeline handler takes care of the details so the application doesn’t have to,” said Pinchart. “It handles scheduling, configuration, signal routing, the number of streams, and locating and passing buffers.” The pipeline handler is flexible enough to support an ISP with an integrated CSI receiver (and without a buffer pool) or other complicated ISPs that can have a direct pipeline to memory.

Watch Pinchart’s entire ELC talk for more details.


Demystifying Kubernetes Operators with the Operator SDK: Part 1

You may have heard about the concept of custom Operators in Kubernetes. You may have even read about the CoreOS operator-sdk, or tried walking through the setup. The concept is cool: Operators can help you extend Kubernetes functionality to include managing any stateful applications your organization uses. They can ideally help you move away from manual human intervention at runtime for things like upgrades, node recovery, and resizing a cluster. But after reading a bit on the topic, you may secretly still be mystified as to what operators exactly do, and how all the components work together.

In this article, we will demystify what an operator is, and how the CoreOS operator-sdk translates its input to the code that is then run as an operator. In this step-by-step tutorial, we will create a general example of an operator, with a few bells and whistles beyond the functionality shown in the operator-sdk user guide. By the end, you will have a solid foundation for how to build a custom operator that can be applied to real-world use cases.

Hello Operator, could you tell me what an Operator is?

To describe what an operator does, let’s go back to Kubernetes architecture for a bit. Kubernetes is essentially a desired state manager. You give it a desired state for your application (number of instances, disk space, image to use, etc.) and it attempts to maintain that state should anything get out of whack. Kubernetes uses what’s called a control plane on its master node. The control plane includes a number of controllers whose job is to reconcile against the desired state in the following way:

  • Monitor existing K8s objects (Pods, Deployments, etc.) to determine their state

  • Compare it to the K8s yaml spec for the object

  • If the state is not the same as the spec, the controller will attempt to remedy this

A common scenario where reconciling takes place is when a Pod is defined with three replicas. One goes down, and the K8s controller, which is watching, recognizes that there should be three Pods running, not two. It then works to create a new instance of the Pod.
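As a sketch of the desired state in that scenario, the replica count is just a field in the object's spec that the controller reconciles against (all the names here are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app            # hypothetical application name
spec:
  replicas: 3             # the desired state: three running Pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: my-app:1.0 # hypothetical image
```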

This simplified diagram shows the role of controllers in Kubernetes architecture as follows.

  • The Kubectl CLI sends an object spec (Pod, Deployment, etc.) to the API server on the Master Node to run on the cluster

  • The Master Node will schedule the object to run (not shown)

  • Once running, a Controller will continuously monitor the object and reconcile it against the spec

In this way, Kubernetes works great to take much of the manual work out of maintaining the runtime for stateless applications. Yet it is limited in the number of object types (Pods, Deployments, Namespaces, Services, DaemonSets, etc.) that it will natively maintain. Each of these object types has a predetermined behavior and way of reconciling against their spec should they break, without much deviation in how they are handled.

Now, what if your application has a bit more complexity and you need to perform a custom operation to bring it to a desired running state?

Think of a stateful application. You have a database application running on several nodes. If a majority of nodes go down, you’ll need to reload the database from a specific snapshot following specific steps. Using existing object types and controllers in Kubernetes, this would be impossible to achieve. Or think of scaling nodes up, or upgrading to a new version, or disaster recovery for our stateful application. These kinds of operations often need very specific steps, and typically require manual intervention.

Enter operators.

Operators extend Kubernetes by allowing you to define a Custom Controller to watch your application and perform custom tasks based on its state (a perfect fit to automate maintenance of the stateful application we described above). The application you want to watch is defined in Kubernetes as a new object: a Custom Resource (CR) that has its own yaml spec and object type (in K8s, a kind) that is understood by the API server. That way, you can define any specific criteria in the custom spec to watch out for, and reconcile the instance when it doesn’t match the spec. The way an operator’s controller reconciles against a spec is very similar to native Kubernetes’ controllers, though it is using mostly custom components.
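To make this concrete, here is a sketch of what a CR instance might look like; the kind and every field are hypothetical, defined by whatever Custom Resource Definition you register with the API server:

```yaml
apiVersion: example.com/v1alpha1   # hypothetical API group and version
kind: MyDatabase                   # the custom kind understood by the API server
metadata:
  name: example-db
spec:
  size: 3                  # custom field the controller reconciles against
  snapshot: latest-backup  # custom recovery behavior the operator implements
```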

Note the primary difference in our diagram from the previous one is that the Operator is now running the custom controller to reconcile the spec. While the API Server is aware of the custom controller, the Operator runs independently, and can run either inside or outside the cluster.


Because Operators are a powerful tool for stateful applications, we are seeing a number of pre-built operators from CoreOS and other contributors for things like etcd, Vault, and Prometheus. And while these are a great starting point, the value of your operator really depends on what you do with it: what your best practice is for failed states, and how the operator functionality may have to work alongside manual intervention.

Dial it in: Yes, I’d like to try Building an Operator

Based on the above diagram, in order to create our custom Operator, we’ll need the following:

  1. A Custom Resource (CR) spec that defines the application we want to watch, as well as an API for the CR

  2. A Custom Controller to watch our application

  3. Custom code within the new controller that dictates how to reconcile our CR against the spec

  4. An Operator to manage the Custom Controller

  5. A deployment for the Operator and Custom Resource

All of the above could be created by writing Go code and specs by hand, or using a tool like kubebuilder to generate Kubernetes APIs. But the easiest route (and the method we’ll use here) is generating the boilerplate for these components using the CoreOS operator-sdk. It allows you to generate the skeleton for the spec, the controller, and the operator, all via a convenient CLI. Once generated, you define the custom fields in the spec and write the custom code to reconcile against the spec. We’ll walk through each of these steps in the next part of the tutorial.

Toye Idowu is a Platform Engineer at Kenzan Media.


5 Minimal Web Browsers for Linux

There are so many reasons to enjoy the Linux desktop. One reason I often state up front is the almost unlimited number of choices to be found at almost every conceivable level. From how you interact with the operating system (via a desktop interface), to how daemons run, to what tools you use, you have a multitude of options.

The same thing goes for web browsers. You can use anything from open source favorites, such as Firefox and Chromium, to closed-source industry darlings like Vivaldi and Chrome. Those options are full-fledged browsers with every possible bell and whistle you’ll ever need. For some, these feature-rich browsers are perfect for everyday needs.

There are those, however, who prefer using a web browser without all the frills. In fact, there are many reasons why you might prefer a minimal browser over a standard browser. For some, it’s about browser security, while others look at a web browser as a single-function tool (as opposed to a one-stop shop application). Still others might be running low-powered machines that cannot handle the requirements of, say, Firefox or Chrome. Regardless of the reason, Linux has you covered.

Let’s take a look at five of the minimal browsers that can be installed on Linux. I’ll be demonstrating these browsers on the Elementary OS platform, but each of these browsers is available for nearly every distribution in the known Linuxverse. Let’s dive in.


GNOME Web

GNOME Web (codename Epiphany, which means “a usually sudden manifestation or perception of the essential nature or meaning of something”) is the default web browser for Elementary OS, but it can be installed from the standard repositories. (Note, however, that the recommended installation of Epiphany is via Flatpak or Snap). If you choose to install via the standard package manager, issue a command such as sudo apt-get install epiphany-browser -y for successful installation.

Epiphany uses the WebKit rendering engine, which is the same engine used in Apple’s Safari browser. Couple that rendering engine with the fact that Epiphany has very little in terms of bloat to get in the way, and you will enjoy very fast page-rendering speeds. Epiphany development follows strict adherence to the following guidelines:

  • Simplicity – Feature bloat and user interface clutter are considered evil.

  • Standards compliance – No non-standard features will ever be introduced to the codebase.

  • Software freedom – Epiphany will always be released under a license that respects freedom.

  • Human interface – Epiphany follows the GNOME Human Interface Guidelines.

  • Minimal preferences – Preferences are only added when they make sense and after careful consideration.

  • Target audience – Non-technical users are the primary target audience (which helps to define the types of features that are included).

GNOME Web is as clean and simple a web browser as you’ll find (Figure 1).

The GNOME Web manifesto reads:

A web browser is more than an application: it is a way of thinking, a way of seeing the world. Epiphany’s principles are simplicity, standards compliance, and software freedom.


Netsurf

The Netsurf minimal web browser opens almost faster than you can release the mouse button. Netsurf uses its own layout and rendering engine (designed completely from scratch), which is rather hit and miss in its rendering (Figure 2).

Although you might find Netsurf suffers from rendering issues on certain sites, understand that the Hubbub HTML parser follows the work-in-progress HTML5 specification, so issues will pop up now and then. To ease those rendering headaches, Netsurf does include HTTPS support, web page thumbnailing, URL completion, scale view, bookmarks, full-screen mode, keyboard shortcuts, and no particular GUI toolkit requirements. That last bit is important, especially when you switch from one desktop to another.

For those curious as to the requirements for Netsurf, the browser can run on a machine as slow as a 30MHz ARM 6 computer with 16MB of RAM. That’s impressive, by today’s standards.


QupZilla

If you’re looking for a minimal browser that uses the Qt Framework and the QtWebKit rendering engine, QupZilla might be exactly what you’re looking for. QupZilla does include all the standard features and functions you’d expect from a web browser, such as bookmarks, history, sidebar, tabs, RSS feeds, ad blocking, flash blocking, and CA Certificates management. Even with those features, QupZilla still manages to remain a very fast, lightweight web browser. Other features include: fast startup, speed dial homepage, built-in screenshot tool, browser themes, and more.

One feature that should appeal to average users is that QupZilla has a more standard preferences tool than is found in many lightweight browsers (Figure 3). So, if going too far outside the lines isn’t your style, but you still want something lighter weight, QupZilla is the browser for you.

Otter Browser

Otter Browser is a free, open source attempt to recreate the closed-source offerings found in the Opera Browser. Otter Browser uses the WebKit rendering engine and has an interface that should be immediately familiar to any user. Although lightweight, Otter Browser does include plenty of full-blown features.

Otter Browser can be run on nearly any Linux distribution from an AppImage, so there’s no installation required. Just download the AppImage file, give the file executable permissions (with the command chmod u+x otter-browser-*.AppImage), and then launch the app with the command ./otter-browser*.AppImage.

Otter Browser does an outstanding job of rendering websites and could function as your go-to minimal browser with ease.


Lynx

Let’s get really minimal. When I first started using Linux, back in ‘97, one of the web browsers I often turned to was a text-only take on the app called Lynx. It should come as no surprise that Lynx is still around and available for installation from the standard repositories. As you might expect, Lynx works from the terminal window and doesn’t display pretty pictures or render much in the way of advanced features (Figure 5). In fact, Lynx is as bare-bones a browser as you will find. Because of how bare-bones this web browser is, it’s not recommended for everyone. But if you happen to have a GUI-less web server and you need to read the occasional website, Lynx can be a real lifesaver.

I have also found Lynx an invaluable tool when troubleshooting certain aspects of a website (or if some feature on a website is preventing me from viewing the content in a regular browser). Another good reason to use Lynx is when you only want to view the content (and not the extraneous elements).

Plenty More Where This Came From

There are plenty more minimal browsers than this, but the list presented here should get you started down the path of minimalism. One (or more) of these browsers is sure to fill that need, whether you’re running it on a low-powered machine or not.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Nithya Ruff on Open Source Contributions Beyond Code

Sometimes when we think about open source, we focus on the code and forget that there are other equally important ways to contribute. Nithya Ruff, Senior Director, Open Source Practice at Comcast, knows that contributions can come in many forms. “Contribution can come in the form of code or in the form of financial support for projects. It also comes in the form of evangelizing open source; it comes in the form of sharing good practices with others,” she said.

Comcast, however, does contribute code. When I sat down with Ruff at Open Source Summit to learn more, she made it clear that Comcast isn’t just a consumer; it contributes a great deal to open source. “One way we contribute is that when we consume a project and a fix or enhancement is needed, we fix it and contribute back.” The company has made roughly 150 such contributions this year alone.

Comcast also releases its own software as open source. “We have created things internally to solve our own problems, but we realized they could solve someone else’s problem, too. So, we released such internal projects as open source,” said Ruff.

Watch the video interview at The Linux Foundation