
Managing user accounts with Cockpit

This is the latest in a series of articles on Cockpit, the easy-to-use, integrated, glanceable, and open web-based interface for your servers. In the first article, we introduced the web user interface. The second and third articles focused on how to perform storage and network tasks respectively.

This article demonstrates how to create and modify local accounts. It also shows you how to install the 389 Directory Server add-on (or plugin). Finally, you’ll see how 389 DS integrates into the Cockpit web service.

Managing local accounts

To start, click the Accounts option in the left column. The main screen provides an overview of local accounts. From here, you can create a new user account, or modify an existing account.

Accounts screen overview in Cockpit

Creating a new account in Cockpit

Cockpit gives sysadmins the ability to easily create a basic user account. To begin, click the Create New Account button. A box appears, requesting basic information such as the full name, username, and password. It also provides the option to lock the account. Click Create to complete the process. The example below creates a new user named Demo User.
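
Behind the scenes this is an ordinary local account, so the same result can be sketched from a terminal if you prefer. The username demouser below is purely illustrative and not from the article:

$ sudo useradd -c "Demo User" -m demouser   # create the account with a home directory
$ sudo passwd demouser                      # set the initial password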

Creating a local account in Cockpit

Managing accounts in Cockpit

Cockpit also provides basic management of local accounts. Some of the features include elevating the user’s permissions, setting password expiration, and resetting or changing the password.

Modifying an account

To modify an account, go back to the accounts page and select the user you wish to modify. Here, we can change the full name and elevate the user’s role to Server Administrator, which adds the user to the wheel group. It also includes options for access and passwords.
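
For comparison, the Server Administrator role change amounts to a group membership update, which could also be done on the command line (a sketch using the hypothetical demouser account):

$ sudo usermod -aG wheel demouser   # add the user to the wheel group
$ groups demouser                   # verify the membership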

The Access options allow admins to lock the account. Clicking Never lock account will open the “Account Expiration” box. From here we can choose to Never lock the account, or to lock it on a scheduled date.

Password management

Admins can choose to Set Password or Force Change. The first option prompts you to enter a new password. The second option forces users to create a new password the next time they log in.

Selecting the Never change password option opens a box with two options. The first is Never expire the password. This allows the user to keep their password without the need to change it. The second option is Require password change every … days. This determines the number of days a password can be used before it must be changed.
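
These settings map onto the standard password-aging fields, so a rough command-line equivalent uses chage (the demouser account and the 90-day limit are only examples):

$ sudo chage -d 0 demouser    # force a password change at the next login
$ sudo chage -M 90 demouser   # require a new password every 90 days
$ sudo chage -l demouser      # review the current aging settings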

Adding public keys

We can also add public SSH keys from remote computers for password-less authentication. This is equivalent to the ssh-copy-id command. To start, click the Add Public Key (+) button. Then copy the public key from a remote machine and paste it into the box.
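
Since this mirrors ssh-copy-id, here is what the same task looks like from the remote machine (a sketch; the account, host, and key path are placeholders):

$ cat ~/.ssh/id_rsa.pub                     # print the public key to paste into Cockpit
$ ssh-copy-id demouser@server.example.com   # or copy it over SSH directly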

To remove the key, click the remove (-) button to the right of the key.

Terminating the session and deleting an account

Near the top-right corner are two buttons: Terminate Session and Delete. Clicking the Terminate Session button immediately disconnects the user. Clicking the Delete button removes the user and offers to delete the user’s files with the account.
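
Deleting the account together with its files is comparable to running userdel with the remove flag (again using the hypothetical demouser account):

$ sudo userdel -r demouser   # -r also removes the home directory and mail spool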

Modifying and deleting a local account with Cockpit

Managing 389 Directory Server

Cockpit has a plugin for managing 389 Directory Server. To add the 389 Directory Server UI, run the following command:

$ sudo dnf install cockpit-389-ds
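
The plugin manages a directory server instance, so you need one to administer. If you don’t have an instance yet, recent versions of 389-ds-base include a dscreate tool that can build one interactively; this is a sketch outside the scope of the article, and the package and tool availability depend on your Fedora release:

$ sudo dnf install 389-ds-base   # server packages, if not already installed
$ sudo dscreate interactive      # answer the prompts to create a test instance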

Because 389 Directory Server has an enormous number of settings, Cockpit organizes them into detailed categories. Some of these settings include:

  • Server Settings: Options for server configuration, tuning & limits, SASL, password policy, LDAPI & autobind, and logging.
  • Security: Enable/disable security, certificate management, and cipher preferences.
  • Database: Configure the global database, chaining, backups, and suffixes.
  • Replication: Pertains to agreements, Winsync agreements, and replication tasks.
  • Schema: Object classes, attributes, and matching rules.
  • Plugins: Provides a list of plugins associated with 389 Directory Server. Also gives admins the opportunity to enable, disable, and edit each plugin.
  • Monitoring: Shows database performance stats, including the DB cache hit ratio and the normalized DN cache. Admins can also configure the number of tries and hits. Furthermore, it provides server stats and SNMP counters.

Due to the abundance of options, going through the details for 389 Directory Server is beyond the scope of this article. For more information regarding 389 Directory Server, visit their documentation site.

Managing 389 Directory Server with Cockpit

As you can see, admins can perform quick and basic user management tasks. The most noteworthy feature, however, is the in-depth functionality of the 389 Directory Server add-on.

The next article will explore how Cockpit handles software and services.


Photo by Daniil Vnoutchkov on Unsplash.


IceWM – A really cool desktop

IceWM is a very lightweight desktop. It’s been around for over 20 years, and its goals today are still the same as back then: speed, simplicity, and getting out of the user’s way.

I used to add IceWM to Scientific Linux as a lightweight desktop. At the time, it was only a 0.5 MB RPM. When running, it used only 5 MB of memory. Over the years, IceWM has grown a little bit. The RPM package is now 1 MB. When running, IceWM now uses 10 MB of memory. Even though it literally doubled in size in the past 10 years, it is still extremely small.

What do you get in such a small package? Exactly what it says, a Window Manager. Not much else. You have a toolbar with a menu or icons to launch programs. You have speed. And finally you have themes and options. Besides the few goodies in the toolbar, that’s about it.

Installation

Because IceWM is so small, you just install the main package.

$ sudo dnf install icewm

If you want to save disk space, many of the dependencies are weak and therefore optional. IceWM works just fine without them.

$ sudo dnf install icewm --setopt install_weak_deps=false

Options

The defaults for IceWM are set so that your average Windows user feels comfortable. This is a good thing, because options are set manually, through configuration files.

I hope I didn’t lose you there, because it’s not as bad as it sounds. There are only 8 configuration files, and most people only use a couple. The main three config files are keys (keybindings), preferences (overall preferences), and toolbar (what is shown on the toolbar). The default config files are found in /usr/share/icewm/.

To make a change, you copy the default config to your home icewm directory (~/.icewm), edit the file, and then restart IceWM. The first time you do this might be a little scary because “Restart IceWM” is found under the “Logout” menu entry. But when you restart IceWM, you just see a single flicker, and your changes are there. Any open programs are unaffected and stay as they were.
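
For example, customizing the overall preferences looks roughly like this (the file name comes from the list above; use whatever editor you like):

$ mkdir -p ~/.icewm
$ cp /usr/share/icewm/preferences ~/.icewm/
$ nano ~/.icewm/preferences    # edit, then choose Logout > Restart IceWM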

Themes

IceWM in the NanoBlue theme

If you install the icewm-themes package, you get quite a few themes. Unlike regular options, you do not need to restart IceWM to switch to a new theme. Usually I wouldn’t talk much about themes, but since there are so few other features, I figured I’d mention them.
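
The themes package installs like any other (the package name is the one mentioned above):

$ sudo dnf install icewm-themes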

Toolbar

The toolbar is the one place where a few extra features have been added to IceWM. You will see that you can switch between workspaces. Workspaces are sometimes called Virtual Desktops. Click on a workspace to move to it. Right-clicking on a window’s taskbar entry allows you to move it between workspaces. If you like workspaces, this has all the functionality you could want. If you don’t like workspaces, it’s an option and can be turned off.

The toolbar also has Network/Memory/CPU monitoring graphs. Hover your mouse over the graph to get details. Click on the graph to get a window with full monitoring. These little graphs used to be on every window manager. But as those desktops matured, they have all taken the graphs out. I’m very glad that IceWM has left this nice feature alone.

Summary

If you want something lightweight but functional, IceWM is the desktop for you. It is set up so that new Linux users can use it out of the box. It is flexible so that Unix users can tweak it to their liking. Most importantly, IceWM lets your programs run without getting in the way.


In Fedora 31, 32-bit i686 is 86ed

The release of Fedora 31 drops the 32-bit i686 kernel and, as a result, its bootable images. While there may be users out there who still have hardware which will not work with the 64-bit x86_64 kernel, there are very few. However, this article gives you the whole story behind the change, and what 32-bit material you’ll still find in Fedora 31.

What is happening?

The i686 architecture essentially entered community support with the Fedora 27 release. Unfortunately, there are not enough members of the community willing to do the work to maintain the architecture. Don’t worry, though — Fedora is not dropping all 32-bit packages. Many i686 packages are still being built to ensure things like multilib, wine, and Steam will continue to work.

While the repositories are no longer being composed and mirrored out, there is a koji i686 repository which works with mock for building 32-bit packages and, in a pinch, for installing 32-bit packages which are not part of the x86_64 multilib repository. Of course, maintainers expect this will see limited use. Users who simply need to run a 32-bit application should be able to do so with multilib on a 64-bit system.
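
For example, running a 32-bit application on a 64-bit system usually just means installing the i686 builds of the libraries it needs from the multilib repository. The package below is only an illustration:

$ sudo dnf install glibc.i686   # 32-bit C library from the x86_64 multilib repository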

What to do if you’re running 32-bit

If you still run 32-bit i686 installations, you’ll continue to receive supported Fedora updates through the Fedora 30 lifecycle, which ends in roughly May or June of 2020. At that point, you can either reinstall as 64-bit x86_64 if your hardware supports it, or replace your hardware with 64-bit capable hardware if possible.

There is a user in the community who has done a successful “upgrade” from 32-bit Fedora to 64-bit x86_64 Fedora. While this is not an intended or supported upgrade path, it should work. The Project hopes to have some documentation for users who have 64-bit capable hardware to explain the process before the Fedora 30 end of life.

If you have a 64-bit capable CPU running 32-bit Fedora due to low memory, try one of the alternate desktop spins. LXDE and others tend to do fairly well in memory constrained environments. For users running simple servers on old 32-bit hardware that was just lying around, consider one of the newer ARM boards. The power savings alone can more than pay for the new hardware in many instances. And if none of these is an option, CentOS 7 offers a 32-bit image with longer term support for the platform.

Security and you

While some users may be tempted to keep running an older Fedora release past end of life, this is highly discouraged. People constantly research software for security issues, and they often find issues which have been around for years.

Once Fedora maintainers know about such issues, they typically patch for them, and make updates available to supported releases — but not to end of life releases. And of course, once these vulnerabilities are public, there will be people trying to exploit them. If you run an older release past end of life, your security exposure increases over time as a result, putting your system at ever-growing risk.


Photo by Alexandre Debiève on Unsplash.


Contribute at the kernel and IoT edition Fedora test days

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are two test days in the upcoming week. The first, running from Monday 30 September through Monday 07 October, is to test Kernel 5.3. The second, on Wednesday 02 October, focuses on Fedora 31 IoT Edition. Come and test with us to make the upcoming Fedora 31 even better.

Kernel test week

The kernel team is working on final integration for kernel 5.3. This version was just recently released and will arrive soon in Fedora. It will also be the shipping kernel for Fedora 31. As a result, the Fedora kernel and QA teams have organized a test week from Monday, September 30 through Monday, October 07. Refer to the wiki page for links to the test images you’ll need to participate. The steps are clearly outlined in this document.

Fedora IoT Edition test day

Fedora Internet of Things is a variant of Fedora focused on IoT ecosystems. Whether you’re working on a home assistant, industrial gateways, or data storage and analytics, Fedora IoT provides a trusted open source platform to build on. Fedora IoT produces a monthly rolling release to help you keep your ecosystem up-to-date. The IoT and QA teams will hold this test day on Wednesday, October 02. Refer to the wiki page for links and resources to test the IoT Edition.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test days is on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.


Fedora and CentOS Stream

From the desk of the Fedora Project Leader:

Hi everyone! You may have seen the announcement about changes over at the CentOS Project. (If not, please go ahead and take a few minutes and read it — I’ll wait!) And now you may be wondering: if CentOS is now upstream of RHEL, what happens to Fedora? Isn’t that Fedora’s role in the Red Hat ecosystem?

First, don’t worry. There are changes to the big picture, but they’re all for the better.

If you’ve been following the conference talks from Red Hat Enterprise Linux leadership about the relationship between Fedora, CentOS, and RHEL, you have heard about “the Penrose Triangle”. That’s a shape like something from an M. C. Escher drawing: it’s impossible in real life!

We’ve been thinking for a while that maybe impossible geometry is not actually the best model. 

For one thing, the imagined flow where contributions at the end would flow back into Fedora and grow in a “virtuous cycle” never actually worked that way. That’s a shame, because there’s a huge, awesome CentOS community and many great people working on it — and there’s a lot of overlap with the Fedora community too. We’re missing out.

But that gap isn’t the only one: there’s not really been a consistent flow between the projects and product at all. So far, the process has gone like this: 

  1. Some time after the previous RHEL release, Red Hat would suddenly turn more attention to Fedora than usual.
  2. A few months later, Red Hat would split off a new RHEL version, developed internally.
  3. After some months, that’d be put into the world, including all of the source — from which CentOS is built. 
  4. Source drops continue for updates, and sometimes those updates include patches that were in Fedora — but there’s no visible connection.

Each step here has its problems: intermittent attention, closed-door development, blind drops, and little ongoing transparency. But now Red Hat and CentOS Project are fixing that, and that’s good news for Fedora, too.

Fedora will remain the first upstream of RHEL. It’s where every RHEL came from, and is where RHEL 9 will come from, too. But after RHEL branches off, CentOS will be upstream for ongoing work on those RHEL versions. I like to call it “the midstream”, but the marketing folks somehow don’t, so that’s going to be called “CentOS Stream”.

We — Fedora, CentOS, and Red Hat — still need to work out all of the technical details, but the idea is that these branches will live in the same package source repository. (The current plan is to make a “src.centos.org” with a parallel view of the same data as src.fedoraproject.org). This change gives public visibility into ongoing work on released RHEL, and a place for developers and Red Hat’s partners to collaborate at that level.

CentOS SIGs — the special interest groups for virtualization, storage, config management and so on — will do their work in shared space right next to Fedora branches. This will allow much easier collaboration and sharing between the projects, and I’m hoping we’ll even be able to merge some of our similar SIGs to work together directly. Fixes from Fedora packages can be cherry-picked into the CentOS “midstream” ones — and where useful, vice versa.

Ultimately, Fedora, CentOS, and RHEL are part of the same big project family. This new, more natural flow opens possibilities for collaboration which were locked behind artificial (and extra-dimensional!) barriers. I’m very excited for what we can now do together!

— Matthew Miller, Fedora Project Leader


Managing network interfaces and FirewallD in Cockpit

In the last article, we saw how Cockpit can manage storage devices. This article will focus on the networking functionalities within the UI. We’ll see how to manage the interfaces attached to the system in Cockpit. We’ll also look at the firewall and demonstrate how to assign a zone to an interface, and allow/deny services and ports.

To access these controls, verify the cockpit-networkmanager and cockpit-firewalld packages are installed.

To start, log into the Cockpit UI and select the Networking menu option. As is consistent with the UI design, we see performance graphs at the top and a summary of the logs at the bottom of the page. Between them are the sections to manage the firewall and interface(s).

Firewall

Cockpit’s firewall configuration page works with FirewallD and allows admins to quickly configure these settings. The page has options for assigning zones to specific interfaces, as well as a list of services configured to those zones.

Adding a zone

Let’s start by assigning a zone to an available interface. First, click the Add Zone button. From here you can select a pre-configured or custom zone. Selecting one of the zones will display a brief description of that zone, as well as the services or ports allowed or opened in that zone. Select the interface you want to assign the zone to. There’s also the option to apply the rules to the Entire Subnet, or you can specify a Range of IP addresses. In the example below, we add the Internal zone to an available network card. The IP range can also be configured so the rule is only applied to the specified addresses.
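
Because the page drives FirewallD, the same zone assignment can be sketched with firewall-cmd. The interface name enp9s0 is borrowed from the bridging example later in this article and is only a placeholder:

$ sudo firewall-cmd --permanent --zone=internal --change-interface=enp9s0
$ sudo firewall-cmd --reload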

Adding and removing services/ports

To allow network access to services, or open ports, click the Add Services button. From here you can search (or filter) for a service, or manually enter the port(s) you would like to open. Selecting the Custom Ports option lets you enter the port number or alias into the TCP and/or UDP fields. You can also provide an optional name to label the rule. In the example below, the Cockpit service/socket is added to the Internal zone. Once completed, click the Add Services, or Add Ports, button. Likewise, to remove a service, click the red trashcan to its right, select the zone(s), and click Remove service.
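
The firewall-cmd equivalents of these clicks look roughly like this; the cockpit service and the 8080/tcp port are only examples:

$ sudo firewall-cmd --permanent --zone=internal --add-service=cockpit
$ sudo firewall-cmd --permanent --zone=internal --add-port=8080/tcp
$ sudo firewall-cmd --permanent --zone=internal --remove-service=cockpit   # undo the service rule
$ sudo firewall-cmd --reload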

For more information about using Cockpit to configure your system’s firewall, visit the Cockpit project’s GitHub page.

Interfaces

The interfaces section displays both physical and virtual/logical NICs assigned to the system. From the main screen we see the name of the interface, the IP address, and activity stats of the NIC. Selecting an interface will display IP-related information and options to configure it manually. You can also control whether the network card comes up after a reboot by toggling the Connect automatically option. To enable, or disable, the network interface, click the toggle switch in the top right corner of the section.

Bonding

Bonding network interfaces can help increase bandwidth availability. It can also serve as a redundancy plan in the event one of the NICs fails.

To start, click the Add Bond button located in the header of the Interfaces section. In the Bond Settings overlay, enter a name and select the interfaces you wish to bond in the list below. Next, select the MAC Address you would like to assign to the bond. Now select the Mode, or purpose, of the bond: Round Robin, Active Backup, Broadcast, and so on (the demo below shows a complete list of modes).

Continue the configuration by selecting the Primary NIC, and a Link Monitoring option. You can also tweak the Monitoring Interval, and Link Up Delay and Link Down Delay options. To finish the configuration, click the Apply button. We’re taken back to the main screen, and the new bonded interface we just created is added to the list of interfaces.

From here we can configure the bond like any other interface. We can even delve deeper into the interface’s settings for the bond. As seen in the example below, selecting one of the interfaces in the bond’s settings page provides details pertaining to the interface link. There’s also an added option for changing the bond settings. To delete the bond, click the Delete button.
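
Under the hood this is regular NetworkManager configuration, so a comparable bond can be sketched with nmcli. The interface names and the active-backup mode below are only examples:

$ sudo nmcli con add type bond con-name bond0 ifname bond0 mode active-backup
$ sudo nmcli con add type bond-slave ifname enp9s0 master bond0
$ sudo nmcli con add type bond-slave ifname enp10s0 master bond0
$ sudo nmcli con up bond0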

Teaming

Teaming, like bonding, is another method used for link aggregation. For a comparison between bonding and teaming, refer to this chart. You can also find more information about teaming on the Red Hat documentation site.

As with creating a bond, click the Add Team button. The settings are similar: you can give it a name, select the interfaces, set the link delay, and choose the mode, or Runner as it’s referred to here. The options are similar to the ones available for bonding. By default the Link Watch option is set to Ethtool, but it also has options for ARP Ping and NSNA Ping.

Click the Apply button to complete the setup; this also returns you to the main networking screen. For further configuration, such as IP assignment and changing the runner, click the newly made team interface. As with bonding, you can click one of the interfaces in the link aggregation. Depending on the runner, you may have additional options for the Team Port. Click the Delete button from the screen to remove the team.
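
A comparable team can be sketched with nmcli and a small JSON runner configuration; the names and the activebackup runner are only examples:

$ sudo nmcli con add type team con-name team0 ifname team0 config '{"runner": {"name": "activebackup"}}'
$ sudo nmcli con add type team-slave ifname enp9s0 master team0
$ sudo nmcli con up team0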

Bridging

From the article, Build a network bridge with Fedora:

“A bridge is a network connection that combines multiple network adapters.”

One excellent example of a bridge is combining a physical NIC with a virtual interface, like the one created and used for KVM virtualization. Leif Madsen’s blog has an excellent article on how to achieve this in the CLI. This can also be accomplished in Cockpit with just a few clicks. The example below will accomplish the first part of Leif’s blog using the web UI. We’ll bridge the enp9s0 interface with the virbr0 virtual interface.

Click the Add Bridge button to launch the settings box. Provide a name and select the interfaces you would like to bridge. To enable Spanning Tree Protocol (STP), click the box to the right of the label. Click the Apply button to finalize the configuration.

As is consistent with teaming and bonding, selecting the bridge from the main screen will display the details of the interface. As seen in the example below, the physical device takes control and the virtual interface will adopt that device’s IP address.

Select the individual interface in the bridge’s detail screen for more options. And once again, click the Delete button to remove the bridge.
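
If you prefer the CLI route mentioned above, a minimal bridge over a physical NIC can be sketched with nmcli. The enp9s0 name matches the example interface; attaching libvirt’s virbr0 network involves additional steps not shown here:

$ sudo nmcli con add type bridge con-name br0 ifname br0
$ sudo nmcli con add type bridge-slave ifname enp9s0 master br0
$ sudo nmcli con up br0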

Adding VLANs

Cockpit allows admins to create VLANs, or virtual networks, using any of the interfaces on the system. Click the Add VLAN button and select an interface in the Parent drop-down list. Assign the VLAN ID and, if you like, give it a new name. By default the name will be the same as the parent followed by a dot and the ID. For example, interface enp11s0 with VLAN ID 9 will result in enp11s0.9. Click Apply to save the settings and to return to the networking main screen. Click the VLAN interface for further configuration. As always, click the Delete button to remove the VLAN.
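
The same VLAN could be created from the command line; this sketch reuses the enp11s0 interface and VLAN ID 9 from the example above:

$ sudo nmcli con add type vlan con-name enp11s0.9 dev enp11s0 id 9
$ sudo nmcli con up enp11s0.9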

As we can see, Cockpit can help admins with common network configurations when managing the system’s connectivity. In the next article, we’ll explore how Cockpit handles user management and peek into the 389 Directory Server add-on.


Announcing the release of Fedora 31 Beta

The Fedora Project is pleased to announce the immediate availability of Fedora 31 Beta, the next step towards our planned Fedora 31 release at the end of October.

Download the prerelease from our Get Fedora site:

Or, check out one of our popular variants, including KDE Plasma, Xfce, and other desktop environments, as well as images for ARM devices like the Raspberry Pi 2 and 3:

Beta Release Highlights

GNOME 3.34 (almost)

The newest release of the GNOME desktop environment is full of performance enhancements and improvements. The beta ships with a prerelease, and the full 3.34 release will be available as an update. For a full list of GNOME 3.34 highlights, see the release notes.

Fedora IoT Edition

Fedora Editions address specific use-cases the Fedora Council has identified as significant in growing our userbase and community. We have Workstation, Server, and CoreOS — and now we’re adding Fedora IoT. This will be available from the main “Get Fedora” site when the final release of F31 is ready, but for now, get it from iot.fedoraproject.org.

Read more about Fedora IoT in our Getting Started docs.

Fedora CoreOS

Fedora CoreOS remains in a preview state, with a generally-available release planned for early next year. CoreOS is a rolling release which rebases periodically to a new underlying Fedora OS version. Right now, that version is Fedora 30, but soon there will be a “next” stream which will track Fedora 31 until that’s ready to become the “stable” stream.

Other updates

Fedora 31 Beta includes updated versions of many popular packages like Node.js, the Go language, Python, and Perl. We also have the customary updates to underlying infrastructure software, like the GNU C Library and the RPM package manager. For a full list, see the Change set on the Fedora Wiki.

Farewell to bootable i686

We’re no longer producing full media or repositories for 32-bit Intel-architecture systems. We recognize that this means newer Fedora releases will no longer work on some older hardware, but the fact is there just hasn’t been enough contributor interest in maintaining i686, and we can provide greater benefit for the majority of our users by focusing on modern architectures. (The majority of Fedora systems have been 64-bit x86_64 since 2013, and at this point that’s the vast majority.)

Please note that we’re still making userspace packages for compatibility when running 32-bit software on a 64-bit system — we don’t see the need for that going away anytime soon.

Testing needed

Since this is a Beta release, we expect that you may encounter bugs or missing features. To report issues encountered during testing, contact the Fedora QA team via the mailing list or in #fedora-qa on Freenode. As testing progresses, common issues are tracked on the Common F31 Bugs page.

For tips on reporting a bug effectively, read how to file a bug.

What is the Beta Release?

A Beta release is code-complete and bears a very strong resemblance to the final release. If you take the time to download and try out the Beta, you can check and make sure the things that are important to you are working. Every bug you find and report doesn’t just help you, it improves the experience of millions of Fedora users worldwide! Together, we can make Fedora rock-solid. We have a culture of coordinating new features and pushing fixes upstream as much as we can. Your feedback improves not only Fedora, but Linux and free software as a whole.

More information

For more detailed information about what’s new in the Fedora 31 Beta release, you can consult the Fedora 31 Change set. It contains more technical information about the new packages and improvements shipped with this release.


Copying large files with Rsync, and some misconceptions

There is a notion that a lot of people working in the IT industry often copy and paste from internet howtos. We all do it, and the copy-and-paste itself is not a problem. The problem is when we run things without understanding them.

Some years ago, a friend who used to work on my team needed to copy virtual machine templates from site A to site B. They could not understand why the file they copied was 10GB on site A but became 100GB on site B.

The friend believed that rsync is a magic tool that should just “sync” the file as it is. However, what most of us forget is to understand what rsync really is, how it is used, and, most importantly in my opinion, where it comes from. This article provides some further information about rsync, and an explanation of what happened in that story.

About rsync

rsync is a tool created by Andrew Tridgell and Paul Mackerras, who were motivated by the following problem:

Imagine you have two files, file_A and file_B. You wish to update file_B to be the same as file_A. The obvious method is to copy file_A onto file_B.

Now imagine that the two files are on two different servers connected by a slow communications link, for example, a dial-up IP link. If file_A is large, copying it onto file_B will be slow, and sometimes not even possible. To make it more efficient, you could compress file_A before sending it, but that would usually only gain a factor of 2 to 4.

Now assume that file_A and file_B are quite similar, and to speed things up, you take advantage of this similarity. A common method is to send just the differences between file_A and file_B down the link and then use that list of differences to reconstruct the file on the remote end.

The problem is that the normal methods for creating a set of differences between two files rely on being able to read both files. Thus they require that both files are available beforehand at one end of the link. If they are not both available on the same machine, these algorithms cannot be used. (Once you have copied the file over, you don’t need the differences.) This is the problem that rsync addresses.

The rsync algorithm efficiently computes which parts of a source file match parts of an existing destination file. Matching parts then do not need to be sent across the link; all that is needed is a reference to the part of the destination file. Only parts of the source file which are not matching need to be sent over.

The receiver can then construct a copy of the source file using the references to parts of the existing destination file and the original material.

Additionally, the data sent to the receiver can be compressed using any of a range of common compression algorithms for further speed improvements.

As we all might know, the rsync algorithm addresses this problem in a lovely way.

After this introduction to rsync, back to the story!

Problem 1: Thin provisioning

There were two things that would help the friend understand what was going on.

The problem with the file getting significantly bigger on the other side was caused by Thin Provisioning (TP) being enabled on the source system — a method of optimizing the efficiency of available space in Storage Area Networks (SAN) or Network Attached Storage (NAS).

The source file was only 10GB because of TP being enabled, and when transferred over using rsync without any additional configuration, the target destination was receiving the full 100GB of size. rsync could not do the magic automatically; it had to be configured.

The flag that does this work is -S or --sparse, and it tells rsync to handle sparse files efficiently. And it will do what it says! It will only send the actual data and skip the holes, so source and destination will both have a 10GB file.
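
To see what the sparse handling preserves, you can create a sparse file locally and compare its apparent size with the blocks it actually uses. This is a small illustration, not part of the original story:

$ truncate -s 10G sparse.img
$ du -h --apparent-size sparse.img   # reports 10G
$ du -h sparse.img                   # reports nearly zero, since no blocks are allocated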

Problem 2: Updating files

The second problem appeared when sending over an updated file. The destination was now receiving just the 10GB, but the whole file (containing the virtual disk) was always transferred, even when only a single configuration file had changed on that virtual disk. In other words, only a small portion of the file changed.

The command used for this transfer was:

rsync -avS vmdk_file syncuser@host1:/destination

Again, understanding how rsync works would help with this problem as well.

The above is the biggest misconception about rsync. Many of us think rsync will simply send the delta updates of the files, and that it will automatically update only what needs to be updated. But this is not the default behaviour of rsync.

As the man page says, the default behaviour of rsync is to create a new copy of the file in the destination and to move it into the right place when the transfer is completed.

To change this default behaviour of rsync, you have to set the following flags and then rsync will send only the deltas:

--inplace update destination files in-place
--partial keep partially transferred files
--append append data onto shorter files
--progress show progress during transfer

So the full command that would do exactly what the friend wanted is:

rsync -av --partial --inplace --append --progress vmdk_file syncuser@host1:/destination

Note that the sparse flag -S had to be removed, for two reasons. The first is that you cannot use --sparse and --inplace together when sending a file over the wire. Second, once you have sent a file over with --sparse, you can’t update it with --inplace anymore. Note that versions of rsync older than 3.1.3 will reject the combination of --sparse and --inplace.

So even though the friend ended up copying 100GB over the wire, that only had to happen once. All the following updates only copied the differences, making the copies extremely efficient.


GNOME 3.34 released — coming soon in Fedora 31

Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

GNOME 3.34 desktop environment at work

Notable features

The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now you can find sources for your virtual machines more easily, as well as boot from CD or DVD (ISO) images more easily. There is also an Express Install feature available that now supports Windows versions.

Now that you can save states when using GNOME Games, gaming is more fun. You can snapshot your progress without getting in the way of the fun. You can even move snapshots to other devices running GNOME.

More details

These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.


Firefox 69 available in Fedora

When you install the Fedora Workstation, you’ll find the world-renowned Firefox browser included. The Mozilla Foundation underwrites work on Firefox, as well as other projects that promote an open, safe, and privacy respecting Internet. Firefox already features a fast browsing engine and numerous privacy features.

A community of developers continues to improve and enhance Firefox. The latest version, Firefox 69, was released recently and you can get it for your stable Fedora system (30 and later). Read on for more details.

New features in Firefox 69

The newest version of Firefox includes Enhanced Tracking Protection (or ETP). When you use Firefox 69 with a new (or reset) settings profile, the browser makes it harder for sites to track your information or misuse your computer resources.

For instance, less scrupulous websites use scripts that cause your system to do lots of intense calculations to produce cryptocurrency, a practice called cryptomining. Cryptomining happens without your knowledge or permission and is therefore a misuse of your system. The new standard setting in Firefox 69 prevents sites from this kind of abuse.

Firefox 69 has additional settings to prevent sites from identifying or fingerprinting your browser for later use. These improvements give you additional protection from having your activities tracked online.

Another common annoyance is videos that start in your browser without warning. Video playback also uses extra CPU power and you may not want this happening on your laptop without permission. Firefox already stops this from happening using the Block Autoplay feature. But Firefox 69 also lets you stop videos from playing even if they start without sound. This feature prevents unwanted sudden noise. It also solves more of the real problem — having your computer’s power used without permission.

There are numerous other new features in the new release. Read more about them in the Firefox release notes.

How to get the update

Firefox 69 is available in the stable Fedora 30 and pre-release Fedora 31 repositories, as well as Rawhide. The update is provided by Fedora’s maintainers of the Firefox package. The maintainers also ensured an update to Mozilla’s Network Security Services (the nss package). We appreciate the hard work of the Mozilla project and Firefox community in providing this new release.

If you’re using Fedora 30 or later, use the Software tool on Fedora Workstation, or run the following command on any Fedora system:

$ sudo dnf --refresh upgrade firefox

If you’re on Fedora 29, help test the update for that release so it can become stable and easily available for all users.

Firefox may prompt you to upgrade your profile to use the new settings. To take advantage of new features, you should do this.