
An introduction to USB Device Emulation and how to take advantage of it

Introduction

Nowadays, the number of devices keeps growing, and modern operating systems must try to support all types, and many of them, with every integration and every release. Maintaining a large number of devices is difficult, expensive and hard to test, especially for plug-and-play devices such as USB devices.

Therefore, it is necessary to create a mechanism to facilitate the maintenance and testing of old and new USB devices. And this is where USB device emulation comes in. In this way, a complete framework including a large set of emulated and validated USB devices allows easier integration and release. The range of applications is very wide: earlier bug detection even during development, automated tests, continuous integration, etc.

How to emulate USB devices

The USB/IP project allows sharing the USB devices connected to a local machine so that they can be managed by another machine over the network by means of a TCP/IP connection.

The USB/IP project therefore consists of two parts:

  • local device support (host) to give remote access to all necessary control events and data
  • remote control that catches every necessary control event and data and processes them like a normal driver

The procedure is valid for Linux and Windows; here I will focus only on Linux.

The idea behind emulation is to replace the local device support with an application that behaves in the same way. In this way we can emulate devices with software applications that follow the aforementioned USB/IP protocol specification.

In the following points I will describe how to configure and run the remote support and how to connect to our USB emulated device.

Remote support

Remote support is divided into two parts:

  • kernel space to control a remote device as if it were local, that is, so it can be probed by the normal driver.
  • a user space application to configure access to remote devices.

At this point, it is important to remark that the device emulators, once configured by the user space application, will communicate directly with the kernel space.

Local support has a very similar structure, but the focus of this article is device emulation.

Let’s analyze every part of remote support.

Kernel space

First of all, in order to get the functionality we need to compile the Linux Kernel with the following options:

CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m

These options enable the USB/IP virtual host controller driver, which is run on the remote machine.

The normal USB drivers also need to be included, because they will be probed and configured in the same way from the virtual host controller driver.

Besides these, there are other important configuration options:

CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1

These options define the number of ports per USB/IP virtual host controller and the number of USB/IP virtual host controllers, just as if physical host controllers were being added. These are the default values when CONFIG_USBIP_VHCI_HCD is enabled; increase them if necessary.

The aforementioned options and kernel modules are already included in some Linux distributions, such as Fedora Linux.
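To confirm that a running kernel was built with these options, a quick check is to grep the installed kernel configuration; the config file location below is the usual one on Fedora-like systems, so treat it as an assumption:

$ grep USBIP /boot/config-$(uname -r)
CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m
CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1
...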

Let’s see an example of available virtual USB buses and ports that we will use later.

Default and real resources on the example machine:

$ lsusb 
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lsusb -t
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
|__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
$

Now, we will load the module vhci-hcd into the system (default configuration for CONFIG_USBIP_VHCI_HC_PORTS and CONFIG_USBIP_VHCI_NR_HCS):

$ sudo modprobe vhci-hcd 
$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lsusb -t
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 5000M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
|__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
$

The remote USB/IP virtual host controller driver will only use the configured virtualized resources. Of course, emulated devices will work in the same way.
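If you later want to see how the virtual ports are being used, the vhci_hcd driver exposes a status file through sysfs. The exact path can vary with the kernel version, so the one below is just the typical location:

# Show the state of the virtual root hub ports (free or in use, speed,
# and which remote bus id they are attached to)
$ cat /sys/devices/platform/vhci_hcd.0/status
...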

User space

The other necessary part of the USB/IP project is the user space tool usbip, which is used to configure the referred kernel space on both sides. Again, we only focus on the remote side, since the local side will be represented by the emulator.

That is, the usbip tool will configure the USB/IP virtual host controller (TCP client) in kernel space to connect to the device emulator (TCP server), establishing a direct connection between them for the exchange of USB configuration, events, data, etc.

The tool is independent of the type of device and is able to provide information about available and reserved resources (see more information in the examples below).

The local side needs to specify the bus-port pair that will be used for remote access. The same applies to emulated devices, but in this case the pair can be anything, because there is no real device and no resource reservation is necessary.

This tool is kept in the Linux kernel repository so that it stays fully synchronized with the kernel.

Location of the tool in the Linux kernel repository: ./tools/usb/usbip

In some distributions, such as Fedora Linux, the usbip utility can be installed by means of the usbip package from the repositories. If the usbip utility or a related package cannot be found, follow the instructions in the available README file to compile and install it. A suitable rpm package can also be generated from the usbip-emulator repository:

$ git clone https://github.com/jtornosm/USBIP-Virtual-USB-Device.git
$ cd USBIP-Virtual-USB-Device/usbip
$ make rpm
...
$
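If no package is available and you build the in-tree tool directly from the kernel sources, the build usually follows the autotools flow sketched below; take this as an outline only, the README in the same directory is the authoritative reference:

$ cd tools/usb/usbip        # inside a checkout of the Linux kernel sources
$ ./autogen.sh
$ ./configure
$ make
$ sudo make install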

How to emulate USB devices

Emulators are written in Python and C. I started with the C implementation (and will focus on that part here), but the same could be done in Python.

For C development, compile emulation tools from the usbip-emulator repository:

$ git clone https://github.com/jtornosm/USBIP-Virtual-USB-Device.git
$ cd USBIP-Virtual-USB-Device/c
$ make
...
$

All the device emulators supported at this moment will be built:

  • hid-keyboard
  • hid-mouse
  • cdc-adm
  • hso
  • cdc-ether
  • bt

An rpm package (usbip-emulator) can also be generated with:

$ make rpm
...
$

Since these are examples, the Vendor and Product IDs are hardcoded in the code.

The following three examples show how emulation works. We use the same machine for the emulator and the remote USB/IP side, but they could run on different machines. Besides, we reserve different resources so that all the devices can be emulated at the same time.

Example 1: hso

From one terminal, let’s emulate the hso device:

(“1-1” is the bus-port pair for the USB device on the local machine; as we are emulating, it could be anything. It only matters because the usbip tool will have to use the same name to request the emulated device.)

$ hso -p 3241 -b 1-1
hso started....
server usbip tcp port: 3241
Bus-Port: 3-0:1.0
...

From another terminal, connect to the emulator:

(localhost because the emulator is running on the same machine, and the same bus-port pair name as the emulator)

$ sudo modprobe vhci-hcd 
$ sudo usbip --tcp-port 3241 attach -r 127.0.0.1 -b 1-1
usbip: info: using port 3241 ("3241")
$

Now we can check that the new device is present:

(As we saw previously, for this example machine, bus 3 is virtualized)

$ ip addr show dev hso0
3: hso0: <POINTOPOINT,MULTICAST,NOARP> mtu 1486 qdisc noop state DOWN group default qlen 10
    link/none
$ rfkill list
0: hso-0: Wireless WAN
        Soft blocked: no
        Hard blocked: no
...
$ lsusb
...
Bus 003 Device 002: ID 0af0:6711 Option GlobeTrotter Express 7.2 v2
...
$ lsusb -t
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
    |__ Port 1: Dev 2, If 0, Class=Vendor Specific Class, Driver=hso, 12M
...
$

In order to release resources:

$ sudo usbip port 
Imported USB devices
====================
Port 00: <Port in Use> at Full Speed(12Mbps)
Option : GlobeTrotter Express 7.2 v2 (0af0:6711)
3-1 -> usbip://127.0.0.1:3241/1-1
-> remote bus/dev 001/002
$ sudo usbip detach -p 00
usbip: info: Port 0 is now detached!
$

And we can check that the device is released:

$ ip addr show dev hso0
Device "hso0" does not exist.
$ rfkill list
...
$ lsusb
...
$

After this, we can emulate again or stop the emulated device from the first terminal (i.e. with Ctrl-C).

Example 2: cdc-ether

From one terminal, let’s emulate the cdc-ether device (root permission is required because the raw socket needs to bind to the specified interface for the data plane):

(“1-1” is the bus-port pair for the USB device on the local machine; as we are emulating, it could be anything. It only matters because the usbip tool will have to use the same name to request the emulated device.)

$ sudo cdc-ether -e 88:00:66:99:5b:aa -i enp1s0 -p 3242 -b 1-1
cdc-ether started....
server usbip tcp port: 3242
Bus-Port: 1-1
Ethernet address: 88:00:66:99:5b:aa
Manufacturer: Temium
Network interface to bind: enp1s0
...

From another terminal, connect to the emulator:

(localhost because the emulator is running on the same machine, and the same bus-port pair name as the emulator)

$ sudo modprobe vhci-hcd 
$ sudo usbip --tcp-port 3242 attach -r 127.0.0.1 -b 1-1
usbip: info: using port 3242 ("3242")
$

Now we can check that the new device is present:

(As we saw previously, for this example machine, bus 3 is virtualized)

$ ip addr show dev eth0
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 88:00:66:99:5b:aa brd ff:ff:ff:ff:ff:ff
$ sudo ethtool eth0
...
Link detected: yes
$ lsusb
...
Bus 003 Device 003: ID 0fe6:9900 ICS Advent
...
$ lsusb -t
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
    |__ Port 2: Dev 3, If 0, Class=Communications, Driver=cdc_ether, 480M
    |__ Port 2: Dev 3, If 1, Class=CDC Data, Driver=cdc_ether, 480M
...
$

For this example, we can also test the data plane.

(IP forwarding is disabled on both sides)

First, we can configure the IP address in the emulated device:

$ sudo ip addr add 10.0.0.1/24 dev eth0 
$ ip addr show dev eth0
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
link/ether 88:00:66:99:5b:aa brd ff:ff:ff:ff:ff:ff
inet 10.0.0.1/24 scope global eth0
valid_lft forever preferred_lft forever
$

Second, from another machine (real or virtual) directly connected over Ethernet, we can for example configure a macvlan interface in the same subnet to send/receive traffic (ping, iperf, etc.):

$ sudo ip link add macvlan0 link enp1s0 type macvlan mode bridge
$ sudo ip addr add 10.0.0.2/24 dev macvlan0
$ sudo ip link set macvlan0 up
$ ip addr show dev macvlan0
3: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:f1:cd:f1:cc:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::d4f1:cdff:fef1:cc02/64 scope link
       valid_lft forever preferred_lft forever
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=55.6 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=2.19 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=1.74 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=1.76 ms
64 bytes from 10.0.0.1: icmp_seq=5 ttl=64 time=1.93 ms
64 bytes from 10.0.0.1: icmp_seq=6 ttl=64 time=1.65 ms
...

In order to release resources:

$ sudo usbip port
Imported USB devices
====================
...
Port 01: <Port in Use> at High Speed(480Mbps)
       ICS Advent : unknown product (0fe6:9900)
       3-2 -> usbip://127.0.0.1:3245/1-1
           -> remote bus/dev 001/003
$ sudo usbip detach -p 01
usbip: info: Port 1 is now detached!
$

And we can check that the device is released:

$ ip addr show dev eth0 
Device "eth0" does not exist.
$ lsusb
...
$

And of course, traffic from the other machine no longer gets through:

From 10.0.0.2 icmp_seq=167 Destination Host Unreachable
From 10.0.0.2 icmp_seq=168 Destination Host Unreachable
From 10.0.0.2 icmp_seq=169 Destination Host Unreachable
From 10.0.0.2 icmp_seq=170 Destination Host Unreachable
...

After this, we can emulate again or stop the emulated device from the first terminal (i.e. with Ctrl-C).

Example 3: bt

From one terminal, let’s emulate the Bluetooth device:

(“1-1” is the bus-port pair for the USB device on the local machine; as we are emulating, it could be anything. It only matters because the usbip tool will have to use the same name to request the emulated device.)

$ bt -a aa:bb:cc:dd:ee:11 -p 3243 -b 1-1
bt started....
server usbip tcp port: 3243
Bus-Port: 1-1
BD address: aa:bb:cc:dd:ee:11
Manufacturer: Trust
...

From another terminal, connect to the emulator:

(localhost because the emulator is running on the same machine, and the same bus-port pair name as the emulator)

$ sudo modprobe vhci-hcd 
$ sudo usbip --tcp-port 3243 attach -r 127.0.0.1 -b 1-1
usbip: info: using port 3243 ("3243")
$

Now we can check that the new device is present:

(As we saw previously, for this example machine, bus 3 is virtualized)

$ hciconfig -a
hci0:   Type: Primary  Bus: USB
        BD Address: AA:BB:CC:DD:EE:11  ACL MTU: 310:10  SCO MTU: 64:8
        UP RUNNING PSCAN ISCAN INQUIRY
        RX bytes:1451 acl:0 sco:0 events:80 errors:0
        TX bytes:1115 acl:0 sco:0 commands:73 errors:0
        Features: 0xff 0xff 0x8f 0xfe 0xdb 0xff 0x5b 0x87
        Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
        Link policy: RSWITCH HOLD SNIFF PARK
        Link mode: SLAVE ACCEPT
        Name: 'BT USB TEST - CSR8510 A10'
        Class: 0x000000
        Service Classes: Unspecified
        Device Class: Miscellaneous,
        HCI Version: 4.0 (0x6)  Revision: 0x22bb
        LMP Version: 3.0 (0x5)  Subversion: 0x22bb
        Manufacturer: Cambridge Silicon Radio (10)
$ rfkill list
...
1: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
$ lsusb
...
Bus 003 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
...
$ lsusb -t
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
    |__ Port 3: Dev 4, If 0, Class=Wireless, Driver=btusb, 12M
    |__ Port 3: Dev 4, If 1, Class=Wireless, Driver=btusb, 12M
...
$

And we can turn the emulated Bluetooth device off and on again, and scanning detects several fake Bluetooth devices:

(At this moment, the fake Bluetooth devices are not emulated/simulated themselves, so we cannot set them up)

Turn Bluetooth off
Turn Bluetooth on

In order to release resources:

$ sudo usbip port
Imported USB devices
====================
...
Port 02: <Port in Use> at Full Speed(12Mbps)
       Cambridge Silicon Radio, Ltd : Bluetooth Dongle (HCI mode) (0a12:0001)
       3-3 -> usbip://127.0.0.1:3243/1-1
           -> remote bus/dev 001/002
$ sudo usbip detach -p 02
usbip: info: Port 2 is now detached!
$

And we can check that the device is released:

$ hciconfig
$ rfkill list
...
$ lsusb
...
$

And of course, the device is no longer detected (just as before emulation):

Bluetooth is not found

After this, we can emulate again or stop the emulated device from the first terminal (i.e. with Ctrl-C).

Emulated vs real USB devices

When the real hardware and/or final device is not used for testing, we can always feel unsure about the results, and this is the biggest hurdle we have to overcome to check the correct operation of devices by means of emulation.

So, in order to be confident, the emulation must be as close as possible to the real hardware, and to make it as realistic as possible every aspect of the device must be covered (or at least the necessary ones, if they are not related to other aspects). In fact, for a correct test we must not modify the driver; that is, we must only emulate the physical layer, so that the driver cannot tell whether the device is real or emulated.

Starting with tests against the real hardware device is a very good idea, to obtain a reference for building the emulator with the same features. For USB devices, building the emulator is easier thanks to the existing remote-control procedure, which satisfies all the characteristics mentioned above.

Conclusion

USB device emulation is the best way to integrate and test the related features in an efficient, automatic and easy way. But, in order to be confident about the emulation procedure, device emulators need to be previously validated to confirm that they work in the same way as real hardware.

Of course, a USB device emulator is not the same as the real hardware device, but the method described here, thanks to the proven procedure for taking remote control of the device, is very close to the real scenario and can help a lot to improve our release and testing processes.

Finally, I would like to point out that one of the biggest advantages of using software emulators is that we will be able to trigger specific behaviors, in a simple way, that would be very difficult to reproduce with real hardware, and this can help to find issues and make the software more robust.


Fedora Workstation’s State of Gaming – A Case Study of Control (2019)

Back in the day, it used to irk me as to how GNU/Linux[1] distributions could not be even considered to be in the proximity of video games enthusiasts – less because of the performance of the video games themselves and more because of how inconvenient it could be for them to set it all up. Admittedly, it had been quite a while since an avid video games fan like me did that, so it was almost a no-brainer for me to try it out and see if things have changed. What I ended up finding surprised me – I like to think that it would be just as pleasing to both enthusiasts who have been playing video games on GNU/Linux distributions and to newcomers who have been scoping this, alike.

On a testing bench using an AMD RDNA2-based[2] GPU, the video game was configured to the highest possible graphical preset[3] to really stress the hardware and make the GPU the limiting factor. If the RDNA2 architecture reminds you of something, allow me to share that it is what forms the foundation of the GPU that none other than the widely acclaimed Steam Deck[4] makes use of. For that matter, if you factor in some performance scaling with respect to the handheld nature of the device and the optimized Proton compatibility layer, this article can be representative of what the Steam Deck is capable of when you use Fedora Workstation[5] as the platform of your choice for playing your favourite video games.

Figure 1 – GNOME Software helps to install Steam conveniently

To have an apples to apples comparison, we set up two environments – one with Windows 10 21H2[6] and one with Fedora Workstation 35. On the former, I installed MSI Afterburner[7] and ensured that the graphics drivers are up-to-date while I did not have to bother doing the same on the latter as they came preinstalled. The only extra thing that I did was to configure the Lutris v7.1 runner[8] after clicking my way through installing Lutris[9] and MangoHUD[10] from GNOME Software[11]. It is downright astonishing how much you can do these days on GNU/Linux distributions without actually having to interact with the command line, making the entry barrier very low and welcoming.

Figure 2 – GNOME Software helps to install Lutris conveniently
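For readers who prefer the terminal, the same tools can also be installed with dnf; the package names below are assumptions based on the Fedora repositories, and Steam additionally requires the RPM Fusion nonfree repository:

$ sudo dnf install lutris mangohud
$ sudo dnf install steam    # needs RPM Fusion nonfree enabled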

Before we get into some actual performance testing and comparison results, let me talk a bit about the video game that is at the centre of the case study. Control[12] is an action-adventure video game developed by Remedy Entertainment[13] and published by 505 Games[14]. The video game is centred around a fictitious organization dealing with paranormal activities and takes inspiration from the likes of the SCP Foundation[15]. It is a well-optimized video game that exhibits great graphics and is a showcase of what the underlying hardware is capable of. I ran tests on both DirectX 11[16] and DirectX 12[17] versions of the video game with their compatibility layers[18], DXVK[19] and VKD3D[20], respectively.

Figure 3 – Lutris configured to play Control (2019) using the Wine runner

Following are the results of the tests. I made use of OBS Studio[21], which is available both as an installer binary and as a package in the RPM Fusion[22] repositories, to record around 15 seconds of in-menu gameplay and around 60 seconds of in-game gameplay. As the video game does not have any intrinsic benchmarking tool, the footage had to be broken down into segments of equal time periods to be able to pick up performance statistics on CPU usage, GPU usage and framerate. Please do note that even though OBS Studio introduces a certain overhead, the comparison remains valid because the recording software is configured identically on both platforms.

Metrics

  • Framerate
    • In the menus
      Figure 4 – Framerate in the menus
    • In the game
      Figure 5 – Framerate in the game
  • CPU usage
    • In the menus
      Figure 6 – CPU usage in the menus
    • In the game
      Figure 7 – CPU usage in the game
  • GPU usage
    • In the menus
      Figure 8 – GPU usage in the menus
    • In the game
      Figure 9 – GPU usage in the game

Please feel free to let your inner enthusiast loose in the statistics and try sharing as many performance differences as you have inferred so far in the comments section below. In the meanwhile, allow me to share mine –

  • With DXVK (DirectX 11), the loss of average in-menu framerate is around 19.87% and the same for average in-game framerate is barely 6.26%. DXVK is almost at the stage where a blind test of framerate smoothness could potentially confuse anyone as to which platform runs natively.
  • With VKD3D (DirectX 12), the loss of average in-menu framerate is barely 8.67% and the same for average in-game framerate is around 24.51%. VKD3D seems to be steadily catching up and very soon enough, video games would be able to run with minimal loss of performance.
  • With DXVK, there is only 1.40% of additional average CPU usage in the menus and around 17.88% of the same in the game. Closing this gap would help save battery life on handheld devices.
  • With VKD3D, the average CPU usage in the menus is around 1.47% less than the equivalent Windows platform and the same in the game is 1.62% more. VKD3D is a great choice for handheld devices.
  • With DXVK, the average GPU usage in the menus is around 13.40% more than that on Windows and the same in the game is around 1.04% more, making it more efficient in geometry rendering and less so in sprites.
  • With VKD3D, the average GPU usage in the menus is around 8.13% more than that on Windows and the same in the game is around 9.34% less, thus helping save battery on handheld devices running these video games.
  • The CPU governor[23] makes a marginal difference in performance and hence, it is something that can be left alone untweaked. The marginal difference noticed can also be considered in the margin of error.
  • Fedora Workstation uses fewer system resources out of the box and hence, can easily dedicate a huge chunk of those to the video game in question but the same is not possible in Windows 10 21H2.

For someone who looked into GNU/Linux distributions as a platform for using interactive and entertainment software applications without having any fancy hardware requirements, these results almost feel like a breath of fresh air. With Valve[24] working on strengthening Proton[25] and other communities working on great solutions like Bottles[26] and Lutris, gaming on GNU/Linux distributions is no longer an elusive dream. Things are only going to get better with a great number of video games running at near-native performance as we go on. I do not know for certain if 2022 would be the year of Linux Desktop or not, but if you ask me whether 2022 would be the year of Linux Gaming – I would answer that with a resounding yes. Let me know your thoughts down below!

Appendix

  1. Highest possible graphical preset[3]
  2. Configuration differences[27]
  3. Performance measurements in the menus[28]
  4. Performance measurements in the game[29]

References

  1. https://en.wikipedia.org/wiki/Linux
  2. https://www.amd.com/en/technologies/rdna-2
  3. https://gist.github.com/t0xic0der/e6958f9404d395705a8b67a1ab39d024#file-preset-csv
  4. https://en.wikipedia.org/wiki/Steam_Deck
  5. https://getfedora.org/
  6. https://docs.microsoft.com/en-us/windows/release-health/status-windows-10-21h2
  7. https://www.msi.com/Landing/afterburner/graphics-cards
  8. https://lutris.net/runners
  9. https://lutris.net/
  10. https://github.com/flightlessmango/MangoHud
  11. https://gitlab.gnome.org/GNOME/gnome-software
  12. https://en.wikipedia.org/wiki/Control_(video_game)
  13. https://www.remedygames.com/
  14. https://505games.com/
  15. https://scp-wiki.wikidot.com/
  16. https://en.wikipedia.org/wiki/DirectX#DirectX_11
  17. https://en.wikipedia.org/wiki/DirectX#DirectX_12
  18. https://en.wikipedia.org/wiki/Compatibility_layer
  19. https://github.com/doitsujin/dxvk
  20. https://source.winehq.org/git/vkd3d.git/
  21. https://obsproject.com/
  22. https://rpmfusion.org/
  23. https://wiki.archlinux.org/title/CPU_frequency_scaling#Scaling_governors
  24. https://www.valvesoftware.com/en/
  25. https://github.com/ValveSoftware/Proton
  26. https://usebottles.com/
  27. https://gist.github.com/t0xic0der/e6958f9404d395705a8b67a1ab39d024#file-config-csv
  28. https://gist.github.com/t0xic0der/e6958f9404d395705a8b67a1ab39d024#file-in-menu-csv
  29. https://gist.github.com/t0xic0der/e6958f9404d395705a8b67a1ab39d024#file-in-game-csv

TCP window scaling, timestamps and SACK

The Linux TCP stack has a myriad of sysctl knobs that allow changing its behavior. This includes the amount of memory that can be used for receive or transmit operations, the maximum number of sockets, and optional features and protocol extensions.

There are multiple articles that recommend disabling TCP extensions such as timestamps or selective acknowledgments (SACK) for various “performance tuning” or “security” reasons.

This article provides background on what these extensions do, why they
are enabled by default, how they relate to one another and why it is normally a bad idea to turn them off.

TCP Window scaling

The data transmission rate that TCP can sustain is limited by several factors. Some of these are:

  • Round trip time (RTT).  This is the time it takes for a packet to get to the destination and a reply to come back. Lower is better.
  • lowest link speed of the network paths involved
  • frequency of packet loss
  • the speed at which new data can be made available for transmission
    For example, the CPU needs to be able to pass data to the network adapter fast enough. If the CPU needs to encrypt the data first, the adapter might have to wait for new data. In similar fashion disk storage can be a bottleneck if it can’t read the data fast enough.
  • The maximum possible size of the TCP receive window. The receive window determines how much data (in bytes) TCP can transmit before it has to wait for the receiver to report reception of that data. This is announced by the receiver. The receiver will constantly update this value as it reads and acknowledges reception of the incoming data. The receive window's current value is contained in the TCP header that is part of every segment sent by TCP. The sender is thus aware of the current receive window whenever it receives an acknowledgment from the peer. This means that the higher the round-trip time, the longer it takes for the sender to get receive window updates.

TCP is limited to at most 64 kilobytes of unacknowledged (in-flight) data. This is not even close to what is needed to sustain a decent data rate in most networking scenarios. Let us look at some examples.

Theoretical data rate

With a round-trip-time of 100 milliseconds, TCP can transfer at most 640 kilobytes per second. With a 1 second delay, the maximum theoretical data rate drops down to only 64 kilobytes per second.

This is because of the receive window. Once 64 kbyte of data have been sent, the receive window is already full. The sender must wait until the peer informs it that at least some of the data has been read by the application.

The first segment sent reduces the TCP window by the size of that segment. It takes one round-trip before an update of the receive window value will become available. When updates arrive with a 1 second delay, this results in a 64 kilobyte limit even if the link has plenty of bandwidth available.

In order to fully utilize a fast network with several milliseconds of delay, a window size larger than what classic TCP supports is a must. The ’64 kilobyte limit’ is an artifact of the protocol’s specification: the TCP header reserves only 16 bits for the receive window size. This allows receive windows of up to 64 KByte. When the TCP protocol was originally designed, this size was not seen as a limit.
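A quick back-of-the-envelope calculation shows how far 64 kilobytes falls short: the window needed to keep a link busy is the bandwidth-delay product, i.e. the bandwidth multiplied by the RTT.

# Window needed to sustain 1 Gbit/s (125,000,000 bytes/s) at a 100 ms RTT:
# 125,000,000 bytes/s * 0.1 s = 12,500,000 bytes, roughly 12 MB,
# about 190 times more than a 64 KB window provides.
$ echo $((125000000 / 10))
12500000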

Unfortunately, it’s not possible to just change the TCP header to support a larger maximum window value. Doing so would mean all implementations of TCP would have to be updated simultaneously or they wouldn’t understand one another anymore. To solve this, the interpretation of the receive window value is changed instead.

The ‘window scaling option’ allows this while keeping compatibility with existing implementations.

TCP Options: Backwards-compatible protocol extensions

TCP supports optional extensions. This allows enhancing the protocol with new features without the need to update all implementations at once. When a TCP initiator connects to the peer, it also sends a list of supported extensions. All extensions follow the same format: a unique option number followed by the length of the option and the option data itself.

The TCP responder checks all the option numbers contained in the connection request. If it does not understand an option number, it skips ‘length’ bytes of data and checks the next option number. The responder omits those it did not understand from the reply. This allows both the sender and receiver to learn the common set of supported options.

With window scaling, the option data always consists of a single number.

The window scaling option

 
Window Scale option (WSopt): Kind: 3, Length: 3
    +---------+---------+---------+
    | Kind=3  |Length=3 |shift.cnt|
    +---------+---------+---------+
         1         1         1

The window scaling option tells the peer that the receive window value found in the TCP header should be scaled by the given number to get the real size.

For example, a TCP initiator that announces a window scaling factor of 7 tries to instruct the responder that any future packets that carry a receive window value of 512 really announce a window of 65536 bytes. This is an increase by a factor of 128. This would allow a maximum TCP window of 8 megabytes.

A TCP responder that does not understand this option ignores it. The TCP packet sent in reply to the connection request (the syn-ack) then does not contain the window scale option. In this case both sides can only use a 64k window size. Fortunately, almost every TCP stack supports and enables this option by default, including Linux.

The responder includes its own desired scaling factor. Both peers can use a different number. It’s also legitimate to announce a scaling factor of 0. This means the peer should treat the receive window value it receives verbatim, but it allows scaled values in the reply direction; the recipient can then use a larger receive window.

Unlike SACK or TCP timestamps, the window scaling option only appears in the first two packets of a TCP connection; it cannot be changed afterwards. It is also impossible to determine the scaling factor from a packet capture of a connection that does not include the initial three-way handshake.
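This is easy to observe with a packet capture tool: the announced factor shows up as a ‘wscale’ value only on the SYN and SYN/ACK packets. A rough sketch with tcpdump (the interface name is just an example):

# Capture only packets with the SYN flag set, i.e. connection setup packets
$ sudo tcpdump -n -i eth0 'tcp[tcpflags] & tcp-syn != 0'
...
# The printed TCP options of the SYN and SYN/ACK contain entries such as
# "wscale 7": that is the scaling factor each side announced.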

The largest supported scaling factor is 14. This allows TCP window sizes
of up to one Gigabyte.

Window scaling downsides

Window scaling can cause data corruption in very special cases. Before you rush to disable the option: this corruption is impossible under normal circumstances, and there is also a solution in place that prevents it. Unfortunately, some people disable that solution without realizing its relationship with window scaling. First, let’s have a look at the actual problem that needs to be addressed. Imagine the following sequence of events:

  1. The sender transmits segments: s_1, s_2, s_3, … s_n
  2.  The receiver sees: s_1, s_3, .. s_n and sends an acknowledgment for s_1.
  3.  The sender considers s_2 lost and sends it a second time. It also sends new data contained in segment s_n+1.
  4.  The receiver then sees: s_2, s_n+1, s_2: the packet s_2 is received twice.

This can happen for example when a sender triggers re-transmission too early. Such erroneous re-transmits are never a problem in normal cases, even with window scaling. The receiver will just discard the duplicate.

Old data to new data

The TCP sequence number can be at most 4 gigabytes. If it becomes larger than this, the sequence wraps back to 0 and then increases again. This is not a problem in itself, but if this occurs fast enough then the above scenario can create an ambiguity.

If a wrap-around occurs at the right moment, the sequence number s_2 (the re-transmitted packet) can already be larger than s_n+1. Thus, in the last step (4), the receiver may interpret this as: s_2, s_n+1, s_n+m, i.e. it could view the ‘old’ packet s_2 as containing new data.

Normally, this won’t happen because a ‘wrap around’ occurs only every couple of seconds or minutes, even on high bandwidth links. The interval between the original and an unneeded re-transmit will be a lot smaller.

For example, with a transmit speed of 50 megabytes per second, a duplicate needs to arrive more than one minute late for this to become a problem. The sequence numbers do not wrap fast enough for small delays to induce this problem.

Once TCP approaches ‘Gigabyte per second’ throughput rates, the sequence numbers can wrap so fast that even a delay of only a few milliseconds can create duplicates that TCP cannot detect anymore. By solving the problem of the too small receive window, TCP can now be used for network speeds that were impossible before, and that creates a new, albeit rare, problem. To safely use Gigabytes/s speeds in environments with very low RTT, receivers must be able to detect such old duplicates without relying on the sequence number alone.

TCP time stamps

A best-before date

In the most simple terms, TCP timestamps just add a time stamp to the packets to resolve the ambiguity caused by very fast sequence number wrap around. If a segment appears to contain new data, but its timestamp is older than the last in-window packet, then the sequence number has wrapped and the “new” packet is actually an older duplicate. This resolves the ambiguity of re-transmits even for extreme corner cases.

But this extension allows for more than just detection of old packets. The other major feature made possible by TCP timestamps are more precise round-trip time measurements (RTTm).

A need for precise round-trip-time estimation

When both peers support timestamps,  every TCP segment carries two additional numbers: a timestamp value and a timestamp echo.

 
TCP Timestamp option (TSopt): Kind: 8, Length: 10
+-------+----+----------------+-----------------+
|Kind=8 | 10 |TS Value (TSval)|EchoReply (TSecr)|
+-------+----+----------------+-----------------+
    1      1         4                4

An accurate RTT estimate is crucial for TCP performance. TCP automatically re-sends data that was not acknowledged. Re-transmission is triggered by a timer: If it expires, TCP considers one or more packets that it has not yet received an acknowledgment for to be lost. They are then sent again.

But “has not been acknowledged” does not mean the segment was lost. It is also possible that the receiver did not send an acknowledgment so far or that the acknowledgment is still in flight. This creates a dilemma: TCP must wait long enough for such slight delays to not matter, but it can’t wait for too long either.

Low versus high network delay

In networks with a high delay, if the timer fires too fast, TCP frequently wastes time and bandwidth with unneeded re-sends.

In networks with a low delay however, waiting for too long causes reduced throughput when a real packet loss occurs. Therefore, the timer should expire sooner in low-delay networks than in those with a high delay. The TCP retransmission timeout therefore cannot use a fixed constant value. It needs to adapt the value based on the delay that it experiences in the network.

Round-trip time measurement

TCP picks a retransmit timeout that is based on the expected round-trip time (RTT). The RTT is not known in advance. RTT is estimated by measuring the delta between the time a segment is sent and the time TCP receives an acknowledgment for the data carried by that segment.

This is complicated by several factors.

  • For performance reasons, TCP does not generate a new acknowledgment for every packet it receives. It waits  for a very small amount of time: If more segments arrive, their reception can be acknowledged with a single ACK packet. This is called “cumulative ACK”.
  • The round-trip-time is not constant. This is because of a myriad of factors. For example, a client might be a mobile phone switching to different base stations as it’s moved around. It’s also possible that packet switching takes longer when link or CPU utilization increases.
  • a packet that had to be re-sent must be ignored during computation. This is because the sender cannot tell if the ACK for the re-transmitted segment is acknowledging the original transmission (that arrived after all) or the re-transmission.

This last point is significant: When TCP is busy recovering from a loss, it may only receive ACKs for re-transmitted segments. It then can’t measure (update) the RTT during this recovery phase. As a consequence it can’t adjust the re-transmission timeout, which then keeps growing exponentially. That’s a pretty specific case (it assumes that other mechanisms such as fast retransmit or SACK did not help). Nevertheless, with TCP timestamps, RTT evaluation is done even in this case.

If the extension is used, the peer reads the timestamp value from the TCP segments extension space and stores it locally. It then places this value in all the segments it sends back as the “timestamp echo”.

Therefore the option carries two timestamps: its sender’s own timestamp and the most recent timestamp it received from the peer. The “echo timestamp” is used by the original sender to compute the RTT. It’s the delta between its current timestamp clock and what was reflected in the “timestamp echo”.
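On Linux, the per-connection RTT estimate and the retransmission timeout derived from it can be inspected with the ss utility; the exact set of fields printed depends on the ss version:

# -t selects TCP sockets, -i prints internal TCP state such as rtt and rto
$ ss -ti
...
# Look for fields like "rtt:3.5/1.75" (smoothed RTT and its variance, in
# milliseconds) and "rto:204" (the current retransmission timeout).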

Other timestamp uses

TCP timestamps even have other uses beyond PAWS (Protection Against Wrapped Sequence numbers, the duplicate detection described above) and RTT measurements. For example, it becomes possible to detect if a retransmission was unnecessary. If the acknowledgment carries an older timestamp echo, the acknowledgment was for the initial packet, not the re-transmitted one.

Another, more obscure use case for TCP timestamps is related to the TCP syn cookie feature.

TCP connection establishment on server side

When connection requests arrive faster than a server application can accept the new incoming connection, the connection backlog will eventually reach its limit. This can occur because of a mis-configuration of the system or a bug in the application. It also happens when one or more clients send connection requests without reacting to the ‘syn ack’ response. This fills the connection queue with incomplete connections. It takes several seconds for these entries to time out. This is called a “syn flood attack”.

TCP timestamps and TCP syn cookies

Some TCP stacks allow accepting new connections even if the queue is full. When this happens, the Linux kernel will print a prominent message to the system log:

Possible SYN flooding on port P. Sending Cookies. Check SNMP counters.

This mechanism bypasses the connection queue entirely. The information that is normally stored in the connection queue is encoded into the TCP sequence number of the SYN/ACK response. When the ACK comes back, the queue entry can be rebuilt from the sequence number.

The sequence number only has limited space to store information. Connections established using the ‘TCP syn cookie’ mechanism cannot support TCP options for this reason.

The TCP options that are common to both peers can be stored in the timestamp, however. The ACK packet reflects the value back in the timestamp echo field, which allows the agreed-upon TCP options to be recovered as well. Otherwise, cookie connections are restricted to the standard 64 kbyte receive window.
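Whether the kernel may fall back to syn cookies at all is controlled by a sysctl, enabled by default on most distributions:

# 1 means syn cookies are only used when the accept queue overflows
$ sysctl net.ipv4.tcp_syncookies
net.ipv4.tcp_syncookies = 1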

Common myths – timestamps are bad for performance

Unfortunately some guides recommend disabling TCP timestamps to reduce the number of times the kernel needs to access the timestamp clock to get the current time. This is not correct. As explained before, RTT estimation is a necessary part of TCP. For this reason, the kernel always takes a microsecond-resolution time stamp when a packet is received/sent.

Linux re-uses the clock timestamp taken for the RTT estimation for the remainder of the packet processing step. This also avoids the extra clock access to add a timestamp to an outgoing TCP packet.

The entire timestamp option only requires 10 bytes of TCP option space in each packet; this is not a significant decrease in the space available for packet payload.

Common myths – timestamps are a security problem

Some security audit tools and (older) blog posts recommend disabling TCP timestamps because they allegedly leak system uptime: this would then allow estimating the patch level of the system/kernel. This was true in the past: the timestamp clock is based on a constantly increasing value that starts at a fixed value on each system boot. A timestamp value would give an estimate as to how long the machine has been running (uptime).

As of Linux 4.12 TCP timestamps do not reveal the uptime anymore. All timestamp values sent use a peer-specific offset. Timestamp values also wrap every 49 days.

In other words, connections from or to address “A” see a different timestamp than connections to the remote address “B”.

Run sysctl net.ipv4.tcp_timestamps=2 to disable the randomization offset. This makes analyzing packet traces recorded by tools like wireshark or tcpdump easier – packets sent from the host then all have the same clock base in their TCP option timestamp.  For normal operation the default setting should be left as-is.

Selective Acknowledgments

TCP has problems if several packets in the same window of data are lost. This is because TCP Acknowledgments are cumulative, but only for packets
that arrived in-sequence. Example:

  • Sender transmits segments s_1, s_2, s_3, … s_n
  • Sender receives ACK for s_2
  • This means that both s_1 and s_2 were received and the
    sender no longer needs to keep these segments around.
  • Should s_3 be re-transmitted? What about s_4? s_n?

The sender waits for a “retransmission timeout” or ‘duplicate ACKs’ for s_2 to arrive. If a retransmit timeout occurs or several duplicate ACKs for s_2 arrive, the sender transmits s_3 again.

If the sender receives an acknowledgment for s_n, s_3 was the only missing packet. This is the ideal case. Only the single lost packet was re-sent.

If the sender receives an acknowledged segment that is smaller than s_n, for example s_4, that means that more than one packet was lost. The
sender needs to re-transmit the next segment as well.

Re-transmit strategies

It’s possible to just repeat the same sequence: re-send the next packet until the receiver indicates it has processed all packets up to s_n. The problem with this approach is that it requires one RTT until the sender knows which packet it has to re-send next. While such a strategy avoids unnecessary re-transmissions, it can take several seconds or more until TCP has re-sent the entire window of data.

The alternative is to re-send several packets at once. This approach allows TCP to recover more quickly when several packets have been lost. In the above example TCP re-sends s_3, s_4, s_5, .. while it can only be sure that s_3 has been lost.

From a latency point of view, neither strategy is optimal. The first strategy is fast if only a single packet has to be re-sent, but takes too long when multiple packets were lost.

The second one is fast even if multiple packets have to be re-sent, but at the cost of wasting bandwidth. In addition, such a TCP sender could have transmitted new data already while it was doing the unneeded re-transmissions.

With the available information TCP cannot know which packets were lost. This is where TCP Selective Acknowledgments (SACK) come in. Just like window scaling and timestamps, it is another optional, yet very useful TCP feature.

The SACK option

 
   TCP Sack-Permitted Option: Kind: 4, Length 2
   +---------+---------+
   | Kind=4  | Length=2|
   +---------+---------+

A sender that supports this extension includes the “Sack Permitted” option in the connection request. If both endpoints support the extension, then a peer that detects a packet is missing in the data stream can inform the sender about this.

 
   TCP SACK Option: Kind: 5, Length: Variable
                     +--------+--------+
                     | Kind=5 | Length |
   +--------+--------+--------+--------+
   |      Left Edge of 1st Block       |
   +--------+--------+--------+--------+
   |      Right Edge of 1st Block      |
   +--------+--------+--------+--------+
   |                                   |
   /            . . .                  /
   |                                   |
   +--------+--------+--------+--------+
   |      Left Edge of nth Block       |
   +--------+--------+--------+--------+
   |      Right Edge of nth Block      |
   +--------+--------+--------+--------+

A receiver that received everything up to s_2, followed by s_5…s_n, will include a SACK block when it sends the acknowledgment for s_2:

 
                +--------+-------+
                | Kind=5 |   10  |
+--------+------+--------+-------+
| Left edge: s_5                 |
+--------+--------+-------+------+
| Right edge: s_n                |
+--------+-------+-------+-------+

This tells the sender that segments up to s_2 arrived in-sequence, but it also lets the sender know that the segments s_5 to s_n were received as well. The sender can then re-transmit the two missing segments (s_3 and s_4) and proceed to send new data.

The mythical lossless network

In theory, SACK provides no advantage if the connection cannot experience packet loss, or if the connection has such a low latency that even waiting one full RTT does not matter.

In practice lossless behavior is virtually impossible to ensure.
Even if the network and all its switches and routers have ample bandwidth and buffer space packets can still be lost:

  • The host operating system might be under memory pressure and drop
    packets. Remember that a host might be handling tens of thousands of packet streams simultaneously.
  • The CPU might not be able to drain incoming packets from the network interface fast enough. This causes packet drops in the network adapter itself.
  • If TCP timestamps are not available even a connection with a very small RTT can stall momentarily during loss recovery.

Use of SACK does not increase the size of TCP packets unless a connection experiences packet loss. Because of this, there is hardly a reason to disable this feature. Almost all TCP stacks support SACK – it is typically only absent on low-power IoT-like devices that are not doing TCP bulk data transfers.

When a Linux system accepts a connection from such a device, TCP automatically disables SACK for the affected connection.

Summary

The three TCP extensions examined in this post are all related to TCP performance and should best be left to the default setting: enabled.

The TCP handshake ensures that only extensions that are understood by both parties are used, so there is never a need to disable an extension globally just because a peer might not support it.

Turning these extensions off results in severe performance penalties, especially in case of TCP Window Scaling and SACK. TCP timestamps can be disabled without an immediate disadvantage, however there is no compelling reason to do so anymore. Keeping them enabled also makes it possible to support TCP options even when SYN cookies come into effect.
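On a Linux system, the state of all three extensions can be checked with sysctl; the values shown below are the expected defaults, and an extension that was switched off in the past can be turned back on the same way:

$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1
# Re-enable an extension that was disabled, for example:
$ sudo sysctl -w net.ipv4.tcp_sack=1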


Play Stadia Games from Fedora

Do you enjoy playing games on your Fedora system? You might be interested to know that Stadia is available to play via a Google Chrome browser on your Fedora desktop. Additionally, Stadia is free for two months starting April 8th. Follow these simple steps to install the Google Chrome web browser in Fedora and enjoy the new world of cloud-based gaming on your Fedora Linux PC!

  1. Go to https://www.google.com/chrome using any available web browser and click the big blue button labeled Download Chrome.
  2. Select the 64 bit .rpm (For Fedora/openSUSE) package format and then click Accept and Install.
  3. You should be presented with a prompt asking what you want to do with the file. Choose the Open with Software Install option if you see this prompt.
  4. Click Install in the Software Install application to install Google Chrome. You may be prompted for your password to authorize the installation.

If you don’t see the Open with Software Install option at step 3, choose to save the installer to your Downloads folder instead. Once you have the installer downloaded, enter the following command in a terminal using sudo:

$ sudo dnf install ~/Downloads/google-chrome-*.rpm

Once you have Google Chrome installed, use it to browse to https://stadia.google.com/ and follow the directions there to create your user profile and try out the games.

Chrome installation demonstration

Chrome installation on Fedora 31

Additional resources


Photo by Derek Story on Unsplash.


Managing JBoss EAP/Wildfly using Jcliff

Systems management can be a difficult task. Not only does one need to determine what the end state should be but, more importantly, how to ensure systems attain and remain at this state. Doing so in an automated fashion is just as critical, because there may be a large number of target instances. In regard to enterprise Java middleware application servers, these instances are typically configured using a set of XML based files. Although these files may be manually configured, most application servers have a command-line based tool or set of tools that abstracts the end user from having to worry about the underlying configuration. WebSphere Liberty includes a variety of tools to manage these resources, whereas JBoss contains the jboss-cli tool.

Although each tool accomplishes its utilitarian use case and allows for proper server management, it fails to adhere to one of the principles of automation and configuration management: idempotence. Ensuring the desired state does not equate to executing the same action with every iteration; additional intelligence must be introduced. Along with idempotence, another core principle of configuration management is that values be expressed declaratively and stored in a version control system.

Jcliff is a Java-based utility that is built on top of the JBoss command-line interface and allows for the desired intent for the server configuration to be expressed declaratively, which in turn can be stored in a version control system. We’ll provide an overview of the Jcliff utility including inherent benefits, installation options, and several examples showcasing the use.

The problem space

Before beginning to work with Jcliff, one must be cognizant of the underlying JBoss architecture. The configuration of the JBoss server can be expressed using Dynamic Model Representation (DMR) notation. The following is an example of DMR:

{ "system-property" => { "foo" => "bar", "bah" => "gah" } } 

The DMR example above describes several Java system properties that will be stored within the JBoss configuration. These properties can be added using the JBoss CLI by executing the following command:

/system-property=foo:add(value=bar)
/system-property=bah:add(value=gah)

The challenge is that the same commands cannot be executed more than once. Otherwise, an error similar to the following is produced.

{ "outcome" => "failed", "failure-description" => "WFLYCTL0212: Duplicate resource [(\"system-property\" => \"foo\")]", "rolled-back" => true } 

Where Jcliff excels is that it includes the necessary intelligence to determine the current state of the JBoss configuration and then apply only the configuration changes that are needed. This is critical to adopting proper configuration management when working with JBoss.

Installing Jcliff

With a baseline understanding of Jcliff, let’s discuss the methods in which one can obtain the utility. First, Jcliff can be built from source from the project repository, given that it’s a freely open source project. Alternatively, Jcliff can be installed using popular software packaging tools in each of the primary operating system families.

Linux (rpm)

Jcliff can be installed on any Linux distribution capable of consuming an rpm through a package manager.

First, install the yum repository:

$ cat /etc/yum.repos.d/jcliff.repo
[jcliff]
baseurl = http://people.redhat.com/~rpelisse/jcliff.yum/
gpgcheck = 0
name = JCliff repository

Once this repository has been added, JCliff can be installed by using Yum or Dnf:

Using yum:

$ sudo yum install jcliff 

Or using dnf:

$ sudo dnf install jcliff 

Manual installation

Jcliff can also be installed manually without the need to leverage a package manager. This is useful for those on Linux distributions that cannot consume rpm packages. Simply download and unpack the archive from the release artifact from the project repository.

$ curl -O \
  https://github.com/bserdar/jcliff/releases/download/v2.12.1/jcliff-2.12.1-dist.tar.gz
$ tar xzvf jcliff-2.12.1-dist.tar.gz

Place the resulting Jcliff folder in the destination of your choosing. This location is known as JCLIFF_HOME. Set this environment variable and add the tool's bin directory to the path as described below:

$ export JCLIFF_HOME=/path/to/jcliff-2.12.1/
$ export PATH=${JCLIFF_HOME}/bin:${PATH}

At this point, you should be able to execute the jcliff command.

OSX

While users on OSX can take advantage of the manual installation option, those making use of the brew package manager can use this tool as well.

Execute the following commands to install Jcliff using Brew:

$ brew tap redhat-cop/redhat-cop
$ brew install redhat-cop/redhat-cop/jcliff

Windows

Fear not, Windows users; you can also make use of Jcliff without having to compile from source. Windows, in the same vein as OSX, does not have an official package manager, but Chocolatey has been given this role.

Execute the following command to install Jcliff using Chocolatey:

$ choco install jcliff 

Using Jcliff

With Jcliff properly installed on your machine, let’s walk through a simple example to demonstrate the use of the tool. As discussed previously, Jcliff makes use of files that describe the target configuration. At this point, you may have questions like: What is the format of these configurations, and where do I begin?

Let’s take a simple example and look into adding a new system property to the JBoss configuration. Launch an instance of JBoss and connect using the JBoss command-line interface:

$ ${JBOSS_HOME}/bin/jboss-cli.sh --connect
[standalone@localhost:9990 /] ls /system-property
[standalone@localhost:9990 /]

Now, use Jcliff to update the configuration. First, we’ll need to create a Jcliff rule. The rule resembles DMR format and appears similar to the following:

$ cat jcliff-prop
{ "system-property" => {
    "jcliff.enabled" => "true",
} }

Then we can simply ask JCliff to run this script against our JBoss server:

$ jcliff jcliff-prop
Jcliff version 2.12.4
2019-10-02 17:03:40:0008: /core-service=platform-mbean/type=runtime:read-attribute(name=system-properties)
2019-10-02 17:03:41:0974: /system-property=jcliff.enabled:add(value="true")

Use the JBoss CLI to verify the changes have been applied.

$ "${JBOSS_HOME}/bin/jboss-cli.sh" --connect --command=/system-property=jcliff.enabled:read-resource
{
    "outcome" => "success",
    "result" => {"value" => "true"}
}

To demonstrate how Jcliff handles repetitive executions, run the previous Jcliff command again. Inspect the output.

$ jcliff jcliff-prop
Jcliff version 2.12.4
2019-10-02 17:05:34:0107: /core-service=platform-mbean/type=runtime:read-attribute(name=system-properties)

Notice that only one command was executed against the JBoss server instead of two? This is the result of Jcliff assessing the current state within JBoss and determining that no further action was necessary to reach the desired state. Mission accomplished.
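
From here you could experiment by growing the rule file. Assuming the system-property rule accepts multiple entries, as its map-style syntax suggests, a hypothetical rule setting two properties (both names below are purely illustrative) might look like this:

$ cat jcliff-props
{
  "system-property" => {
    "jcliff.enabled" => "true",
    "app.environment" => "staging"
  }
}

Running Jcliff against such a file should again only issue CLI commands for whatever is not already present on the server.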

Next steps

Although the preceding scenario was not overly complex, you should now have the knowledge necessary to understand the capabilities and functionality of the Jcliff utility as well as the benefits afforded through declarative configurations and automation. When building out enterprise-grade systems, however, Jcliff would not be executed manually. You would want to integrate the utility into a proper configuration management tool that employs many of the automation and configuration principles described earlier. Fortunately, Jcliff has been integrated into several popular configuration management tools.

In an upcoming article, we’ll provide an overview of the configuration management options available with Jcliff, along with examples that will allow you to quickly build out enterprise-grade confidence with Jcliff.


The post Managing JBoss EAP/Wildfly using Jcliff appeared first on Red Hat Developer.

Posted on Leave a comment

Use Postfix to get email from your Fedora system

Communication is key. Your computer might be trying to tell you something important. But if your mail transport agent (MTA) isn’t properly configured, you might not be getting the notifications. Postfix is an MTA that’s easy to configure and known for a strong security record. Follow these steps to ensure that email notifications sent from local services will get routed to your internet email account through the Postfix MTA.

Install packages

Use dnf to install the required packages (you configured sudo, right?):

$ sudo -i
# dnf install postfix mailx

If you previously had a different MTA configured, you may need to set Postfix to be the system default. Use the alternatives command to set your system default MTA:

$ sudo alternatives --config mta
There are 2 programs which provide 'mta'.

  Selection    Command
-----------------------------------------------
*+ 1           /usr/sbin/sendmail.sendmail
   2           /usr/sbin/sendmail.postfix
Enter to keep the current selection[+], or type selection number: 2

Create a password_maps file

You will need to create a Postfix lookup table entry containing the email address and password of the account that you want to use for sending email:

# MY_EMAIL_ADDRESS=[email protected]
# MY_EMAIL_PASSWORD=abcdefghijklmnop
# MY_SMTP_SERVER=smtp.gmail.com
# MY_SMTP_SERVER_PORT=587
# echo "[$MY_SMTP_SERVER]:$MY_SMTP_SERVER_PORT $MY_EMAIL_ADDRESS:$MY_EMAIL_PASSWORD" >> /etc/postfix/password_maps
# chmod 600 /etc/postfix/password_maps
# unset MY_EMAIL_PASSWORD
# history -c

If you are using a Gmail account, you’ll need to configure an “app password” for Postfix, rather than using your Gmail password. See “Sign in using App Passwords” for instructions on configuring an app password.
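
For reference, the entry appended by the echo command above follows the format [server]:port address:password, so with placeholder credentials the lookup table would contain a single line like this:

# placeholder values; substitute your real address and app password
[smtp.gmail.com]:587 user@example.com:abcdefghijklmnop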

Next, you must run the postmap command against the Postfix lookup table to create or update the hashed version of the file that Postfix actually uses:

# postmap /etc/postfix/password_maps

The hashed version will have the same file name but it will be suffixed with .db.
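
For instance, listing the files after running postmap should show the plain text source and its hashed counterpart side by side:

# ls /etc/postfix/password_maps*
/etc/postfix/password_maps  /etc/postfix/password_maps.db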

Update the main.cf file

Update Postfix’s main.cf configuration file to reference the Postfix lookup table you just created. Edit the file and add these lines.

relayhost = smtp.gmail.com:587
smtp_tls_security_level = verify
smtp_tls_mandatory_ciphers = high
smtp_tls_verify_cert_match = hostname
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/password_maps

The example assumes you’re using Gmail for the relayhost setting, but you can substitute the correct hostname and port for the mail host to which your system should hand off mail for sending.

For the most up-to-date details about the above configuration options, see the man page:

$ man postconf.5

Enable, start, and test Postfix

After you have updated the main.cf file, enable and start the Postfix service:

# systemctl enable --now postfix.service

You can then exit your sudo session as root using the exit command or Ctrl+D. You should now be able to test your configuration with the mail command:

$ echo 'It worked!' | mail -s "Test: $(date)" [email protected]

Update services

If you have services like logwatch, mdadm, fail2ban, apcupsd or certwatch installed, you can now update their configurations so that their email notifications will go to your internet email address.

Optionally, you may want to configure all email that is sent to your local system’s root account to go to your internet email address. Add this line to the /etc/aliases file on your system (you’ll need to use sudo to edit this file, or switch to the root account first):

root: [email protected]

Now run this command to re-read the aliases:

# newaliases
  • TIP: If you are using Gmail, you can add an alpha-numeric mark between your username and the @ symbol as demonstrated above to make it easier to identify and filter the email that you will receive from your computer(s).
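
To confirm that mail addressed to root is now forwarded to your internet email address, you might send a quick test message to the local root account:

$ echo 'Root alias test' | mail -s "Root alias test: $(date)" root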

Troubleshooting

View the mail queue:

$ mailq

Clear all email from the queues:

# postsuper -d ALL

Filter the configuration settings for interesting values:

$ postconf | grep "^relayhost\|^smtp_"

View the postfix/smtp logs:

$ journalctl --no-pager -t postfix/smtp

Reload postfix after making configuration changes:

$ systemctl reload postfix

Photo by Sharon McCutcheon on Unsplash.

Posted on Leave a comment

Play Windows games on Fedora with Steam Play and Proton

Some weeks ago, Steam announced a new addition to Steam Play with Linux support for Windows games using Proton, a fork of WINE. This capability is still in beta, and not all games work. Here are some more details about Steam and Proton.

According to the Steam website, there are new features in the beta release:

  • Windows games with no Linux version currently available can now be installed and run directly from the Linux Steam client, complete with native Steamworks and OpenVR support.
  • DirectX 11 and 12 implementations are now based on Vulkan, which improves game compatibility and reduces performance impact.
  • Fullscreen support has been improved. Fullscreen games seamlessly stretch to the desired display without interfering with the native monitor resolution or requiring the use of a virtual desktop.
  • Improved game controller support. Games automatically recognize all controllers supported by Steam. Expect more out-of-the-box controller compatibility than even the original version of the game.
  • Performance for multi-threaded games has been greatly improved compared to vanilla WINE.

Installation

If you’re interested in trying Steam with Proton out, just follow these easy steps. (Note that you can ignore the first steps to enable the Steam Beta if you have the latest updated version of Steam installed. In that case you no longer need Steam Beta to use Proton.)
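
These steps assume the Steam client is already installed. If it is not, note that Steam is not shipped in the default Fedora repositories; it is commonly installed from the RPM Fusion nonfree repository, roughly as follows (adjust for your Fedora release):

$ sudo dnf install \
  https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm
$ sudo dnf install steam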

Open up Steam and log in to your account. This example screenshot shows support for only 22 games before enabling Proton.

Now click on the Steam option at the top of the client. This displays a drop-down menu. Then select Settings.

Now the Settings window pops up. Select the Account option and, next to Beta participation, click on Change.

Now change None to Steam Beta Update.

Click on OK and a prompt asks you to restart.

Let Steam download the update. This can take a while depending on your internet speed and computer resources.

After restarting, go back to the Settings window. This time you’ll see a new option. Make sure the check boxes for Enable Steam Play for supported titles, Enable Steam Play for all titles and Use this tool instead of game-specific selections from Steam are enabled. The compatibility tool should be Proton.

The Steam client asks you to restart. Do so, and once you log back into your Steam account, your game library for Linux should be extended.

Installing a Windows game using Steam Play

Now that you have Proton enabled, install a game. Select the title you want and you’ll find the process is similar to installing a normal game on Steam, as shown in these screenshots.

After the game is done downloading and installing, you can play it.

Some games may be affected by the beta nature of Proton. The game in this example, Chantelise, had no audio and a low frame rate. Keep in mind this capability is still in beta and Fedora is not responsible for results. If you’d like to read further, the community has created a Google doc with a list of games that have been tested.