
Podcast: Tackling the toughest challenges at the intersection of tech and society

Subscribe
Amazon | Apple | Google | iHeartRadio | Spotify | Stitcher


About

Microsoft President and Vice Chair Brad Smith speaks with leaders in government, business and culture to explore the world’s most critical challenges at the intersection of technology and society.

As a 30-year veteran of an industry driven by disruption, Brad Smith hosts candid conversations with his guests that examine, reframe and explore potential solutions to the digital issues shaping our world today, including cybersecurity, privacy, digital inclusion, environmental sustainability, artificial intelligence and human rights.


Contribute at the Fedora Test Week for Kernel 5.10

The kernel team is working on final integration for kernel 5.10. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, January 04, 2021 through Monday, January 11, 2021. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results. We have a document that walks through all of the steps.
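
When you log results, it helps to record exactly which kernel you actually tested. A quick way to check from a terminal (a minimal sketch using standard Fedora commands; the version strings on your system will differ):

$ # show the kernel release you are currently running
$ uname -r
$ # list all kernel packages installed on the system
$ rpm -q kernel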

Happy testing, and we hope to see you on test day.


UnrealCLR Awarded an Epic MegaGrant

The open source UnrealCLR project was just awarded an Epic MegaGrant. The Epic MegaGrant program was first announced at GDC 2019 and consists of a $100M fund supporting game and media development. Previous recipients have included Blender, Godot, RayLib, Laigter, ArmorPaint, Krita and more.

The UnrealCLR project (which we previously covered, including a small getting-started tutorial) brings CLR (Common Language Runtime) support to Unreal Engine. In a nutshell, this enables C# and F# developers to work in those languages in Unreal, and even provides debugging and blueprint integration support. Even better, it is implemented as a plugin, so you do not have to build UE4 from source code. In their own words, UnrealCLR is described as:

UnrealCLR is a plugin which natively integrates .NET host into the Unreal Engine with the Common Language Runtime for direct execution of managed code to build a game/application logic using the full power of C# 9.0, F# 5.0, and .NET facilities with engine API. The project is aimed at stability, performance, and maintainability.

Details of the MegaGrant were announced on Twitter.

Congratulations to the UnrealCLR team! UnrealCLR is an open source project that is available here on GitHub under the LGPL license. You can learn more about UnrealCLR and the Epic MegaGrant program in the video below. If you want to get started with UnrealCLR we recommend you start here for more details.


Godot 3.2.4 Beta 4 Released

The Godot team has just released a new version of Godot 3.2.4, beta 4. We have already discussed several of the recent improvements in the 3.2.4 release, including 2D sprite batching and the new improved FBX importer. In addition to further improvements in those areas and various bug fixes, the beta 4 release brings a few new features to the table.

Details from the Godot Engine blog:

In particular, this build adds optional GDNative support to the HTML5 target, on top of the pre-existing optional multithreading support. The HTML5 export templates now come in three flavors which you can select in the export preset: normal, threads enabled and GDNative enabled. Multithreading and dynamic linking (GDNative) can’t be used at the same time due to current WebAssembly limitations.
Note: Threads enabled and GDNative enabled templates are only available for standard builds for now, as there are other issues to solve to make them work with Mono.

Additionally, beta 4 adds support for MP3 loading and playback! Until recently, the MP3 audio format was patent-encumbered and could therefore not be included in Godot, but the last patent expired in 2017, so an MP3 loader and decoder could finally be implemented.

There are also a number of fixes to the rewritten FBX importer which should improve compatibility, so if you ran into issues with it in previous builds, make sure to retry your models!

You can learn more about Godot 3.2.4 in the video below, including a quick tutorial showing how to use MP3s in your Godot game.


Unigine 2.13 Released

Unigine just released version 2.13 of its engine. The new release includes an all-new GPU-based lightmapping tool, a new terrain generation tool, improved clouds, better lighting and a whole lot more. Since Unigine 2.11, a free community version has been available, making Unigine a lot more viable for indie game developers.

Highlights of the release include:

  • GPU Lightmapper tool
  • Introducing SRAA (Subpixel Reconstruction Anti-Aliasing)
  • Upgraded 3D volumetric clouds
  • Performance optimizations for vast forest rendering
  • New iteration of the terrain generation tool with online GIS sources support (experimental)
  • Adaptive hardware tessellation for the mesh_base material
  • Project Build tool: extended functionality and a standalone console-based version
  • New samples (LiDAR sensor, night city lights, helicopter winch)
  • Introducing 3D scans library

For further information on the release be sure to check the much more in-depth release notes or watch the video below.


Incremental backups with Btrfs snapshots

Snapshots are an interesting feature of Btrfs. A snapshot is a copy of a subvolume. Taking a snapshot is immediate; however, unlike an rsync or a cp, a snapshot does not occupy space as soon as it is created.

Editor’s note: From the Btrfs wiki – A snapshot is simply a subvolume that shares its data (and metadata) with some other subvolume, using Btrfs’s COW capabilities.

Occupied space will grow as data changes in the original subvolume or, if it is writeable, in the snapshot itself. Files added or modified in the subvolume, and files deleted from it, still reside in the snapshots. This is a convenient way to perform backups.
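
Before taking snapshots, you can confirm that the directory you want to snapshot is actually a Btrfs subvolume. A quick check, assuming the btrfs-progs package (which provides the btrfs command used throughout this article) is installed:

$ sudo btrfs subvolume show /home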

Using snapshots for backups

A snapshot resides on the same disk where the subvolume is located. You can browse it like a regular directory and recover a copy of a file as it was when the snapshot was taken. That said, a snapshot on the same disk as the snapshotted subvolume is not an ideal backup strategy: if the hard disk fails, the snapshots are lost as well. An interesting feature of snapshots is the ability to send them to another location. The snapshot can be sent to an external hard drive or to a remote system via SSH (the destination filesystems need to be formatted as Btrfs as well). To do this, the commands btrfs send and btrfs receive are used.

Taking a snapshot

In order to use the send and receive commands, it is important to create the snapshot as read-only; snapshots are writeable by default.

The following command will take a snapshot of the /home subvolume. Note the -r flag for readonly.

sudo btrfs subvolume snapshot -r /home /.snapshots/home-day1

Instead of day1, the snapshot name can be the current date, like home-$(date +%Y%m%d). Snapshots look like regular subdirectories. You can place them wherever you like. The directory /.snapshots could be a good choice to keep them neat and to avoid confusion.
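
For example, a minimal sketch that creates that directory and takes a date-stamped, read-only snapshot (the path is just a convention; adjust it to taste):

$ sudo mkdir -p /.snapshots
$ sudo btrfs subvolume snapshot -r /home /.snapshots/home-$(date +%Y%m%d)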

Editor’s note: Snapshots will not take recursive snapshots of themselves. If you create a snapshot of a subvolume, every subvolume or snapshot that the subvolume contains is mapped to an empty directory of the same name inside the snapshot.

Backup using btrfs send

In this example, the destination Btrfs volume on the USB drive is mounted at /run/media/user/mydisk/bk. The command to send the snapshot to the destination is:

sudo btrfs send /.snapshots/home-day1 | sudo btrfs receive /run/media/user/mydisk/bk

This is called initial bootstrapping, and it corresponds to a full backup. This task will take some time, depending on the size of the /home directory. Obviously, subsequent incremental sends will take a shorter time.
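
One way to confirm that the snapshot arrived is to list the subvolumes present on the destination drive:

$ sudo btrfs subvolume list /run/media/user/mydisk/bk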

Incremental backup

Another useful feature of snapshots is the ability to perform the send task in an incremental way. Let’s take another snapshot.

sudo btrfs subvolume snapshot -r /home /.snapshots/home-day2

In order to perform the send task incrementally, you need to specify the previous snapshot as a base, and this snapshot must exist on both the source and the destination. Please note the -p option.

sudo btrfs send -p /.snapshots/home-day1 /.snapshots/home-day2 | sudo btrfs receive /run/media/user/mydisk/bk

And again (the day after):

sudo btrfs subvolume snapshot -r /home /.snapshots/home-day3
sudo btrfs send -p /.snapshots/home-day2 /.snapshots/home-day3 | sudo btrfs receive /run/media/user/mydisk/bk

Cleanup

Once the operation is complete, you can keep the snapshot. But if you perform these operations on a daily basis, you could end up with a lot of them. This could lead to confusion and potentially a lot of used space on your disks. So it is good advice to delete snapshots you think you don’t need anymore.

Keep in mind that in order to perform an incremental send, you need at least the last snapshot. This snapshot must be present on both the source and the destination.

sudo btrfs subvolume delete /.snapshots/home-day1
sudo btrfs subvolume delete /.snapshots/home-day2
sudo btrfs subvolume delete /run/media/user/mydisk/bk/home-day1
sudo btrfs subvolume delete /run/media/user/mydisk/bk/home-day2

Note: the day 3 snapshot was preserved on both the source and the destination. In this way, tomorrow (day 4), you can perform a new incremental btrfs send.

As some final advice: if the USB drive has plenty of space, you could consider maintaining multiple snapshots on the destination, while keeping only the last one on the source disk.
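
Putting the pieces together, here is a minimal sketch of a daily backup script (run it with sudo; it assumes the date-stamped naming scheme suggested above, GNU date, and that yesterday’s snapshot still exists on both sides):

#!/bin/bash
# Hypothetical daily Btrfs backup routine; adjust the paths to your setup.
DEST=/run/media/user/mydisk/bk
TODAY=/.snapshots/home-$(date +%Y%m%d)
YESTERDAY=/.snapshots/home-$(date -d yesterday +%Y%m%d)

# Take today's read-only snapshot of /home
btrfs subvolume snapshot -r /home "$TODAY"

if [ -d "$YESTERDAY" ]; then
    # Incremental send, using yesterday's snapshot as the base
    btrfs send -p "$YESTERDAY" "$TODAY" | btrfs receive "$DEST"
else
    # No base snapshot available: fall back to a full (bootstrap) send
    btrfs send "$TODAY" | btrfs receive "$DEST"
fi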


Ankur Sinha: How do you Fedora?

We recently interviewed Ankur Sinha on how he uses Fedora. This is part of a series on the Fedora Magazine. The series profiles Fedora users and how they use Fedora to get things done. Contact us on the feedback form to express your interest in becoming an interviewee.

Who is Ankur Sinha?

Ankur is a computational neuroscientist who has just started his first post-doctoral fellowship at University College London, and a FLOSS enthusiast trying to spread the message of FOSS and evidence-based science. He started using Linux a decade ago, when he was introduced to it at a LUG install fest during his undergraduate degree.

Ankur loves reading:

“I read a lot and tend to get attached to characters from books quite easily. Holmes, Poirot (I’m a detective fiction fan), Francisco D’Anconia (fan of the book Atlas Shrugged, but not so much Ayn Rand’s philosophy), lots of random characters from books I’d read. I also read lots of Hindi comics as a child—Doga, Super commando Dhruv, Naagraj, and Chacha Chaudhary—loved them all!”.

As far as all-time favorite movies go, Swades comes to mind. His favorite genre is science fiction thrillers (think “The Prestige” and “Predestination”). When not busy working or engaging people on IRC channels, he enjoys listening to podcasts and classic rock.

Ankur’s favorite food is his mother’s Chhole Bhature. Otherwise, if he’s away from home, his go-tos are Butter chicken, Butter Naan, and Chilli Chicken from North Indian restaurants.

The Fedora Community

Ankur found out about Fedora after a distro-hopping phase in 2008, and he has been a Fedora user ever since. His first memory of the Fedora community is an IRC workshop on packaging fonts that the Fedora India community had organised back in 2008.

Talking to and meeting other community members has been one of the most exciting parts of the Fedora community for him. “I found this great bunch of people to hang out and geek out with! It was so much fun, and extremely educational both in terms of technical knowledge and the social/philosophical side of FOSS and life in general.”

When asked what he would change in the Fedora Project if he could change one thing, he said that he prefers “Smaller tweaks” since “Smaller tweaks also allow work to be spread out, and that really helps”. Specifically, he would like to see more discussion on the philosophy and nuances of FOSS in the community.

"Perhaps we all know it so well that we take it for granted and focus on the work that needs to be done. It’s so easy to get bogged down in the work, though, that I worry that we forget the bigger picture sometimes. The end for us is to promote FOSS, and everything we do is the means to this end. So, I worry that the means sometimes becomes the end for us — that we focus so
much on producing deliverables that we forget why we produce them."

Since he works in academia and science, Ankur would like the Fedora community (and FOSS in general) to get more involved with academic/scientific communities.
“I think we have an excellent platform to enable education and research. NeuroFedora is a start in this direction.”

He wishes that other people knew that the Fedora community is not just OS developers, but a global community, and he’d like folks to just hang out and communicate even if they’re not contributing in the traditional sense of the word.

Ankur tries to help wherever he can, especially if newbies are involved. Nowadays, he tries to focus more on NeuroFedora, as it fits well with his day job and there is so much to do in this field and in open science.

Ankur learned most of this from his more than ten years of experience in Fedora and FOSS. He had learned the theory of software development as an undergraduate, but got to experience practical implementations from his colleagues in the community. He is a firm believer that no question is a stupid question. He adds that Fedora is perfect because it gets better as you start working with it.

His advice for anyone thinking of getting involved in Fedora is to just go ahead and start. One doesn’t need to know anything at all; all of it can be learned over time. Secondly, don’t focus on tasks. Yes, that’s a good way of learning, but it is far more important to get to know the people of Fedora. As one meets more people, one learns more about how Fedora works, and one has way more fun working and learning!

Just like a lot of our community members, Ankur struggles with time constraints. His new challenge is to find more time to work on FOSS and Fedora. During his college years, it was to learn more and more.

One of the challenges Ankur faces in promoting open source is explaining to non-FOSS people that Windows and macOS aren’t the only OSes out there. He thinks that having Fedora shipped on Lenovo systems will give the community a start, as it makes Fedora and FOSS more "official".

What Hardware?

Ankur has three machines and runs Fedora 32 on each of them:

Ankur’s Desk
  • Thinkpad E490 laptop
  • a custom workstation that university IT set up for research work
  • a headless MacPro5,1
  • 2x Microsoft Sculpt Ergonomic keyboard/mouse/numpad
  • Netgear wifi extender
  • TP-Link TL-PA8033PKIT AV1300 3-Port Gigabit Passthrough Powerline Adapters
  • Moto g7 phone with Android 10

What Software?

Fedora 32 Workstation, and Fedora 32 Server on the MacPro.

  • Workstation/GNOME 3 with a few extensions: caffeine, pomodoro, syncthing
  • byobu with tmux: multiple sessions: default, work, fedora
  • taskwarrior, vit, timewarrior, gnome-pomodoro, gnome-calendar/evolution for calendars
  • neomutt with msmtp + offlineimap + notmuch for e-mail
  • vim for *everything* possible – vimrc link
  • qutebrowser, weechat, zathura, vimiv
  • syncthing + dropbox + git for syncing/version control

For research work:

  • NEST + lots of Python and Gnuplot for analysis, LaTeX for writing (stuff from NeuroFedora!)
  • inkscape + gimp + dia + freemind for figures/mind mapping
  • jabref for bibliography management

Other bits, as an occasional gamer:

  • 0ad + Endless Sky + OpenTTD!


Spam Classification with ML-Pack

Introduction

ML-Pack is a small-footprint C++ machine learning library that can be easily integrated into other programs. It is an actively developed open source project, released under a BSD-3 license. Machine learning has gained popularity due to the large amount of electronic data that can be collected. Some other popular machine learning frameworks include TensorFlow, MXNet, PyTorch, Chainer and PaddlePaddle; however, these are designed for more complex workflows than ML-Pack. On Fedora, ML-Pack is packaged by its lead developer Ryan Curtin. In addition to a command line interface, ML-Pack has bindings for Python and Julia. Here, we will focus on the command line interface, since this may be useful for system administrators to integrate into their workflows.

Installation

You can install ML-Pack on the Fedora command line using

$ sudo dnf -y install mlpack mlpack-bin

You can also install the documentation, development headers and Python bindings by using …

$ sudo dnf -y install mlpack-doc \
mlpack-devel mlpack-python3

though they will not be used in this introduction.

Example

As an example, we will train a machine learning model to classify spam SMS messages. To keep this article brief, Linux commands will not be fully explained, but you can find out more about them by using the man command; for example, for the first command used below, wget,

$ man wget

will give you information on how wget downloads files from the web and the options you can use for it.

Get a dataset

We will use an example spam dataset in Indonesian provided by Yudi Wibisono.

 
$ wget https://drive.google.com/file/d/1-stKadfTgJLtYsHWqXhGO3nTjKVFxm_Q/view
$ unzip dataset_sms_spam_bhs_indonesia_v1.zip

Pre-process dataset

We will try to classify a message as spam or ham by the number of occurrences of a word in a message. We first change the file line endings, remove line 243 which is missing a label and then remove the header from the dataset. Then, we split our data into two files, labels and messages. Since the labels are at the end of the message, the message is reversed and then the label removed and placed in one file. The message is then removed and placed in another file.

$ tr '\r' '\n' < dataset_sms_spam_v1.csv > dataset.txt
$ sed '243d' dataset.txt > dataset1.csv
$ sed '1d' dataset1.csv > dataset.csv
$ rev dataset.csv | cut -c1 | rev > labels.txt
$ rev dataset.csv | cut -c2- | rev > messages.txt
$ rm dataset.csv
$ rm dataset1.csv
$ rm dataset.txt

Machine learning works on numeric data, so we will use labels of 0 for ham and 1 for spam. The dataset contains three labels: 0, normal SMS (ham); 1, fraud (spam); and 2, promotion (spam). We will label all spam as 1, so promotions and fraud will both be labelled 1.

$ tr '2' '1' < labels.txt > labels.csv
$ rm labels.txt

The next step is to convert all text in the messages to lower case and, for simplicity, remove punctuation and any symbols that are not spaces, line endings or in the range a-z (one would need to expand this range of symbols for production use):

$ tr '[:upper:]' '[:lower:]' < messages.txt > messagesLower.txt
$ tr -Cd 'abcdefghijklmnopqrstuvwxyz \n' < messagesLower.txt > messagesLetters.txt
$ rm messagesLower.txt

We now obtain a sorted list of unique words used (this step may take a few minutes, so use nice to give it a low priority while you continue with other tasks on your computer).

$ nice -20 xargs -n1 < messagesLetters.txt > temp.txt
$ sort temp.txt > temp2.txt
$ uniq temp2.txt > words.txt
$ rm temp.txt
$ rm temp2.txt
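
As an aside, the sort and uniq steps can be combined; with GNU coreutils, the same sorted list of unique words can be produced in a single pipeline, avoiding the temporary files:

$ nice -20 xargs -n1 < messagesLetters.txt | sort -u > words.txt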

We then create a matrix in which, for each message, the frequency of each word’s occurrences is counted (more on this on Wikipedia, here and here). This requires a few lines of code, so the full script, which should be saved as ‘makematrix.sh’, is below.

#!/bin/bash
declare -a labels=() words=() letterstartind=() letterstart=()
letter=" "
i=0
lettercount=0
while IFS= read -r line; do
    labels[$((i))]=$line
    let "i++"
done < labels.csv
# Index where each initial letter starts in the sorted word list
i=0
while IFS= read -r line; do
    words[$((i))]=$line
    firstletter="$( echo $line | head -c 1 )"
    if [ "$firstletter" != "$letter" ]; then
        letterstartind[$((lettercount))]=$((i))
        letterstart[$((lettercount))]=$firstletter
        letter=$firstletter
        let "lettercount++"
    fi
    let "i++"
done < words.txt
letterstartind[$((lettercount))]=$((i))
echo "Created list of letters"
rm -f wordfrequency.txt
touch wordfrequency.txt
messagenum=0
# Count occurrences of every vocabulary word in each message
while IFS= read -r line; do
    let "messagenum++"
    declare -a wordcount=() wordarray=()
    read -r -a wordarray <<< "$line"
    for j in "${!words[@]}"; do wordcount[$((j))]=0; done
    for word in "${wordarray[@]}"; do
        firstletter="$( echo $word | head -c 1 )"
        for k in "${!letterstart[@]}"; do
            if [ "$firstletter" == "${letterstart[$((k))]}" ]; then
                for ((j=letterstartind[k]; j<letterstartind[k+1]; j++)); do
                    if [ "$word" == "${words[$((j))]}" ]; then let "wordcount[$((j))]++"; fi
                done
            fi
        done
    done
    echo "${wordcount[@]}" >> wordfrequency.txt
    echo "Processed message ""$messagenum"
done < messagesLetters.txt
# Create csv file
tr ' ' ',' < wordfrequency.txt > data.csv

Since Bash is an interpreted language, this simple implementation can take up to 30 minutes to complete. If you run the above Bash script on your primary workstation, run it as a task with low priority so that you can continue with other work while you wait:

$ nice -20 bash makematrix.sh

Once the script has finished running, split the data into testing (30%) and training (70%) sets:

$ mlpack_preprocess_split \
--input_file data.csv \
--input_labels_file labels.csv \
--training_file train.data.csv \
--training_labels_file train.labels.csv \
--test_file test.data.csv \
--test_labels_file test.labels.csv \
--test_ratio 0.3 \
--verbose

Train a model

Now train a Logistic regression model:

$ mlpack_logistic_regression \
--training_file train.data.csv \
--labels_file train.labels.csv --lambda 0.1 \
--output_model_file lr_model.bin

Test the model

Finally, we test our model by producing predictions,

$ mlpack_logistic_regression \
--input_model_file lr_model.bin \
--test_file test.data.csv \
--output_file lr_predictions.csv

and comparing the predictions with the exact results,

$ export incorrect=$(diff -U 0 lr_predictions.csv \
test.labels.csv | grep '^@@' | wc -l)
$ export tests=$(wc -l < lr_predictions.csv)
$ echo "scale=2; 100 * ( 1 - $((incorrect)) \
/ $((tests)))" | bc

This gives a validation rate of approximately 90%, similar to that obtained here.

The dataset is composed of approximately 50% spam messages, so the validation rates are quite good without doing much parameter tuning. In typical cases, datasets are unbalanced, with many more entries in some categories than in others. In these cases, a good validation rate can be obtained even while consistently mispredicting the classes with few entries. Thus, to better evaluate these models, one can compare the number of misclassifications of spam and the number of misclassifications of ham. Of particular importance in applications is the number of false positive spam results, as these are typically not transmitted. The script below produces a confusion matrix, which gives a better indication of misclassification. Save it as ‘confusion.sh’

#!/bin/bash
declare -a labels
declare -a lr
i=0
while IFS= read -r line; do
    labels[i]=$line
    let "i++"
done < test.labels.csv
i=0
while IFS= read -r line; do
    lr[i]=$line
    let "i++"
done < lr_predictions.csv
TruePositiveLR=0
FalsePositiveLR=0
TrueZeroLR=0
FalseZeroLR=0
Positive=0
Zero=0
for i in "${!labels[@]}"; do
    if [ "${labels[$i]}" == "1" ]; then
        let "Positive++"
        if [ "${lr[$i]}" == "1" ]; then
            let "TruePositiveLR++"
        else
            let "FalseZeroLR++"
        fi
    fi
    if [ "${labels[$i]}" == "0" ]; then
        let "Zero++"
        if [ "${lr[$i]}" == "0" ]; then
            let "TrueZeroLR++"
        else
            let "FalsePositiveLR++"
        fi
    fi
done
echo "Logistic Regression"
echo "Total spam" $Positive
echo "Total ham" $Zero
echo "Confusion matrix"
echo "                Predicted class"
echo "                Ham | Spam"
echo "               ---------------"
echo " Actual| Ham | " $TrueZeroLR "|" $FalsePositiveLR
echo " class | Spam | " $FalseZeroLR "|" $TruePositiveLR
echo ""

then run the script:

$ bash confusion.sh

You should get output similar to

Logistic Regression
Total spam 183
Total ham 159
Confusion matrix
                Predicted class
                Ham | Spam
               ---------------
 Actual| Ham |  128 | 31
 class | Spam |  26 | 157

which indicates a reasonable level of classification. Other methods you can try in ML-Pack for this problem include Naive Bayes, random forest, decision tree, AdaBoost and perceptron.

To improve the error rate, you can try other pre-processing methods on the initial data set. Neural networks can give up to 99.95% validation rates, see for example here, here and here. However, these techniques cannot currently be used from ML-Pack’s command line interface, so they are best covered in another post.

For more on ML-Pack, please see the documentation.


Fedora Origins – Part 01

Editor’s comment: The format of this article is different from the usual article that Fedora Magazine publishes: a Fedora origins story told from the point of view of a Fedora user. The author has chosen to tell a story, since simply presenting the bare facts would be akin to just reading the wiki page about it.

Hello World!

Hello, I am… no, I’m not going to give my real name. Let’s say I’m female, probably shorter and older than you. I used to go by the nick of Isadora, more on that later.

Here you have one of the old RH boxes

Now some context. Back in the late ’90s, the internet became popular and PCs started to be a thing. However, most people had neither, because they were very expensive, and often you could do better with the traditional methods. Yes, computers were very basic back then. I used to play with those pocket games that were fascinating at the time, but totally lame now: monochrome screens with pixelated flat animations. Not going to dive in there, just giving an idea of how it was.

In the mid-90s a company named Red Hat emerged and slowly started to make a profit of its own by selling its own business-oriented distribution and software utilities. The name comes from one of its founders, Marc Ewing, who used to wear a red lacrosse cap in university so other students could spot him easily and ask him questions.
Of course, as it was a business-oriented distribution, and I was busy with multiple other things, I didn’t pay much attention to it. It lacked the software I needed, and since I wasn’t a customer, I was in no position to ask for additions. However, it was Linux, and as such open source. People started to package stuff for RHL and put it in repositories. I was invited to join the community project, Fedora.us. I promptly declined, misunderstanding the name. It was only the second time I was invited that I asked ‘what is with the “US” there (in the name)?’ Another user explained it was ‘us’ as in ‘we,’ not as in the ‘United States.’ They explained a bit about how the community worked and I decided to give it a go.

Then my studies got in the way, and I had to shelve it.

Login Screen in Fedora Core

Press Return

By the time I came back to Fedora.us it had changed its name to Fedora Project and was actively being worked on from within Red Hat. Now, I wasn’t there so my direct knowledge of how this happened is a bit foggy. Some say that Fedora existed separately and Red Hat added/invited them, some say that Fedora was completely RH’s idea, some say they existed independently and at some point met or joined. Choose the version you like, I’ll put some links down there so you can know more details and decide for yourself. As far as I’m concerned, they worked together.

Well, as usual someone dropped some CDs with ISOs for me. If I had a euro for every ISO I’ve been offered, or had tossed at my desk, for me to try, I would be rich. As a matter of fact, I’m not rich, but I do have a big rack full of old distros.

Anyways

Now it’s the early 2000s and things have changed dramatically. Computers’ prices have dropped and internet speed is increasing, plus a set of new technologies make it cheaper and more reliable. Computers now can do so much more than just a decade ago, and they’re smaller too. Screens are bigger, with better colors and resolution. Laptops are starting to become popular though still expensive and less powerful than desktop PCs.

During this time, I tried both Fedora and Red Hat. Now, as has been said before, Red Hat focuses on businesses and companies. Their main concern is having exactly the software their customers need, with the features their customers need, delivered with rock-solid stability and a reliable update & support cycle. A lot of customization, a variety of options and many cool new features are not their core focus. More software means more testing and development work, and bigger chances of things failing. Yet the technology industry is constantly changing and innovating. Sticking too much to older versions or proven formulas can be fatal for a company.

So what to do? Well, they solved it with Fedora. The Fedora Project would be the innovative, forward-looking test bed, and Red Hat Enterprise Linux the more conservative, rock-solid operating system for businesses. Yes, they changed the name from Red Hat Linux to Red Hat Enterprise Linux. Sounds better, doesn’t it?

Unsurprisingly, Fedora had a reputation for being difficult, unstable and for “hackers only”. Whenever I said I was using Fedora, people would give me odd looks or say something like “I want something stable” or “I’m not into that” (meaning they didn’t fancy programming/hacking activities). Countless individuals suggested I might want to use one of the other, beginner-friendly distributions, without themselves even giving Fedora a try! Many would disregard Linux as a whole as an amateur thing, only valid for playing but not good for serious work and companies. To each their own, I suppose.

Note the F and the bubble already there

Yes, but why?

Those early versions were called Fedora Core and had a very uncertain release pattern; the six-month cycle came much later. Fedora Core got its name because there were two repositories, Core and Extras. Core had the essentials, so to speak, and was maintained by Red Hat. Extras was, well, everything else. Any software that most users would want or need was included there, and it was maintained by a wide range of contributors.

From the beginning, one of the most powerful reasons for me to use it was the community and its core values. The Four Foundations of Fedora (Freedom, Friends, Features, First) were lived and breathed, not just a catchy line on a website or a leaflet. The Fedora Project strove (and still does) to deliver the newest features first, caring for freedom (of choice and software) and keeping a good open community, making friends as we contribute to the project.

I also liked the fact that Fedora, since its purpose was testing things for Red Hat, delivered a lot of new software and technologies; it was like opening a window to see the future today.

The downside was its unreliable upgrade cycle. You could get a new version in a few months, or next year… nobody knew; there was no agreed schedule.

Note how, despite this being Fedora, RH’s logo and signature are omnipresent

What was in the box

Fedora Core kept this name up to the sixth version. From the start, it was meant to be a distribution you could use right after installing it, so it came with Gnome 2, KDE 3, OpenOffice and some browser I forgot, possibly Firefox.

I remember it being the first to introduce SELinux by default, and to replace LILO with GRUB. I also remember the hardware requirements were something at the time, although they now sound laughable: Pentium II 400MHz, 256MB RAM (yes, you read that right) and 2GB of disk space. It even had a terminal-only option! That required only 64MB RAM and a Pentium II 200MHz. Amazing, isn’t it?

It had codenames. Not publicly, but it had, and they were quite peculiar. Fedora Core 1 was code named «Yarrow», which is a medium-size plant with yellow or white crown-like flowers. Core 2 was Tettnang, which is a small town in Baden-Württemberg, Germany. Not sure about Core 3; I think it was Heidelberg, but maybe I’m mixing it up with later releases. Core 4 was Stentz, if I recall correctly (no idea what it means), Core 5 was a colour, I think Bordeaux, and Core 6 was Zod, which I think was a comic character, but I could be wrong. If there was a method in their madness, I have no idea. I thought the names amusing but didn’t give them a second thought, as they didn’t affect anything, not even the design of each release.

Ah… good ol’ genetic helix

So what now?

Well, of course, the Fedora Project has evolved from where we stopped, but that’s for later articles, or this one will be too long. For now, I leave you with extracts from an interview with Matthew Miller, the current Project Leader, and some links in case you want to know more.

Extracts from an interview with Matthew Miller, Project Leader.

Matthew Miller tells about the beginnings in Eduard Lucena’s podcast (transcription here): “Fedora started about 15 years ago, really. It actually started as a thing called Fedora.us. Back in those days, there was Red Hat Linux.” “Meanwhile, there was this thing called Fedora.us, which was basically a project to make additional software available to users of Red Hat Linux. Find things that weren’t part of Red Hat Linux, and package them up, and make them available to everybody. That was started as a community project.”

“Red Hat (then) merged with this Fedora.us project to form Fedora Project that produces an upstream operating system that Red Hat Enterprise Linux is derived from but then moves on a slower pace.”

“We were then two parts, Fedora Core, which was basically inherited from the old Red Hat Linux and only Red Hat employees could do anything with, and then Fedora Extras, where the community could come together to add things on top of that Fedora Core. It took a little while to get off the ground, but it was fairly successful.”

“Around the time of Fedora Core 6, those were actually merged together into one big Fedora where all of the packages were all part of the same thing. There was no more distinction of Core and Extras, and everything was all together and, more importantly, all the community was all together.”

“They invited the community to take ownership of the whole thing and for Red Hat to become part of the community rather than separate. That was a huge success.”

Links of interest

Fedora, a visual history
https://www.phoronix.com/scan.php?page=article&item=678&num=1

Red Hat Videos – Fedora’s anniversary
https://youtu.be/DOFXBGh6DZ0

Red Hat Videos – Default to open
https://youtu.be/vhYMRtqvMg8

Fedora’s Mission & Foundations
https://docs.fedoraproject.org/en-US/project/

A short history of Fedora
https://youtu.be/NlNlcLD2zRM


Kurt DelBene’s March 4 guidance to King County employees

Hi everyone,

I wanted to update you on the latest public guidance changes for King County, the region where the Redmond area campuses are located.  The specific recommendations from King County may be found here. We are adjusting our guidance in response to these new recommendations.

These updates will go into effect at the end of day today, Pacific Standard Time, and will remain in effect through March 25th, but we will be continuously monitoring the situation and adjusting guidance as appropriate.

As always, for a full list of our guidance to employees, please visit the Global Security website.

Puget Sound and Bay Area work from home updates:

  • Consistent with King County guidance, we are recommending that all employees who are in a job that can be done from home do so through March 25th. Taking these measures will ensure your safety and also make the workplace safer for those who need to be onsite. Please let your manager know that you will be working from home, so all our teams remain well coordinated.
  • If it is essential in your role to be in the office or other work environments (e.g., data center, retail, etc.), plan to continue to go to your location. We will continue to implement the CDC guidelines for cleaning and sanitizing the locations. If you are not sure whether you are in a role that requires you to be onsite, you should speak to your manager.
    • The exceptions to this new guidance are the following groups who are being advised by health authorities to avoid interaction in large groups or public settings:
      • If you are over 60
      • If you have an underlying health condition (heart disease, diabetes, etc.)
      • If you are immune system compromised
      • If you are pregnant
    • In these cases, you should work with your manager to determine leave options or other accommodations available to you.
    • If you are a caregiver of someone that is immune system compromised, please contact your health provider for input.
  • If you will be in the office or other work environments, we recommend limiting prolonged close interactions with people. Specific recommendations are below, but your manager can help implement plans that work well in your particular situation.
    • Limit prolonged interactions and try to stay more than 6 feet/1.8 meters away from others.
    • Keep in-person meetings as short as possible.
    • If you are in open office space and located 6 feet/1.8 meters away from others, you meet the current guidance for appropriate distance from others.
  • Most importantly, do not come to work if you are sick. This will be clearly posted on all building entrances.

Updated Global Travel Guidance:

  • We recommend that people postpone travel to Puget Sound or Bay Area campuses unless essential for the continuity of Microsoft.
  • All non-essential business travel should be canceled in regions with active COVID-19
    • “Essential” is defined as work related to operations, sales, customer services (e.g., customer support and customer success).
    • You should discuss travel felt to be essential with your manager and get their approval.
  • You are not required to travel if you have concerns about doing so.

 If your region was not mentioned in this email or my email yesterday, it is because there are no additional updates to the current guidelines, but we will continue to keep you informed. Please look for updates from Global Security if the situation changes in your area.

If you have any questions on how the above guidance applies to your particular circumstances, please discuss with your manager.  You can also send questions to HR.

 What to do if you’re feeling symptoms or believe you’ve been exposed:

In King County, if you believe you were exposed to a confirmed case of COVID-19, contact the novel coronavirus call center: 206-477-3977 or your health care provider. For other locations, please contact your local health department hotline.

If you believe you may have symptoms, please contact your health care provider immediately.

Note for all employees globally:  If you are receiving testing or have a confirmed diagnosis of COVID-19, please confidentially inform HR – we will assist in informing your manager and taking measures that protect others.

 As a reminder, we are following the below precautionary measures from the WHO:

  • Frequently clean hands by using alcohol-based hand rub or soap and water.
  • If you are sick (e.g., flu, cold), do not come to work.
  • When coughing or sneezing, cover your mouth and nose with a flexed elbow or tissue – throw the tissue away immediately and wash your hands.
  • Avoid close contact with people who are unwell or showing symptoms of illness.
  • If you have fever, cough and difficulty breathing, seek medical care early and share previous travel history with your health care provider.

We will continue to assess the situation and update you as our recommendations change. I really appreciate your patience.

Thanks,

Kurt