
Deploy Fedora CoreOS servers with Terraform

Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.

This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.

Getting started

Before you start, decide whether you need to review the basics of Fedora CoreOS. If you do, check out the earlier introduction to Fedora CoreOS published on Fedora Magazine.

Terraform is an open source tool for defining and provisioning infrastructure. Terraform defines infrastructure as code in files. It provisions infrastructure by calculating the difference between the desired state in the code and the observed state, and then applying changes to remove that difference.

HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.

sudo dnf config-manager --add-repo \
    https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install terraform

To familiarize yourself with the tools, start with a simple example: creating a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command.

sudo dnf install -y awscli
aws configure

Please note that AWS is a paid service. Following this article correctly should result in less than $1 USD in charges, but mistakes may lead to unexpected charges.

Configuring Terraform

In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the authorized_ssh_keys section to use your own key.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: core
      authorized_ssh_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.

terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler. This provider transpiles Fedora CoreOS configurations into Ignition configurations, so you do not need to run fcct to transpile your configurations yourself. Now that your provider versions are specified, configure them.

provider "aws" { region = "us-west-2"
} provider "ct" {}

The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.

data "ct_config" "config" { content = file("config.yaml") strict = true
}

With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.

resource "aws_instance" "server" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.config.rendered
}

This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.
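If you prefer not to hard-code an AMI, Terraform’s aws_ami data source can look one up at plan time. The sketch below is only a hedged example: the owner account ID and the name filter are assumptions, so verify them against the Fedora CoreOS downloads page before relying on the lookup, and reference data.aws_ami.fedora_coreos.id in place of the hard-coded ami value.

data "aws_ami" "fedora_coreos" {
  most_recent = true
  owners      = ["125523088429"] # assumed Fedora AWS account ID -- verify before use

  # Assumed naming convention for Fedora CoreOS images
  filter {
    name   = "name"
    values = ["fedora-coreos-*"]
  }

  filter {
    name   = "architecture"
    values = ["x86_64"]
  }
}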

Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.

output "instance_ip_addr" { value = aws_instance.server.public_ip
}

Alright! You’re ready to create some infrastructure. To deploy the server simply run:

terraform init # Installs the provider dependencies
terraform apply # Displays the proposed changes and applies them

Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations — you’ve provisioned your first Fedora CoreOS server using Terraform!
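As a small convenience, you can feed that output straight into SSH instead of copying it by hand. A minimal sketch (the -raw flag requires Terraform 0.14 or newer; on older versions, copy the printed address instead):

ssh core@$(terraform output -raw instance_ip_addr)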

Updates and immutability

At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.

The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.

Using variables

You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.

As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:

variable "username" { type = string description = "Fedora CoreOS user" default = "core"
}

Next, modify the config.yaml file to turn it into a template. When the template is rendered, ${username} will be replaced with the value of the variable.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: ${username}
      authorized_ssh_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Finally, modify the data block to render the template using the templatefile function.

data "ct_config" "config" { content = templatefile( "config.yaml", { username = var.username } ) strict = true
}

To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.
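If you would rather not pass -var on every run, Terraform also reads values automatically from a terraform.tfvars file in the working directory. A minimal sketch:

# terraform.tfvars -- loaded automatically by terraform plan and apply
username = "jane"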

Leveraging the dependency graph

Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.

Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g., the public IP address of a server) is passed as the input of another resource (e.g., the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, that configuration participates in the dependency graph, and updates to its inputs will trigger creation of a new server with the new configuration.
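If you are curious what this graph looks like for your own configuration, the terraform graph command prints it in DOT format, and Graphviz (if installed) can render it. A quick sketch:

terraform graph > graph.dot
dot -Tsvg graph.dot > graph.svg   # requires the graphviz package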

Consider a system of one load balancer and three web servers as an example. The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.

Web server configuration

First, create a file web.yaml and add a simple Nginx configuration with a templated message.

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Nginx Web Server
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStartPre=-/bin/podman kill nginx
        ExecStartPre=-/bin/podman rm nginx
        ExecStartPre=/bin/podman pull nginx
        ExecStart=/bin/podman run --name nginx -p 80:80 -v /etc/nginx/index.html:/usr/share/nginx/html/index.html:z nginx

        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/nginx
  files:
    - path: /etc/nginx/index.html
      mode: 0444
      contents:
        inline: |
          <html>
            <h1>Hello from Server ${count}</h1>
          </html>

In main.tf, you can create three web servers using this template with the following blocks:

data "ct_config" "web" { count = 3 content = templatefile( "web.yaml", { count = count.index } ) strict = true
} resource "aws_instance" "web" { count = 3 ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.web[count.index].rendered
}

Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.
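If you would also like Terraform to print the web servers’ addresses, an output block with a splat expression works nicely. A minimal sketch (the output name is arbitrary):

output "web_server_ips" {
  value = aws_instance.web.*.public_ip
}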

Load balancer configuration

The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=Haproxy Load Balancer
        After=network-online.target
        Wants=network-online.target

        [Service]
        ExecStartPre=-/bin/podman kill haproxy
        ExecStartPre=-/bin/podman rm haproxy
        ExecStartPre=/bin/podman pull haproxy
        ExecStart=/bin/podman run --name haproxy -p 80:8080 -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy

        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/haproxy
  files:
    - path: /etc/haproxy/haproxy.cfg
      mode: 0444
      contents:
        inline: |
          global
            log stdout format raw local0
          defaults
            mode tcp
            log global
            option tcplog
          frontend http
            bind *:8080
            default_backend http
          backend http
            balance roundrobin
          %{ for name, addr in servers ~}
            server ${name} ${addr}:80 check
          %{ endfor ~}

The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.

data "ct_config" "lb" { content = templatefile( "lb.yaml", { servers = zipmap( aws_instance.web.*.id, aws_instance.web.*.public_ip ) } ) strict = true
} resource "aws_instance" "lb" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro user_data = data.ct_config.lb.rendered
}

Finally, add an output block to display the IP address of the load balancer.

output "load_balancer_ip" { value = aws_instance.lb.public_ip
}

All right! Run terraform apply and the IP address of the load balancer displays on completion. You should be able to make requests to the load balancer and get responses from each web server.

$ export LB={{load balancer IP here}}
$ curl $LB
<html>
  <h1>Hello from Server 0</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 1</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 2</h1>
</html>

Now you can modify the configuration of the web servers or load balancer. Any changes can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple test). Hopefully this emphasizes that the load balancer configuration is indeed a part of the Terraform dependency graph.
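If you find yourself changing the number of web servers often, you could promote the count to a variable instead of editing two blocks by hand. A sketch, with an assumed variable name:

variable "web_server_count" {
  type    = number
  default = 3
}

# Then set count = var.web_server_count in both the ct_config data block
# and the aws_instance resource for the web servers.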

Clean up

You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.

Where next?

Code for this tutorial can be found in this GitHub repository. Feel free to play with the examples and contribute more if you find something you’d love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on freenode, or contribute on GitHub.


Set up single sign-on for Fedora Project services

In addition to an operating system, the Fedora Project provides services for users and developers. Services such as Ask Fedora, the Fedora Project wiki and the Fedora Project mailing lists help users learn how to best take advantage of Fedora. For developers of Fedora, there are many other services such as dist-git, Pagure, Bodhi, COPR and Bugzilla for the packaging and release process.

These services are available with a free account from the Fedora Accounts System (FAS). This account is the passport to all things Fedora! This article covers how to get set up with an account and configure Fedora Workstation for browser single sign-on.

Signing up for a Fedora account

To create a FAS account, browse to the account creation page. Here, you will fill out your basic identity data:

Account creation page

Once you enter your data, the account system sends an email with a temporary password to the address you provided. Log in with that temporary password, then pick a strong password of your own when prompted to reset it.

Password reset page

Next, the account details page appears. If you want to contribute to the Fedora Project, you should complete the Contributor Agreement now. Otherwise, you are done and you can use your account to log into the various Fedora services.

Account details page

Configuring Fedora Workstation for single sign-on

Now that you have your account, you can sign into any of the Fedora Project services. Most of these services support single sign-on (SSO), so you can sign in without re-entering your username and password.

Fedora Workstation provides an easy workflow to add your Fedora credentials. The GNOME Online Accounts tool helps you quickly set up your system to access many popular services. To access it, go to the Settings menu.

Click on the option labeled Fedora. A prompt opens for you to provide your username and password for your Fedora Account.

GNOME Online Accounts stores your password in GNOME Keyring and automatically acquires your single sign-on credentials for you when you log in.

Single sign-on with a web browser

Today, Fedora Workstation supports three web browsers out of the box with support for single sign-on with the Fedora Project services. These are Mozilla Firefox, GNOME Web, and Google Chrome.

Due to a bug in Chromium, single sign-on currently doesn’t work if you have more than one set of Kerberos (SSO) credentials active in your session. As a result, Fedora doesn’t enable this function out of the box for Chromium.

To sign on to a service, browse to it and select the login option for that service. For most Fedora services, this is all you need to do; the browser handles the rest. Some services such as the Fedora mailing lists and Bugzilla support multiple login types. For them, select the Fedora or Fedora Account System login type.

That’s it! You can now log into any of the Fedora Project services without re-entering your password.

Special consideration for Google Chrome

To enable single sign-on out of the box for Google Chrome, Fedora takes advantage of certain features in Chrome that are intended for use in “managed” environments. A managed environment is traditionally a corporate or other organization that sets certain security and/or monitoring requirements on the browser.

Recently, Google Chrome changed its behavior and it now reports Managed by your organization or possibly Managed by fedoraproject.org under the ⋮ menu in Google Chrome. That link leads to a page that says, “If your Chrome browser is managed, your administrator can set up or restrict certain features, install extensions, monitor activity, and control how you use Chrome.” However, Fedora will never monitor your browser activity or restrict your actions.

Enter chrome://policy in the address bar to see exactly what settings Fedora has enabled in the browser. The AuthNegotiateDelegateWhitelist and AuthServerWhitelist options will be set to *.fedoraproject.org. These are the only changes Fedora makes.
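If you prefer to check this from a terminal, Google Chrome on Linux reads its policies from JSON files under /etc/opt/chrome/policies/. The exact filename Fedora ships there is an assumption here, so adjust the path if nothing matches on your system:

$ cat /etc/opt/chrome/policies/managed/*.json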


Fedora pastebin and fpaste updates

Fedora and EPEL users who use fpaste to paste and share snippets of text might have noticed some changes recently: an update went out which sends pastes made by fpaste to the CentOS Pastebin instead of the Modern Paste instance that Fedora was running. Don’t fear — this was an intentional change, and it is part of the effort to lower the workload of the Fedora Infrastructure and Community Platform Engineering teams. Keep reading to learn more about what’s happening with pastebin and your pastes.

About the service

A pastebin lets you save text on a website for a length of time. This helps you exchange data easily with other users. For example, you can post error messages for help with a bug or other issue.

The CentOS Pastebin is a community-maintained service that keeps pastes around for up to 24 hours. It also offers syntax highlighting for a large number of programming and markup languages.

As before, you can paste files:

$ fpaste sql/010.add_owner_ip_index.sql
Uploading (0.1KiB)...
https://paste.centos.org/view/6ee941cc

…or command output…

$ rpm -ql python3 | fpaste
Uploading (0.7KiB)...
https://paste.centos.org/view/44945a99

…or system information:

$ fpaste --sysinfo
Gathering system info .............Uploading (8.1KiB)...
https://paste.centos.org/view/8d5bb827

What to expect from Pastebin

On December 1st, 2019, Fedora Infrastructure will turn off its Modern Paste servers. It will then redirect fpaste.org, www.fpaste.org, and paste.fedoraproject.org to paste.centos.org.

If you notice any issues with fpaste, first try updating your fpaste package. On Fedora use this command:

$ dnf update fpaste

Or, on machines that use the EPEL repository, use this command:

$ yum update fpaste

If you still run into issues, please file a bug on the fpaste issue tracker, and please be as detailed as possible. Happy pasting!


Photo by Kelly Sikkema on Unsplash.


Fedora projects for Hacktoberfest

It’s October! That means it’s time for the annual Hacktoberfest presented by DigitalOcean and DEV. Hacktoberfest is a month-long event that encourages contributions to open source software projects. Participants who register and submit at least four pull requests to GitHub-hosted repositories during the month of October will receive a free t-shirt.

In a recent Fedora Magazine article, I listed some areas where would-be contributors could get started contributing to Fedora. In this article, I highlight some specific projects that provide an opportunity to help Fedora while you participate in Hacktoberfest.

Fedora infrastructure

  • Bodhi — When a package maintainer builds a new version of a software package to fix bugs or add new features, it doesn’t go out to users right away. First it spends time in the updates-testing repository where it can receive some real-world usage. Bodhi manages the flow of updates from the testing repository into the updates repository and provides a web interface for testers to provide feedback.
  • the-new-hotness — This project listens to release-monitoring.org (which is also on GitHub) and opens a Bugzilla issue when a new upstream release is published. This allows package maintainers to be quickly informed of new upstream releases.
  • koschei — Koschei enables continuous integration for Fedora packages. It runs a service that scratch-rebuilds RPM packages in a Koji instance when their build dependencies change or after some time elapses.
  • MirrorManager2 — Distributing Fedora packages to a global user base requires a lot of bandwidth. Just like developing Fedora, distributing Fedora is a collaborative effort. MirrorManager2 tracks the hundreds of public and private mirrors and routes each user to the “best” one.
  • fedora-messaging — Actions within the Fedora community—from source code commits to participating in IRC meetings to…lots of things—generate messages that can be used to perform automated tasks or send notifications. fedora-messaging is the tool set that makes sending and receiving these messages possible.
  • fedocal — When is that meeting? Which IRC channel was it in again? Fedocal is the calendar system used by teams in the Fedora community to coordinate meetings. Not only is it a good Hacktoberfest project, it’s also looking for a new maintainer to adopt it.

In addition to the projects above, the Fedora Infrastructure team has highlighted good Hacktoberfest issues across all of their GitHub projects.

Community projects

  • bodhi-rs — This project provides Rust bindings for Bodhi.
  • koji-rs — Koji is the system used to build Fedora packages. Koji-rs provides bindings for Rust applications.
  • fedora-rs — This project provides a Rust library for interacting with Fedora services, similar to the libraries that already exist for other languages such as Python.
  • feedback-pipeline — One of the current Fedora Council objectives is minimization: work to reduce the installation and patching footprint of Fedora releases. feedback-pipeline is a tool developed by this team to generate reports of RPM sizes and dependencies.

And many more

The projects above are only a small sample focused on software used to build Fedora. Many Fedora packages have upstreams hosted on GitHub—too many to list here. The best place to start is with a project that’s important to you. Any contributions you make help improve the entire open source ecosystem. If you’re looking for something in particular, the Join Special Interest Group can help. Happy hacking!


Fedora 29 on ARM on AWS

This week Amazon announced their new A1 arm64 EC2 instances powered by their arm64-based Graviton processors and, with a minor delay, the shiny new Fedora 29 for aarch64 (arm64) is now available to run there too!

Details on getting up and running on AWS are covered in this article on using AWS tools on Fedora, and overall, using Fedora on the AWS arm64 EC2 instances is the same as on x86_64.

So while a new architecture on AWS is very exciting, it’s at the same time old and boring! You’ll get the same kernel versions, the same features like SELinux, and the same toolchain stacks (the latest gcc, golang, rust, and so on) in Fedora 29, just like on all other architectures. You’ll also get all the usual container tools like podman, buildah, skopeo and kubernetes, and orchestration tools like ansible. Basically, if you’re using Fedora on AWS, you should be able to use it in the same way on arm64.

Getting started

The A1 aarch64 instances are initially available in four regions: US East (N. Virginia), US East (Ohio), US West (Oregon), and Europe (Ireland). Direct links to launch the Fedora aarch64 AMIs are available on the Fedora Cloud site.

Getting help

The Fedora support for aarch64 is very robust. It’s been widely used and tested across a number of platforms, but of course new users and new use cases will turn up issues that we’ve yet to encounter. So what is the best way to get help? If you’re hitting a crash in a particular application, report it in the usual way through Red Hat Bugzilla; we have an ARMTracker tracker alias to block against to help identify Arm issues. For assistance with Arm-specific queries and issues, there is the Fedora Arm mailing list and the #fedora-arm IRC channel on Freenode.

Known issues

We have one known issue: the instance can take a while to get started, up to 5 minutes. This is due to entropy and has been a general problem in virtual environments across all architectures. We’re working to speed this up and it should be fixed soon. Once things are up and running, though, everything runs as expected.

Upcoming features

Fedora 29 Atomic Host will be coming in the next Two Week Atomic release. We unfortunately missed their release this time by a small window, but it’ll be available in about two weeks with the next release and will appear on the site once released. We can’t let you have all the fun at once 😉