
Deploy Fedora CoreOS servers with Terraform

Fedora CoreOS is a lightweight, secure operating system optimized for running containerized workloads. A YAML document is all you need to describe the workload you’d like to run on a Fedora CoreOS server.

This is wonderful for a single server, but how would you describe a fleet of cooperating Fedora CoreOS servers? For example, what if you wanted a set of servers running load balancers, others running a database cluster and others running a web application? How can you get them all configured and provisioned? How can you configure them to communicate with each other? This article looks at how Terraform solves this problem.

Getting started

Before you start, decide whether you need to review the basics of Fedora CoreOS. Check out this previous article on the Fedora Magazine:

Terraform is an open source tool for defining and provisioning infrastructure. Terraform defines infrastructure as code in files. It provisions infrastructure by calculating the difference between the desired state in code and observed state and applying changes to remove the difference.

HashiCorp, the company that created and maintains Terraform, offers an RPM repository to install Terraform.

sudo dnf config-manager --add-repo \
    https://rpm.releases.hashicorp.com/fedora/hashicorp.repo
sudo dnf install terraform

To get familiar with the tools, start with a simple example. You’re going to create a single Fedora CoreOS server in AWS. To follow along, you need to install awscli and have an AWS account. awscli can be installed from the Fedora repositories and configured using the aws configure command:

sudo dnf install -y awscli
aws configure

Please note that AWS is a paid service. If you follow along exactly, you should expect less than $1 USD in charges, but mistakes may lead to unexpected charges.

Configuring Terraform

In a new directory, create a file named config.yaml. This file will hold the contents of your Fedora CoreOS configuration. The configuration simply adds an SSH key for the core user. Modify the authorized_ssh_keys section to use your own key.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: core
      authorized_ssh_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Next, create a file main.tf to contain your Terraform specification. Take a look at the contents section by section. It begins with a block to specify the versions of your providers.

terraform {
  required_providers {
    ct = {
      source  = "poseidon/ct"
      version = "0.7.1"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 3.0"
    }
  }
}

Terraform uses providers to control infrastructure. Here it uses the AWS provider to provision EC2 servers, but it can provision any kind of AWS infrastructure. The ct provider from Poseidon Labs stands for config transpiler. This provider transpiles Fedora CoreOS configurations into Ignition configurations, so you do not need to use fcct to transpile your configurations yourself. Now that the provider versions are specified, configure the providers themselves.

provider "aws" { region = "us-west-2"
} provider "ct" {}

The AWS region is set to us-west-2 and the ct provider requires no configuration. With the providers configured, you’re ready to define some infrastructure. Use a data source block to read the configuration.

data "ct_config" "config" { content = file("config.yaml") strict = true
}

With this data block defined, you can now access the transpiled Ignition output as data.ct_config.config.rendered. To create an EC2 server, use a resource block, and pass the Ignition output as the user_data attribute.

resource "aws_instance" "server" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.config.rendered
}

This configuration hard-codes the virtual machine image (AMI) to the latest stable image of Fedora CoreOS in the us-west-2 region at time of writing. If you would like to use a different region or stream, you can discover the correct AMI on the Fedora CoreOS downloads page.

Finally, you’d like to know the public IP address of the server once it’s created. Use an output block to define the outputs to be displayed once Terraform completes its provisioning.

output "instance_ip_addr" { value = aws_instance.server.public_ip
}

Alright! You’re ready to create some infrastructure. To deploy the server simply run:

terraform init # Installs the provider dependencies
terraform apply # Displays the proposed changes and applies them

Once completed, Terraform prints the public IP address of the server, and you can SSH to the server by running ssh core@{public ip here}. Congratulations — you’ve provisioned your first Fedora CoreOS server using Terraform!
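For example, you can re-read the output at any time with the terraform output command and then connect (a sketch; substitute the address Terraform printed for the placeholder):

terraform output instance_ip_addr   # re-print the public IP of the server
ssh core@{public ip here}           # log in as the core user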

Updates and immutability

At this point you can modify the configuration in config.yaml however you like. To deploy your change simply run terraform apply again. Notice that each time you change the configuration, when you run terraform apply it destroys the server and creates a new one. This aligns well with the Fedora CoreOS philosophy: Configuration can only happen once. Want to change that configuration? Create a new server. This can feel pretty alien if you’re accustomed to provisioning your servers once and continuously re-configuring them with tools like Ansible, Puppet or Chef.
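For instance, even a small addition to config.yaml, such as writing an extra file (a sketch; the path and contents below are arbitrary examples, not part of the original configuration), changes the rendered Ignition data, and the next terraform apply will replace the instance:

storage:
  files:
    - path: /etc/motd
      mode: 0644
      contents:
        inline: |
          Provisioned by Terraform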

The benefit of always creating new servers is that it is significantly easier to test that newly provisioned servers will act as expected. It can be much more difficult to account for all of the possible ways in which updating a system in place may break. Tooling that adheres to this philosophy typically falls under the heading of Immutable Infrastructure. This approach to infrastructure has some of the same benefits seen in functional programming techniques, namely that mutable state is often a source of error.

Using variables

You can use Terraform input variables to parameterize your infrastructure. In the previous example, you might like to parameterize the AWS region or instance type. This would let you deploy several instances of the same configuration with differing parameters. What if you want to parameterize the Fedora CoreOS configuration? Do so using the templatefile function.

As an example, try parameterizing the username of your user. To do this, add a username variable to the main.tf file:

variable "username" { type = string description = "Fedora CoreOS user" default = "core"
}

Next, modify the config.yaml file to turn it into a template. When rendered, the ${username} will be replaced.

variant: fcos
version: 1.2.0
passwd:
  users:
    - name: ${username}
      authorized_ssh_keys:
        - "ssh-ed25519 AAAAC3....... user@hostname"

Finally, modify the data block to render the template using the templatefile function.

data "ct_config" "config" { content = templatefile( "config.yaml", { username = var.username } ) strict = true
}

To deploy with username set to jane, run terraform apply -var="username=jane". To verify, try to SSH into the server with ssh jane@{public ip address}.
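If you prefer not to pass values on the command line, Terraform also loads a terraform.tfvars file automatically; a minimal sketch:

# terraform.tfvars
username = "jane"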

Leveraging the dependency graph

Passing variables from Terraform into Fedora CoreOS configuration is quite useful. But you can go one step further and pass infrastructure data into the server configuration. This is where Terraform and Fedora CoreOS start to really shine.

Terraform creates a dependency graph to model the state of infrastructure and to plan updates. If the output of one resource (e.g. the public IP address of a server) is passed as the input of another resource (e.g. the destination in a firewall rule), Terraform understands that changes in the former require recreating or modifying the latter. If you pass infrastructure data into a Fedora CoreOS configuration, it will participate in the dependency graph. Updates to the inputs will trigger creation of a new server with the new configuration.

Consider a system of one load balancer and three web servers as an example.

The goal is to configure the load balancer with the IP address of each web server so that it can forward traffic to them.

Web server configuration

First, create a file web.yaml and add a simple Nginx configuration with a templated message.

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: nginx.service
      enabled: true
      contents: |
        [Unit]
        Description=Nginx Web Server
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill nginx
        ExecStartPre=-/bin/podman rm nginx
        ExecStartPre=/bin/podman pull nginx
        ExecStart=/bin/podman run --name nginx -p 80:80 -v /etc/nginx/index.html:/usr/share/nginx/html/index.html:z nginx
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/nginx
  files:
    - path: /etc/nginx/index.html
      mode: 0444
      contents:
        inline: |
          <html>
            <h1>Hello from Server ${count}</h1>
          </html>

In main.tf, you can create three web servers using this template with the following blocks:

data "ct_config" "web" { count = 3 content = templatefile( "web.yaml", { count = count.index } ) strict = true
} resource "aws_instance" "web" { count = 3 ami = "ami-0699a4456969d8650" instance_type = "t3.micro" user_data = data.ct_config.web[count.index].rendered
}

Notice the use of count = 3 and the count.index variable. You can use count to make many copies of a resource. Here, it creates three configurations and three web servers. The count.index variable is used to pass the first configuration to the first web server and so on.
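If you also want to see the address of each web server once it is created, you could add an output that uses a splat expression over the counted resource (a sketch; the output name web_ips is arbitrary):

output "web_ips" {
  value = aws_instance.web.*.public_ip
}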

Load balancer configuration

The load balancer will be a basic HAProxy load balancer that forwards to each server. Place the configuration in a file named lb.yaml:

variant: fcos
version: 1.2.0
systemd:
  units:
    - name: haproxy.service
      enabled: true
      contents: |
        [Unit]
        Description=Haproxy Load Balancer
        After=network-online.target
        Wants=network-online.target
        [Service]
        ExecStartPre=-/bin/podman kill haproxy
        ExecStartPre=-/bin/podman rm haproxy
        ExecStartPre=/bin/podman pull haproxy
        ExecStart=/bin/podman run --name haproxy -p 80:8080 -v /etc/haproxy/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro haproxy
        [Install]
        WantedBy=multi-user.target
storage:
  directories:
    - path: /etc/haproxy
  files:
    - path: /etc/haproxy/haproxy.cfg
      mode: 0444
      contents:
        inline: |
          global
            log stdout format raw local0
          defaults
            mode tcp
            log global
            option tcplog
          frontend http
            bind *:8080
            default_backend http
          backend http
            balance roundrobin
          %{ for name, addr in servers ~}
            server ${name} ${addr}:80 check
          %{ endfor ~}

The template expects a map with server names as keys and IP addresses as values. You can create that using the zipmap function. Use the ID of the web servers as keys and the public IP addresses as values.

data "ct_config" "lb" { content = templatefile( "lb.yaml", { servers = zipmap( aws_instance.web.*.id, aws_instance.web.*.public_ip ) } ) strict = true
} resource "aws_instance" "lb" { ami = "ami-0699a4456969d8650" instance_type = "t3.micro user_data = data.ct_config.lb.rendered
}

Finally, add an output block to display the IP address of the load balancer.

output "load_balancer_ip" { value = aws_instance.lb.public_ip
}

All right! Run terraform apply and the IP address of the load balancer displays on completion. You should be able to make requests to the load balancer and get responses from each web server.

$ export LB={{load balancer IP here}}
$ curl $LB
<html>
  <h1>Hello from Server 0</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 1</h1>
</html>
$ curl $LB
<html>
  <h1>Hello from Server 2</h1>
</html>

Now you can modify the configuration of the web servers or load balancer. Any changes can be realized by running terraform apply once again. Note in particular that any change to the web server IP addresses will cause Terraform to recreate the load balancer (changing the count from 3 to 4 is a simple test). Hopefully this emphasizes that the load balancer configuration is indeed a part of the Terraform dependency graph.
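If you expect to change the number of web servers often, one option (a sketch; the variable name web_count is arbitrary) is to promote the hard-coded count to an input variable and reference it from both the ct_config data block and the aws_instance resource:

variable "web_count" {
  type    = number
  default = 3
}

# In both "web" blocks, replace the hard-coded value with:
#   count = var.web_count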

Clean up

You can destroy all the infrastructure using the terraform destroy command. Simply navigate to the folder where you created main.tf and run terraform destroy.

Where next?

Code for this tutorial can be found in this GitHub repository. Feel free to play with the examples and contribute more if you find something you’d love to share with the world. To learn more about all the amazing things Fedora CoreOS can do, dive into the docs or come chat with the community. To learn more about Terraform, you can rummage through the docs, check out #terraform on freenode, or contribute on GitHub.


Vagrant beyond the basics

There are, like most things in the Unix/Linux world, many ways of doing things with Vagrant. Here are some examples of ways to grow your Vagrantfile portfolio and expand your knowledge and use of Vagrant.

If you have not yet installed Vagrant, you can follow the first part of this series.

Some Vagrantfile basics

All Vagrantfiles start with Vagrant.configure("2") do |config| and finish with a corresponding end:

 
Vagrant.configure("2") do |config|
  ...
  ...
end

The "2" represents the configuration version, which is currently either 1 or 2. Unless you need to use the older version, simply stick with the latest.

The config structure is broken down into namespaces:

config.vm – modify the configuration of the machine(s) that Vagrant manages.

config.ssh – for configuring how Vagrant will access your machine over SSH.

config.winrm – configuring how Vagrant will access your Windows guest over WinRM.

config.winssh – the WinSSH communicator is built specifically for the Windows native port of OpenSSH.

config.vagrant – modify the behavior of Vagrant itself.

Each line in a namespace begins with the word ‘config’:

config.vm.box = "fedora/32-cloud-base"
config.vm.network "private_network"

There are many options here, and a read of the documentation pages is strongly recommended. They can be found at https://www.vagrantup.com/docs/vagrantfile

Also in this section you can configure provider-specific options. In this case the provider is libvirt, and the specific config looks like this:

 
config.vm.provider :libvirt do |libvirt|
  libvirt.cpus = 1
  libvirt.memory = 512
end

In the example above, all libvirt VMs will be created with a single CPU and 512MB of memory unless specifically overridden.
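As a sketch (the machine name and the values here are arbitrary), an individual machine definition can override those provider defaults:

config.vm.define "bigserver" do |bigserver|
  bigserver.vm.provider :libvirt do |libvirt|
    libvirt.cpus = 2      # overrides the global default of 1
    libvirt.memory = 2048 # overrides the global default of 512
  end
end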

The VM namespace is where you define all machines you want this Vagrantfile to build. Notice that this is still a part of the config section, and lines should therefore begin with ‘config’. All sections or parts of sections have an ‘end’ statement to close them off.

Creating multiple machines at once

Depending on what you need to achieve, this can be a simple loop or multiple machine definitions. To create any number of machines in a series, with the same settings but perhaps different names and/or IP addresses, you can just provide a range as shown here:

 
(1..5).each do |i|
  config.vm.define "server#{i}" do |server|
    server.vm.hostname = "server#{i}.example.com"
  end
end

This will create 5 servers, named server1 through server5.

Of note, using the Ruby-style "for i in 1..3 do" loop doesn’t work as expected, despite Vagrantfile syntax actually being Ruby, so use the method from the example above.

If you need servers with different hostnames, different hardware etc then you’ll need to specify them individually, or at least in groups if the situation lends itself to that. Let’s say you need to create a typical web/db/load balancer infrastructure, with 2 web servers, a single database server and a load balancer for the web traffic. Ignoring the specific software setup for this, to simply create the virtual machines ready for provisioning you could use something like this:

 
# Load Balancer
config.vm.define "loadbal", primary: true do |loadbal|
  loadbal.vm.hostname = "loadbal"
end

# Database
config.vm.define "db", primary: true do |db|
  db.vm.hostname = "db"
end

# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
  end
end

This uses a combination of multiple machine calls and a small loop to build 4 VMs with a single ‘vagrant up’ command.

Networking

Vagrant generally creates its own network for VM access, and you use this with ‘vagrant ssh’. If you create more than one VM then you must use the VM name to identify which one you wish to connect to – vagrant ssh vmname.

There are a number of configuration options available which allow you to interact with your VMs in various ways.

The vagrant-libvirt plugin creates a network for the guests to use. This is automated and will always be present even if you define your own networks. The network is named “vagrant-libvirt” and can be seen either in the Virtual Networks tab of virt-manager’s connection details or by issuing a sudo virsh net-list command.

If you use DHCP for your guests, you can find the individual IP addresses with the virsh net-dhcp-leases command: sudo virsh net-dhcp-leases vagrant-libvirt

Port Forwarding

The simplest change to default networking is port forwarding. This uses a simple format like most Vagrant config: config.vm.network "forwarded_port", guest: 80, host: 8080

This listens to port 8080 on your local machine and forwards connections to port 80 on the Vagrant machine. If you need to use a UDP port, simply add , protocol: "udp" to the end of that line (notice the comma, which should come immediately after the second port number).
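Put together (a sketch; the port numbers are arbitrary), a guest serving HTTP over TCP and DNS over UDP might forward both kinds of port:

config.vm.network "forwarded_port", guest: 80, host: 8080
config.vm.network "forwarded_port", guest: 53, host: 5353, protocol: "udp"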

Obviously for more complex configurations this might not be ideal, as you need to specify every single port you want to forward. If you then add multiple machines the complexity can really become too much.

In addition to this, anyone on your network can access these ports if they know your IP address, so that’s something you should be aware of.

Public Network

This creates a network card for the Vagrant VM which connects to your host network, and will therefore be visible to all machines on that network. As Vagrant is not designed to be secure, you should be aware of any vulnerabilities and take steps to protect against them.

To configure a public network, add config.vm.network "public_network" to your Vagrantfile. This will use DHCP to obtain a network address.

If you wish to assign a static IP address, you can add one to the end of the network declaration: config.vm.network "public_network", ip: "192.168.0.1"

If you’re creating multiple guests you can put the network configuration in the vm namespace, and even allocate IPs based on iteration too:

 
Vagrant.configure("2") do |config|
  config.vm.box = "centos/8"
  config.vm.provider :libvirt do |libvirt|
    libvirt.qemu_use_session = false
  end

  # Servers x2
  (1..2).each do |i|
    config.vm.define "server#{i}" do |server|
      server.vm.hostname = "server#{i}"
      server.vm.network "public_network", ip: "192.168.122.20#{i}"
    end
  end
end

Private Network

This works very much like the Public Network option, except that the network is only available to the host machine and the Vagrant guests. The syntax is almost identical too: config.vm.network "private_network", type: "dhcp"

To use a static IP address, simply add it:

 
config.vm.network "private_network", ip: "192.168.50.4"

This will create a new network in libvirt, usually named something like “vagrant-private-dhcp” – you can see this with the command sudo virsh net-list while the VM is running. This network is created and destroyed along with the vagrant guests.

Again, the network config can be specified for all guests, or per guest as shown in the public network example above.

Provisioning

Once you have your VMs defined, you can obviously then do whatever you want with them, but as soon as you issue a ‘vagrant destroy’ command any changes will be lost. This is where automated provisioning comes in.

You can use several methods to provision your machines, from simple file copies to shell scripts, Ansible, Chef and Puppet. Many of the main methods can be used, but I’ll cover the simple ones here – if you need to use something else please read the documentation as it’s all covered.

File uploads

To copy a file to the Vagrant guest, add a line to the Vagrantfile like this:

 
config.vm.provision "file", source: "~/myfile", destination: "myfile"

You can copy directories too:

 
config.vm.provision "file", source: "~/path/to/host/folder", destination: "$HOME/remote/newfolder"

The directory structure should already exist on the Vagrant host, and will be copied in its entirety, including subdirectories and files.

Note: If you add a trailing slash to the destination path, the source path will be placed under this so make sure you only do this if you want that outcome. For example, if the above destination was “$HOME/remote/newfolder/”, then the result would see “$HOME/remote/newfolder/folder” created with the contents of the source placed here.

Shell commands

You can include individual commands, inline scripts or external scripts to perform provisioning tasks.

A single command would take this form, and any valid command line command can be used here: config.vm.provision "shell", inline: "sudo dnf update -y"

An inline script is less common, and declared at the top of the Vagrantfile then called during provisioning:

 
$script = <<-SCRIPT
echo I am provisioning...
date > /etc/vagrant_provisioned_at
SCRIPT

Vagrant.configure("2") do |config|
  config.vm.provision "shell", inline: $script
end

More common is the external shell script, which gives more flexibility and makes code more modular. Vagrant uploads the file to the guest then executes it. Simply call the script in the provisioning line:

config.vm.provision "shell", path: "script.sh"

The file need not be local to the Vagrant host either:

config.vm.provision "shell", path: "https://example.com/provisioner.sh"

Ansible

To use Ansible to provision your VMs you must have it installed on the Vagrant host; see https://docs.ansible.com/ansible/latest/installation_guide/intro_installation.html#installing-ansible-on-rhel-centos-or-fedora.

You specify an Ansible playbook to provision your VM in the following way:

 
config.vm.provision "ansible" do |ansible|
  ansible.playbook = "playbook.yml"
end

This then calls the playbook, which will run as any externally-run ansible playbook would.

If you’re building multiple VMs with your Vagrantfile then it’s likely you want different configurations for some of them, and in this case you should provision within the definition of each VM, as shown here:

 
# Web Servers x2
(1..2).each do |i|
  config.vm.define "web#{i}" do |web|
    web.vm.hostname = "web#{i}"
    web.vm.provision "ansible" do |ansible|
      ansible.playbook = "web.yml"
    end
  end
end

Ansible provisioners come in two formats: ansible and ansible_local. The ansible provisioner requires that Ansible is installed on the Vagrant host, and it connects remotely to your guest VMs to provision them. This means all necessary SSH authentication must be in place for it to work. The ansible_local provisioner executes playbooks directly on the guest VMs, which therefore requires Ansible to be installed on each of the guests you want to provision. Vagrant will try to install Ansible on the guests in order to do this (this can be controlled with the install option, which is enabled by default). On RHEL-style systems like Fedora, Ansible is installed from the EPEL repository. Simply use either ansible or ansible_local in the config.vm.provision command to choose the style you need.
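For example (a sketch reusing the web.yml playbook from the earlier example), switching to the guest-side provisioner is just a change of provisioner name:

config.vm.provision "ansible_local" do |ansible|
  ansible.playbook = "web.yml"
  # ansible.install = false  # optional; set this if Ansible is already present on the guest
end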

Synced Folders

Vagrant allows you to sync folders between your Vagrant host and your guests, allowing access to configuration files, data and so on. By default, the folder containing the Vagrantfile is shared and mounted under /vagrant on each guest.

To configure additional synced folders, use the config.vm.synced_folder setting:

 
config.vm.synced_folder "src/", "/srv/website"

The two parameters are the source folder on the Vagrant host and the mount directory on the guest. The destination folder will be created if it does not exist, recursively if necessary.

Options for synced folders allow you to configure them better, including the option to disable them completely. Other options allow you to specify a group owner of the folder (group), the folder owner (owner), plus mount options. There are others but these are the main ones.

You can disable the default share with the following command:

 
config.vm.synced_folder ".", "/vagrant", disabled: true

Other options are configured as follows:

 
config.vm.synced_folder "src/", "/srv/website",
  owner: "apache", group: "apache"

NFS synced folders

When using Vagrant on a Linux host, synced folders use NFS (with the exception of the default share, which uses rsync; see below), so you must have NFS installed on the Vagrant host, and the guests also need NFS support installed. To use NFS with non-Linux hosts, simply specify the folder type as ‘nfs’:

 
config.vm.synced_folder ".", "/vagrant", type: "nfs"

RSync synced folders

These are the easiest to use as they usually work without any intervention on a Linux host. This is a one-way sync from host to guest performed at startup (vagrant up) or after a vagrant reload command is issued. The default share of the Vagrant project directory is done with rsync. To configure a synced folder with rsync, specify the type as ‘rsync’:

 
config.vm.synced_folder ".", "/vagrant", type: "rsync"


Automating Network Devices with Ansible

Ansible is a great automation tool for system and network engineers. With Ansible, we can automate everything from a small network to a large-scale enterprise network. I have been using Ansible to automate both Aruba and Cisco switches from my Fedora-powered laptop for a couple of years. This article covers the requirements and the execution of a couple of playbooks.

Configuring Ansible

If Ansible is not installed, it can be installed using the command below:

$ sudo dnf -y install ansible

Once installed, create a folder in your home directory or a directory of your preference and copy the ansible configuration file. For this demonstration, I will be using the following.

$ mkdir -pv /home/$USER/network_automation
$ sudo cp -v /etc/ansible/ansible.cfg /home/$USER/network_automation
$ cd /home/$USER/network_automation
$ sudo chown $USER:$USER ansible.cfg && chmod 0600 ansible.cfg

To prevent lengthy commands from failing, edit ansible.cfg and append the following lines. We must add a persistent_connection section and set the desired time in seconds for command_timeout, as demonstrated below. A use case where this is useful is backing up a network device that has a lengthy configuration.

$ vim ansible.cfg
[persistent_connection]
command_timeout = 300
connection_timeout = 30

Requirements

If SELinux is enabled, you will need to install the SELinux bindings, which are required when using the copy module.

# Install SELinux bindings
sudo dnf -y install python3-libselinux python3-libsemanage

Creating the inventory

The inventory holds the names of the network assets, and groups of assets are defined in square brackets []. Below is a sample inventory:

[site_a]
Core_A ansible_host=192.168.122.200
Distro_A ansible_host=192.168.122.201
Distro_B ansible_host=192.168.122.202

Group vars can be used to hold common variables, for example, credentials, the network operating system, and so on. The Ansible documentation on inventory provides additional details.
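As a sketch (the values below are placeholders, not credentials to copy), a group_vars/site_a.yml file could carry the shared connection settings for every device in the group:

# group_vars/site_a.yml
ansible_connection: network_cli
ansible_network_os: ios
ansible_user: admin
ansible_become: yes
ansible_become_method: enable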

Playbook

Playbooks are Ansible’s configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process. (From the Ansible Playbook documentation.)

Read Operations

Let us create a simple playbook to run a show command to read the configuration on a few switches.

 
  1 ---
  2 - name: Basic Playbook
  3   hosts: site_a
  4   connection: local
  5
  6   tasks:
  7   - name: Get Interface Brief
  8     ios_command:
  9       commands:
 10         - show ip interface brief | e una
 11     register: interfaces
 12
 13   - name: Print results
 14     debug:
 15       msg: "{{ interfaces.stdout[0] }}"
[Screenshots in the original article: playbook output without debug and with debug]

The two runs show the difference in output without and with the debug module, respectively.

Let’s break the playbook into three blocks, starting with lines 1 to 4.

  • The three dashes/hyphens start the YAML document.
  • hosts defines the hosts or host groups; multiple groups are comma-separated.
  • connection defines the method used to connect to the network devices. Another option is network_cli (the recommended method), which will be used later in this article. See IOS Platform Options for more details.

Lines 6 to 11 start the tasks; we will be using the ios_command and ios_config modules. This play executes the show command show ip interface brief | e una and saves the output of the command into the interfaces variable with the register key.

Lines 13 to 15: by default, the output of a show command is not displayed. That output is not needed during automation, but it is very useful for debugging, so the debug module is used here.

There are a couple of ways you can execute the playbook:

  • Pass arguments on the command line, for example, include -u <username> -k to prompt for the remote user credentials:

ansible-playbook -i inventory show_demo.yaml -u admin -k

  • Include the credentials in the host or group vars:

ansible-playbook -i inventory show_demo.yaml

[Screenshots in the original article: passing arguments on the command line, and credentials in the inventory]

Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with Ansible Vault (see Using Vault in Playbooks in the Ansible documentation).
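As a sketch of that recommendation, ansible-vault can encrypt a single value so that only the encrypted blob ends up in your vars file (the password and variable name here are placeholders):

ansible-vault encrypt_string 'MySecretPassword' --name 'ansible_password'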

If we want to save the output to a file, we will use the copy module as shown in the playbook below. In addition to using the copy module, we will include the backup_dir variable to specify the directory path.

 
---
- name: Get System Information
  hosts: site_a
  connection: network_cli
  gather_facts: no
 
  vars:
    backup_dir: /home/eramirez/dev/ansible/fedora_magazine
 
  tasks:
  - name: get system interfaces
    ios_command:
      commands:
        - show ip int br | e una
    register: interface
   
  - name: Save result to disk
    copy:
      content: "{{ interface.stdout[0] }}"
      dest: "{{ backup_dir }}/{{ inventory_hostname }}.txt"

To demonstrate the use of variables in the inventory, we will use plain text. This method must not be used in production.

 
[site_a]
Core_A ansible_host=192.168.122.200
Distro_A ansible_host=192.168.122.201
Distro_B ansible_host=192.168.122.202
[all:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=admin
ansible_password=fedora
ansible_become=yes
ansible_become_password=yes
ansible_become_method=enable

Write Operations

In the previous section, we saw that we could get information from the network devices; in this section, we will write (add/modify) the configuration on these network devices. To make changes to the network devices, we will be using the ios_config module.

Let us create a playbook to configure a couple of interfaces in all of the network devices in site_a. We will first take a backup of the current configuration of all devices in site_a. Lastly, we will save the configuration.

 
---
- name: Get System Information
  hosts: site_a
  connection: network_cli
  gather_facts: no
 
  vars:
    backup_dir: /home/eramirez/dev/ansible/fedora_magazine
 
  tasks:
  - name: Backup configs
    ios_config:
      backup: yes
      backup_options:
        filename: "{{ inventory_hostname }}_running_cfg.txt"
        dir_path: "{{ backup_dir }}"
   
  - name: get system interfaces
    ios_config:
      lines:
        - description Raspberry Pi
        - switchport mode access
        - switchport access vlan 100
        - spanning-tree portfast
        - logging event link-status
        - no shutdown
      parents: "{{ item }}"
    with_items:
      - interface FastEthernet1/12
      - interface FastEthernet1/13
     
  - name: Save switch configuration
    ios_config:
      save_when: modified

Before we execute the playbook, we will first validate the interface configuration. We will then run the playbook and confirm the changes.
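One simple way to check an interface before and after the change (a sketch reusing the ios_command module; the interface name matches the playbook above) is a small read-only task:

  - name: Show current interface configuration
    ios_command:
      commands:
        - show running-config interface FastEthernet1/12
    register: intf_config

  - name: Print interface configuration
    debug:
      msg: "{{ intf_config.stdout[0] }}"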

Conclusion

This article is a basic introduction, meant to whet your appetite, that demonstrates how Ansible can be used to manage network devices. Ansible is capable of automating a vast network, including MPLS routing and performing validation before executing the next task.


Using Ansible to organize your SSH keys in AWS

If you’ve worked with instances in Amazon Web Services (AWS) for a long time, you may run into this common issue. It’s not technical, but more to do with the human nature of getting too comfortable. When you launch a new instance in a region you haven’t used recently, you may end up creating a new SSH key pair. This leads to having too many keys, which can become complicated and disordered.

This article shows you a way to have your public key in all regions. A recent Fedora Magazine article includes one solution. But the solution in this article is automated even further, and in a more concise and scalable way.

Say you have a Fedora 30 or 31 desktop system where your key is stored, and Ansible is installed as well. These two things together provide the solution to this problem and many more.

With Ansible’s ec2_key module, you can create a simple playbook that will maintain your SSH key pair in all regions. If you need to add or remove keys, it’s as simple as adding and removing lines from a file.

Setting up and running the playbook

To use the playbook, first install necessary dependencies for the ec2_key module:

$ sudo dnf install python3-boto python3-boto3

The playbook is simple: you need only to change your key and its name as in the example below. After that, run the playbook and it iterates over all the public AWS regions listed. The example also includes the restricted regions in case you have access. To include them, uncomment each line as needed, save the file, and then run the playbook again.

---
- name: Maintain an ssh key pair in ec2
  hosts: localhost
  connection: local
  gather_facts: no
  vars:
    ansible_python_interpreter: python
  tasks:
  - name: Make available your ssh public key in ec2 for new instances
    ec2_key:
      name: "YOUR KEY NAME GOES HERE"
      key_material: 'YOUR KEY GOES HERE'
      state: present
      region: "{{ item }}"
    with_items:
      - us-east-2        #US East (Ohio)
      - us-east-1        #US East (N. Virginia)
      - us-west-1        #US West (N. California)
      - us-west-2        #US West (Oregon)
      - ap-east-1        #Asia Pacific (Hong Kong)
      - ap-south-1       #Asia Pacific (Mumbai)
      - ap-northeast-2   #Asia Pacific (Seoul)
      - ap-southeast-1   #Asia Pacific (Singapore)
      - ap-southeast-2   #Asia Pacific (Sydney)
      - ap-northeast-1   #Asia Pacific (Tokyo)
      - ca-central-1     #Canada (Central)
      - eu-central-1     #EU (Frankfurt)
      - eu-west-1        #EU (Ireland)
      - eu-west-2        #EU (London)
      - eu-west-3        #EU (Paris)
      - eu-north-1       #EU (Stockholm)
      - me-south-1       #Middle East (Bahrain)
      - sa-east-1        #South America (Sao Paulo)
      # - us-gov-east-1    #AWS GovCloud (US-East)
      # - us-gov-west-1    #AWS GovCloud (US-West)
      # - ap-northeast-3   #Asia Pacific (Osaka-Local)
      # - cn-north-1       #China (Beijing)
      # - cn-northwest-1   #China (Ningxia)

This playbook requires AWS access via API, as well. To do this, use environment variables as follows:

$ AWS_ACCESS_KEY="aws-access-key-id" AWS_SECRET_KEY="aws-secret-key-id" ansible-playbook ec2-playbook.yml

Another option is to install the aws cli tools and add the credentials as explained in a previous Fedora Magazine article. It is not recommended to insert these values in the playbook if you store it anywhere online! You can find this playbook code on GitHub.
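Alternatively, as a sketch of that aws cli approach, installing awscli and running aws configure stores the credentials under ~/.aws/, where boto3 (and therefore this playbook) can pick them up automatically:

sudo dnf install -y awscli
aws configure                   # writes ~/.aws/credentials and ~/.aws/config
ansible-playbook ec2-playbook.yml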

After the playbook finishes, confirm that your key is available on the AWS console. To do that:

  1. Log into your AWS console
  2. Go to EC2 > Key Pairs
  3. You should see your key listed. The only limitation is that you have to check region-by-region with this method.

Another way is to use a quick command in a shell to do this check for you.

First, create a shell variable containing all of the regions from the playbook:

AWS_REGION="us-east-2 us-east-1 us-west-1 us-west-2 ap-east-1 ap-south-1 ap-northeast-2 ap-southeast-1 ap-southeast-2 ap-northeast-1 ca-central-1 eu-central-1 eu-west-1 eu-west-2 eu-west-3 eu-north-1 me-south-1 sa-east-1"

Then run a for loop and you will get the results from the AWS API:

for each in ${AWS_REGION} ; do aws ec2 describe-key-pairs --key-name <YOUR KEY GOES HERE> --region ${each} ; done

Keep in mind that to do the above you need to have the aws cli installed.


CodeReady Containers: complex solutions on OpenShift + Fedora

Want to experiment with (complex) solutions on OpenShift 4.1+? CodeReady Containers (CRC) on a physical Fedora server is a great choice. It lets you:

  • Configure the RAM available to CRC / OpenShift (this is key as we’ll deploy Machine Learning, Change Data Capture, Process Automation and other solutions with significant memory requirements)
  • Avoid installing anything on your laptop
  • Standardize (on Fedora 30) so that you get the same results every time

Start by installing CRC and Ansible Agnostic Deployer (AgnosticD) on a Fedora 30 physical server. Then, you’ll use AgnosticD to deploy Open Data Hub on the OpenShift 4.1 environment created by CRC. Let’s get started!

Set up CodeReady Containers

$ dnf config-manager --set-enabled fedora
$ su -c 'dnf -y install git wget tar qemu-kvm libvirt NetworkManager jq libselinux-python'
$ sudo systemctl enable --now libvirtd

Let’s also add a user.

$ sudo adduser demouser
$ sudo passwd demouser
$ sudo usermod -aG wheel demouser

Download and extract CodeReady Containers:

$ su demouser
$ cd /home/demouser
$ wget https://mirror.openshift.com/pub/openshift-v4/clients/crc/1.0.0-beta.3/crc-linux-amd64.tar.xz
$ tar -xvf crc-linux-amd64.tar.xz
$ cd crc-linux-1.0.0-beta.3-amd64/
$ sudo cp ./crc /usr/bin

Set the memory available to CRC according to what you have on your physical server. For example, on a physical server with around 100GB of RAM you can allocate 80GB to CRC as follows:

$ crc config set memory 81920
$ crc setup

You’ll need your pull secret from https://cloud.redhat.com/openshift/install/metal/user-provisioned.

$ crc start

That’s it. You can now log in to your OpenShift environment:

eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443

Set up Ansible Agnostic Deployer

github.com/redhat-cop/agnosticd is a fully automated two-phase deployer. Let’s deploy it!

$ su demouser
$ cd /home/demouser
$ git clone https://github.com/redhat-cop/agnosticd.git
$ cd agnosticd/ansible
$ python -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ python3 -m pip install --upgrade --trusted-host files.pythonhosted.org -r requirements.txt
$ pip3 install kubernetes
$ pip3 install openshift
$ pip install kubernetes
$ pip install openshift

Set up Open Data Hub on Code Ready Containers

Open Data Hub is a machine-learning-as-a-service platform built on OpenShift and Kafka/Strimzi. It integrates a collection of open source projects.

First, create an Ansible inventory file with the following content.

$ cat inventory
127.0.0.1 ansible_connection=local

Set up the WORKLOAD environment variable so that Ansible Agnostic Deployer knows that we want to deploy Open Data Hub.

$ export WORKLOAD="ocp4-workload-open-data-hub"
$ sudo cp /usr/local/bin/ansible-playbook /usr/bin/ansible-playbook

We are only deploying one Open Data Hub project, so set user_count to 1. You can set up workshops for many students by setting user_count to a higher number.

An OpenShift project (with Open Data Hub in our case) will be created for each student.

$ eval $(crc oc-env) && oc login -u kubeadmin -p <password> https://api.crc.testing:6443
$ ansible-playbook -i inventory ./configs/ocp-workloads/ocp-workload.yml -e"ocp_workload=${WORKLOAD}" -e"ACTION=create" -e"user_count=1" -e"ocp_username=kubeadmin" -e"ansible_become_pass=<password>" -e"silent=False"
$ oc project open-data-hub-user1
$ oc get route
NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD
jupyterhub jupyterhub-open-data-hub-user1.apps-crc.testing jupyterhub 8080-tcp edge/Redirect None

On your laptop, add jupyterhub-open-data-hub-user1.apps-crc.testing to your /etc/hosts file. For example:

127.0.0.1 localhost fedora30 console-openshift-console.apps-crc.testing oauth-openshift.apps-crc.testing mapit-app-management.apps-crc.testing mapit-spring-pipeline-demo.apps-crc.testing jupyterhub-open-data-hub-user1.apps-crc.testing

On your laptop:

$ sudo ssh marc@fedora30 -L 443:jupyterhub-open-data-hub-user1.apps-crc.testing:443

You can now browse to https://jupyterhub-open-data-hub-user1.apps-crc.testing.

Now that we have Open Data Hub ready, you could deploy something interesting on it. For example, you could deploy IBM’s Qiskit open source framework for quantum computing. For more information, refer to Video no. 9 in this YouTube playlist and the GitHub repo here.

You could also deploy plenty of other useful tools for Process Automation, Change Data Capture, Camel Integration, and 3scale API Management. You don’t have to wait for articles on these, though. Step-by-step short videos are already available on YouTube.

The corresponding step-by-step instructions are also on YouTube. You can also follow along with this article using the GitHub repo.

