
CHAOSSCON

Meet the CHAOSS community and the tools used by several open source projects, communities, and engineering teams to track and analyze their development activities, community health, diversity, risk, and value.

This conference will show CHAOSS updates, use cases, and hands-on workshops for developers, community managers, project managers, and anyone interested in measuring open source project health.


Hyperledger Hackfest

Hyperledger Hackfests are regular gatherings for developers working on the different projects hosted at Hyperledger. The primary goal for a Hackfest is to facilitate software development collaboration and knowledge sharing between participants, with an eye towards reflecting all ideas and conclusions back outward to the public open source community afterwards.


Xen Project Developer and Design Summit

Xen Project

June 20, 2018

Crowne Plaza Nanjing Jiangning

Jiangning, Nanjing, China

The Xen Developer and Design Summit brings together the Xen Project’s community of developers and power users for their annual conference. The conference is about sharing ideas and the latest developments, sharing experiences, planning, collaborating, and, above all, having fun and meeting the community that defines the Xen Project.



Linux Kernel 4.17, “Merciless Moray,” Offers Improved Performance and Security

Linus Torvalds released version 4.17 of the Linux Kernel on Sunday, nine weeks after the prior version. Although Linus says he is running out “of fingers and toes to keep track of minor releases,” he has decided not to call this release “5.0” because he is saving that for 4.20.

As with the 4.16 cycle, 4.17 has been relatively smooth, save a few hiccups due to those pesky chip issues. It turns out the shadow of the Spectre vulnerability is still long, and the last two weeks before the release were busy ones, with patches designed to counteract the effects of Spectre v4 making up a significant portion of all the code submitted. Even though Linus does not like large amounts of change so late in the release cycle, he skipped an rc8 and released the final version of 4.17 anyway.

Be that as it may, 4.17 also comes with plenty of other improvements. There is, for example, a set of changes that improves power consumption on most machines. These changes affect what is called the “idle loop” of the kernel. Even if your machine is apparently not doing anything, as long as it is powered up, the kernel is working. The new code optimizes these “downtime” processes and, according to its author, Rafael Wysocki, power consumption could go down “10% or more.” This means battery charges will last longer on laptops, clusters will be more efficient, and machines will be more eco-friendly across the board.

Something not often mentioned in these reports is the various curios — the leftovers from times gone by that still have developers working on them — such as the Macintosh PowerBook 100 series, a line of laptops manufactured by Apple in the early 1990s that used Motorola processors. These machines are still being maintained, and 4.17 comes with several improvements for them. I wonder if it is too late to get support for the Commodore 64 in there.

On a more pragmatic note, although the PowerBook 100 is still supported, other architectures have been dropped. Such is the case of eight obsolete CPU architectures, including Unicore32, Blackfin, and Hexagon. All of these processors are very niche and have been superseded by more modern alternatives. Support for POWER4 and POWER4+ processors is also being removed; considering IBM is now on the ninth generation of POWER, it was probably about time. Dropping these architectures has had the side effect of making 4.17 one of the lightest releases in recent years, with more lines removed than added. All told, getting rid of code for obsolete architectures eliminates about half a million lines from the kernel.

Other stuff to look forward to in kernel 4.17

  • Kernel 4.17 also comes with HDCP, or High-bandwidth Digital Content Protection. This is the technology that “protects” proprietary content by making perfectly functional but uncertified hardware underperform or stop working altogether. The idea is that, to protect music and videos, manufacturers must certify their video cards, monitors, and HDMI cables (and pay considerable amounts of money) so that HDCP-protected content will play on the devices. If making software act as an obstacle to perfectly adequate hardware sounds like a bonkers idea, that’s because it is. But that’s the state of the protection of copyrighted material nowadays. At least in theory, the inclusion of HDCP is a step toward allowing users to play protected content.

  • Fortunately, most code in this release improves performance on users’ machines. Drivers and controllers for AMD video cards received a big boost this time around. In 4.17, for example, AMDGPU DC is enabled by default and is now in the mainline kernel. This means you won’t need to install an external DKMS driver to run your Radeon card at full capacity, and you will have HDMI/DP audio out of the box. Another improvement is that AMDKFD is now also part of the mainline kernel. This is important for using AMD GPUs in high-performance computing, where GPUs are used to carry out complex and resource-hungry calculations.

  • Speaking of performance enhancements, work has begun on code that allows users to tweak the power management of their cards, and the first changes have been incorporated into 4.17: on other platforms, Radeon WattMan lets users control the voltage, fan speed, engine clock, and so on of their cards, and that is the functionality developers are starting to work into the Linux kernel.

  • Support for RISC-V, the open source processor architecture, is also chugging along nicely. Developers have added dynamic ftrace on RISC-V, cleaned up the atomic and locking routines, and worked on module loading support, which is now enabled by default.

As always, to find out more, you can check out Kernel Newbies (when it becomes available) and Phoronix.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


An Introduction to FRRouting

I recently learned about FRRouting (FRR), an IP routing protocol suite for Linux and Unix platforms. FRR has been under rapid development since its first release in April 2017. So, the project just turned one, and it recently released version 4.0 of the software. According to the website, this release brings various enhancements aimed at creating the best routing protocol stack available.

How did I not know about all this? Doubtless due to a personal defect. In any case, the contributors designed FRR to streamline the routing protocol stack. FRR can be used for connecting hosts, virtual machines, and containers to the network, for network switching and routing, and much more. Here’s what I learned about the excellent FRRouting project and how it came to be.

FRR has its roots in the Quagga project, which I covered here. In fact, it started as a fork by some long-time Quagga developers who combined their efforts to improve on that project’s well-established foundation.

Why Fork?

Anyone can fork an open source project, which can be either an advantage or a disadvantage. A fork can double the workload, divide the contributor community, or make two “meh” projects instead of one great one. It can create hard feelings. Or, a fork can succeed, reviving a moribund project and bringing new energy and enthusiasm. It can also rescue a code base from the clutches of a bad commercial steward. Forking a project is rarely done on a whim because it’s such a big step.

Examples of successful forks include Ubuntu, forked from Debian (although arguments rage over whether it is really a fork or some other weird thing nobody can think of a word for), the LibreOffice fork of OpenOffice, and my favorite, the MariaDB fork of MySQL.

MariaDB is the all-time great “having your cake and eating it” story. The short version is Sun Microsystems bought MySQL for a cool billion dollars and hired the talent that built it. But the MySQL executive team, including Monty Widenius and Marten Mickos, were publicly unhappy with Sun. They left Sun, taking their billion dollars with them. Then Oracle bought Sun, which motivated Widenius to fork MySQL and found MariaDB, which has been a clear success. (A longer version is here: Migrating to MariaDB from MySQL.)

Why Fork Quagga?

Quagga was created as a fork of the GNU Zebra project, which had died. So, FRR is a fork of a fork. I asked on the FRR developers’ mailing list about this and received a wealth of interesting answers. (You can read the whole thread on the dev list archive.)

Here’s what people said:

“There was a desire for a project that was governed by community consensus and documented process.”

“I find the FRR community very welcoming, very friendly, extremely helpful and respectful. It is very rare I come across communities like this and it is a real pleasure to engage in conversation and work on issues with them. If one needs a few eyes on a bit of code all you have to do is ask and you get constructive input, almost all of the time from multiple people.”

“The easy and pleasant direct access to the developers is a great bonus.”

Community-driven development

There was a point in time where Quagga was running on a skeleton crew, and thousands of patches were backed up from a variety of contributors, sitting, aging, and going nowhere. Thus, working through the backlog and creating a fast-paced, community-oriented project governed by consensus and documented process are some of the primary FRR drivers. A lot of the work on FRR is devoted to implementing new protocols and features, including cloud networking technologies.

Governance

Governance is also a necessary part of any OSS project, and it can make or break a project. FRRouting managed this by joining the vendor-neutral Linux Foundation, which is home to many important projects including the Node.js Foundation, Let’s Encrypt, and of course the Linux kernel.

FRRouting has a six-member technical steering committee, and members are elected to one-year terms. Maintainers have regular open meetings, and there is an official charter. This governing structure helps the project handle the numerous issues that any large, complex, and essential software project has to deal with, such as development direction and priorities, differences of opinion, licensing, finances, and so on.

Getting and Using FRR

FRR is hosted on GitHub. You may clone the repository, download source tarballs, or download .deb and .rpm packages. The documentation is quite good, and there is detailed information on becoming a contributor. The FRR user guide also provides a great overview of the architecture.

Visit FRRouting to learn more about the project. Also check out FRRouting on Juniper’s Advanced Forwarding Interface for an interesting example of where FRR is already finding a home in advanced networking architectures.
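
If you want to try FRR from source, the process follows a fairly standard autotools flow. Here is a minimal sketch; the exact dependencies, configure flags, and service setup vary by distribution, so treat it as an outline and check the project’s build documentation for the details:

git clone https://github.com/FRRouting/frr.git
cd frr
./bootstrap.sh    # generate the configure script
./configure       # add distribution-specific flags here as needed
make
sudo make install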


Upstream Linux support for new NXP i.MX 8


By Robert Foss, Software Engineer at Collabora.

The i.MX 6 platform has for the past few years enjoyed a large effort to add upstream support to Linux and surrounding projects. Now it is at the point where nothing is really missing any more. Improvements are still being made, to the graphics driver for i.MX 6, but functionally it is complete.

Etnaviv driver development timeline

The i.MX8 is a different story. The newly introduced platform, with hardware still difficult to get hold of, is seeing lots of work, but much remains to be done.

That being said, initial support for the GPU, the Vivante GC7000, is in place and is able to successfully run Wayland/Weston, glmark, etc. This should also mean that running Android on top of the current not-quite-upstream stack is possible using drm_hwcomposer.

An upstream display/scanout driver does not currently exist. Since the display IP in the i.MX 8 is different from and more capable than the IP in the i.MX 6 platform, the current imx-drm driver cannot support it.

A driver is provided by the NXP base support package. This BSP driver is based on KMS Atomic and supports most of the bells and whistles one would hope for, but it is not currently in upstreamable shape.

i.MX8 Kernel Support

The direct support for the i.MX8 that has landed in the kernel at this point has mostly been done by NXP engineers. Patches for the gpio, clk, netdev, and arm-kernel subsystems have also been submitted to their respective mailing lists by Lucas Stach.

There are still lots of components with no upstream support, however. The video processor unit IP, the Hantro G1/G2, does not have any upstream support.

i.MX8 U-Boot Support

Looking at bootloader support, U-Boot has had good support for the i.MX 8M platform since early 2018 and can be expected to just work.

Looking forward

While lots of support is still missing for the i.MX 8, the platform is under active development, with many new pieces of the hardware seeing attention.

Purism is one of the vendors currently working actively towards full Open Source support of the i.MX 8 platform.

Devboards

WandPi 8M is a series of three different boards based on the i.MX 8M platform.

Nitrogen 8M is another i.MX 8M based option, made by Boundary Devices who also made the popular Sabre Lite series of boards for the i.MX 6.


Free Resources for Open Source Leadership, AI, Networking, and More

May was the month for learning at Linux.com and The Linux Foundation, and we covered a range of topics and offered an array of free resources to help you expand your knowledge of Linux and open source. Let’s take a look at some of the month’s most popular content.

Free Training 

We just released the latest in our series of free Open Source Guides for the Enterprise. This guide, produced in conjunction with The TODO Group, provides practical advice for Building Leadership in an Open Source Community. You’ll find the complete guide here, and you can browse the entire list of Open Source Guides here.

With the rapid adoption of open source in the enterprise comes the need for sound security practices. This article by Sam Dean looks at various resources for securing your open source code, including links to free tools, checklists, and best practices.

Looking for more free training? Enrollment is now open for The Linux Foundation’s new Introduction to Open Source Networking Technologies training course (LFS165x). This online course, available for free on edX.org, teaches the fundamentals needed to understand and adopt SDN, NFV, network automation, and modern networking.

Check out additional training resources in our recent series of tutorials previewing the Cloud Foundry for Developers training course (including a free sample chapter) and more cloud-related articles here.

Open Source AI

A new ebook, Open Source AI: Projects, Insights, and Trends, by Ibrahim Haddad covers 16 open source AI projects, including Acumos AI, Apache Spark, Caffe, and TensorFlow. This free, 100+ page ebook provides in-depth information on the projects’ histories, codebases, GitHub contributions, and more.

Interested in more about AI? Check out these open source AI and machine learning articles and enter the Acumos AI developer challenge.

Tutorials and More

You can learn even more about Linux in the popular articles and tutorials we published in May.

Get started with Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Get Started with Snap Packages in Linux

Chances are you’ve heard about Snap packages. These universal packages were brought into the spotlight with the release of Ubuntu 16.04 and have continued to draw attention as a viable solution for installing applications on Linux. What makes Snap packages so attractive to the end user? The answer is really quite easy: Simplicity. In this article, I’ll answer some common questions that arise when learning about Snaps and show how to start using them.

Exactly what are Snap packages? And why are they needed? Considering there are already multiple ways to install software on Linux, doesn’t this complicate the issue? Not in the slightest. Snaps actually make installing, updating, and removing applications on Linux incredibly easy.

How do they accomplish this? Essentially, a Snap package is a self-contained application that bundles most of the libraries and runtimes (necessary to successfully run an application) into a single, universal package. Because of this, Snaps can be installed, updated, and reverted without affecting the rest of the host system, and without having to first install dependencies. Snap packages are also confined from the OS (via various security mechanisms), yet they can still function as if they were installed by standard means (exchanging data with the host OS and other installed applications).
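
That ability to revert is worth calling out. Because snapd keeps the previous revision of a package around, rolling back a problematic update is a single command. For example, assuming you have the gimp Snap installed, the following returns it to the previously installed revision:

sudo snap revert gimp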

Are Snaps challenging to work with? In a word, no. In fact, Snaps make short work of installing apps that might otherwise challenge your Linux admin skills. Since Snap packages are self-contained, you only need to install one package to get an app up and running.

Although Snap packages were created by Ubuntu developers, they can be installed on most modern Linux distributions. Because the necessary tool for Snap packages is installed on the latest releases of Ubuntu out of the box, I’m going to walk you through the process of installing and using Snap packages on Fedora. Once installed, using Snap is the same, regardless of distribution.

Installation

The first thing you must do is install the Snap system, aka snapd. To do this on Fedora, open up the terminal window and issue the command:

sudo dnf install snapd

The above command will catch any necessary dependencies and install the Snap system. That’s all there is to it. You’re ready to install your first Snap package.

Installing with Snap: Command-line edition

The first thing you’ll want to do is find out what packages are available to install via Snap. Although Snap has begun to gain significant momentum, not every application can be installed via Snap. Let’s say you want to install GIMP. First, you might want to find out what GIMP-related packages are available as Snaps. Back at the terminal window, issue the command:

sudo snap find gimp

The command should report only one package available for GIMP (Figure 1).

To get a better idea as to what the find option can do for you, issue the command:

sudo snap find nextcloud

The output of that command (Figure 2) will report Snap packages related to Nextcloud.

Let’s say you want to go ahead and install GIMP via Snap. To do this, issue the command:

sudo snap install gimp

The above command will download and install the Snap package. After the command completes, you’ll find GIMP in your desktop menu, ready to use.

Updating Snap packages

Once a Snap package is installed, it will not be updated by the normal method of system updating (via apt, yum, or dnf). To update a Snap package, the refresh option is used. Say you want to update GIMP; you would issue the command:

sudo snap refresh gimp

If an updated Snap package is available, it will be downloaded and installed. Say, however, you have a number of Snap packages installed, and you want to update them all. This is done with the command:

sudo snap refresh

The snapd system will check all installed Snap packages against what’s available. If there are newer versions, the installed Snap packages will be updated. One thing to note is that Snap packages are automatically updated daily, so you don’t have to issue the refresh command yourself unless you want to update right away.
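
If you’re curious when those automatic updates are scheduled, newer versions of snapd can report the refresh timer (this option may not be available on older snapd releases):

sudo snap refresh --time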

Listing installed Snap packages

What if you’re not sure which Snap packages you’ve installed? Easy. Issue the command sudo snap list and all of your installed Snap packages will be listed for you (Figure 3).

Removing Snap packages

Removing a Snap package is just as simple as installing. We’ll stick with our GIMP example. To remove GIMP, issue the command:

sudo snap remove gimp

One thing you’ll notice is that removing a Snap package takes significantly less time than uninstalling via the standard method (i.e., sudo apt remove gimp or sudo dnf remove gimp). In fact, on my test Fedora system, installing, updating, and removing GIMP was quite a bit faster than doing so with dnf.

Installing with Snap: GUI edition

You can enable Snap support in GNOME Software with a quick dnf install command. That command is:

sudo dnf install gnome-software-snap

Once the command finishes, reboot your system and open up GNOME Software. You will be prompted to enable third party repositories (Figure 4). Click Enable and Snap packages are now ready to be installed.

If you now search for GIMP, you will see two versions available. Click on one and if you see Snap Store as the source (Figure 5), you know that’s the Snap version of GIMP.

Although I cannot imagine a reason for doing so, you can install both the standard and Snap version of the package. You might find it difficult to know which is which, however. Just remember, if you use a mixture of Snap and non-Snap packages, you must update them separately (which, in the case of Snap packages, happens automatically).

Get your Snap on

Snap packages are here to stay, of that there is no doubt. Whether you administer or use Linux on the server or desktop, Snap packages help make that task significantly easier. Get your Snap on today and see if you don’t start defaulting to this universal package format over the standard installation fare.


CNCF to Host Helm

Today, the Cloud Native Computing Foundation (CNCF) Technical Oversight Committee (TOC) voted to accept Helm as an incubation-level hosted project.

No longer a sub-project under Kubernetes, Helm is a package manager that provides an easy way to find, share, and use software built for Kubernetes. Helm removes complexity from configuration and deployment, and enables greater developer productivity.

“Helm addresses a common user need of deploying applications to Kubernetes by making their configurations reusable,” said Brian Grant, TOC representative and project sponsor, Principal Engineer at Google, and Kubernetes SIG Architecture co-chair and Steering Committee member. “Both the Helm and Kubernetes projects have grown substantially. As Kubernetes shifts its focus to its own core in order to better manage this growth, CNCF is a great home for Helm to continue making it easier for developers and operators to streamline Kubernetes deployments.”

According to a recent Kubernetes Application Survey, 64 percent of the application developers, application operators, and ecosystem tool developers who answered the survey reported using Helm to manage apps on Kubernetes.

“As Kubernetes focuses more on stability, CNCF gives Helm a new home to ensure the community’s needs will be met,” said Chris Aniszczyk, COO of Cloud Native Computing Foundation. “Helm has scaled their community with hundreds of contributors to its core and community charts, and we look forward to growing their community even further.”

The project was started by Deis (now part of Microsoft) in 2015 and later evolved into Kubernetes Helm, the merged result of Helm Classic and the Kubernetes Deployment Manager (built by Google). The project has more than 300 contributors to its core, more than 800 contributors to the community charts, a successful conference based solely on Helm, and a unique culture in comparison to core Kubernetes.

“In building Helm, we set out to build a tool to serve as an onramp to Kubernetes – one that seasoned developers would not only use, but also contribute back to,” said Matt Butcher, co-creator of Helm and Principal Engineer at Microsoft. “By joining CNCF, we’ll benefit from the input and participation of the community and, conversely, Kubernetes will benefit when a community of developers provides a vast repository of ready-made charts for running workloads on Kubernetes.”

Conceptually, Helm is similar to OS-level package managers like Apt, Yum, and Homebrew in that it handles putting things in the right place for the running application – bringing all of the advantages of an OS package manager to a Kubernetes container platform. Helm’s packaging format, called charts, is a collection of files that describe a related set of Kubernetes resources. Charts are created as files laid out in a particular directory tree, which can then be packaged into versioned archives to be deployed.
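
To get a feel for that layout, the helm client can scaffold a starter chart for you. A minimal sketch, assuming helm is already installed (the name mychart is just an example):

helm create mychart     # scaffolds Chart.yaml, values.yaml, templates/ and charts/
helm lint mychart       # checks the chart for problems
helm package mychart    # produces a versioned archive such as mychart-0.1.0.tgz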

Main Features:

  • Find and use popular software packaged as Kubernetes charts
  • Share applications as Kubernetes charts
  • Create reproducible builds of your Kubernetes applications
  • Intelligently manage Kubernetes manifest files
  • Manage releases of Helm packages
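
As a rough illustration of those features with the Helm 2 tooling current at the time of writing, deploying a chart from the public stable repository looks something like this (assuming helm init has already set up Tiller in the cluster, and using wordpress purely as an example):

helm repo update                 # refresh the configured chart repositories
helm search wordpress            # find charts matching "wordpress"
helm install stable/wordpress    # deploy a release of the chart to the cluster
helm list                        # show the releases Helm is managing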

Notable Milestones:

  • 330 contributors
  • 5,531 GitHub stars
  • 51 releases
  • 4,186 commits
  • 1,935 forks

As a CNCF hosted project – alongside Incubated technologies like Prometheus, OpenTracing, Fluentd, Linkerd, gRPC, CoreDNS, containerd, rkt, CNI, Envoy, Jaeger, Notary, TUF, Vitess, and NATS – Helm is part of a neutral foundation aligned with its technical interests, as well as the larger Linux Foundation, which provides the project with governance, marketing support, and community outreach.

Every CNCF project has an associated maturity level: sandbox, incubating, or graduated project. For more information on what qualifies a technology for each level, please visit https://www.cncf.io/projects/graduation-criteria/.

For more on Helm, please visit https://helm.sh/, and read more from co-creator Matt Butcher on the Deis Blog. You can also watch this session from KubeCon + CloudNativeCon Austin on building Helm charts, and read this blog post to see how Helm can be used with other projects.

This article originally appeared at Cloud Native Computing Foundation


Atari Launches Linux Gaming Box Starting at $199

Attempts to establish Linux as a gaming platform have failed time and time again, with Valve’s SteamOS being the latest high-profile casualty. Yet, Linux has emerged as a significant platform in the much smaller niche of retro gaming, especially on the Raspberry Pi. Atari has now re-emerged from the fog of gaming history with an Ubuntu-based Atari VCS gaming and media streaming console aimed at retro gamers.

In addition to games, the Atari VCS will also offer Internet access and optional voice control. With a Bluetooth keyboard and mouse, the system can be used as a standard Linux computer. The catch is that the already delayed systems won’t ship until July 2019.

Indiegogo deals

Shortly after appearing on Indiegogo this week, the Atari VCS vaulted over its $100,000 funding goal to hit $1.7 million and counting. Indiegogo packages that are discounted by $50 include a basic Atari VCS Onyx model that goes for $199 or $229 with a classic joystick. These are both Early Bird deals that expire June 4.

There is also a wood-paneled Collector’s Edition version that sells for $299 with a classic joystick or $339 with a modern game controller.  Other deals, including a $319 package with both the joystick and modern controller, are available for the next month.

The Atari VCS was unveiled as the Ataribox last September. The new prototype looks the same, with a design borrowed from the circa-1977 Atari 2600, but with sleek, tapered edges.

The Ataribox was originally said to run a Linux stack on a customized AMD processor with Radeon Graphics technology. Some observers had hoped that the delay in launching the Indiegogo campaign meant that Atari would tap one of AMD’s new, gaming-friendly Ryzen processors. However, it settled for one of AMD’s two-year-old Bristol Ridge A1 chips with Radeon R7 graphics. This is overkill for most retro games but, depending on the A1 model, may be too underpowered to attract developers thinking of porting more modern games.

Back in the ’70s and ’80s, Atari offered one of the largest game platforms around, combining a console with a large catalog of 2D titles. The company faded later under the onslaught of major 3D gaming consoles from Nintendo, Sony, and others, and its last console — the 1993 Jaguar — disappeared quickly. After filing for bankruptcy protection in 2013, Atari rebounded as a mobile games developer, and has licensed its name for the Blade Runner 2049 movie.

Features

Atari offers an Atari Vault library with more than 100 classic games in their original arcade and/or Atari 2600 formats. Next year, it will launch a new Atari VCS Store in partnership with “a leading industry partner to be announced shortly.”

By the launch date, Atari plans to have “new and exclusive” games for download or streaming, including “reimagined classic titles from Atari and other top developers,” as well as multi-player games. The Atari VCS Store will also offer video, music and other content. For now, Atari has listed 14 content partners.

The hardware is not open source, and the games will be protected with HDCP. However, the Ubuntu Linux stack based on Linux kernel 4.10 is open source, and includes a “customizable Linux UX.” A Linux “sandbox” will be available for developing or porting games and apps.

Developers can build games using any Linux-compatible gaming engine, including Unity, Unreal Engine, and GameMaker. Atari also says that “Linux-based games from Steam and other platforms that meet Atari VCS hardware specifications should work.” Developers must register with Atari, and the games must be pre-approved. The Atari VCS Store will take an “industry-standard percentage” of the sale price.

Manufactured by Flex, the Atari VCS ships with 4GB DDR4 RAM, as well as 32GB eMMC and a microSD slot. The 14.5×5.3×1.6-inch system is further equipped with dual-band WiFi and Bluetooth 5.0, as well as HDMI 2.0, Gigabit Ethernet, and 4x USB 3.0 ports. A 4-mic array supports voice commands, and the system is compatible with typical Bluetooth and USB controllers in addition to Atari’s Bluetooth-connected joystick and controller.

The platform will offer live streaming using Twitch.tv and will support cross-game chat using Skype and Discord. Optional cloud storage and other Internet services will be available via subscription.

Despite its Indiegogo success, there’s no guarantee the Atari VCS won’t go the way of the Steam Machine in the larger gaming market. However, the competition is less daunting in retro gaming, and the fact that at least 6,300 backers are willing to wait over a year for their Linux gaming box is promising indeed.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, cloud, containers, AI, community, and more.