
GNOME 3.34 released — coming soon in Fedora 31

Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

GNOME 3.34 desktop environment at work

Notable features

The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now it’s easier to find sources for your virtual machines and to boot from CD or DVD (ISO) images. The Express Install feature also now supports Windows versions.

GNOME Games now lets you save game states, so you can snapshot your progress without interrupting the fun. You can even move those snapshots to other devices running GNOME.

More details

These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.


exFAT in the Linux kernel? Yes!

Microsoft ♥ Linux – we say that a lot, and we mean it! Today we’re pleased to announce that Microsoft is supporting the addition of Microsoft’s exFAT technology to the Linux kernel.

exFAT is the Microsoft-developed file system that’s used in Windows and in many types of storage devices like SD Cards and USB flash drives. It’s why hundreds of millions of storage devices that are formatted using exFAT “just work” when you plug them into your laptop, camera, and car.

It’s important to us that the Linux community can make use of exFAT included in the Linux kernel with confidence. To this end, we will be making Microsoft’s technical specification for exFAT publicly available to facilitate development of conformant, interoperable implementations. We also support the eventual inclusion of a Linux kernel with exFAT support in a future revision of the Open Invention Network’s Linux System Definition, where, once accepted, the code will benefit from the defensive patent commitments of OIN’s 3040+ members and licensees.

For more information, please see the Microsoft technical specification for exFAT at https://docs.microsoft.com/windows/win32/fileio/exfat-specification.


Microsoft joins partners and The Linux Foundation to create Confidential Computing Consortium

Microsoft has invested in confidential computing for many years, so I’m excited to announce that Microsoft will join industry partners to create the Confidential Computing Consortium, a new organization that will be hosted at The Linux Foundation. The Confidential Computing Consortium will be dedicated to defining and accelerating the adoption of confidential computing.

Confidential computing technologies offer the opportunity for organizations to collaborate on their data sets without giving access to that data, to gain shared insights and to innovate for the common good. The Consortium, which will include other founding members Alibaba, ARM, Baidu, Google Cloud, IBM, Intel, Red Hat, Swisscom and Tencent, is the organization where the industry can come together to collaborate on open source technology and frameworks to support these new confidential computing scenarios.

As computing moves from on-premises to the public cloud and the edge, protecting data becomes more complex. There are three types of possible data exposure to protect against. One is data at rest; another is data in transit. While there’s always room to improve and innovate, the industry has built technologies and standards to address these scenarios. The third possible exposure – or as I like to think of it, the critical ‘third leg of the stool’ – is data in use. Protecting data while in use is called confidential computing.

Protecting data in use means data is provably not visible in unencrypted form during computation except to the code authorized to access it. That can mean that it’s not even accessible to public cloud service providers or edge device vendors. This capability enables new solutions where data is private all the way from the edge to the public cloud. Some of the scenarios confidential computing can unlock include:

  • Training multi-party dataset machine learning models or executing analytics on multi-party datasets, which can allow customers to collaborate to obtain more accurate models or deeper insights without giving other parties access to their data.
  • Enabling confidential query processing in database engines within secure enclaves, which removes the need to trust database operators.
  • Empowering multiple parties to leverage technologies like the Confidential Consortium Framework, which delivers confidentiality and high transaction throughput for distributed databases and ledgers.
  • Protecting sensitive data at the edge, such as proprietary machine learning models and machine learning model execution, customer information, and billing/warranty logs.

Simply put, confidential computing capabilities, like the ability to collaborate on shared data without giving those collaborating access to that data, have the power to enable organizations to unlock the full potential of combined data sets. Future applications will generate a more powerful understanding of industries’ telemetry, more capable machine learning models, and a new level of protection for all workloads.

However, enabling these new scenarios requires new attestation and key management services, and for applications to take advantage of those services and confidential computing hardware. There are multiple implementations of confidential hardware, but each has its own SDK. This leads to complexity for developers, inhibits application portability, and slows development of confidential applications.

This is where the Confidential Computing Consortium comes in, with its mission of creating technology, taxonomy, and cross-platform development tools for confidential computing. This will allow application and systems developers to create software that can be deployed across different public clouds and Trusted Execution Environment (TEE) architectures. The organization will also anchor industry outreach and education initiatives.

Microsoft will be contributing the Open Enclave SDK to the Confidential Computing Consortium to develop a broader industry collaboration and ensure a truly open development approach. Other founding members Intel and Red Hat will be contributing the Intel® SGX SDK for Linux and Red Hat Enarx, respectively, to the new group.

The Open Enclave SDK is targeted at creating a single unified enclave abstraction for developers to build TEE-based applications. It provides a pluggable, common way to create redistributable trusted applications that secure data in use. The SDK originated inside Microsoft and was published on GitHub over a year ago under an open source license.

The Open Enclave SDK, which supports both Linux and Windows hosts and has been used and validated by multiple open source projects, was designed to:

  • Make it easy to write and debug code that runs inside TEEs.
  • Allow the development of code that’s portable between TEEs, starting with Intel® SGX and ARM TrustZone.
  • Provide a flexible plugin model to support different runtimes and cryptographic libraries.
  • Enable the development of auditable enclave code that works on both Linux and Windows.
  • Have a high degree of compatibility with existing code.

We want to thank the Linux Foundation and all our industry partners for coming together to advance confidential computing. These technologies offer the promise to protect data and enable collaboration to make the world more secure and unlock multiparty innovations. Personally, I’m looking forward to seeing what we can all do together.

Let us know what you’d like to see from the Confidential Computing Consortium in the comments.

Additional resources:
  • CCC Website
  • Linux Foundation press release
  • Open Enclave SDK site and on GitHub

Manage your passwords with Bitwarden and Podman

You might have encountered a few advertisements over the past year trying to sell you a password manager. Some examples include LastPass, 1Password, and Dashlane. A password manager removes the burden of remembering the passwords for all your websites. No longer do you need to re-use passwords or use easy-to-remember passwords. Instead, you only need to remember one single password that can unlock all your other passwords for you.

This can make you more secure by having one strong password instead of many weak passwords. You can also sync your passwords across devices if you have a cloud-based password manager like LastPass, 1Password, or Dashlane. Unfortunately, none of these products are open source. Luckily there are open source alternatives available.

Open source password managers

These alternatives include Bitwarden, LessPass, and KeePass. Bitwarden is an open source password manager that stores all your passwords encrypted on the server, working the same way as LastPass, 1Password, or Dashlane. LessPass is a bit different: it’s a stateless password manager, meaning it derives passwords from a master password, the website, and your username rather than storing them encrypted. On the other side of the spectrum there’s KeePass, a file-based password manager with a lot of flexibility thanks to its plugins and applications.

Each of these three apps has its own downsides. Bitwarden stores everything in one place and is exposed to the web through its API and website interface. LessPass can’t store custom passwords since it’s stateless, so you need to use its derived passwords. KeePass, being file-based, can’t easily sync between devices. You can use a cloud-storage provider together with WebDAV to get around this, but many clients do not support it and you might get file conflicts if devices do not sync correctly.

This article focuses on Bitwarden.

Running an unofficial Bitwarden implementation

There is a community implementation of the server and its API called bitwarden_rs. This implementation is fully open source as it can use SQLite or MariaDB/MySQL, instead of the proprietary Microsoft SQL Server that the official server uses.

It’s important to recognize that some differences exist between the official and the unofficial version. For instance, the official server has been audited by a third party, whereas the unofficial one hasn’t. In terms of implemented features, the unofficial version lacks email confirmation and support for two-factor authentication using Duo or email codes.

Let’s get started running the server with SELinux in mind. Following the documentation for bitwarden_rs you can construct a Podman command as follows:

$ podman run -d \
  --userns=keep-id \
  --name bitwarden \
  -e SIGNUPS_ALLOWED=false \
  -e ROCKET_PORT=8080 \
  -v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
  -p 8080:8080 \
  bitwardenrs/server:latest

This downloads the bitwarden_rs image and runs it in a user container under the user’s namespace. It uses a port above 1024 so that non-root users can bind to it. It also changes the volume’s SELinux context with :Z to prevent permission issues with read-write on /data.
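
To confirm the container came up, you can run a quick check. This assumes curl is installed; the container name and port match the command above:

$ podman ps --filter name=bitwarden
$ curl -I http://localhost:8080

If everything is working, podman ps lists the running container and curl returns an HTTP response from the Bitwarden web vault.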

If you host this under a domain, it’s recommended to put this server behind a reverse proxy such as Apache or Nginx. That way you can use ports 80 and 443, which point to the container’s port 8080, without running the container as root.
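
As a rough sketch, an Nginx server block for that setup might look like the following. The domain bitwarden.example.com is a placeholder, the certificate paths assume the LetsEncrypt setup described later in this article, and the proxy_pass target matches the container’s port 8080 from above:

server {
    listen 443 ssl;
    server_name bitwarden.example.com;

    ssl_certificate     /etc/letsencrypt/live/bitwarden.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/bitwarden.example.com/privkey.pem;

    location / {
        # forward requests to the rootless container published on port 8080
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}

On Fedora with SELinux enforcing, you may also need to allow Nginx to reach the backend port with sudo setsebool -P httpd_can_network_connect 1.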

Running under systemd

With Bitwarden now running, you probably want to keep it that way. Next, create a unit file that keeps the container running, restarts it automatically if it stops responding, and starts it again after a system reboot. Create this file as /etc/systemd/system/bitwarden.service:

[Unit]
Description=Bitwarden Podman container
Wants=syslog.service

[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
# 'bitwarden' is the container created earlier with podman run;
# start it and stay attached so systemd can track the process
ExecStart=/usr/bin/podman start -a 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none

[Install]
WantedBy=multi-user.target

Now, enable and start it using sudo:

$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
Main PID: 14861 (podman)
Tasks: 44 (limit: 4696)
Memory: 463.4M

Success! Bitwarden is now running under systemd and will keep running.

Adding LetsEncrypt

It’s strongly recommended to run your Bitwarden instance through an encrypted channel with something like LetsEncrypt if you have a domain. Certbot is a tool that creates LetsEncrypt certificates for you, and the Certbot project has a guide for doing this on Fedora.
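
As a minimal sketch, assuming your domain is bitwarden.example.com (a placeholder) and port 80 on this host is reachable from the internet, obtaining a certificate with Certbot’s standalone mode looks something like this:

$ sudo dnf install certbot
$ sudo certbot certonly --standalone -d bitwarden.example.com

Certbot normally places the resulting certificate and key under /etc/letsencrypt/live/bitwarden.example.com/.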

After you generate a certificate, you can follow the bitwarden_rs guide about HTTPS. Just remember to append :Z to the LetsEncrypt volume mount to handle SELinux permissions, and keep the container’s port unchanged.
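
As a rough sketch of that, assume you’ve copied fullchain.pem and privkey.pem into /home/egustavs/Bitwarden/ssl/ (a hypothetical directory; Certbot’s live/ directory contains symlinks, so copying the files out is the simplest option for a rootless container). The run command then gains one volume with :Z appended, plus a ROCKET_TLS variable that follows the pattern in the bitwarden_rs wiki; double-check the exact syntax against that guide. Stop and remove the old container first with podman stop bitwarden && podman rm bitwarden, then recreate it:

$ podman run -d \
  --userns=keep-id \
  --name bitwarden \
  -e ROCKET_TLS='{certs="/ssl/fullchain.pem",key="/ssl/privkey.pem"}' \
  -e SIGNUPS_ALLOWED=false \
  -e ROCKET_PORT=8080 \
  -v /home/egustavs/Bitwarden/ssl/:/ssl/:Z \
  -v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
  -p 8080:8080 \
  bitwardenrs/server:latest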


Photo by CMDR Shane on Unsplash.


Microsoft joins the OpenChain community to help drive open source compliance

A lot goes into making open source great – from licenses to code to community. A key part of doing open source right is being able to trust that the code you receive complies with its open source licenses. It’s a deceptively hard problem and one that Microsoft is working with the community to address.

The OpenChain Project plays an important role in increasing confidence around the open source code you receive. It does so by creating standards and training materials focused on how to run a quality open source compliance program, which in turn builds trust and removes friction in the ecosystem and supply chain.

We’ve had the honor of working with the OpenChain community to help develop its forthcoming specification version, and today we’re pleased to announce that we are joining OpenChain both as a platinum member and as a board member.

Our goal is to work even more closely with the OpenChain community to create the standards that will bring even greater trust to the open source ecosystem and that will work for everyone – from individual developers to the largest enterprises.

And Microsoft’s efforts to work with the community to improve open source compliance don’t stop with OpenChain. We’re actively working with ClearlyDefined, which brings clarity to open source component license terms and enables better compliance automation, and the Linux Foundation’s TODO Group, where members develop and share best practices for running world-class open source programs.

We look forward to continued collaboration with OpenChain and the broader open source community to bring greater confidence, clarity, and efficiency to the open source ecosystem.

To learn more, read the full announcement here.


Microsoft acquires Citus Data, re-affirming commitment to open source and accelerating Azure PostgreSQL performance and scale

Data and analytics are increasingly at the center of digital transformation, with leading-edge enterprises leveraging data to drive customer acquisition and satisfaction, long-term strategic planning, and expansion into net new markets. This digital revolution is placing an incredible demand on technology solutions to be more open, flexible, and scalable to meet the demands of large data volumes, sub-second response times, and analytics-driven business insights.

Microsoft is committed to building an open platform that is flexible and provides customers with technology choice to suit their unique needs. Microsoft Azure Data Services are a great example of a place where we have continuously invested in offering choice and flexibility with our fully managed community-based open source relational database services, spanning MySQL, PostgreSQL and MariaDB. This builds on our other open source investments in SQL Server on Linux, a multi-model NoSQL database with Azure Cosmos DB, and support for open source analytics with the Spark and Hadoop ecosystems. With our acquisition of GitHub, we continue to expand on our commitment to empower developers to achieve more at every stage of the development lifecycle.

Building on these investments, I am thrilled to announce that we have acquired Citus Data, a leader in the PostgreSQL community. Citus is an innovative open source extension to PostgreSQL that transforms PostgreSQL into a distributed database, dramatically increasing performance and scale for application developers. Because Citus is an extension to open source PostgreSQL, it gives enterprises the performance advantages of a horizontally scalable database while staying current with all the latest innovations in PostgreSQL. Citus is available as a fully-managed database as a service, as enterprise software, and as a free open source download.

Since the launch of Microsoft’s fully managed community-based database service for PostgreSQL in March 2018, its adoption has surged. Earlier this month, PostgreSQL was named DBMS of the Year by DB-Engines, for the second year in a row. The acquisition of Citus Data builds on Azure’s open source commitment and enables us to provide the massive scalability and performance our customers demand as their workloads grow.

Together, Microsoft and Citus Data will further unlock the power of data, enabling customers to scale complex multi-tenant SaaS applications and accelerate the time to insight with real-time analytics over billions of rows, all with the familiar PostgreSQL tools developers know and love.

I am incredibly excited to welcome the high-caliber Citus Data team to Microsoft! Working together, we will accelerate the delivery of key, enterprise-ready features from Azure to PostgreSQL and enable critical PostgreSQL workloads to run on Azure with confidence. We continue to be energized by building on our promise around Azure as the most comprehensive cloud to run open source and proprietary workloads at any scale and look forward to working with the PostgreSQL community to accelerate innovation to customers.

For more information on Citus Data, you can read the blog post from Umur Cubukcu, CEO and co-founder, here.
