
Announcing OAM, an open standard for developing and operating applications on Kubernetes and other platforms

Kubernetes has become the leading container orchestration environment. Its success has driven the remarkable growth of Kubernetes services on every public cloud. However, the core resources in Kubernetes like Services and Deployments represent disparate pieces of an overall application. They do not represent the application itself. Likewise, objects like Helm charts represent a potentially deployable application, but once deployed there’s no application-centric model of the running application. This need to have a well-defined and coherent model that represents the complete application, not just its template and/or its constituent pieces, is why Microsoft and Alibaba Cloud have created the Open Application Model (OAM) project under the Open Web Foundation.

OAM is a specification for describing applications so that the application description is separated from the details of how the application is deployed onto and managed by the infrastructure. This separation of concerns is helpful for multiple reasons. In the real world, every Kubernetes cluster is different, from ingress to CNI to service mesh. Separating the application definition from the operational details of the cluster enables application developers to focus on the key elements of their application rather than the operational details of where it deploys. Furthermore, the separation of concerns also allows for platform architects to develop re-usable components and for application developers to focus on integrating those components with their code to quickly and easily build reliable applications. In all of this, the goal of the Open Application Model is to make simple applications easy and complex applications manageable.

In OAM, an application is described using several concepts. The first is the set of Components that make up the application. These components might be services like a MySQL database or a replicated PHP server with a corresponding load balancer. Developers author code that they package as a component, then author manifests that describe the relationships between that component and other microservices. Components enable platform architects and others to build re-usable modules which are known to encapsulate best practices around security and scalable deployment. They also separate the implementation of a component from the description of how those components come together in a complete distributed application architecture.

To transform these components into a concrete application, application operators use a configuration of these components to form a specific instance of an application that should be deployed. The configuration resource is what enables an application operator to run a real application out of the components provided by developers and platforms.

The final concept is a collection of Traits which describe the characteristics of the application environment including capabilities like auto-scaling and ingress which are important to the operation of applications but may be implemented in different ways in different environments. An easy example of such differences might be a hyperscale cloud-provided load balancer versus an on-premises hardware load-balancer. From an application developer’s perspective they are entirely identical, while from the operator’s perspective they are completely different. Traits enable this separation of concerns whereby the application can run anywhere its necessary traits are deployed. Those traits can then be configured by infrastructure operators to satisfy the unique operating requirements of their environment (e.g. compliance and security).
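To make these ideas concrete, here is a minimal sketch of a component and a configuration, modeled loosely on the project’s early v1alpha1 examples; the exact field names may change as the specification evolves, and the image, trait, and property values here are hypothetical:

apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: frontend
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
    - name: web
      image: example/frontend:v1
      ports:
        - name: http
          containerPort: 8080
---
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: my-app
spec:
  components:
    - componentName: frontend
      instanceName: my-app-frontend
      traits:
        - name: autoscaler
          properties:
            minimum: 2
            maximum: 5

The component schematic is what the developer ships; the application configuration is how an operator instantiates it and attaches traits such as autoscaling.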

In contrast to a more traditional PaaS application model, OAM has some unique characteristics. Most importantly, it is platform agnostic. While our initial open implementation of OAM, named Rudr, is built on top of Kubernetes, the Open Application Model itself is not tightly bound to Kubernetes. It is possible to develop implementations for numerous other environments, including small-device form factors like edge deployments, where Kubernetes may not be the right choice, or serverless environments where users don’t want or need the complexity of Kubernetes.

Equally important, the specification is extensible by design – it is neither the walled garden of a PaaS nor an application environment that hides the unique characteristics of where it is running. OAM enables platform providers to expose the unique characteristics of their platform through the trait system in a way that enables application developers to build cross-platform apps wherever the necessary traits are supported. Hardware providers can similarly expose the unique characteristics of their hardware platforms via traits. The entirety of OAM was designed to prevent the “lowest common denominator” problem that can occur in portable platforms. Instead, OAM is designed to make portability possible while ensuring that each platform can still surface the capabilities that make it unique and useful. OAM gives developers the freedom to balance portability and capability among platforms in a standard way.

We’re excited about the initial work we have done to develop this application-oriented open model and the implementation for Kubernetes. The specification is currently being developed under the Open Web Foundation agreement, and our goal is to bring the Open Application Model to a vendor-neutral foundation to enable open governance and collaboration. If you want to learn more, please have a look at the OAM specification, and Rudr – the open implementation for Kubernetes – on GitHub. This is really just a start. We look forward to hearing your feedback and partnering closely to bring an easy, portable, and re-usable application model to Kubernetes and the cloud.

Questions or feedback? Please let us know in the comments.


How to contribute to Fedora

One of the great things about open source software projects is that users can make meaningful contributions. With a large project like Fedora, there’s somewhere for almost everyone to contribute. The hard part is finding the thing that appeals to you. This article covers a few of the ways people participate in the Fedora community every day.

The first step for contributing is to create an account in the Fedora Account System. After that, you can start finding areas to contribute. This article is not comprehensive. If you don’t see something you’re interested in, check out What Can I Do For Fedora or contact the Join Special Interest Group (SIG).

Software development

This seems like an obvious place to get started, but Fedora has an “upstream first” philosophy. That means most of the software that ends up on your computer doesn’t originate in the Fedora Project, but with other open source communities. Even when Fedora package maintainers write code to add a feature or fix a bug, they work with the community to get those patches into the upstream project.

Of course, there are some applications that are specific to Fedora. These are generally more about building and shipping operating systems than the applications that get shipped to the end users. The Fedora Infrastructure project on GitHub has several applications that help make Fedora happen.

Packaging applications

Once software is written, it doesn’t just magically end up in Fedora. Package maintainers are the ones who make that happen. Fundamentally, the job of the package maintainer is to make sure the application successfully builds into an RPM package and to generally keep up-to-date with upstream releases. Sometimes, that’s as simple as editing a line in the RPM spec file and uploading the new source code. Other times, it involves diagnosing build problems or adding patches to fix bugs or apply configuration settings.
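To give a flavor of the work, a simple version bump might look like this hypothetical spec file excerpt (the package name, URL, date, and maintainer are made up):

# example-app.spec (hypothetical excerpt)
Version:        1.2.3
Release:        1%{?dist}
Source0:        https://example.com/releases/example-app-%{version}.tar.gz

%changelog
* Tue Sep 10 2019 Jane Packager <jane@example.com> - 1.2.3-1
- Update to upstream release 1.2.3

After editing the spec, the maintainer uploads the new source tarball with fedpkg new-sources and submits a build.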

Packagers are also often the first point of contact for user support. When something goes wrong with an application, the user (or ABRT) will file a bug in Red Hat Bugzilla. The Fedora package maintainer can help the user diagnose the problem and either fix it in the Fedora package or help file a bug in the upstream project’s issue tracker.

Writing

Documentation is a key part of the success of any open source project. Without documentation, users don’t know how to use the software, contributors don’t know how to submit code or run test suites, and administrators don’t know how to install and run the application. The Fedora Documentation team writes release notes, in-depth guides, and short “quick docs” that provide task-specific information. Multi-lingual contributors can also help with translation and localization of both the documentation and software strings by joining the localization (L10n) team.

Of course, Fedora Magazine is always looking for contributors to write articles. The Contributing page has more information. [We’re partial to this way of contributing! — ed.]

Testing

Fedora users have come to rely on our releases working well. While we emphasize being on the leading edge, we want to make sure releases are usable, too. The Fedora Quality Assurance team runs a broad set of test cases and ensures all of the release criteria are met before anything ships. Before each release, the team arranges test days for various components.

Once the release is out, testing continues. Each package update first goes to the updates-testing repository before being published to the main updates repository. This gives people who are willing to test the opportunity to try updates before they go to the wider community.
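If you want to help test, pulling in pending updates is a single command on a Fedora system; you can then leave karma feedback on the updates in Bodhi:

$ sudo dnf upgrade --enablerepo=updates-testing --refresh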

Graphic design

One of the first things that people notice when they install a new Fedora release is the desktop background. In fact, using a new desktop background is one of our release criteria. The Fedora Design team produces several backgrounds for each release. In addition, they design stickers, logos, infographics, and many other visual elements for teams within Fedora. As you contribute, you may notice that you get awarded badges; the Badges team produces the art for those.

Helping others

Cooperative effort is a hallmark of open source communities. One of the best ways to contribute to any project is to help other users. In Fedora, that can mean answering questions on the Ask Fedora forum, the users mailing list, or in the #fedora IRC channel. Many third-party social media and news aggregator sites have discussion related to Fedora where you can help out as well.

Spreading the word

Why put so much effort into making something that no one knows about? Spreading the word helps our user and contributor communities grow. You can host a release party, speak at a conference, or share how you use Fedora on your blog or social media sites. The Fedora Mindshare committee has funds available to help with the costs of parties and other events.

Other contributions

This article only shared a few of the areas where you can contribute to Fedora. What Can I Do For Fedora has more options. If there’s something you don’t see, you can just start doing it. If others see the value, they can join in and help you. We look forward to your contributions!


Photo by Anunay Mahajan on Unsplash.


Fedora and CentOS Stream

From the desk of the Fedora Project Leader:

Hi everyone! You may have seen the announcement about changes over at the CentOS Project. (If not, please go ahead and take a few minutes and read it — I’ll wait!) And now you may be wondering: if CentOS is now upstream of RHEL, what happens to Fedora? Isn’t that Fedora’s role in the Red Hat ecosystem?

First, don’t worry. There are changes to the big picture, but they’re all for the better.

If you’ve been following the conference talks from Red Hat Enterprise Linux leadership about the relationship between Fedora, CentOS, and RHEL, you have heard about “the Penrose Triangle”. That’s a shape like something from an M. C. Escher drawing: it’s impossible in real life!

We’ve been thinking for a while that maybe impossible geometry is not actually the best model. 

For one thing, the imagined flow where contributions at the end would flow back into Fedora and grow in a “virtuous cycle” never actually worked that way. That’s a shame, because there’s a huge, awesome CentOS community and many great people working on it — and there’s a lot of overlap with the Fedora community too. We’re missing out.

But that gap isn’t the only one: there’s never really been a consistent flow between the projects and the product at all. So far, the process has gone like this:

  1. Some time after the previous RHEL release, Red Hat would suddenly turn more attention to Fedora than usual.
  2. A few months later, Red Hat would split off a new RHEL version, developed internally.
  3. After some months, that’d be put into the world, including all of the source — from which CentOS is built. 
  4. Source drops continue for updates, and sometimes those updates include patches that were in Fedora — but there’s no visible connection.

Each step here has its problems: intermittent attention, closed-door development, blind drops, and little ongoing transparency. But now Red Hat and CentOS Project are fixing that, and that’s good news for Fedora, too.

Fedora will remain the first upstream of RHEL. It’s where every RHEL came from, and is where RHEL 9 will come from, too. But after RHEL branches off, CentOS will be upstream for ongoing work on those RHEL versions. I like to call it “the midstream”, but the marketing folks somehow don’t, so that’s going to be called “CentOS Stream”.

We — Fedora, CentOS, and Red Hat — still need to work out all of the technical details, but the idea is that these branches will live in the same package source repository. (The current plan is to make a “src.centos.org” with a parallel view of the same data as src.fedoraproject.org). This change gives public visibility into ongoing work on released RHEL, and a place for developers and Red Hat’s partners to collaborate at that level.

CentOS SIGs — the special interest groups for virtualization, storage, config management and so on — will do their work in shared space right next to Fedora branches. This will allow much easier collaboration and sharing between the projects, and I’m hoping we’ll even be able to merge some of our similar SIGs to work together directly. Fixes from Fedora packages can be cherry-picked into the CentOS “midstream” ones — and where useful, vice versa.

Ultimately, Fedora, CentOS, and RHEL are part of the same big project family. This new, more natural flow opens possibilities for collaboration which were locked behind artificial (and extra-dimensional!) barriers. I’m very excited for what we can now do together!

— Matthew Miller, Fedora Project Leader


GNOME 3.34 released — coming soon in Fedora 31

Today the GNOME project announced the release of GNOME 3.34. This latest release of GNOME will be the default desktop environment in Fedora 31 Workstation. The Beta release of Fedora 31 is currently expected in the next week or two, with the Final release scheduled for late October.

GNOME 3.34 includes a number of new features and improvements. Congratulations and thank you to the whole GNOME community for the work that went into this release! Read on for more details.

GNOME 3.34 desktop environment at work

Notable features

The desktop itself has been refreshed with a pleasing new background. You can also compare your background images to see what they’ll look like on the desktop.

There’s a new custom application folder feature in the GNOME Shell Overview. It lets you combine applications in a group to make it easier to find the apps you use.

You already know that Boxes lets you easily download an OS and create virtual machines for testing, development, or even daily use. Now you can find sources for your virtual machines more easily, as well as boot from CD or DVD (ISO) images more easily. There is also an Express Install feature available that now supports Windows versions.

Now that you can save states when using GNOME Games, gaming is more fun. You can snapshot your progress without getting in the way of the fun. You can even move snapshots to other devices running GNOME.

More details

These are not the only features of the new and improved GNOME 3.34. For an overview, visit the official release announcement. For even more details, check out the GNOME 3.34 release notes.

The Fedora 31 Workstation Beta release is right around the corner. Fedora 31 will feature GNOME 3.34 and you’ll be able to experience it in the Beta release.


exFAT in the Linux kernel? Yes!

Microsoft ♥ Linux – we say that a lot, and we mean it! Today we’re pleased to announce that Microsoft is supporting the addition of Microsoft’s exFAT technology to the Linux kernel.

exFAT is the Microsoft-developed file system that’s used in Windows and in many types of storage devices like SD Cards and USB flash drives. It’s why hundreds of millions of storage devices that are formatted using exFAT “just work” when you plug them into your laptop, camera, and car.

It’s important to us that the Linux community can make use of exFAT included in the Linux kernel with confidence. To this end, we will be making Microsoft’s technical specification for exFAT publicly available to facilitate development of conformant, interoperable implementations. We also support the eventual inclusion of a Linux kernel with exFAT support in a future revision of the Open Invention Network’s Linux System Definition, where, once accepted, the code will benefit from the defensive patent commitments of OIN’s 3040+ members and licensees.

For more information, please see the Microsoft technical specification for exFAT at https://docs.microsoft.com/windows/win32/fileio/exfat-specification.


Microsoft joins partners and The Linux Foundation to create Confidential Computing Consortium

Microsoft has invested in confidential computing for many years, so I’m excited to announce that Microsoft will join industry partners to create the Confidential Computing Consortium, a new organization that will be hosted at The Linux Foundation. The Confidential Computing Consortium will be dedicated to defining and accelerating the adoption of confidential computing.

Confidential computing technologies offer the opportunity for organizations to collaborate on their data sets without giving access to that data, to gain shared insights and to innovate for the common good. The Consortium, which will include other founding members Alibaba, ARM, Baidu, Google Cloud, IBM, Intel, Red Hat, Swisscom and Tencent, is the organization where the industry can come together to collaborate on open source technology and frameworks to support these new confidential computing scenarios.

As computing moves from on-premises to the public cloud and the edge, protecting data becomes more complex. There are three types of possible data exposure to protect against. One is data at rest; another is data in transit. While there’s always room to improve and innovate, the industry has built technologies and standards to address these scenarios. The third possible exposure – or as I like to think of it, the critical ‘third leg of the stool’ – is data in use. Protecting data while in use is called confidential computing.

Protecting data in use means data is provably not visible in unencrypted form during computation except to the code authorized to access it. That can mean that it’s not even accessible to public cloud service providers or edge device vendors. This capability enables new solutions where data is private all the way from the edge to the public cloud. Some of the scenarios confidential computing can unlock include:

  • Training multi-party dataset machine learning models or executing analytics on multi-party datasets, which can allow customers to collaborate to obtain more accurate models or deeper insights without giving other parties access to their data.
  • Enabling confidential query processing in database engines within secure enclaves, which removes the need to trust database operators.
  • Empowering multiple parties to leverage technologies like the Confidential Consortium Framework, which delivers confidentiality and high transaction throughput for distributed databases and ledgers.
  • Protecting sensitive data at the edge, such as proprietary machine learning models and machine learning model execution, customer information, and billing/warranty logs.

Simply put, confidential computing capabilities, like the ability to collaborate on shared data without giving those collaborating access to that data, have the power to enable organizations to unlock the full potential of combined data sets. Future applications will generate a more powerful understanding of industries’ telemetry, more capable machine learning models, and a new level of protection for all workloads.

However, enabling these new scenarios requires new attestation and key management services, and for applications to take advantage of those services and confidential computing hardware. There are multiple implementations of confidential hardware, but each has its own SDK. This leads to complexity for developers, inhibits application portability, and slows development of confidential applications.

This is where the Confidential Computing Consortium comes in, with its mission of creating technology, taxonomy, and cross-platform development tools for confidential computing. This will allow application and systems developers to create software that can be deployed across different public clouds and Trusted Execution Environment (TEE) architectures. The organization will also anchor industry outreach and education initiatives.

Microsoft will be contributing the Open Enclave SDK to the Confidential Computing Consortium to develop a broader industry collaboration and ensure a truly open development approach. Other founding members Intel and Red Hat will be contributing the Intel® SGX SDK and Red Hat Enarx to the new group.

The Open Enclave SDK is targeted at creating a single unified enclave abstraction for developers to build TEE-based applications. It creates a pluggable, common way to create redistributable trusted applications securing data in use. The SDK originated inside Microsoft and was published on GitHub over a year ago under an open source license.

The Open Enclave SDK, which supports both Linux and Windows hosts and has been used and validated by multiple open source projects, was designed to:

  • Make it easy to write and debug code that runs inside TEEs.
  • Allow the development of code that’s portable between TEEs, starting with Intel® SGX and ARM TrustZone.
  • Provide a flexible plugin model to support different runtimes and cryptographic libraries.
  • Enable the development of auditable enclave code that works on both Linux and Windows.
  • Have a high degree of compatibility with existing code.

We want to thank the Linux Foundation and all our industry partners for coming together to advance confidential computing. These technologies offer the promise to protect data and enable collaboration to make the world more secure and unlock multiparty innovations. Personally, I’m looking forward to seeing what we can all do together.

Let us know what you’d like to see from the Confidential Computing Consortium in the comments.

Additional resources:
CCC Website
Linux Foundation press release
Open Enclave SDK site and on GitHub


Manage your passwords with Bitwarden and Podman

You might have encountered a few advertisements over the past year trying to sell you a password manager. Some examples are LastPass, 1Password, and Dashlane. A password manager removes the burden of remembering the passwords for all your websites. No longer do you need to re-use passwords or use easy-to-remember passwords. Instead, you only need to remember one single password that can unlock all your other passwords for you.

This can make you more secure by having one strong password instead of many weak passwords. You can also sync your passwords across devices if you have a cloud-based password manager like LastPass, 1Password, or Dashlane. Unfortunately, none of these products are open source. Luckily there are open source alternatives available.

Open source password managers

These alternatives include Bitwarden, LessPass, and KeePass. Bitwarden is an open source password manager that stores all your passwords encrypted on the server, which works the same way as LastPass, 1Password, or Dashlane. LessPass is a bit different, as it focuses on being a stateless password manager. This means it derives passwords from a master password, the website, and your username rather than storing the passwords encrypted. On the other side of the spectrum there’s KeePass, a file-based password manager with a lot of flexibility through its plugins and applications.

Each of these three apps has its own downsides. Bitwarden stores everything in one place and is exposed to the web through its API and website interface. LessPass can’t store custom passwords since it’s stateless, so you need to use their derived passwords. KeePass, a file-based password manager, can’t easily sync between devices. You can utilize a cloud-storage provider together with WebDAV to get around this, but a lot of clients do not support it and you might get file conflicts if devices do not sync correctly.

This article focuses on Bitwarden.

Running an unofficial Bitwarden implementation

There is a community implementation of the server and its API called bitwarden_rs. This implementation is fully open source as it can use SQLite or MariaDB/MySQL, instead of the proprietary Microsoft SQL Server that the official server uses.

It’s important to recognize some differences exist between the official and the unofficial version. For instance, the official server has been audited by a third-party, whereas the unofficial one hasn’t. When it comes to implementations, the unofficial version lacks email confirmation and support for two-factor authentication using Duo or email codes.

Let’s get started running the server with SELinux in mind. Following the documentation for bitwarden_rs you can construct a Podman command as follows:

$ podman run -d \ 
--userns=keep-id \
--name bitwarden \
-e SIGNUPS_ALLOWED=false \
-e ROCKET_PORT=8080 \
-v /home/egustavs/Bitwarden/bw-data/:/data/:Z \
-p 8080:8080 \
bitwardenrs/server:latest

This downloads the bitwarden_rs image and runs it in a user container under the user’s namespace. It uses a port above 1024 so that non-root users can bind to it. It also changes the volume’s SELinux context with :Z to prevent permission issues with read-write on /data.
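A quick sanity check that the container is up and answering on the published port might look like this:

$ podman ps --filter name=bitwarden
$ curl -I http://localhost:8080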

If you host this under a domain, it’s recommended to put this server behind a reverse proxy such as Apache or Nginx. That way you can use ports 80 and 443, which point to the container’s 8080 port, without running the container as root.
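As a rough sketch, a minimal Nginx server block for plain HTTP could look like the following; bitwarden.example.com is a placeholder, and you would add a TLS server block on port 443 once you have certificates (see the LetsEncrypt section below):

server {
    listen 80;
    server_name bitwarden.example.com;

    location / {
        # forward all requests to the container's published port
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}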

Running under systemd

With Bitwarden now running, you probably want to keep it that way. Next, create a unit file that keeps the container running, automatically restarts if it doesn’t respond, and starts running after a system restart. Create this file as /etc/systemd/system/bitwarden.service:

[Unit]
Description=Bitwarden Podman container
Wants=syslog.service

[Service]
User=egustavs
Group=egustavs
TimeoutStartSec=0
ExecStart=/usr/bin/podman start -a 'bitwarden'
ExecStop=-/usr/bin/podman stop -t 10 'bitwarden'
Restart=always
RestartSec=30s
KillMode=none

[Install]
WantedBy=multi-user.target
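If systemd doesn’t pick up the new unit right away, reload its configuration first:

$ sudo systemctl daemon-reload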

Now, enable and start it using sudo:

$ sudo systemctl enable bitwarden.service && sudo systemctl start bitwarden.service
$ systemctl status bitwarden.service
bitwarden.service - Bitwarden Podman container
Loaded: loaded (/etc/systemd/system/bitwarden.service; enabled; vendor preset: disabled)
Active: active (running) since Tue 2019-07-09 20:23:16 UTC; 1 day 14h ago
Main PID: 14861 (podman)
Tasks: 44 (limit: 4696)
Memory: 463.4M

Success! Bitwarden is now running under systemd and will keep running.

Adding LetsEncrypt

It’s strongly recommended to run your Bitwarden instance through an encrypted channel with something like LetsEncrypt if you have a domain. Certbot is a tool that creates LetsEncrypt certificates for you, and the project has a guide for doing this on Fedora.
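On Fedora, that boils down to something like the following; the domain is a placeholder, and --standalone assumes nothing else is listening on port 80 while the certificate is issued:

$ sudo dnf install certbot
$ sudo certbot certonly --standalone -d bitwarden.example.com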

After you generate a certificate, you can follow the bitwarden_rs guide about HTTPS. Just remember to append :Z to the LetsEncrypt volume to handle permissions while not changing the port.


Photo by CMDR Shane on Unsplash.


Microsoft joins the OpenChain community to help drive open source compliance

A lot goes into making open source great – from licenses to code to community. A key part of doing open source right is being able to trust that the code you receive complies with its open source licenses. It’s a deceptively hard problem and one that Microsoft is working with the community to address.

The OpenChain Project plays an important role in increasing confidence around the open source code you receive. It does so by creating standards and training materials focused on how to run a quality open source compliance program, which in turn builds trust and removes friction in the ecosystem and supply chain.

We’ve had the honor of working with the OpenChain community to help develop its forthcoming specification version, and today we’re pleased to announce that we are joining OpenChain both as a platinum member and as a board member.

Our goal is to work even more closely with the OpenChain community to create the standards that will bring even greater trust to the open source ecosystem and that will work for everyone – from individual developers to the largest enterprises.

And Microsoft’s efforts to work with the community to improve open source compliance don’t stop with OpenChain. We’re actively working with ClearlyDefined, which brings clarity to open source component license terms and enables better compliance automation, and the Linux Foundation’s TODO Group, where members develop and share best practices for running world-class open source programs.

We look forward to continued collaboration with OpenChain and the broader open source community to bring greater confidence, clarity, and efficiency to the open source ecosystem.

To learn more, read the full announcement here.


Microsoft acquires Citus Data, re-affirming commitment to open source and accelerating Azure PostgreSQL performance and scale

Data and analytics are increasingly at the center of digital transformation, with the most leading-edge enterprises leveraging data to drive customer acquisition and satisfaction, long-term strategic planning, and expansion into net new markets. This digital revolution is placing an incredible demand on technology solutions to be more open, flexible, and scalable to meet the demands of large data volumes, sub-second response times, and analytics driven business insights.

Microsoft is committed to building an open platform that is flexible and provides customers with technology choice to suit their unique needs. Microsoft Azure Data Services are a great example of a place where we have continuously invested in offering choice and flexibility with our fully managed, community-based open source relational database services, spanning MySQL, PostgreSQL and MariaDB. This builds on our other open source investments in SQL Server on Linux, a multi-model NoSQL database with Azure Cosmos DB, and support for open source analytics with the Spark and Hadoop ecosystems. With our acquisition of GitHub, we continue to expand on our commitment to empower developers to achieve more at every stage of the development lifecycle.

Building on these investments, I am thrilled to announce that we have acquired Citus Data, a leader in the PostgreSQL community. Citus is an innovative open source extension to PostgreSQL that transforms PostgreSQL into a distributed database, dramatically increasing performance and scale for application developers. Because Citus is an extension to open source PostgreSQL, it gives enterprises the performance advantages of a horizontally scalable database while staying current with all the latest innovations in PostgreSQL. Citus is available as a fully-managed database as a service, as enterprise software, and as a free open source download.

Since the launch of Microsoft’s fully managed community-based database service for PostgreSQL in March 2018, its adoption has surged. Earlier this month, PostgreSQL was named DBMS of the Year by DB-Engines, for the second year in a row. The acquisition of Citus Data builds on Azure’s open source commitment and enables us to provide the massive scalability and performance our customers demand as their workloads grow.

Together, Microsoft and Citus Data will further unlock the power of data, enabling customers to scale complex multi-tenant SaaS applications and accelerate the time to insight with real-time analytics over billions of rows, all with the familiar PostgreSQL tools developers know and love.

I am incredibly excited to welcome the high-caliber Citus Data team to Microsoft! Working together, we will accelerate the delivery of key, enterprise-ready features from Azure to PostgreSQL and enable critical PostgreSQL workloads to run on Azure with confidence. We continue to be energized by building on our promise around Azure as the most comprehensive cloud to run open source and proprietary workloads at any scale and look forward to working with the PostgreSQL community to accelerate innovation to customers.

For more information on Citus Data, you can read the blog post from Umur Cubukcu, CEO and co-founder, here.
