
gRPC performance improvements in .NET 5


James

gRPC is a modern open source remote procedure call framework. There are many exciting features in gRPC: real-time streaming, end-to-end code generation, and great cross-platform support to name a few. The most exciting to me, and consistently mentioned by developers who are interested in gRPC, is performance.

Last year Microsoft contributed a new implementation of gRPC for .NET to the CNCF. Built on top of Kestrel and HttpClient, gRPC for .NET makes gRPC a first-class member of the .NET ecosystem.

In our first gRPC for .NET release, we focused on gRPC’s core features, compatibility, and stability. In .NET 5, we made gRPC really fast.

gRPC and .NET 5 are fast

In a community-run benchmark of different gRPC server implementations, .NET achieves the second-highest requests per second, behind only Rust, and comes in just ahead of C++ and Go.

[Image: gRPC performance comparison]

This result builds on top of the work done in .NET 5. Our benchmarks show .NET 5 server performance is 60% faster than .NET Core 3.1. .NET 5 client performance is 230% faster than .NET Core 3.1.

Stephen Toub discusses dotnet/runtime changes in his Performance Improvements in .NET 5 blog post. Check it out to read about improvements in HttpClient and HTTP/2.

In the rest of this blog post I’ll talk about the improvements we made to make gRPC fast in ASP.NET Core.

HTTP/2 allocations in Kestrel

gRPC uses HTTP/2 as its underlying protocol. A fast HTTP/2 implementation is the most important factor when it comes to performance. Our gRPC server builds on top of Kestrel, an HTTP server written in C# that is designed with performance in mind. Kestrel is a top contender in the TechEmpower benchmarks, and gRPC benefits from many of Kestrel's performance improvements automatically. However, many HTTP/2-specific optimizations were also made in .NET 5.

Reducing allocations is a good place to start. Fewer allocations per HTTP/2 request means less time doing garbage collection (GC). And CPU time “wasted” in GC is CPU time not spent serving HTTP/2 requests.

[Image: .NET Core 3.1 memory graph]

The performance profiler above measures allocations over 100,000 gRPC requests. The live object graph's sawtooth pattern indicates memory building up and then being garbage collected. About 3.9 KB is allocated per request. Let's try to get that number down!

dotnet/aspnetcore#18601 adds pooling of streams in an HTTP/2 connection. This one change almost cuts allocations per request in half. It enables reuse of internal types like Http2Stream, and publicly accessible types like HttpContext and HttpRequest, across multiple requests.
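Conceptually, the pooling pattern looks something like the sketch below. The types and pool limit here are hypothetical, a simplified illustration rather than Kestrel's actual implementation:

using System.Collections.Generic;

// Minimal sketch of the stream pooling pattern. The types and the pool
// size limit are assumptions; this is not Kestrel's implementation.
public sealed class Http2StreamPool
{
    private const int MaxPooledStreams = 32; // assumed limit
    private readonly Stack<PooledHttp2Stream> _pool = new Stack<PooledHttp2Stream>();

    public PooledHttp2Stream Rent(int streamId)
    {
        // Reuse a pooled stream when available; otherwise allocate a new one.
        var stream = _pool.Count > 0 ? _pool.Pop() : new PooledHttp2Stream();
        stream.Initialize(streamId);
        return stream;
    }

    public void Return(PooledHttp2Stream stream)
    {
        // Clear per-request state before the stream can be reused.
        stream.Reset();
        if (_pool.Count < MaxPooledStreams)
        {
            _pool.Push(stream);
        }
    }
}

public sealed class PooledHttp2Stream
{
    public int StreamId { get; private set; }

    public void Initialize(int streamId) => StreamId = streamId;

    public void Reset() => StreamId = 0;
}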

Once streams are pooled, a range of further optimizations becomes available.

There are many smaller allocation savings:

  • dotnet/aspnetcore#19783 removes allocations in Kestrel's HTTP/2 flow control. A resettable ManualResetValueTaskSourceCore<T> type replaces allocating a new object each time flow control is triggered (a sketch of this pattern follows the list).
  • dotnet/aspnetcore#19273 replaces an array allocation with stackalloc when validating the HTTP request path.
  • dotnet/aspnetcore#19277 and dotnet/aspnetcore#19325 eliminate some unintended allocations related to logging.
  • dotnet/aspnetcore#22557 avoids allocating a Task<T> if a task is already complete.
  • dotnet/aspnetcore#19732 saves a string allocation by special-casing a content-length of 0.

Because every allocation matters.
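As a reference for the flow control change above, here is a minimal sketch of the resettable awaitable pattern built on ManualResetValueTaskSourceCore<T>. It illustrates the technique only; Kestrel's real flow control code is more involved:

using System;
using System.Threading.Tasks;
using System.Threading.Tasks.Sources;

// A reusable awaitable: one instance can be awaited, signaled, reset, and
// awaited again, avoiding a new allocation per wait.
public sealed class ReusableSignal : IValueTaskSource<int>
{
    // Mutable struct that implements the IValueTaskSource plumbing; do not mark readonly.
    private ManualResetValueTaskSourceCore<int> _core;

    public ValueTask<int> WaitAsync() => new ValueTask<int>(this, _core.Version);

    public void Signal(int value) => _core.SetResult(value);

    public void Reset() => _core.Reset(); // prepares the instance for the next wait

    int IValueTaskSource<int>.GetResult(short token) => _core.GetResult(token);

    ValueTaskSourceStatus IValueTaskSource<int>.GetStatus(short token) => _core.GetStatus(token);

    void IValueTaskSource<int>.OnCompleted(Action<object> continuation, object state, short token, ValueTaskSourceOnCompletedFlags flags)
        => _core.OnCompleted(continuation, state, token, flags);
}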

[Image: .NET 5 memory graph]

Per-request memory in .NET 5 is now just 330 B, a decrease of 92%. The sawtooth pattern has also disappeared: with so few allocations, garbage collection didn't run at all while the server processed 100,000 gRPC calls.

Reading HTTP headers in Kestrel

A hot path in HTTP/2 is reading and writing HTTP headers. An HTTP/2 connection supports concurrent requests over a single TCP socket, a feature called multiplexing. Multiplexing allows HTTP/2 to make efficient use of connections, but only the headers for one request on a connection can be processed at a time, because HTTP/2's HPack header compression is stateful and order-dependent. Processing HTTP/2 headers is therefore a bottleneck and has to be as fast as possible.

dotnet/aspnetcore#23083 optimizes the performance of HPackDecoder. The decoder is a state machine that reads incoming HTTP/2 HEADER frames. The state machine approach is sound, since it allows Kestrel to decode frames as they arrive, but the decoder was checking state after parsing each byte. Another problem was that literal values (the header names and values) were copied multiple times. Optimizations in this PR include:

  • Tighten parsing loops. For example, if we've just parsed a header name then the value must come afterwards. There is no need to check the state machine to figure out the next state.
  • Skip literal parsing altogether. Literals in HPack have a length prefix. If we know the next 100 bytes are a literal then there is no need to inspect each byte. Mark the literal's location and resume parsing at its end.
  • Avoid copying literal bytes. Previously literal bytes were always copied to an intermediary array before being passed to Kestrel. Most of the time this isn't necessary; instead we can slice the original buffer and pass a ReadOnlySpan<byte> to Kestrel (see the sketch after this list).
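To make the last two points concrete, here is a simplified sketch of the mark-and-slice idea. It assumes a plain 1-byte length prefix and ignores Huffman coding, both of which a real HPack decoder must handle:

using System;

// Hypothetical helper: return a literal as a slice of the input buffer
// instead of copying it to an intermediary array.
static ReadOnlySpan<byte> ReadLiteral(ReadOnlySpan<byte> buffer, ref int position)
{
    int length = buffer[position];                     // simplified 1-byte length prefix
    var literal = buffer.Slice(position + 1, length);  // a view over the original bytes, no copy
    position += 1 + length;                            // skip the literal and resume parsing at its end
    return literal;
}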

Together these changes significantly decrease the time it takes to parse headers. Header size is almost no longer a factor: the decoder marks the start and end position of a value and then slices that range.

private HPackDecoder _decoder = CreateDecoder();
private byte[] _smallHeader = new byte[] { /* HPack bytes */ };
private byte[] _largeHeader = new byte[] { /* HPack bytes */ };
private IHttpHeadersHandler _noOpHandler = new NoOpHeadersHandler();

[Benchmark]
public void SmallDecode() => _decoder.Decode(_smallHeader, endHeaders: true, handler: _noOpHandler);

[Benchmark]
public void LargeDecode() => _decoder.Decode(_largeHeader, endHeaders: true, handler: _noOpHandler);
Method       Runtime        Mean           Ratio  Allocated
SmallDecode  .NET Core 3.1     111.20 ns   1.00   0 B
SmallDecode  .NET 5.0           71.90 ns   0.65   0 B
LargeDecode  .NET Core 3.1  49,083.00 ns   1.00   0 B
LargeDecode  .NET 5.0           98.68 ns   0.002  0 B

Once headers have been decoded, Kestrel needs to validate and process them. For example, special HTTP/2 headers like :path and :method need to be set onto HttpRequest.Path and HttpRequest.Method, and other headers need to be converted to strings and added to the HttpRequest.Headers collection.

Kestrel has the concept of known request headers. Known headers are a selection of commonly occurring request headers that have been optimized for fast setting and getting. dotnet/aspnetcore#24730 adds an even faster path for setting HPack static table headers to the known headers. The HPack static table assigns a numeric ID to 61 common header names and values, which can be sent instead of the full name. A header with a static table ID can use the optimized path to bypass some validation and quickly be set in the collection based on its ID. dotnet/aspnetcore#24945 adds an extra optimization for static table IDs that include both a name and a value.

Adding HPack response compression

Prior to .NET 5, Kestrel supported reading HPack compressed headers in requests, but it didn’t compress response headers. The obvious advantage of response header compression is less network usage, but there are performance benefits as well. It’s faster to write a couple of bits for a compressed header than it is to encode and write the header’s full name and value as bytes.

dotnet/aspnetcore#19521 adds initial HPack static compression. Static compression is pretty simple: if the header is in the HPack static table then write the ID to identify the header instead of the longer name.
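As a rough illustration, an indexed header field in HPack is a single byte: the high bit set, plus the static table index in the low 7 bits. The hypothetical helper below hardcodes a single entry; a real encoder covers the whole table and falls back to writing the full header:

using System;

// Hypothetical helper: write ":status: 200" as its static table index.
static bool TryWriteStaticIndex(string name, string value, Span<byte> destination, out int bytesWritten)
{
    if (name == ":status" && value == "200" && destination.Length >= 1)
    {
        destination[0] = 0x88; // 0b1000_0000 (indexed field) | 8 (static table index of ":status: 200")
        bytesWritten = 1;
        return true;
    }

    bytesWritten = 0;
    return false;
}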

Dynamic HPack header compression is more complicated, but also provides bigger gains. Response header names and values are tracked in a dynamic table and are each assigned an ID. As a response's headers are written, the server checks whether the header name and value are in the table. If there is a match then the ID is written. If there isn't, the full header is written and added to the table for the next response. The dynamic table has a maximum size, so adding a header to it may evict other headers in first-in, first-out order.

dotnet/aspnetcore#20058 adds dynamic HPack header compression. To quickly search for headers the dynamic table groups header entries using a basic hash table. To track order and evict the oldest headers, entries maintain a linked list. To avoid allocations, removed entries are pooled and reused.
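The data structure can be sketched as follows. This simplified illustration limits the table by entry count, whereas real HPack accounts for size in bytes, and it omits the entry pooling described above:

using System.Collections.Generic;

// Simplified dynamic header table: a dictionary for fast lookup plus a
// linked list to evict the oldest entry first (FIFO).
public sealed class DynamicHeaderTable
{
    private readonly int _maxEntries;
    private readonly LinkedList<(string Name, string Value)> _order = new LinkedList<(string, string)>();
    private readonly Dictionary<(string Name, string Value), LinkedListNode<(string Name, string Value)>> _lookup
        = new Dictionary<(string, string), LinkedListNode<(string, string)>>();

    public DynamicHeaderTable(int maxEntries) => _maxEntries = maxEntries;

    // True if this exact name/value pair was sent before and can be written as an ID.
    public bool Contains(string name, string value) => _lookup.ContainsKey((name, value));

    // Track a header that was written in full so the next response can reference it.
    public void Add(string name, string value)
    {
        if (_order.Count == _maxEntries)
        {
            var oldest = _order.Last.Value; // first in, first out
            _lookup.Remove(oldest);
            _order.RemoveLast();
        }

        var node = _order.AddFirst((name, value));
        _lookup[(name, value)] = node;
    }
}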

[Image: Wireshark HTTP/2 response]

Using Wireshark, we can see the impact of header compression on response size for this example gRPC call. .NET Core 3.x writes 77 B, while .NET 5 is only 12 B.

Protobuf message serialization

gRPC for .NET uses the Google.Protobuf package as the default serializer for messages. Protobuf is an efficient binary serialization format. Google.Protobuf is designed for performance, using code generation instead of reflection to serialize .NET objects. There are some modern .NET APIs and features that can be added to it to reduce allocations and improve efficiency.

The biggest improvement to Google.Protobuf is support for modern .NET IO types: Span<T>, ReadOnlySequence<T> and IBufferWriter<T>. These types allow gRPC messages to be serialized directly using buffers exposed by Kestrel. This saves Google.Protobuf allocating an intermediary array when serializing and deserializing Protobuf content.
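Here is a sketch of what the new APIs look like from user code, assuming a generated message type named TestMessage. ArrayBufferWriter<byte> stands in for the buffers that Kestrel exposes:

using System.Buffers;
using Google.Protobuf;

var message = new TestMessage { /* set fields */ };

// Serialize directly into a buffer writer instead of allocating a byte[].
var bufferWriter = new ArrayBufferWriter<byte>();
message.WriteTo(bufferWriter);

// Deserialize from a ReadOnlySequence<byte>, such as one exposed by System.IO.Pipelines.
var sequence = new ReadOnlySequence<byte>(bufferWriter.WrittenMemory);
TestMessage parsed = TestMessage.Parser.ParseFrom(sequence);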

Support for Protobuf buffer serialization was a multi-year effort between Microsoft and Google engineers. Changes were spread across multiple repositories.

protocolbuffers/protobuf#7351 and protocolbuffers/protobuf#7576 add support for buffer serialization to Google.Protobuf. This is by far the biggest and most complicated change. Three attempts were made to add this feature before the right balance between performance, backwards compatibility and code reuse was found. Protobuf reading and writing uses many performance oriented features and APIs added to C# and .NET Core:

  • Span<T> and C# ref struct types enable fast and safe access to memory. Span<T> represents a contiguous region of arbitrary memory. Using span lets us serialize to managed .NET arrays, stack-allocated arrays, or unmanaged memory, without using pointers. Span<T> and .NET protect us against buffer overflows.
  • stackalloc is used to create stack-based arrays. stackalloc is a useful tool to avoid allocations when a small buffer is required.
  • Low-level methods such as MemoryMarshal.GetReference(), Unsafe.ReadUnaligned() and Unsafe.WriteUnaligned() convert directly between primitive types and bytes.
  • BinaryPrimitives has helper methods for efficiently converting between .NET primitive types and bytes. For example, BinaryPrimitives.ReadUInt64LittleEndian reads little-endian bytes and returns an unsigned 64-bit number. Methods provided by BinaryPrimitives are heavily optimized and use vectorization. A short sketch follows this list.
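Here is the promised sketch, a small illustration (not Google.Protobuf's actual code) of stackalloc and BinaryPrimitives working together to round-trip a 64-bit value with no heap allocation:

using System;
using System.Buffers.Binary;

// Round-trip a 64-bit value through a stack-allocated buffer.
Span<byte> buffer = stackalloc byte[8];
BinaryPrimitives.WriteUInt64LittleEndian(buffer, 0xDEADBEEF);

ulong value = BinaryPrimitives.ReadUInt64LittleEndian(buffer);
Console.WriteLine(value); // 3735928559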

A great thing about modern C# and .NET is it is possible to write fast, efficient, low-level libraries without sacrificing memory safety. When it comes to performance, .NET lets you have your cake and eat it too!

private TestMessage _testMessage = CreateMessage();
private ReadOnlySequence<byte> _testData = CreateData();
private IBufferWriter<byte> _bufferWriter = CreateWriter();

[Benchmark]
public byte[] ToByteArray() => _testMessage.ToByteArray();

[Benchmark]
public void ToBufferWriter() => _testMessage.WriteTo(_bufferWriter);

[Benchmark]
public IMessage FromByteArray() => TestMessage.Parser.ParseFrom(CreateBytes());

[Benchmark]
public IMessage FromSequence() => TestMessage.Parser.ParseFrom(_testData);
Method          Runtime   Mean         Ratio  Allocated
ToByteArray     .NET 5.0  1,133.82 ns  1.00   184 B
ToBufferWriter  .NET 5.0    589.05 ns  0.51   64 B
FromByteArray   .NET 5.0    409.88 ns  1.00   1960 B
FromSequence    .NET 5.0    381.03 ns  0.92   1776 B

Adding support for buffer serialization to Google.Protobuf is just the first step. More work is required for gRPC for .NET to take advantage of the new capability:

  • grpc/grpc#18865 and grpc/grpc#19792 add ReadOnlySequence<byte> and IBufferWriter<byte> APIs to the gRPC serialization abstraction layer in Grpc.Core.Api.
  • grpc/grpc#23485 updates gRPC code generation to glue the changes in Google.Protobuf to Grpc.Core.Api.
  • grpc/grpc-dotnet#376 and grpc/grpc-dotnet#629 update gRPC for .NET to use the new serialization abstractions in Grpc.Core.Api. This code is the integration between Kestrel and gRPC. Because Kestrel's IO is built on top of System.IO.Pipelines, we can use its buffers during serialization.

The end result is gRPC for .NET serializes Protobuf messages directly to Kestrel’s request and response buffers. Intermediary array allocations and byte copies have been eliminated from gRPC message serialization.

Wrapping Up

Performance is a feature of .NET and gRPC, and as cloud apps scale it is more important than ever. I think all developers can agree it is fun to make fast apps, but performance has real world impact. Lower latency and higher throughput means fewer servers. It is an opportunity to save money, reduce power use and build greener apps.

[Image: .NET Core 3.1 vs .NET 5 results]

As this tour shows, a lot of changes have gone into gRPC, Protobuf and .NET aimed at improving performance. Our benchmarks show a 60% improvement in gRPC server RPS and a 230% improvement in gRPC client RPS.

.NET 5 RC2 is available now, and the official .NET 5 release is in November. To try out the performance improvements and to get started using gRPC with .NET, the best place to start is the Create a gRPC client and server in ASP.NET Core tutorial.

We look forward to hearing about apps built with gRPC and .NET, and to your future contributions in the dotnet and grpc repos!


Orchard Core Release Candidate 2 now available


Sebastien

We are thrilled to announce that Orchard Core RC2 is now available.

What is Orchard Core?

Orchard Core Framework is a community-based application framework for building modular, multi-tenant applications on ASP.NET Core. It has been created by more than 150 contributors and has over 4K stars on GitHub.

Orchard Core also includes Orchard Core CMS, a Web Content Management System (CMS) built on top of the Orchard Core Framework. It allows you to build full websites, or headless websites using GraphQL.

Getting Started

Installing the templates

You can install the recommended templates by running:

dotnet new -i OrchardCore.ProjectTemplates::1.0.0-*

Creating a new modular application

Using the templates, a modular MVC application can be created by running:

dotnet new ocmvc -n MySite

And a module is created by running:

dotnet new ocmodulemvc -n MyModule
dotnet add MySite reference MyModule

Creating an Orchard CMS website

To create a new site based on the Orchard Core CMS run:

dotnet new occms -n MySite
dotnet run --project .\MySite\MySite.csproj

After going through the setup form, you get a working blog.

[Image: blog]

What’s new

Some notable improvements include:

  • Content localization support, and pre-configured localized Setup experience

[Image: localized setup]

  • Improved block content management experience

[Image: blocks]

  • Sitemaps management
  • Azure support improvements

Resources

Development plan

The Orchard Core source code is available on GitHub.

There are still many important pieces to add, and you might want to check our roadmap, but it's also the best time to jump into the project and start contributing new modules, themes, improvements, or just ideas.

Feel free to drop by our dedicated Gitter chat and ask questions.


Introducing Project Tye


Amiee

Project Tye is an experimental developer tool that makes developing, testing, and deploying microservices and distributed applications easier.

When building an app made up of multiple projects, you often want to run more than one at a time, such as a website that communicates with a backend API or several services all communicating with each other. Today this can be difficult to set up, and it's only the very first step in getting started with building out a distributed application. Once you have an inner-loop experience, there is a sometimes steep learning curve to get your distributed app onto a platform such as Kubernetes.

The project has two main goals:

  1. Making development of microservices easier by:
    • Running many services with one command
    • Using dependencies in containers
    • Discovering addresses of other services using simple conventions
  2. Automating deployment of .NET applications to Kubernetes by:
    • Automatically containerizing .NET applications
    • Generating Kubernetes manifests with minimal knowledge or configuration
    • Using a single configuration file

If you have an app that talks to a database, or an app that is made up of a couple of different processes that communicate with each other, then we think Tye will help ease some of the common pain points you’ve experienced.

We have recently demonstrated Tye in a few Build sessions that we encourage you to watch: Cloud Native Apps with .NET and AKS and Journey to one .NET.

Installation

To get started with Tye, you will first need to have .NET Core 3.1 installed on your machine.

Tye can then be installed as a global tool using the following command:

dotnet tool install -g Microsoft.Tye --version "0.2.0-alpha.20258.3"

Running a single service

Tye makes it very easy to run single applications. To demonstrate this:

1. Make a new folder called microservices and navigate to it:

mkdir microservices
cd microservices

2. Then create a frontend project:

dotnet new razor -n frontend

3. Now run this project using tye run:

tye run frontend

[Image: tye run output]

The output above shows how Tye builds, runs, and monitors the frontend application.

One key feature from tye run is a dashboard to view the state of your application. Navigate to http://localhost:8000 to see the dashboard running.

[Image: tye dashboard]

The dashboard is the UI for Tye that displays a list of all of your services. The Bindings column has links to the listening URLs of the service. The Logs column allows you to view the streaming logs for the service.

[Image: tye logs]

Services written using ASP.NET Core will have their listening ports assigned randomly if not explicitly configured. This is useful to avoid common issues like port conflicts.

Running multiple services

Instead of just a single application, suppose we have a multi-application scenario where our frontend project now needs to communicate with a backend project. If you haven’t already, stop the existing tye run command using Ctrl + C.

1. Create a backend API that the frontend will call inside of the microservices/ folder.

dotnet new webapi -n backend

2. Then create a solution file and add both projects:

dotnet new sln
dotnet sln add frontend backend

You should now have a solution called microservices.sln that references the frontend and backend projects.

3. Run tye in the folder with the solution.

tye run

The dashboard should show both the frontend and backend services. You can navigate to each of them through either the dashboard or the URL output by tye run.

The backend service in this example was created using the webapi project template and will return an HTTP 404 for its root URL.

Getting the frontend to communicate with the backend

Now that we have two applications running, let’s make them communicate.

To get both of these applications communicating with each other, Tye utilizes service discovery. In general terms, service discovery describes the process by which one service figures out the address of another service. Tye uses environment variables for specifying connection strings and URIs of services.

The simplest way to use Tye’s service discovery is through the Microsoft.Extensions.Configuration system – available by default in ASP.NET Core or .NET Core Worker projects. In addition to this, we provide the Microsoft.Tye.Extensions.Configuration package with some Tye-specific extensions layered on top of the configuration system.
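Under the covers this is plain configuration. The sketch below shows the rough shape of the convention; the exact environment variable names are an assumption here, so consult the reference document linked below for the real scheme:

// Hypothetical illustration of reading Tye's service discovery values directly
// from configuration (the GetServiceUri helper shown later wraps this up for you).
// Assumed environment variables: SERVICE__BACKEND__HOST, SERVICE__BACKEND__PORT,
// which the configuration system exposes with ":" separators.
var host = Configuration["service:backend:host"];
var port = Configuration["service:backend:port"];
var backendUri = new Uri($"http://{host}:{port}");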

If you want to learn more about Tye’s philosophy on service discovery and see detailed usage examples, check out this reference document.

1. If you haven’t already, stop the existing tye run command using Ctrl + C. Open the solution in your editor of choice.

2. Add a file WeatherForecast.cs to the frontend project.

using System;

namespace frontend
{
    public class WeatherForecast
    {
        public DateTime Date { get; set; }

        public int TemperatureC { get; set; }

        public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);

        public string Summary { get; set; }
    }
}

This will match the backend WeatherForecast.cs.

3. Add a file WeatherClient.cs to the frontend project with the following contents:

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

namespace frontend
{
    public class WeatherClient
    {
        private readonly JsonSerializerOptions options = new JsonSerializerOptions()
        {
            PropertyNameCaseInsensitive = true,
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
        };

        private readonly HttpClient client;

        public WeatherClient(HttpClient client)
        {
            this.client = client;
        }

        public async Task<WeatherForecast[]> GetWeatherAsync()
        {
            var responseMessage = await this.client.GetAsync("/weatherforecast");
            var stream = await responseMessage.Content.ReadAsStreamAsync();
            return await JsonSerializer.DeserializeAsync<WeatherForecast[]>(stream, options);
        }
    }
}

4. Add a reference to the Microsoft.Tye.Extensions.Configuration package to the frontend project

dotnet add frontend/frontend.csproj package Microsoft.Tye.Extensions.Configuration --version "0.2.0-*"

5. Now register this client in the frontend by adding the following to the ConfigureServices method in the existing Startup.cs file:

...
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    /** Add the following to wire the client to the backend **/
    services.AddHttpClient<WeatherClient>(client =>
    {
        client.BaseAddress = Configuration.GetServiceUri("backend");
    });
    /** End added code **/
}
...

This will wire up the WeatherClient to use the correct URL for the backend service.

6. Add a Forecasts property to the Index page model under Pages\Index.cshtml.cs in the frontend project.

...
public WeatherForecast[] Forecasts { get; set; }
...

7. Change the OnGet method to take the WeatherClient to call the backend service and store the result in the Forecasts property:

...
public async Task OnGet([FromServices]WeatherClient client)
{
    Forecasts = await client.GetWeatherAsync();
}
...

8. Change the Index.cshtml razor view to render the Forecasts property in the razor page:

@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}

<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    <p>Learn about <a href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

Weather Forecast:

<table class="table">
    <thead>
        <tr>
            <th>Date</th>
            <th>Temp. (C)</th>
            <th>Temp. (F)</th>
            <th>Summary</th>
        </tr>
    </thead>
    <tbody>
        @foreach (var forecast in @Model.Forecasts)
        {
            <tr>
                <td>@forecast.Date.ToShortDateString()</td>
                <td>@forecast.TemperatureC</td>
                <td>@forecast.TemperatureF</td>
                <td>@forecast.Summary</td>
            </tr>
        }
    </tbody>
</table>

9. Run the project with tye run and the frontend service should be able to successfully call the backend service!

When you visit the frontend service you should see a table of weather data. This data was produced randomly in the backend service; the fact that you're seeing it in a web UI in the frontend means that the services are able to communicate. Unfortunately, this doesn't work out of the box on Linux right now due to how self-signed certificates are handled; please see the workaround here.

Tye’s configuration schema

Tye has an optional configuration file (tye.yaml) to enable customizing settings. This file lists all of your projects and external dependencies. If you have an existing solution, Tye will automatically populate it with all of your current projects.

To initialize this file, run the following command in the microservices directory to generate a default tye.yaml file:

tye init

The contents of the tye.yaml should look like this:

[Image: tye.yaml]
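For the two projects created above, the generated file should look roughly like this (a sketch derived from the tye.yaml shown later in this tutorial):

name: microservice
services:
- name: backend
  project: backend\backend.csproj
- name: frontend
  project: frontend\frontend.csproj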

The top level scope (like the name node) is where global settings are applied.

tye.yaml lists all of the application’s services under the services node. This is the place for per-service configuration.

To learn more about Tye's yaml specification and schema, check it out in Tye's repository on GitHub.

We provide a json-schema for tye.yaml and some editors support json-schema for completion and validation of yaml files. See json-schema for instructions.

Adding external dependencies (Redis)

Not only does Tye make it easy to run and deploy your applications to Kubernetes, it's also fairly simple to add external dependencies to your applications. We will now add redis to the frontend and backend applications to store data.

Tye can use Docker to run images that run as part of your application. Make sure that Docker is installed on your machine.

1. Change the WeatherForecastController.Get() method in the backend project to cache the weather information in redis using an IDistributedCache.

2. Add the following using directives to the top of the file:

using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

3. Update Get():

[HttpGet]
public async Task<string> Get([FromServices]IDistributedCache cache)
{
    var weather = await cache.GetStringAsync("weather");
    if (weather == null)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        })
        .ToArray();

        weather = JsonSerializer.Serialize(forecasts);

        await cache.SetStringAsync("weather", weather, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(5)
        });
    }

    return weather;
}

This will store the weather data in Redis with an expiration time of 5 seconds.

4. Add a package reference to Microsoft.Extensions.Caching.StackExchangeRedis in the backend project:

cd backend/
dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis
cd ..

5. Modify Startup.ConfigureServices in the backend project to add the redis IDistributedCache implementation.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();

    services.AddStackExchangeRedisCache(o =>
    {
        o.Configuration = Configuration.GetConnectionString("redis");
    });
}

The above configures the redis cache to use the connection string for the redis service, which is injected by the Tye host.

6. Modify tye.yaml to include redis as a dependency.

name: microservice
services:
- name: backend
  project: backend\backend.csproj
- name: frontend
  project: frontend\frontend.csproj
- name: redis
  image: redis
  bindings:
  - port: 6379
    connectionString: "${host}:${port}"
- name: redis-cli
  image: redis
  args: "redis-cli -h redis MONITOR"

We’ve added 2 services to the tye.yaml file. The redis service itself and a redis-cli service that we will use to watch the data being sent to and retrieved from redis.

The "${host}:${port}" format in the connectionString property will substitute the values of the host and port number to produce a connection string that can be used with StackExchange.Redis.

7. Run the tye command line in the solution root

Make sure your command-line is in the microservices/ directory. One of the previous steps had you change directories to edit a specific project.

tye run

Navigate to http://localhost:8000 to see the dashboard running. Now you will see both redis and the redis-cli running listed in the dashboard.

Navigate to the frontend application and verify that the data returned is the same after refreshing the page multiple times. New content will be loaded every 5 seconds, so if you wait that long and refresh again, you should see new data. You can also look at the redis-cli logs using the dashboard and see what data is being cached in redis.

The "${host}:${port}" format in the connectionString property will substitute the values of the host and port number to produce a connection string that can be used with StackExchange.Redis.

Deploying to Kubernetes

Tye makes the process of deploying your application to Kubernetes very simple, with minimal knowledge or configuration required.

Tye will use your current credentials for pushing Docker images and accessing Kubernetes clusters. If you have configured kubectl with a context already, that’s what tye deploy is going to use!

Prior to deploying your application, make sure to have the following:

  1. Docker installed for your operating system
  2. A container registry. Docker by default will create a container registry on DockerHub. You could also use Azure Container Registry (ACR) or another container registry of your choice.
  3. A Kubernetes cluster. There are many different options here.

If you choose a container registry provided by a cloud provider (other than Dockerhub), you will likely have to take some steps to configure your kubernetes cluster to allow access. Follow the instructions provided by your cloud provider.

Deploying Redis

tye deploy will not deploy the redis configuration, so you need to deploy it first by running:

kubectl apply -f https://raw.githubusercontent.com/dotnet/tye/master/docs/tutorials/hello-tye/redis.yaml

This will create a deployment and service for redis.

Tye deploy

You can deploy your application by running the follow command:

tye deploy --interactive

Enter the Container Registry (ex: example.azurecr.io for Azure or example for dockerhub):

You will be prompted to enter your container registry. This is needed to tag images and to push them to a location accessible by Kubernetes.

[Image: tye deploy output]

If you are using dockerhub, the registry name will be your dockerhub username. If you are using a standalone container registry (for instance from your cloud provider), the registry name will look like a hostname, eg: example.azurecr.io.

You’ll also be prompted for the connection string for redis.

[Image: redis connection string]

Enter the following to use the instance that you just deployed:

redis:6379

tye deploy will create a Kubernetes secret to store the connection string.

--interactive is needed here to create the secret. This is a one-time configuration step. In a CI/CD scenario you would not want to specify connection strings over and over; deployment would rely on the existing configuration in the cluster.

Tye uses Kubernetes secrets to store connection information about dependencies like redis that might live outside the cluster. Tye will automatically generate mappings between service names, binding names, and secret names.

tye deploy does many different things to deploy an application to Kubernetes. It will:

  • Create a docker image for each project in your application.
  • Push each docker image to your container registry.
  • Generate a Kubernetes Deployment and Service for each project.
  • Apply the generated Deployment and Service to your current Kubernetes context.

[Image: tye deploy building images]

You should now see three pods running after deploying.

kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE
backend-ccfcd756f-xk2q9     1/1     Running   0          85m
frontend-84bbdf4f7d-6r5zp   1/1     Running   0          85m
redis-5f554bd8bd-rv26p      1/1     Running   0          98m

You can now visit the frontend application. You will need to port-forward to access it from outside the cluster:

kubectl port-forward svc/frontend 5000:80

Now navigate to http://localhost:5000 to view the frontend application working on Kubernetes.

[Image: kubernetes port-forward]

Currently Tye does not automatically enable TLS within the cluster, so communication takes place over HTTP instead of HTTPS. This is a typical way to deploy services in Kubernetes; we may look to enable TLS as an option or by default in the future.

Adding a registry to tye.yaml

If you want to use tye deploy as part of a CI/CD system, it’s expected that you’ll have a tye.yaml file initialized. You will then need to add a container registry to tye.yaml. Based on what container registry you configured, add the following line in the tye.yaml file:

registry: <registry_name>

Now it’s possible to use tye deploy without --interactive since the registry is stored as part of configuration.

This step may not make much sense if you're using tye.yaml to store a personal Dockerhub username. A more typical use case would be storing the name of a private registry for use in a CI/CD system.

For a conceptual overview of how Tye behaves when using tye deploy for deployment, check out this document.

Undeploying your application

After deploying and playing around with the application, you may want to remove all resources associated from the Kubernetes cluster. You can remove resources by running:

tye undeploy

This will remove all deployed resources. If you’d like to see what resources would be deleted, you can run:

tye undeploy --what-if

If you want to experiment more with Tye, we have a variety of sample applications and tutorials that you can walk through.

We have been diligently working on adding new capabilities and integrations to continuously improve Tye. Here are some integrations below that we have recently released. There is also information provided on how to get started for each of these:

  • Ingress: to expose pods/services created to the public internet.
  • Redis: to store data, cache, or act as a message broker.
  • Dapr: for integrating a Dapr application with Tye.
  • Zipkin: using Zipkin for distributed tracing.
  • Elastic Stack: using Elastic Stack for logging.

While we are excited about the promise Tye holds, it’s an experimental project and not a committed product. During this experimental phase we expect to engage deeply with anyone trying out Tye to hear feedback and suggestions. The point of doing experiments in the open is to help us explore the space as much as we can and use what we learn to determine what we should be building and shipping in the future.

Project Tye is currently committed as an experiment until .NET 5 ships, at which point we will evaluate what we have and all that we've learned to decide what we should do in the future.

Our goal is to ship every month, and some new capabilities that we are looking into for Tye include:

  • More deployment targets
  • Sidecar support
  • Connected development
  • Database migrations

We are excited by the potential Tye has to make developing distributed applications easier, and we need your feedback to make sure it reaches that potential. We'd really love for you to try it out and tell us what you think. There is a link to a survey on the Tye dashboard that you can fill out, or you can create issues and talk to us on GitHub. Either way, we'd love to hear what you think.


ASP.NET Core Apps Observability


Francisco

Thank you Sergey Kanzhelev for the support and review of this ASP.NET Core Apps Observability article.

Modern software development practices value quick and continuous updates, following processes that minimize the impact of software failures. Just as important as identifying bugs early is finding out whether changes are improving business value. These practices can only work when a monitoring solution is in place. This article explores options for adding observability to .NET Core apps, collected from interactions with customers using .NET Core in different environments. We will be looking into the OpenTelemetry and Application Insights SDKs to add observability to a sample distributed application.

Identifying software errors and business impact requires a monitoring solution that can observe and report how the system and users behave. The collected data must provide the information required to analyze and identify a bad update, answering questions such as:

  • Are we observing more errors than before?
  • Were there new error types?
  • Did the request duration unexpectedly increase compared to previous versions?
  • Has the throughput (req/sec) decreased?
  • Has the CPU and/or Memory usage increased?
  • Were there changes in our KPIs?
  • Is it selling less than before?
  • Did our visitor count decrease?

The impact of a bad system update can be minimized by combining the monitoring information with progressive deployment strategies such as canary, mirroring, rings, and blue/green.

Observability is built on three pillars:

  • Logging: collects information about events happening in the system, helping the team analyze unexpected application behavior. Searching through the logs of suspect services can provide the hint needed to identify the root cause of a problem: a service throwing out-of-memory exceptions, app configuration not reflecting expected values, calls to an external service using an incorrect address, calls to an external service returning unexpected results, or incoming requests with unexpected input.

  • Tracing: collects information to create an end-to-end view of how transactions are executed in a distributed system. A trace is like a stack trace spanning multiple applications. Once a problem has been recognized, traces are a good starting point for identifying the source in distributed operations: calls from service A to B taking longer than normal, payment service calls failing, and so on.

  • Metrics: provide a real-time indication of how the system is running. Metrics can be leveraged to build alerts, allowing a proactive reaction to unexpected values. As opposed to logs and traces, the amount of data collected using metrics remains constant as the system load increases. Application problems are often first detected through abnormal metric values: CPU usage higher than before, the payment error count spiking, or the queued item count growing without bound.

Adding Observability to a .NET Core Application

There are many ways to add observability aspects to an application. Dapr, for example, is a runtime for building distributed applications that transparently adds distributed tracing. Another example is the use of service meshes in Kubernetes (Istio, Linkerd).

Built-in and transparent tracing typically covers basic scenarios and answers generic questions, such as observed request duration or CPU trends. Other questions, such as custom KPIs or user behavior, require adding instrumentation to your code.

To illustrate how observability can be added to a .NET Core application we will be using the following asynchronous distributed transaction example:

[Image: Sample Observability Application Overview]

  1. Main Api receives a request from a “source”.
  2. Main Api enriches the request body with current day, obtained from Time Api.
  3. Main Api enqueues enriched request to a RabbitMQ queue for asynchronous processing.
  4. RabbitMQProcessor dequeues request.
  5. RabbitMQProcessor, as part of the request processing, calls Time Api to get dbtime.
  6. Time Api calls SQL Server to get current time.

To run the sample application locally (including dependencies and observability tools), follow this guide. The article will walk through adding each observability pillar (logging, tracing, metrics) to the sample asynchronous distributed transaction.

Note: for information on bootstrapping OpenTelemetry or Application Insights SDK please refer to the documentation: OpenTelemetry and Application Insights.

Logging

Logging was redesigned in .NET Core, bringing an integrated and extensible API. Built-in and external logging providers allow the collection of logs in multiple formats and targets. When deciding on a logging platform, consider the following features:

  • Centralized: allowing the collection/storage of all system logs in a central location.
  • Structured logging: allows you to add searchable metadata to logs.
  • Searchable: allows searching by multiple criteria (app version, date, category, level, text, metadata, etc.)
  • Configurable: allows changing verbosity without code changes (based on log level and/or scope).
  • Integrated: integrated into tracing, facilitating analysis of traces and logs in the same tool.

The sample application uses the ILogger interface for logging. The snippet below demonstrates structured logging, which captures events using a message template and generates information that is both human and machine readable.

var result = await repository.GetTimeFromSqlAsync();
logger.LogInformation("{operation} result is {result}", nameof(repository.GetTimeFromSqlAsync), result);

When using a logging backend that understands structured logs, such as Application Insights, you can search for instances of the example log items where "operation" is equal to "GetTimeFromSqlAsync":

[Image: Application Insights structured log search]

Tracing

Tracing collects the information required to observe a transaction as it "walks" through the system. To be effective, it must be implemented in every service taking part in the transaction.

.NET Core defines a common way in which traces can be defined through the System.Diagnostics.Activity class. Through the usage of this class, dependency implementations (i.e. HTTP, SQL, Azure, EF Core, StackExchange.Redis, etc.) can create traces in a neutral way, independent of the monitoring tool used.
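As a minimal illustration (the operation and tag names here are invented for the example), creating a dependency trace with the Activity API looks like this:

using System.Diagnostics;

// Create and start an SDK-neutral trace around a dependency call.
var activity = new Activity("SQL SELECT current time");
activity.AddTag("db.name", "SampleDb"); // illustrative tag
activity.Start();
try
{
    // ... execute the dependency call ...
}
finally
{
    activity.Stop();
}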

It is important to note that those activities will not automatically be available in a monitoring system. Publishing them is the responsibility of the monitoring SDK used. Typically, SDKs have built-in collectors for common activities, transferring them to the destination platform automatically.

In the last quarter of 2019 OpenTelemetry was announced, promising to standardize telemetry instrumentation and collection across languages and tools. Before OpenTelemetry (or its predecessors OpenCensus and OpenTracing), adding observability would often mean adding proprietary SDKs (in)directly to the code base.

The OpenTelemetry .NET SDK is currently in alpha. The Azure Monitor Application Insights team is investing in OpenTelemetry as a next step of Azure Monitor SDKs evolution.

Quick Intro on Tracing with OpenTelemetry

In a nutshell, OpenTelemetry collects traces using spans. A span delimits an operation (HTTP request processing, dependency call). It contains start and end time (among other properties). It has a unique identifier (SpanId, 16 characters, 8 bytes) and a trace identifier (TraceId, 32 characters, 16 bytes). The trace identifier is used to correlate all spans for a given transaction. A span can contain children spans (as calls in a stack trace). If you are familiar with Azure Application Insights, the following table might be helpful to understand OpenTelemetry terms:

Application Insights          OpenTelemetry
Request, PageView             Span with span.kind = server
Dependency                    Span with span.kind = client
Id of Request and Dependency  SpanId
Operation_Id                  TraceId
Operation_ParentId            ParentId

Adding Tracing to a .NET Core Application

As mentioned previously, an SDK is needed in order to collect and publish distributed tracing in a .NET Core application. Application Insights SDK sends traces to its centralized database while OpenTelemetry supports multiple exporters (including Application Insights). When configured to use OpenTelemetry, the sample application sends traces to a Jaeger instance.

In the asynchronous distributed transaction scenario, track the following operations:

HTTP Requests between microservices

HTTP correlation propagation is part of both SDKs. The only requirement is setting the activity ID format to W3C at application start:

public static void Main(string[] args)
{
    Activity.DefaultIdFormat = ActivityIdFormat.W3C;
    Activity.ForceDefaultIdFormat = true;

    // rest is omitted
}

Dependency calls (SQL, RabbitMQ)

Unlike the Application Insights SDK, OpenTelemetry (in early alpha) does not yet support SQL Server trace collection. A simple way to track dependencies with OpenTelemetry is to wrap the call, as in the following example:

var span = this.tracer.StartSpan("My external dependency", SpanKind.Client);
try
{
    return CallToMyDependency();
}
catch (Exception ex)
{
    span.Status = Status.Internal.WithDescription(ex.ToString());
    throw;
}
finally
{
    span.End();
}

Asynchronous Processing / Queued Items

There is no built-in trace correlation between publishing and processing a RabbitMQ message. Custom code is required to create the publishing activity (optional) and to reference the parent trace when the item is dequeued.

We previously covered creating traces by wrapping the dependency call. This option allows expressing additional semantic information, such as links between spans for batching and other fan-in patterns. Another option is to use System.Diagnostics.Activity, which is an SDK-independent way to create traces. This option has a limited set of features but is built into .NET.

These two options work well with each other, and the .NET team is working on improving the integration between .NET Activity and OpenTelemetry spans.

Creating an Operation Trace

The snippet below demonstrates how the publish operation trace can be created. It adds the trace information to the enqueued message header, which will later be used to link both operations.

Activity activity = null;
if (diagnosticSource.IsEnabled("Sample.RabbitMQ"))
{
    // Generates the "Publish to RabbitMQ" trace
    // Only generated if there is an actual listener
    activity = new Activity("Publish to RabbitMQ");
    diagnosticSource.StartActivity(activity, null);
}

// Add current activity identifier to the RabbitMQ message
basicProperties.Headers.Add("traceparent", Activity.Current.Id);

channel.BasicPublish(...)

if (activity != null)
{
    // Signal the end of the activity
    diagnosticSource.StopActivity(activity, null);
}

A collector, which subscribes to target activities, is required to publish the trace to a backend. Implementing a collector is not a straightforward task and is intended to be done by SDK implementors. The snippet below is taken from the sample application, where a simplified, not production-ready RabbitMQ collector for OpenTelemetry was implemented:

public class RabbitMQListener : ListenerHandler
{
    public override void OnStartActivity(Activity activity, object payload)
    {
        var span = this.Tracer.StartSpanFromActivity(activity.OperationName, activity);

        foreach (var kv in activity.Tags)
            span.SetAttribute(kv.Key, kv.Value);
    }

    public override void OnStopActivity(Activity activity, object payload)
    {
        var span = this.Tracer.CurrentSpan;
        span.End();

        if (span is IDisposable disposableSpan)
        {
            disposableSpan.Dispose();
        }
    }
}

var subscriber = new DiagnosticSourceSubscriber(new RabbitMQListener("Sample.RabbitMQ", tracer), DefaultFilter);
subscriber.Subscribe();

For more information on how to build collectors, please refer to OpenTelemetry/Application Insights built-in collectors as well as this user guide.

Activity

As mentioned, HTTP requests in ASP.NET Core have built-in activity correlation injected by the framework. That is not the case for the RabbitMQ consumer. In order to continue the distributed transaction, we must create the span referencing the parent trace, which was injected into the message by the publisher. The snippet below uses an extension method to build the activity:

public static Activity ExtractActivity(this BasicDeliverEventArgs source, string name)
{
    var activity = new Activity(name ?? Constants.RabbitMQMessageActivityName);

    if (source.BasicProperties.Headers.TryGetValue("traceparent", out var rawTraceParent) &&
        rawTraceParent is byte[] binRawTraceParent)
    {
        activity.SetParentId(Encoding.UTF8.GetString(binRawTraceParent));
    }

    return activity;
}

The activity is then used to create the concrete trace. In OpenTelemetry the code looks like this:

// Note: OpenTelemetry requires the activity to be started
activity.Start();
tracer.StartActiveSpanFromActivity(activity.OperationName, activity, SpanKind.Consumer, out span);

The snippet below creates the telemetry using Application Insights SDK:

// Note: Application Insights will start the activity
var operation = telemetryClient.StartOperation<DependencyTelemetry>(activity);

The usage of activities gives flexibility in terms of the SDK used, as they are a neutral way to create traces. Once instrumented, the distributed end-to-end transaction in Jaeger looks like this:

[Image: Distributed Trace in Jaeger]

The same transaction in Application Insights looks like this:

[Image: Distributed Trace in Application Insights]

When using a single monitoring solution for traces and logs, such as Application Insights, the logs become part of the end-to-end transaction:

[Image: Application Insights traces and logs]

Metrics

There are common metrics applicable to most applications, like CPU usage, allocated memory, and request time, as well as business-specific metrics like visitors, page views, sold items, and sent items. Exposing business metrics in a .NET Core application typically requires using an SDK.

Collecting metrics in .NET Core happens through third-party SDKs, which aggregate values locally before sending them to a backend. Most libraries have built-in collection for common application metrics. However, business-specific metrics need to be built into the application logic, since they are created based on events that occur in the application domain.

In the sample application we are using metric counters for enqueued items, successfully processed items, and unsuccessfully processed items. The implementation in both SDKs is similar: set up a metric and its dimensions, then track the counter values.

OpenTelemetry supports multiple exporters, and we will be using the Prometheus exporter. Prometheus combined with Grafana, for visualization and alerting, is a popular choice for open source monitoring. Application Insights supports metrics like any other instrumentation type, requiring no additional SDK or tool.

Defining a metric and tracking values using OpenTelemetry looks like this:

// Create counter
var simpleProcessor = new UngroupedBatcher(exporter, TimeSpan.FromSeconds(5));
var meterFactory = MeterFactory.Create(simpleProcessor);
var meter = meterFactory.GetMeter("Sample App");
var enqueuedCounter = meter.CreateInt64Counter("Enqueued Item");

// Increment the counter for a specific source
var labelSet = new Dictionary<string, string>() { { "Source", source } };
enqueuedCounter.Add(context, 1L, meter.GetLabelSet(labelSet));

The visualization with Grafana is illustrated in the image below:

[Image: Metrics with Grafana/Prometheus]

The snippet below demonstrates how to define a metric and track its values using the Application Insights SDK:

// Create counter
var enqueuedCounter = telemetryClient.GetMetric(new MetricIdentifier("Sample App", "Enqueued Item", "Source"));

// Increment the counter for a specific source
enqueuedCounter.TrackValue(metricValue, source);

The visualization in Application Insights is illustrated below:

[Image: Application Insights custom metrics]

Troubleshooting

Now that we have added the three observability pillars to a sample application, let's use them to troubleshoot a scenario where the application is experiencing problems.

The first signals of application problems are usually anomalies in metrics. The snapshot below illustrates such a scenario, where the number of failed processed items spikes (red line).

[Image: Metrics indicating failure]

A possible next step is to look for hints in distributed traces. This should help us identify where the problem is happening. In Jaeger, searching with the tag "error=true" filters the results, listing transactions where at least one error happened.

[Image: Jaeger traces with error]

In Application Insights, we can search for errors in end-to-end transactions by looking in the Failures/Dependencies or Failures/Exceptions.

[Image: Search traces with error in Application Insights]

[Image: Application Insights error details in trace]

The problem seems to be related to the Sample.RabbitMQProcessor service. The logs of this service can help us identify the problem. When using the Application Insights logging provider, logs and traces are correlated and displayed in the same view:

[Image: Application Insights errors and logs]

Looking at the details, we discover that the exception InvalidEventNameException is being raised. Since we are logging the message payload, details of the failed message are available in the monitoring tool. It appears the message being processed has the eventName value of “error”, which is causing the exception to be raised.

When introducing observability into a .NET Core application, two decisions need to be made:

  • The backend(s) where collected data will be stored and analyzed.
  • How instrumentation will be added to the application code.

Depending on your organization, the monitoring tool might already be selected. However, if you do have the chance to make this decision, consider the following:

  • Centralized: having all data in a single place makes it simple to correlate information, for example logs, distributed traces, and CPU usage. If the data is split, more effort is required.
  • Manageability: how simple is it to manage the monitoring tool? Is it hosted on the same machines/VMs where your application is running? In that case, shared infrastructure unavailability might leave you in the dark: when monitoring is not working, alerts won't be triggered and metrics won't be collected.
  • Vendor Locking: if you need to run the same application in different environments (i.e. on premises and cloud), choosing a solution that can run everywhere might be favored.
  • Application Dependencies: parts of your infrastructure or tooling that might require you to use a specific monitoring vendor. For example, Kubernetes scaling and/or progressive deployment based on Prometheus metrics.

Once the monitoring tool has been defined, choosing an SDK is limited to two options. Using the one provided by the monitoring vendor or a library capable of integrating to multiple backends.

Vendor SDKs typically yield few or no surprises regarding stability and functionality. That is the case with Application Insights, for example. It is stable with a rich feature set, including live stream, a feature specific to this monitoring system.

OpenTelemetry

Using the OpenTelemetry SDK gives you more flexibility, offering integration with multiple monitoring backends. You can even mix them: a centralized monitoring solution for all collected data, while a subset is sent to Prometheus to fulfill a specific requirement. If you are unsure whether OpenTelemetry is a good fit for your project, consider the following:

  • When is your project going to production? The SDK is currently in alpha, meaning breaking changes are expected and it is not yet production-ready.
  • Are you using vendor specific features not yet available through the OpenTelemetry SDK (specific collectors, etc.)?
  • Is your monitoring backend supported by the SDK?
  • Are you replacing a vendor SDK with OpenTelemetry? Plan some time to compare both SDKs; OpenTelemetry exporters might differ from the vendor SDK in how data is collected.

Source code with the sample application can be found in this GitHub Repository.



ASP.NET Core updates in .NET Core 3.1


Sourabh

.NET Core 3.1 is now available and is ready for production use! .NET Core 3.1 is a Long Term Support (LTS) release.

Here’s what’s new in this release for ASP.NET Core:

  • Partial class support for Razor components
  • Pass parameters to top-level components
  • New component tag helper
  • Prevent default actions for events in Blazor apps
  • Stop event propagation in Blazor apps
  • Detailed errors during Blazor app development
  • Support for shared queues in HttpSysServer
  • Breaking changes for SameSite cookies

You can find all the details about these new features in the What’s new in ASP.NET Core 3.1 topic.

See the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.1 install the .NET Core 3.1 SDK.

If you’re on Windows using Visual Studio, install Visual Studio 2019 16.4. Installing Visual Studio 2019 16.4 will also install .NET Core 3.1, so you don’t need to separately install it.

Upgrade an existing project

To upgrade an existing ASP.NET Core 3.0 app to 3.1, follow the migration steps in the ASP.NET Core docs.

See the full list of breaking changes in ASP.NET Core 3.1.

To upgrade an existing ASP.NET Core 3.0 project to 3.1:

  • Update all Microsoft.AspNetCore.* and Microsoft.Extensions.* package references to 3.1.0

That’s it! You should now be all set to use .NET Core 3.1!

Blazor WebAssembly update

Alongside this .NET Core 3.1 release, we’ve also released a Blazor WebAssembly update. Blazor WebAssembly is still in preview and is not part of the .NET Core 3.1 release. Blazor WebAssembly will ship as a stable release at a future date.

To install the latest Blazor WebAssembly template run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.1.0-preview4.19579.2

This release of Blazor WebAssembly includes a number of new features and improvements:

  • .NET Standard 2.1 support
  • Support for static assets in libraries when publishing
  • iOS 13 support
  • Better linker errors
  • Attach to process debugging from Visual Studio

.NET Standard 2.1 support

Blazor WebAssembly apps now target .NET Standard 2.1 by default. Using .NET Standard 2.1 libraries from a Blazor WebAssembly app is now supported within the limits of the browser security sandbox.

Support for static assets in libraries when publishing

Blazor WebAssembly apps now support static assets from Razor class libraries, both during development and when publishing. This applies to both standalone Blazor WebAssembly apps and ASP.NET Core hosted apps. Static assets are consumed from referenced libraries using the path prefix _content/{LIBRARY NAME}/.
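
For example, a stylesheet shipped by a hypothetical Razor class library named MyComponentLib would be referenced from the host page like this:

<link href="_content/MyComponentLib/styles.css" rel="stylesheet" />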

iOS 13 support

Blazor WebAssembly apps now work from iOS 13 based devices. The .NET IL interpreter now uses a non-recursive implementation to prevent exceeding the size of the stack on these devices.

Better linker errors

The IL linker is now integrated with Blazor WebAssembly projects such that linker errors are surfaced as build errors.

Attach to process debugging from Visual Studio

You can now debug Blazor WebAssembly apps from Visual Studio by attaching to the browser process. Currently this experience is very manual. In a future update, we expect to enable Visual Studio to handle all of the necessary wire-up to debug a Blazor WebAssembly app when you hit F5. Also, various features of the debugging experience (like viewing locals) are not yet enabled. This is something we will be working on over the next few months.

To debug a running Blazor WebAssembly app from Visual Studio:

  1. Run the app without debugging (Ctrl-F5 instead of F5)
  2. Open the Debug properties of the app and copy the HTTP app URL
  3. Browse to the HTTP address (not the HTTPS address) of the app using a Chromium based browser (Edge Beta or Chrome).
  4. With the browser in focus, press Shift-Alt-D and then follow the instructions to open a browser with remote debugging enabled
  5. Close all other browser instances
  6. In Visual Studio, select Debug > Attach to Process.
  7. For the Connection type, select Chrome devtools protocol websocket (no authentication).
  8. For the Connection target, paste in the HTTP address (not the HTTPS address) of the app and press Enter (don’t click “Find” – that does something else).
  9. Select the browser process you want to debug and select Attach
  10. In the Select Code Type dialog, select the code type for the specific browser you are attaching to (Edge or Chrome) and then select OK
  11. Set a break point in your app (for example, in the IncrementCount method in the Counter component) and then use that part of the app to hit the breakpoint.

In a later release, this process will become automated inside Visual Studio and Visual Studio Code so you can launch and attach the debugger with a single click or keystroke. Then you will no longer need to go through this detailed attachment process manually.

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET Core 3.1! We are eager to hear about your experiences with this latest .NET Core release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!


ASP.NET Core updates in .NET Core 3.1 Preview 2

Daniel Roth

.NET Core 3.1 Preview 2 is now available. This release is primarily focused on bug fixes, but it contains a few new features as well.

Here’s what’s new in this release for ASP.NET Core:

  • New component tag helper
  • Prevent default actions for events in Blazor apps
  • Stop event propagation in Blazor apps
  • Validation of nested models in Blazor forms
  • Detailed errors during Blazor app development

See the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.1 Preview 2, install the .NET Core 3.1 Preview 2 SDK.

If you’re on Windows using Visual Studio, for the best experience we recommend installing the latest preview of Visual Studio 2019 16.4, which will also install .NET Core 3.1 Preview 2, so you don’t need to install it separately. For Blazor development with .NET Core 3.1, Visual Studio 2019 16.4 is required.

Alongside this .NET Core 3.1 Preview 2 release, we’ve also released a Blazor WebAssembly update. To install the latest Blazor WebAssembly template also run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.1.0-preview2.19528.8

Upgrade an existing project

To upgrade an existing ASP.NET Core 3.1 Preview 1 project to 3.1 Preview 2:

  • Update all Microsoft.AspNetCore.* package references to 3.1.0-preview2.19528.8

See also the full list of breaking changes in ASP.NET Core 3.1.

That’s it! You should now be all set to use .NET Core 3.1 Preview 2!

New component tag helper

Using Razor components from views and pages is now more convenient with the new component tag helper.

Previously, rendering a component from a view or page involved using the RenderComponentAsync HTML helper.

@(await Html.RenderComponentAsync<Counter>(RenderMode.ServerPrerendered, new { IncrementAmount = 10 }))

The new component tag helper simplifies the syntax for rendering components from pages and views. Simply specify the type of the component you wish to render as well as the desired render mode. You can also specify component parameters using attributes prefixed with param-.

<component type="typeof(Counter)" render-mode="ServerPrerendered" param-IncrementAmount="10" />

The different render modes allow you to control how the component is rendered:

  • Static: Renders the component into static HTML.
  • Server: Renders a marker for a Blazor Server app. This doesn’t include any output from the component. When the user-agent starts, it uses this marker to bootstrap the Blazor app.
  • ServerPrerendered: Renders the component into static HTML and includes a marker for a Blazor Server app. When the user-agent starts, it uses this marker to bootstrap the Blazor app.
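
For example, to skip prerendering and emit only the marker that bootstraps the interactive app, you could combine the tag helper shown above with the Server render mode:

<component type="typeof(Counter)" render-mode="Server" />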

Prevent default actions for events in Blazor apps

You can now prevent the default action for events in Blazor apps using the new @oneventname:preventDefault directive attribute. For example, the following component displays a count in a text box that can be changed by pressing the “+” or “-” keys:

<p>Press "+" or "-" to change the count.</p>

<input value="@count" @onkeypress="@KeyHandler" @onkeypress:preventDefault />

@code {
    int count = 0;

    void KeyHandler(KeyboardEventArgs ev)
    {
        if (ev.Key == "+")
        {
            count++;
        }
        else if (ev.Key == "-")
        {
            count--;
        }
    }
}

The @onkeypress:preventDefault directive attribute prevents the default action of showing the text typed by the user in the text box. Specifying this attribute without a value is equivalent to @onkeypress:preventDefault="true". The value of the attribute can also be an expression: @onkeypress:preventDefault="shouldPreventDefault". You don’t have to define an event handler to prevent the default action; both features can be used independently.
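
As a minimal sketch of the expression form (field and element names here are illustrative), a bool field can toggle whether the default action is suppressed, and no event handler is required:

<input @bind="shouldPreventDefault" type="checkbox" /> Suppress default action

<input value="@text" @onkeypress:preventDefault="shouldPreventDefault" />

@code {
    bool shouldPreventDefault = true;
    string text = "Typing is ignored while suppression is on.";
}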

Stop event propagation in Blazor apps

Use the new @oneventname:stopPropagation directive attribute to stop event propagation in Blazor apps.

In the following example, checking the checkbox prevents click events from the child div from propagating to the parent div:

<input @bind="stopPropagation" type="checkbox" /> Stop propagation

<div @onclick="OnClickParentDiv">
    Parent div
    <div @onclick="OnClickChildDiv" @onclick:stopPropagation="stopPropagation">
        Child div
    </div>
</div>

@code {
    bool stopPropagation;

    void OnClickParentDiv() => Console.WriteLine("Parent div clicked.");
    void OnClickChildDiv() => Console.WriteLine("Child div clicked.");
}

Detailed errors during Blazor app development

When your Blazor app isn’t functioning properly during development, it’s important to get detailed error information so that you can troubleshoot and fix the issues. Blazor apps now display a gold bar at the bottom of the screen when an error occurs.

During development, in Blazor Server apps, the gold bar will direct you to the browser console where you can see the exception that has occurred.

Blazor detailed errors in development

In production, the gold bar notifies the user that something has gone wrong and recommends that the user refresh the browser.

Blazor detailed errors in production

The UI for this error handling experience is part of the updated Blazor project templates so that it can be easily customized:

_Host.cshtml

<div id="blazor-error-ui">
    <environment include="Staging,Production">
        An error has occurred. This application may no longer respond until reloaded.
    </environment>
    <environment include="Development">
        An unhandled exception has occurred. See browser dev tools for details.
    </environment>
    <a href="" class="reload">Reload</a>
    <a class="dismiss">🗙</a>
</div>

Validation of nested models in Blazor forms

Blazor provides support for validating form input using data annotations with the built-in DataAnnotationsValidator. However, the DataAnnotationsValidator only validates top-level properties of the model bound to the form.

To validate the entire object graph of the bound model, try out the new ObjectGraphDataAnnotationsValidator available in the experimental Microsoft.AspNetCore.Blazor.DataAnnotations.Validation package:

<EditForm Model="@model" OnValidSubmit="@HandleValidSubmit">
    <ObjectGraphDataAnnotationsValidator />
    ...
</EditForm>

The Microsoft.AspNetCore.Blazor.DataAnnotations.Validation package is not slated to ship with .NET Core 3.1, but is provided as an experimental package to get early feedback.
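
To make the difference concrete, here is a hypothetical model (names are illustrative). With the built-in DataAnnotationsValidator only Name is validated; the [Required] on City is enforced once the object graph validator is used (depending on the package version, complex properties may also need an attribute such as [ValidateComplexType]):

using System.ComponentModel.DataAnnotations;

public class Order
{
    [Required]
    public string Name { get; set; }

    // Ignored by the top-level-only DataAnnotationsValidator, but walked
    // by ObjectGraphDataAnnotationsValidator.
    public Address ShippingAddress { get; set; } = new Address();
}

public class Address
{
    [Required]
    public string City { get; set; }
}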

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

Daniel Roth

Principal Program Manager, ASP.NET


ASP.NET Core updates in .NET Core 3.1 Preview 1

Daniel Roth

.NET Core 3.1 Preview 1 is now available. This release is primarily focused on bug fixes, but it contains a few new features as well.

Here’s what’s new in this release for ASP.NET Core:

  • Partial class support for Razor components
  • Pass parameters to top-level components
  • Support for shared queues in HttpSysServer
  • Breaking changes for SameSite cookies

Alongside this .NET Core 3.1 Preview 1 release, we’ve also released a Blazor WebAssembly update, which now requires .NET Core 3.1. To use Blazor WebAssembly you will need to install .NET Core 3.1 Preview 1 as well as the latest preview of Visual Studio.

See the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.1 Preview 1, install the .NET Core 3.1 Preview 1 SDK.

If you’re on Windows using Visual Studio, for the best experience we recommend installing the latest preview of Visual Studio 2019 16.4, which will also install .NET Core 3.1 Preview 1, so you don’t need to install it separately. For Blazor development with .NET Core 3.1, Visual Studio 2019 16.4 is required.

To install the latest Blazor WebAssembly template run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.1.0-preview1.19508.20

Upgrade an existing project

To upgrade an existing ASP.NET Core 3.0 project to 3.1 Preview 1:

  • Update any projects targeting netcoreapp3.0 to target netcoreapp3.1
  • Update all Microsoft.AspNetCore.* package references to 3.1.0-preview1.19506.1

See also the full list of breaking changes in ASP.NET Core 3.1.

That’s it! You should now be all set to use .NET Core 3.1 Preview 1!

Partial class support for Razor components

Razor components are now generated as partial classes. You can author the code for a Razor component using a code-behind file defined as a partial class instead of defining all the code for the component in a single file.

For example, instead of defining the default Counter component with an @code block, like this:

Counter.razor

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

@code {
    int currentCount = 0;

    void IncrementCount()
    {
        currentCount++;
    }
}

You can now separate out the code into a code-behind file using a partial class like this:

Counter.razor

@page "/counter"

<h1>Counter</h1>

<p>Current count: @currentCount</p>

<button class="btn btn-primary" @onclick="IncrementCount">Click me</button>

Counter.razor.cs

namespace BlazorApp1.Pages
{
    public partial class Counter
    {
        int currentCount = 0;

        void IncrementCount()
        {
            currentCount++;
        }
    }
}

Pass parameters to top-level components

Blazor Server apps can now pass parameters to top-level components during the initial render. Previously you could only pass parameters to a top-level component with RenderMode.Static. With this release, both RenderMode.Server and RenderMode.ServerPrerendered are now supported. Any specified parameter values are serialized as JSON and included in the initial response.

For example, you could prerender a Counter component with a specific current count like this:

@(await Html.RenderComponentAsync<Counter>(RenderMode.ServerPrerendered, new { CurrentCount = 123 }))

Support for shared queues in HttpSysServer

In addition to the existing behavior where HttpSysServer creates anonymous request queues, we’ve added the ability to create or attach to an existing named HTTP.sys request queue. This enables scenarios where the HTTP.sys controller process that owns the queue is independent of the listener process, making it possible to preserve existing connections and enqueued requests across listener process restarts.

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            // ...
            webBuilder.UseHttpSys(options =>
            {
                options.RequestQueueName = "MyExistingQueue";
                options.RequestQueueMode = RequestQueueMode.CreateOrAttach;
            });
        });

Breaking changes for SameSite cookies

This release updates the behavior of SameSite cookies in ASP.NET Core to conform to the latest standards being enforced by browsers. For details on these changes and their impact on existing apps see https://github.com/aspnet/Announcements/issues/390.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

Daniel Roth

Principal Program Manager, ASP.NET


Blazor Server in .NET Core 3.0 scenarios and performance

Daniel Roth

Since the release of Blazor Server with .NET Core 3.0 last month lots of folks have shared their excitement with us about being able to build client-side web UI with just .NET and C#. At the same time, we’ve also heard lots of questions about what Blazor Server is, how it relates to Blazor WebAssembly, and what scenarios Blazor Server is best suited for. Should you choose Blazor Server for your client-side web UI needs, or wait for Blazor WebAssembly? This post seeks to answer these questions, and to provide insights into how Blazor Server performs at scale and how we envision Blazor evolving in the future.

What is Blazor Server?

Blazor Server apps host Blazor components on the server and handle UI interactions over a real-time SignalR connection. As the user interacts with the app, UI events are sent to the server over the connection to be handled by the various components that make up the app. When a component handles a UI event, it’s rendered based on its updated state. Blazor compares the newly rendered output with what was rendered previously, sends the changes back to the browser, and applies them to the DOM.

Blazor Server

Since Blazor Server apps run on .NET Core on the server, they enjoy all the benefits of running on .NET Core including great runtime performance and tooling. Blazor Server apps can leverage the full ecosystem of .NET Standard libraries without any browser imposed limitations.

When should I use Blazor Server?

Blazor Server enables you to add rich interactive UI to your .NET apps today without having to write JavaScript. If you need the interactivity of a single-page app in your .NET app, then Blazor Server is a great solution.

Blazor Server can be used to write completely new apps or to complement existing MVC and Razor Pages apps. There’s no need to rewrite existing app logic. Blazor is designed to work together with MVC and Razor Pages, not replace them. You can continue to use MVC and Razor Pages for your server-rendering needs while using Blazor for client-side UI interactions.

Blazor Server works best for scenarios where you have a reliable low-latency network connection, which is normally achieved when the client and server are geographically on the same continent. Apps that require extremely high fidelity instant updates on every tiny mouse twitch, like real-time games or drawing apps, are not a good fit for Blazor Server. Because Blazor Server apps require an active network connection, offline scenarios are not supported.

Blazor Server is also useful when you want to offload work from the client to the server. Blazor Server apps require only a small download to establish the connection with the server and to process UI interactions. All the hard work of running the app logic and rendering the UI is then done on the server. This means Blazor Server apps load fast even as the app functionality grows. Because the client side of a Blazor Server app is so thin, it’s a great solution for apps that need to run on low-powered devices.

Using Blazor Server at scale

Blazor Server can scale from small internal line of business apps to large internet scale apps. While .NET Core 3.0 was still in preview we tested Blazor Server to see what its baseline scale characteristics look like. We put a Blazor Server app under load with active clients and monitored the latency of the user interactions. In our tests, a single Standard_D1_v2 instance on Azure (1 vCPU, 3.5 GB memory) could handle over 5,000 concurrent users without any degradation in latency. A Standard_D3_V2 instance (4 vCPU, 14GB memory) handled well over 20,000 concurrent clients. The main bottleneck for handling further load was available memory. Will you see this level of scale in your own app? That will depend in large part on how much additional memory your app requires per user. But for many apps, we believe this level of scale out is quite reasonable. We also plan to post additional updates on improvements in Blazor Server scalability in the weeks ahead. So stay tuned!

What is Blazor WebAssembly?

Blazor is a UI framework that can run in different environments. When you build UI components using Blazor, you get the flexibility to choose how and where they are hosted and run. As well as running your UI components on the server with Blazor Server, you can run those same components on the client with Blazor WebAssembly. This flexibility means you can adapt to your users’ needs and avoid the risk of being tied to a specific app hosting model.

Blazor WebAssembly apps host components in the browser using a WebAssembly-based .NET runtime. The components handle UI events and execute their rendering logic directly in the browser. Blazor WebAssembly apps use only open web standards to run .NET code client-side, without the need for any browser plugins or code transpilation. Just like with Blazor Server apps, the Blazor framework handles comparing the newly rendered output with what was rendered previously and updates the DOM accordingly, but with Blazor WebAssembly the UI rendering is handled client-side.

Blazor WebAssembly

When should I use Blazor WebAssembly?

Blazor WebAssembly is still in preview and isn’t ready for production use yet. If you’re looking for a production-ready solution, then Blazor Server is what we’d recommend.

Once Blazor WebAssembly ships (May 2020), it will enable running Razor components and .NET code in the browser on the user’s device. Blazor WebAssembly apps help offload work from the server to the client. A Blazor WebAssembly app can leverage the client device’s compute, memory, and storage resources, as well as other resources made available through standard browser APIs.

Blazor WebAssembly apps don’t require the use of .NET on the server and can be used to build static sites. A Blazor WebAssembly app is just a bunch of static files that can be hosted using any static site hosting solution, like GitHub pages or Azure Static Website Hosting. When combined with a service worker, a Blazor WebAssembly app can function completely offline.

When combined with .NET on the server, Blazor WebAssembly enables full stack web development. You can share code, leverage the .NET ecosystem, and reuse your existing .NET skills and infrastructure.

Including a .NET runtime with your web app does increase the app size, which will impact load time. While there are a variety of techniques to mitigate this (prerendering on the server, HTTP caching, IL linking, etc.), Blazor WebAssembly may not be the best choice for apps that are very sensitive to download size and load time.

Blazor WebAssembly apps also require a browser that supports WebAssembly. WebAssembly is supported by all modern browsers, including mobile and desktop browsers. However, if you need to support older browsers without WebAssembly support then Blazor WebAssembly isn’t for you.

Blazor WebAssembly is optimized for UI rendering scenarios, but isn’t currently great for running CPU intensive workloads. Blazor WebAssembly apps today use a .NET IL interpreter to execute your .NET code, which doesn’t have the same performance as a native .NET runtime with JIT compilation. We’re working to better address this scenario in the future by adding support for compiling your .NET code directly to WebAssembly instead of using an interpreter.

You can change your mind later

Regardless of whether you choose Blazor Server or Blazor WebAssembly, you can always change your mind later. All Blazor apps use a common component model: Razor components. The same components can be hosted in a Blazor Server app or a Blazor WebAssembly app. So if you start with one hosting model and later decide to switch to the other, doing so is straightforward.

What’s next for Blazor?

After shipping Blazor WebAssembly, we plan to expand Blazor to support not just web apps, but also Progressive Web Apps (PWAs), hybrid apps, and even fully native apps.

  • Blazor PWAs: PWAs are web apps that leverage the latest web standards to provide a more native-like experience. PWAs can support offline scenarios, push notifications, and OS integrations, like support for pinning the app to your home screen or the Windows Start menu.
  • Blazor Hybrid: Hybrid apps are native apps that use web technologies for the UI. Examples include Electron apps and mobile apps that render to a web view. Blazor Hybrid apps don’t run on WebAssembly, but instead use a native .NET runtime like .NET Core or Xamarin. You can find an experimental sample for using Blazor with Electron on GitHub.
  • Blazor Native: Blazor apps today render HTML, but the renderer can be replaced to render native controls instead. A Blazor Native app runs natively on the devices and uses a common UI abstraction to render native controls for that device. This is very similar to how frameworks like Xamarin Forms or React Native work today.

These three efforts are all currently experimental. We expect to have official previews of support for Blazor PWAs and Blazor Hybrid apps using Electron in the .NET 5 time frame (Nov 2020). There isn’t a road map for Blazor Native support yet, but it’s an area we are actively investigating.

Summary

With .NET Core 3.0, you can build rich interactive client-side UI today with Blazor Server. Blazor Server is a great way to add client-side functionality to your existing and new web apps using your existing .NET skills and assets. Blazor Server is built to scale for all your web app needs. Blazor WebAssembly is still in preview, but is expected to ship in May of next year. In the future we expect to continue to evolve Blazor to support PWAs, hybrid apps, and native apps. For now, we hope you’ll give Blazor Server a try by installing .NET Core 3.0!

Daniel Roth

Principal Program Manager, ASP.NET


ASP.NET Core and Blazor updates in .NET Core 3.0

Daniel Roth

Today we are thrilled to announce the release of .NET Core 3.0! .NET Core 3.0 is ready for production use, and is loaded with lots of great new features for building amazing web apps with ASP.NET Core and Blazor.

Some of the big new features in this release of ASP.NET Core include:

  • Build rich interactive client-side web apps using C# instead of JavaScript with Blazor.
  • Create high-performance backend services with gRPC.
  • SignalR now has support for automatic reconnection and client-to-server streaming.
  • Generate strongly typed client code for Web APIs with OpenAPI documents.
  • Endpoint routing integrated through the framework.
  • HTTP/2 now enabled by default in Kestrel.
  • Authentication support for Web APIs and single-page apps integrated with IdentityServer.
  • Support for certificate and Kerberos authentication.
  • Integrates with the new System.Text.Json serializer.
  • New generic host sets up common hosting services like dependency injection (DI), configuration, and logging.
  • New Worker Service template for building long-running services.
  • New EventCounters created for requests per second, total requests, current requests, and failed requests.
  • Startup errors now reported to the Windows Event Log when hosted in IIS.
  • Request pipeline integrated with System.IO.Pipelines.
  • Performance improvements across the entire stack.

You can find all the details about what’s new in ASP.NET Core in .NET Core 3.0 in the What’s new in ASP.NET Core 3.0 topic.

See the .NET Core 3.0 release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0, install the .NET Core 3.0 SDK.

If you’re on Windows using Visual Studio, install Visual Studio 2019 16.3, which includes .NET Core 3.0.

Note: .NET Core 3.0 requires Visual Studio 2019 16.3 or later.

There is also a Blazor WebAssembly preview update available with this release. This update to Blazor WebAssembly still has a Preview 9 version, but carries an updated build number. Blazor WebAssembly is still in preview and is not part of the .NET Core 3.0 release.

To install the latest Blazor WebAssembly template run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview9.19465.2

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0, follow the migrations steps in the ASP.NET Core docs.

See the full list of breaking changes in ASP.NET Core 3.0.

To upgrade an existing ASP.NET Core 3.0 RC1 project to 3.0:

  • Update all Microsoft.AspNetCore.* and Microsoft.Extensions.* package references to 3.0.0
  • Update all Microsoft.AspNetCore.Blazor.* package references to 3.0.0-preview9.19465.2

That’s it! You should now be all set to use .NET Core 3.0!

Join us at .NET Conf!

Please join us at .NET Conf to learn all about the new features in .NET Core 3.0 and to celebrate the release with us! .NET Conf is a live streaming event open to everyone, and features talks from many talented speakers from the .NET team and the .NET community. Check out the schedule and attend a local event near you. Or join the Virtual Attendee Party for the chance to win prizes!

Give feedback

We hope you enjoy the new features in this release of ASP.NET Core and Blazor in .NET Core 3.0! We are eager to hear about your experiences with this latest .NET Core release. Let us know what you think by filing issues on GitHub.

Thanks for using ASP.NET Core and Blazor!

Daniel Roth

Principal Program Manager, ASP.NET


ASP.NET Core and Blazor updates in .NET Core 3.0 Release Candidate 1

Daniel Roth

.NET Core 3.0 Release Candidate 1 (RC1) is now available. This release contains only a handful of bug fixes and closely represents what we expect to release for .NET Core 3.0.

Please see the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0 RC1, install the .NET Core 3.0 RC1 SDK.

If you’re on Windows using Visual Studio, install the latest preview of Visual Studio 2019.

.NET Core 3.0 RC1 requires Visual Studio 2019 16.3 Preview 4 or later.

There is also a Blazor WebAssembly preview update available with this release. This update to Blazor WebAssembly still has a Preview 9 version, but carries an updated build number. This is not a release candidate for Blazor WebAssembly. Blazor WebAssembly isn’t expected to ship as a stable release until some time after .NET Core 3.0 ships (details coming soon!).

To install the latest Blazor WebAssembly template run the following command:

dotnet new -i Microsoft.AspNetCore.Blazor.Templates::3.0.0-preview9.19457.4

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 RC1, follow the migration steps in the ASP.NET Core docs.

Please also see the full list of breaking changes in ASP.NET Core 3.0.

To upgrade an existing ASP.NET Core 3.0 Preview 9 project to RC1:

  • Update all Microsoft.AspNetCore.* package references to 3.0.0-rc1.19457.4
  • Update all Microsoft.AspNetCore.Blazor.* package references to 3.0.0-preview9.19457.4

That’s it! You should now be all set to use .NET Core 3.0 RC1!

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core and Blazor! Please let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core and Blazor!

Daniel Roth

Principal Program Manager, ASP.NET