ASP.NET Core updates in .NET 5 Release Candidate 1

Daniel Roth

.NET 5 Release Candidate 1 (RC1) is now available and is ready for evaluation. Here’s what’s new in this release:

  • Blazor WebAssembly performance improvements
  • Blazor component virtualization
  • Blazor WebAssembly prerendering
  • Browser compatibility analyzer for Blazor WebAssembly
  • Blazor JavaScript isolation and object references
  • Blazor file input support
  • Custom validation class attributes in Blazor
  • Blazor support for ontoggle event
  • Model binding DateTime as UTC
  • Control Startup class activation
  • Open API Specification (Swagger) on-by-default in ASP.NET Core API projects
  • Better F5 Experience for ASP.NET Core API Projects
  • SignalR parallel hub invocations
  • Added MessagePack support in SignalR Java client
  • Kestrel endpoint-specific options via configuration

See the .NET 5 release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET 5 RC1, install the .NET 5 SDK. .NET 5 RC1 is also included with Visual Studio 2019 16.8 Preview 3.

You need to use Visual Studio 2019 16.8 Preview 3 or newer to use .NET 5 RC1. .NET 5 is also supported with the latest preview of Visual Studio for Mac. To use .NET 5 with Visual Studio Code, install the latest version of the C# extension.

Upgrade an existing project

To upgrade an existing ASP.NET Core app from .NET 5 Preview 8 to .NET 5 RC1:

  • Update all Microsoft.AspNetCore.* package references to 5.0.0-rc.1.*.
    • If you’re using the new Microsoft.AspNetCore.Components.Web.Extensions package, update to version 5.0.0-preview.9.*. This package doesn’t have an RC version number yet because we expect to make a few more design changes to the components it contains before it’s ready to ship.
  • Update all Microsoft.Extensions.* package references to 5.0.0-rc.1.*.
  • Update System.Net.Http.Json package references to 5.0.0-rc.1.*, as shown in the sketch below.
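
For example (the package names here are illustrative; update whichever Microsoft.AspNetCore.*, Microsoft.Extensions.*, and System.Net.Http.Json references your project actually uses):

<ItemGroup>
  <PackageReference Include="Microsoft.AspNetCore.Components.WebAssembly" Version="5.0.0-rc.1.*" />
  <PackageReference Include="Microsoft.Extensions.Http" Version="5.0.0-rc.1.*" />
  <PackageReference Include="System.Net.Http.Json" Version="5.0.0-rc.1.*" />
</ItemGroup>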

In Blazor WebAssembly projects, also make the following updates to the project file:

  • Update the SDK from “Microsoft.NET.Sdk.Web” to “Microsoft.NET.Sdk.BlazorWebAssembly”.
  • Remove the <RuntimeIdentifier>browser-wasm</RuntimeIdentifier> and <UseBlazorWebAssembly>true</UseBlazorWebAssembly> properties. The result should look like the sketch below.
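
A minimal sketch of the updated project file, assuming a default Blazor WebAssembly project (any other properties and items you have stay as they are):

<Project Sdk="Microsoft.NET.Sdk.BlazorWebAssembly">

  <PropertyGroup>
    <TargetFramework>net5.0</TargetFramework>
  </PropertyGroup>

</Project>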

That’s it! You should be all ready to go.

See also the full list of breaking changes in ASP.NET Core for .NET 5.

What’s new?

Blazor WebAssembly performance improvements

For .NET 5, we’ve made significant improvements to Blazor WebAssembly runtime performance, with a specific focus on complex UI rendering and JSON serialization. In our performance tests, Blazor WebAssembly in .NET 5 is 2-3x faster for most scenarios.

Runtime code execution in Blazor WebAssembly in .NET 5 is generally faster than Blazor WebAssembly 3.2 due to optimizations in the core framework libraries and improvements to the .NET IL interpreter. Things like string comparisons, dictionary lookups, and JSON handling are generally much faster in .NET 5 on WebAssembly.

As shown in the chart below, JSON handling is almost twice as fast in .NET 5 on WebAssembly:

Blazor WebAssembly 1kb JSON handling

We also optimized the performance of Blazor component rendering, particularly for UI involving lots of components, like when using high-density grids.

To test the performance of grid component rendering in .NET 5, we used three different grid component implementations, each rendering 200 rows with 20 columns:

  • Fast Grid: A minimal, highly optimized implementation of a grid.
  • Plain Table: A minimal but not optimized implementation of a grid.
  • Complex Grid: A maximal, not optimized implementation of a grid, using a wide range of Blazor features at once, deliberately intended to create a bad case for the renderer.

From our tests, grid rendering is 2-3x faster in .NET 5:

Blazor WebAssembly grid rendering

You can find the code for these performance tests in the ASP.NET Core GitHub repo.

You can expect ongoing work to improve Blazor WebAssembly performance. Besides optimizing the Blazor WebAssembly runtime and framework, the .NET team is also working with browser implementers to further speed up WebAssembly execution. And for .NET 6, we expect to ship support for ahead-of-time (AoT) compilation to WebAssembly, which should further improve performance.

Blazor component virtualization

You can further improve the perceived performance of component rendering using the new built-in virtualization support. Virtualization is a technique for limiting the UI rendering to just the parts that are currently visible, like when you have a long list or table with many rows and only a small subset is visible at any given time. Blazor in .NET 5 adds a new Virtualize component that can be used to easily add virtualization to your components.

A typical list or table-based component might use a C# foreach loop to render each item in the list or each row in the table, like this:

@foreach (var employee in employees)
{
    <tr>
        <td>@employee.FirstName</td>
        <td>@employee.LastName</td>
        <td>@employee.JobTitle</td>
    </tr>
}

If the list grows to include thousands of rows, rendering them all may take a while, resulting in noticeable UI lag.

Instead, you can replace the foreach loop with the Virtualize component, which only renders the rows that are currently visible.

<Virtualize Items="employees" Context="employee">
    <tr>
        <td>@employee.FirstName</td>
        <td>@employee.LastName</td>
        <td>@employee.JobTitle</td>
    </tr>
</Virtualize>

The Virtualize component calculates how many items to render based on the height of the container and the size of the rendered items.
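
You can tune these calculations if the defaults don’t fit your UI. As a sketch (the ItemSize value, in pixels, and the OverscanCount value are illustrative; both parameters are optional):

<Virtualize Items="employees" Context="employee" ItemSize="25" OverscanCount="4">
    <tr>
        <td>@employee.FirstName</td>
        <td>@employee.LastName</td>
        <td>@employee.JobTitle</td>
    </tr>
</Virtualize>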

If you don’t want to load all items into memory, you can specify an ItemsProvider, like this:

<Virtualize ItemsProvider="LoadEmployees" Context="employee">
    <tr>
        <td>@employee.FirstName</td>
        <td>@employee.LastName</td>
        <td>@employee.JobTitle</td>
    </tr>
</Virtualize>

An items provider is a delegate method that asynchronously retrieves the requested items on demand. The items provider receives an ItemsProviderRequest, which specifies the required number of items starting at a specific start index. The items provider then retrieves the requested items from a database or other service and returns them as an ItemsProviderResult<TItem> along with a count of the total number of items available. The items provider can choose to retrieve the items with each request, or cache them so they are readily available.

async ValueTask<ItemsProviderResult<Employee>> LoadEmployees(ItemsProviderRequest request)
{
    var numEmployees = Math.Min(request.Count, totalEmployees - request.StartIndex);
    var employees = await EmployeesService.GetEmployeesAsync(request.StartIndex, numEmployees, request.CancellationToken);
    return new ItemsProviderResult<Employee>(employees, totalEmployees);
}

Because requesting items from a remote data source might take some time, you also have the option to render a placeholder until the item data is available.

<Virtualize ItemsProvider="LoadEmployees" Context="employee">
    <ItemContent>
        <tr>
            <td>@employee.FirstName</td>
            <td>@employee.LastName</td>
            <td>@employee.JobTitle</td>
        </tr>
    </ItemContent>
    <Placeholder>
        <tr>
            <td>Loading...</td>
        </tr>
    </Placeholder>
</Virtualize>

Blazor WebAssembly prerendering

The component tag helper now supports two additional render modes for prerendering a component from a Blazor WebAssembly app:

  • WebAssemblyPrerendered: Prerenders the component into static HTML and includes a marker for a Blazor WebAssembly app to later use to make the component interactive when loaded in the browser.
  • WebAssembly: Renders a marker for a Blazor WebAssembly app to use to include an interactive component when loaded in the browser. The component is not prerendered. This option simply makes it easier to render different Blazor WebAssembly components on different cshtml pages.

To set up prerendering in a Blazor WebAssembly app:

  1. Host the Blazor WebAssembly app in an ASP.NET Core app.
  2. Replace the default static index.html file in the client project with a _Host.cshtml file in the server project.
  3. Update the server startup logic to fall back to _Host.cshtml instead of index.html (similar to how the Blazor Server template is set up; see the endpoint sketch after this list).
  4. Update _Host.cshtml to use the component tag helper to prerender the root App component:

    <component type="typeof(App)" render-mode="WebAssemblyPrerendered" />
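
For step 3, a minimal sketch of the server endpoint setup (assuming the typical hosted configuration in the server project’s Startup.Configure):

app.UseEndpoints(endpoints =>
{
    endpoints.MapRazorPages();
    endpoints.MapControllers();
    // Serve the prerendering host page for any unmatched request
    endpoints.MapFallbackToPage("/_Host");
});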
    

You can also pass parameters to the component tag helper when using the WebAssembly-based render modes if the parameters are serializable. The parameters must be serializable so that they can be transferred to the client and used to initialize the component in the browser. If prerendering, you’ll also need to be sure to author your components so that they can gracefully execute server-side without access to the browser.

<component type="typeof(Counter)" render-mode="WebAssemblyPrerendered" param-IncrementAmount="10" />

In addition to improving the perceived load time of a Blazor WebAssembly app, you can also use the component tag helper with the new render modes to add multiple components on different pages and views. You don’t need to configure these components as root components in the app or add your own marker tags on the page – the framework handles that for you.

Browser compatibility analyzer for Blazor WebAssembly

Blazor WebAssembly apps in .NET 5 target the full .NET 5 API surface area, but not all .NET 5 APIs are supported on WebAssembly due to browser sandbox constraints. Unsupported APIs throw PlatformNotSupportedException when running on WebAssembly. .NET 5 now includes a platform compatibility analyzer that will warn you when your app uses APIs that are not supported by your target platforms. For Blazor WebAssembly apps, this means checking that APIs are supported in browsers.

We haven’t yet annotated all of the core libraries in .NET 5 to indicate which APIs are not supported in browsers, but some APIs, mostly Windows-specific ones, are already annotated:

Platform compatibility analyzer

To enable browser compatibility checks for libraries, add “browser” as a supported platform in your project file (Blazor WebAssembly projects and Razor class library projects do this for you):

<SupportedPlatform Include="browser" />

When authoring a library, you can optionally indicate that a particular API is not supported in browsers by applying the UnsupportedOSPlatformAttribute:

[UnsupportedOSPlatform("browser")]
private static string GetLoggingDirectory()
{
    // ...
}

Blazor JavaScript isolation and object references

Blazor now enables you to isolate your JavaScript as standard JavaScript modules. This has a couple of benefits:

  1. Imported JavaScript no longer pollutes the global namespace.
  2. Consumers of your library and components no longer need to manually import the related JavaScript.

For example, the following JavaScript module exports a simple JavaScript function for showing a browser prompt:

export function showPrompt(message) {
    return prompt(message, 'Type anything here');
}

You can add this JavaScript module to your .NET library as a static web asset (wwwroot/exampleJsInterop.js) and then import the module into your .NET code using the IJSRuntime service:

var module = await jsRuntime.InvokeAsync<JSObjectReference>("import", "./_content/MyComponents/exampleJsInterop.js");

The “import” identifier is a special identifier used specifically for importing a JavaScript module. You specify the module using its stable static web asset path: _content/[LIBRARY NAME]/[PATH UNDER WWWROOT].

The IJSRuntime imports the module as a JSObjectReference, which represents a reference to a JavaScript object from .NET code. You can then use the JSObjectReference to invoke exported JavaScript functions from the module:

public async ValueTask<string> Prompt(string message)
{
    return await module.InvokeAsync<string>("showPrompt", message);
}

JSObjectReference greatly simplifies interacting with JavaScript libraries where you want to capture JavaScript object references, and then later invoke their functions from .NET.
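
For example (a hypothetical sketch; createMap and getZoom are illustrative module exports, not part of any real library):

// Capture a reference to a JavaScript object created by the module
var map = await module.InvokeAsync<JSObjectReference>("createMap", "my-map-element");

// Later, invoke a function on the captured JavaScript object from .NET
var zoom = await map.InvokeAsync<int>("getZoom");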

Blazor file input support

Blazor now offers an InputFile component for handling file uploads, or more generally for reading browser file data into your .NET code.

The InputFile component renders as an HTML input of type “file”. By default, the user can select a single file, or, if you add the “multiple” attribute, the user can supply multiple files at once. When the user selects one or more files, the InputFile component fires an OnChange event and passes in an InputFileChangeEventArgs that provides access to the selected file list and details about each file.

<InputFile OnChange="OnInputFileChange" multiple />

<div class="image-list">
    @foreach (var imageDataUrl in imageDataUrls)
    {
        <img src="@imageDataUrl" />
    }
</div>

@code {
    IList<string> imageDataUrls = new List<string>();

    async Task OnInputFileChange(InputFileChangeEventArgs e)
    {
        var imageFiles = e.GetMultipleFiles();
        var format = "image/png";

        foreach (var imageFile in imageFiles)
        {
            var resizedImageFile = await imageFile.RequestImageFileAsync(format, 100, 100);
            var buffer = new byte[resizedImageFile.Size];
            await resizedImageFile.OpenReadStream().ReadAsync(buffer);
            var imageDataUrl = $"data:{format};base64,{Convert.ToBase64String(buffer)}";
            imageDataUrls.Add(imageDataUrl);
        }
    }
}

To read data from a user-selected file, you call OpenReadStream on the file and read from the returned stream. In a Blazor WebAssembly app, the data is streamed directly into your .NET code within the browser. In a Blazor Server app, the file data is streamed to your .NET code on the server as you read from the stream. If you’re using the component to receive an image file, Blazor also provides a RequestImageFileAsync convenience method for resizing image data within the browser’s JavaScript runtime before the image is streamed into your .NET application.

Custom validation class attributes in Blazor

You can now specify custom validation class names in Blazor. This is useful when integrating with CSS frameworks, like Bootstrap.

To specify custom validation class names, create a class derived from FieldCssClassProvider and set it on the EditContext instance.

var editContext = new EditContext(model);
editContext.SetFieldCssClassProvider(new MyFieldClassProvider());

// ...

class MyFieldClassProvider : FieldCssClassProvider
{
    public override string GetFieldCssClass(EditContext editContext, in FieldIdentifier fieldIdentifier)
    {
        var isValid = !editContext.GetValidationMessages(fieldIdentifier).Any();
        return isValid ? "legit" : "totally-bogus";
    }
}

Blazor support for ontoggle event

Blazor now has support for the ontoggle event:

<div>
    @if (detailsExpanded)
    {
        <p>Read the details carefully!</p>
    }
    <details id="details-toggle" @ontoggle="OnToggle">
        <summary>Summary</summary>
        <p>Detailed content</p>
    </details>
</div>

@code {
    bool detailsExpanded;
    string message { get; set; }

    void OnToggle()
    {
        detailsExpanded = !detailsExpanded;
    }
}

Thank you Vladimir Samoilenko for this contribution!

Model binding DateTime as UTC

Model binding in ASP.NET Core now supports correctly binding UTC time strings to DateTime. If the request contains a UTC time string (for example https://example.com/mycontroller/myaction?time=2019-06-14T02%3A30%3A04.0576719Z), model binding will correctly bind it to a UTC DateTime without the need for further customization.
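
A minimal controller action sketch (the action name and route are illustrative):

[HttpGet]
public IActionResult GetTime(DateTime time)
{
    // For ?time=2019-06-14T02:30:04.0576719Z the bound value already has
    // time.Kind == DateTimeKind.Utc; no custom model binder is needed
    return Ok(time.ToString("o"));
}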

Control Startup class activation

We’ve provided an additional UseStartup overload that lets you provide a factory method for controlling Startup class activation. This is useful if you want to pass additional parameters to Startup that are initialized along with the host.

public class Program
{
    public static async Task Main(string[] args)
    {
        var logger = CreateLogger();
        var host = Host.CreateDefaultBuilder()
            .ConfigureWebHost(builder =>
            {
                builder.UseStartup(context => new Startup(logger));
            })
            .Build();

        await host.RunAsync();
    }
}

Open API Specification On-by-default

Open API Specification is an industry-adopted convention for describing HTTP APIs and integrating them into complex business processes or with 3rd parties. Open API is widely supported by all cloud providers and many API registries, so developers who emit Open API documents from their Web APIs have a variety of new opportunities for how those APIs can be used. In partnership with the maintainers of the open-source project Swashbuckle.AspNetCore, we’re excited to announce that the ASP.NET Core API template in RC1 comes pre-wired with a NuGet dependency on Swashbuckle, a popular open-source NuGet package that emits Open API documents dynamically. Swashbuckle does this by introspecting over your API controllers and generating the Open API document at run-time, or at build time using the Swashbuckle CLI.

In .NET 5 RC1, running dotnet new webapi will result in Open API output being enabled by default; if you prefer to have Open API disabled, use dotnet new webapi --no-openapi true. All .csproj files created for Web API projects will come with the NuGet package reference.

<ItemGroup>
  <PackageReference Include="Swashbuckle.AspNetCore" Version="5.5.1" />
</ItemGroup>

In addition to the NuGet package reference in the .csproj file, we’ve added code to the ConfigureServices method in Startup.cs that activates Open API document generation.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddSwaggerGen(c =>
    {
        c.SwaggerDoc("v1", new OpenApiInfo { Title = "WebApplication1", Version = "v1" });
    });
}

The Configure method is also pre-wired to use the Swashbuckle middleware. This lights up the document generation process and turns on the Swagger UI page by default in development mode, so you can be confident that the out-of-the-box experience doesn’t accidentally expose your API’s description when you publish to production.

public void Configure(IApplicationBuilder app, IWebHostEnvironment env)
{
    if (env.IsDevelopment())
    {
        app.UseDeveloperExceptionPage();
        app.UseSwagger();
        app.UseSwaggerUI(c => c.SwaggerEndpoint("/swagger/v1/swagger.json", "WebApplication1 v1"));
    }

    // ...
}

Azure API Management Import

When ASP.NET Core API projects are wired to output Open API in this way, the Visual Studio 2019 version 16.8 Preview 2.1 publishing experience will automatically offer an additional step in the publishing flow. Developers who use Azure API Management have an opportunity to automatically import their APIs into Azure API Management during the publish flow.

Publishing APIs

This additional publishing step reduces the number of steps HTTP API developers need to execute to get their APIs published and used with Azure Logic Apps or PowerApps.

Better F5 Experience for Web API Projects

With Open API enabled by default, we were able to significantly improve the F5 experience for Web API developers. With .NET 5 RC1, the Web API template comes pre-configured to load up the Swagger UI page. The Swagger UI page not only provides the documentation you’ve added for your API, but also enables you to test your APIs with a single click.

Swagger UI page

SignalR parallel hub invocations

In .NET 5 RC1, ASP.NET Core SignalR is now capable of handling parallel hub invocations. You can change the default behavior and allow clients to invoke more than one hub method at a time.

public void ConfigureServices(IServiceCollection services)
{
    services.AddSignalR(options =>
    {
        options.MaximumParallelInvocationsPerClient = 5;
    });
}

Added MessagePack support in SignalR Java client

We’re introducing a new package, com.microsoft.signalr.messagepack, that adds MessagePack support to the SignalR Java client. To use the MessagePack hub protocol, add .withHubProtocol(new MessagePackHubProtocol()) to the connection builder.

HubConnection hubConnection = HubConnectionBuilder.create("http://localhost:53353/MyHub")
    .withHubProtocol(new MessagePackHubProtocol())
    .build();

Kestrel endpoint-specific options via configuration

We have added support for configuring Kestrel’s endpoint-specific options via configuration. The endpoint-specific configuration includes the HTTP protocols used, the TLS protocols used, the certificate selected, and the client certificate mode.

You can configure which certificate is selected based on the server name specified as part of the Server Name Indication (SNI) extension to the TLS protocol, as indicated by the client. Kestrel’s configuration also supports a wildcard prefix in the host name.

The example below shows how to specify endpoint-specific options using a configuration file:

{ "Kestrel": { "Endpoints": { "EndpointName": { "Url": "https://*", "Sni": { "a.example.org": { "Protocols": "Http1AndHttp2", "SslProtocols": [ "Tls11", "Tls12"], "Certificate": { "Path": "testCert.pfx", "Password": "testPassword" }, "ClientCertificateMode" : "NoCertificate" }, "*.example.org": { "Certificate": { "Path": "testCert2.pfx", "Password": "testPassword" } }, "*": { // At least one subproperty needs to exist per SNI section or it // cannot be discovered via IConfiguration "Protocols": "Http1", } } } } }
}

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET 5! We are eager to hear about your experiences with this latest .NET 5 release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

Free e-book: Blazor for ASP.NET Web Forms Developers

Nish

We are thrilled to announce the release of our new e-book: Blazor for ASP.NET Web Forms developers. This book caters specifically to ASP.NET Web Forms developers looking for guidelines and strategies for migrating their existing apps to a modern, open-source, and cross-platform web framework.

Blazor E-book for ASP.NET Web Forms

Blazor is a new web framework that changes what is possible when building web apps with .NET. It is a client-side web UI framework based on C# instead of JavaScript, and when paired with .NET running on the server, it enables full-stack web development with .NET. Blazor shares many commonalities with ASP.NET Web Forms, such as a reusable component model and a simple way to handle user events. It also builds on the foundations of .NET Core to provide a modern and high-performance web development experience. This makes Blazor a natural solution for ASP.NET Web Forms developers looking to take advantage of client-side development and the open-source, cross-platform future of .NET.

In this book, each Blazor concept is presented in the context of analogous ASP.NET Web Forms features and practices. The book covers:

  • Building Blazor apps.
  • How Blazor works.
  • Blazor’s relation to .NET Core.
  • Reasonable strategies for migrating existing ASP.NET Web Forms apps to Blazor where appropriate.
  • A reference sample that demonstrates the migration strategies used.

Blazor for ASP.NET Web Forms Developers Cover Image
Read Online

Should I Still Use ASP.NET Web Forms?

Of course! As long as the .NET Framework ships as part of Windows, ASP.NET Web Forms will be supported. For many Web Forms developers, the lack of cross-platform and open-source support is a non-issue. If you do not require cross-platform support, open-source, or any other new features in .NET Core or .NET 5, then stick with ASP.NET Web Forms on Windows. ASP.NET Web Forms will continue to be a productive way to write web apps for many years to come!

Other Free E-books

If you are on the path to migrating your existing web and server apps, check out our free e-books on gRPC for WCF Developers, Migrating your .NET App to Azure, and more at dot.net/architecture.

Feedback

The book and related reference application will be evolving, so we welcome your feedback! If you have comments about how this book can be improved, please submit feedback.

Introducing “Web Live Preview”

Tim

If you work on any type of app that has a user interface (UI), you have probably experienced that inner-loop development cycle of making a change, compiling and running the app, seeing that the change wasn’t what you wanted, stopping debugging, and then running the cycle again. Depending on the frameworks or technology you use, there are options to improve this experience, such as edit-and-continue, Xamarin Hot Reload, and design-time editors. Of course, nothing will show the UI of your app like…well, your app!

For ASP.NET WebForms we have had designers for a while, allowing you to switch from your WebForms code view to the Design view to get an idea of what the UI may look like. As modern UI frameworks have evolved and relied more on fragments or components of CSS/HTML/etc., this design view may not always reflect the UI:

Screenshot of designer and rendered view of HTML

And these frameworks and UI libraries are becoming more popular and common to a web developer’s experience. We ship them in the box as well with some of our Visual Studio templates! As we looked at some of the web trends and talked with customers in our user research labs we wanted to adapt to that philosophy that the best representation of your UI, data, state, etc. is your actual running app. And so that is what we are working on right now.

Starting today you can download our preview Visual Studio extension for a new editing mode we’re calling “web live preview.” The extension is available now so head on over to the Visual Studio Marketplace and download/install the “Web Live Preview” extension for Visual Studio 2019. Seriously, go do that now and just click install, then come back and read the rest. It will be ready for you when you’re done!

Using the extension

After installing the extension, in an ASP.NET web application you’ll now have an option that says “Edit in Browser” when right-clicking on an ASPX page:

Screenshot of context menu

This will launch your default browser with your app in a special mode. You should immediately notice a small difference in that your view has some adorners on it:

Screenshot of adorners on web page

In this mode you can now interactively select elements on this view and see the selection synchronized with your source. Even if you select something that comes from a master page, the synchronization will open that page in Visual Studio to navigate to the selection.

Animation of element selection

“So what?” you may say. Well, it’s not just selection synchronization, but source synchronization as well. You may have a web control and be selecting those elements, and we know that, for example, that is an asp:DataGrid component. As you make changes to the source, they are immediately reflected in the running app. Notice that no saving or browser refresh is happening:

Animation of changing styles

When working with things like Razor pages, we can detect code as well and even interact within those blocks of code. Here in a Razor page I have a loop. Notice how the selection is identified as a code loop, but I can still make some modifications in the code and see it reflected in the running app:

Animation of changing code

So even in your code blocks within your HTML you can make edits and see them reflected in your running app, shortening that inner loop to smaller changes in your process.

But wait, there’s more!

If you already use browser developer tools, you may be asking whether this is a full replacement for them. It probably is NOT, and that is by design! We know web devs rely a lot on developer tools in browsers, and we are experimenting as well with a little extension (for Edge/Chrome) that synchronizes with the rendered dev tools view. So now you have synchronization across source representation (including controls/code/static) to rendered app, and with dev tools DOM tree views…and both ways:

Animation of browser tools integration

We have a lot more to do, hence this being a preview of our work so far.

Current constraints

With any preview there are going to be constraints, so here they are at the time of this writing. We eventually see this just being functionality for web developers in Visual Studio without installing anything else, so the extension is temporary. For now, we support the .NET Framework web project types for WebForms and MVC. Yes, we know, we know, you want .NET Core and Blazor support. This is definitely on our roadmap, but we wanted to tackle some well-known scenarios where our WebForms customers have been using design view for a while. We hear you, and this has been echoed in our research as well…we are working on it!

For pure code changes outside of the ASPX/Razor pages we don’t have a full ‘hot reload’ story yet, so you will have to refresh the browser in cases where fundamental object models you may rely on are changing (new classes/functions/properties, changed data types, etc.). This is something that we hope to tackle more broadly for the .NET ecosystem, taking all that we have learned from customers using similar experiments we have had in this area.

The extension works with Chromium-based browsers such as the latest Microsoft Edge and Google Chrome browsers. This also enables us to have a single browser developer tools extension that handles that integration as well. To use that browser extension, you will have to use developer mode in your browser and load the extension from disk. This process is fairly simple, but you have to follow a few steps, which are documented on adding and removing extensions in Edge. The dev tools plugin is located at C:\Program Files (x86)\Microsoft Visual Studio\2019\Common7\IDE\Extensions\Microsoft\Web Live Preview\BrowserExtension (assuming the default Visual Studio 2019 install location, and substituting the Community/Professional/Enterprise edition you have installed). Please note this is also a temporary solution while we develop these capabilities. Any final browser tools extensions would be distributed in browser stores.

Summary

If you are one of our customers that can leverage this preview extension we’d love for you to download and install it and give it a try. If you have feedback or issues, please use the Visual Studio Feedback mechanism as that helps us get diagnostic information for any issues you may be facing. We know that you have a lot of tools at your disposal but we hope this web live preview mode will make some of your flow easier. Nothing to install into your project, no tool-specific code in your source, and (eventually) no additional tools to install. Please give it a try and let us know what you think!

On behalf of the team working on web tools for you, thank you for reading!

ASP.NET Core updates in .NET 5 Preview 5

Sourabh

.NET 5 Preview 5 is now available and is ready for evaluation! .NET 5 will be a current release.

Get started

To get started with ASP.NET Core in .NET 5.0 Preview 5, install the .NET 5.0 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest preview of Visual Studio 2019 16.7.

If you’re on macOS, we recommend installing the latest preview of Visual Studio 2019 for Mac 8.7.

Upgrade an existing project

To upgrade an existing ASP.NET Core 5.0 Preview 4 app to ASP.NET Core 5.0 Preview 5:

  • Update all Microsoft.AspNetCore.* package references to 5.0.0-preview.5.*.
  • Update all Microsoft.Extensions.* package references to 5.0.0-preview.5.*.

See the full list of breaking changes in ASP.NET Core 5.0.

That’s it! You should now be all set to use .NET 5 Preview 5.

What’s new?

Reloadable endpoints via configuration for Kestrel

Kestrel now has the ability to observe changes to configuration passed to KestrelServerOptions.Configure and unbind from existing endpoints and bind to new endpoints without requiring you to restart your application.
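
A sketch of opting in (assuming your endpoints live under a “Kestrel” configuration section; the reloadOnChange flag is what enables rebinding without a restart):

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureWebHostDefaults(webBuilder =>
        {
            webBuilder.ConfigureKestrel((context, options) =>
            {
                // Rebind endpoints when the "Kestrel" section changes
                options.Configure(context.Configuration.GetSection("Kestrel"), reloadOnChange: true);
            });
            webBuilder.UseStartup<Startup>();
        });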

See the release notes for additional details and known issues.

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET 5! We are eager to hear about your experiences with this latest .NET 5 release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

Introducing Project Tye

Amiee Lo

Project Tye is an experimental developer tool that makes developing, testing, and deploying microservices and distributed applications easier.

When building an app made up of multiple projects, you often want to run more than one at a time, such as a website that communicates with a backend API or several services all communicating with each other. Today, this can be difficult to set up and not as smooth as it could be, and it’s only the very first step in trying to get started with something like building out a distributed application. Once you have an inner-loop experience, there is then a, sometimes steep, learning curve to get your distributed app onto a platform such as Kubernetes.

The project has two main goals:

  1. Making development of microservices easier by:
    • Running many services with one command
    • Using dependencies in containers
    • Discovering addresses of other services using simple conventions
  2. Automating deployment of .NET applications to Kubernetes by:
    • Automatically containerizing .NET applications
    • Generating Kubernetes manifests with minimal knowledge or configuration
    • Using a single configuration file

If you have an app that talks to a database, or an app that is made up of a couple of different processes that communicate with each other, then we think Tye will help ease some of the common pain points you’ve experienced.

We have recently demonstrated Tye in a few Build sessions that we encourage you to watch: Cloud Native Apps with .NET and AKS, and Journey to One .NET.

Installation

To get started with Tye, you will first need to have .NET Core 3.1 installed on your machine.

Tye can then be installed as a global tool using the following command:

dotnet tool install -g Microsoft.Tye --version "0.2.0-alpha.20258.3"

Running a single service

Tye makes it very easy to run single applications. To demonstrate this:

1. Make a new folder called microservices and navigate to it:

mkdir microservices
cd microservices

2. Then create a frontend project:

dotnet new razor -n frontend

3. Now run this project using tye run:

tye run frontend

Image tye run output

The above output shows how Tye is building, running, and monitoring the frontend application.

One key feature from tye run is a dashboard to view the state of your application. Navigate to http://localhost:8000 to see the dashboard running.

Image tye dashboard

The dashboard is the UI for Tye that displays a list of all of your services. The Bindings column has links to the listening URLs of the service. The Logs column allows you to view the streaming logs for the service.

Image tye logs

Services written using ASP.NET Core will have their listening ports assigned randomly if not explicitly configured. This is useful to avoid common issues like port conflicts.

Running multiple services

Instead of just a single application, suppose we have a multi-application scenario where our frontend project now needs to communicate with a backend project. If you haven’t already, stop the existing tye run command using Ctrl + C.

1. Create a backend API that the frontend will call inside of the microservices/ folder.

dotnet new webapi -n backend

2. Then create a solution file and add both projects:

dotnet new sln
dotnet sln add frontend backend

You should now have a solution called microservices.sln that references the frontend and backend projects.

3. Run tye in the folder with the solution.

tye run

The dashboard should show both the frontend and backend services. You can navigate to both of them through either the dashboard or the URL output by tye run.

The backend service in this example was created using the webapi project template and will return an HTTP 404 for its root URL.

Getting the frontend to communicate with the backend

Now that we have two applications running, let’s make them communicate.

To get both of these applications communicating with each other, Tye utilizes service discovery. In general terms, service discovery describes the process by which one service figures out the address of another service. Tye uses environment variables for specifying connection strings and URIs of services.

The simplest way to use Tye’s service discovery is through the Microsoft.Extensions.Configuration system – available by default in ASP.NET Core or .NET Core Worker projects. In addition to this, we provide the Microsoft.Tye.Extensions.Configuration package with some Tye-specific extensions layered on top of the configuration system.
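
As a sketch of what that looks like in code (the raw configuration keys in the comments follow Tye’s documented environment-variable conventions; treat them as illustrative):

// From the Microsoft.Tye.Extensions.Configuration package: resolve the
// backend's base address by its logical service name
var backendUri = Configuration.GetServiceUri("backend");

// Equivalent raw lookups; Tye injects environment variables such as
// SERVICE__BACKEND__HOST and SERVICE__BACKEND__PORT, which surface in
// IConfiguration as colon-separated keys
var host = Configuration["service:backend:host"];
var port = Configuration["service:backend:port"];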

If you want to learn more about Tye’s philosophy on service discovery and see detailed usage examples, check out this reference document.

1. If you haven’t already, stop the existing tye run command using Ctrl + C. Open the solution in your editor of choice.

2. Add a file WeatherForecast.cs to the frontend project.

using System;

namespace frontend
{
    public class WeatherForecast
    {
        public DateTime Date { get; set; }
        public int TemperatureC { get; set; }
        public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);
        public string Summary { get; set; }
    }
}

This will match the backend WeatherForecast.cs.

3. Add a file WeatherClient.cs to the frontend project with the following contents:

using System.Net.Http;
using System.Text.Json;
using System.Threading.Tasks;

namespace frontend
{
    public class WeatherClient
    {
        private readonly JsonSerializerOptions options = new JsonSerializerOptions()
        {
            PropertyNameCaseInsensitive = true,
            PropertyNamingPolicy = JsonNamingPolicy.CamelCase,
        };

        private readonly HttpClient client;

        public WeatherClient(HttpClient client)
        {
            this.client = client;
        }

        public async Task<WeatherForecast[]> GetWeatherAsync()
        {
            var responseMessage = await this.client.GetAsync("/weatherforecast");
            var stream = await responseMessage.Content.ReadAsStreamAsync();
            return await JsonSerializer.DeserializeAsync<WeatherForecast[]>(stream, options);
        }
    }
}

4. Add a reference to the Microsoft.Tye.Extensions.Configuration package in the frontend project:

dotnet add frontend/frontend.csproj package Microsoft.Tye.Extensions.Configuration --version "0.2.0-*"

5. Now register this client in the frontend by adding the following to the existing ConfigureServices method in the Startup.cs file:

...
public void ConfigureServices(IServiceCollection services)
{
    services.AddRazorPages();

    /** Add the following to wire the client to the backend **/
    services.AddHttpClient<WeatherClient>(client =>
    {
        client.BaseAddress = Configuration.GetServiceUri("backend");
    });
    /** End added code **/
}
...

This will wire up the WeatherClient to use the correct URL for the backend service.

6. Add a Forecasts property to the Index page model under Pages\Index.cshtml.cs in the frontend project.

...
public WeatherForecast[] Forecasts { get; set; }
...

7. Change the OnGet method to take the WeatherClient to call the backend service and store the result in the Forecasts property:

...
public async Task OnGet([FromServices] WeatherClient client)
{
    Forecasts = await client.GetWeatherAsync();
}
...

8. Change the Index.cshtml razor view to render the Forecasts property in the razor page:

@page
@model IndexModel
@{
    ViewData["Title"] = "Home page";
}

<div class="text-center">
    <h1 class="display-4">Welcome</h1>
    <p>Learn about <a href="https://docs.microsoft.com/aspnet/core">building Web apps with ASP.NET Core</a>.</p>
</div>

Weather Forecast:

<table class="table">
    <thead>
        <tr>
            <th>Date</th>
            <th>Temp. (C)</th>
            <th>Temp. (F)</th>
            <th>Summary</th>
        </tr>
    </thead>
    <tbody>
        @foreach (var forecast in @Model.Forecasts)
        {
            <tr>
                <td>@forecast.Date.ToShortDateString()</td>
                <td>@forecast.TemperatureC</td>
                <td>@forecast.TemperatureF</td>
                <td>@forecast.Summary</td>
            </tr>
        }
    </tbody>
</table>

9. Run the project with tye run and the frontend service should be able to successfully call the backend service!

When you visit the frontend service you should see a table of weather data. This data was produced randomly in the backend service; the fact that you’re seeing it in a web UI in the frontend means that the services are able to communicate. Unfortunately, this doesn’t work out of the box on Linux right now due to how self-signed certificates are handled; please see the workaround here.

Tye’s configuration schema

Tye has an optional configuration file (tye.yaml) to enable customizing settings. This file contains all of your projects and external dependencies. If you have an existing solution, Tye will automatically populate it with all of your current projects.

To initialize this file, run the following command in the microservices directory to generate a default tye.yaml file:

tye init

The contents of the tye.yaml should look like this:

Image tye yaml
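
Roughly, that generated file looks like this (a sketch based on the projects created above; the exact ordering may differ):

name: microservice
services:
- name: frontend
  project: frontend/frontend.csproj
- name: backend
  project: backend/backend.csproj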

The top level scope (like the name node) is where global settings are applied.

tye.yaml lists all of the application’s services under the services node. This is the place for per-service configuration.

To learn more about Tye’s yaml specifications and schema, you can check it out in Tye’s repository on GitHub.

We provide a json-schema for tye.yaml and some editors support json-schema for completion and validation of yaml files. See json-schema for instructions.

Adding external dependencies (Redis)

Not only does Tye make it easy to run and deploy your applications to Kubernetes, it’s also fairly simple to add external dependencies to your applications. We will now add redis to the frontend and backend applications to store data.

Tye can use Docker to run images that run as part of your application. Make sure that Docker is installed on your machine.

1. Change the WeatherForecastController.Get() method in the backend project to cache the weather information in redis using an IDistributedCache.

2. Add the following using directives to the top of the file:

using Microsoft.Extensions.Caching.Distributed;
using System.Text.Json;

3. Update Get():

[HttpGet]
public async Task<string> Get([FromServices]IDistributedCache cache)
{
    var weather = await cache.GetStringAsync("weather");
    if (weather == null)
    {
        var rng = new Random();
        var forecasts = Enumerable.Range(1, 5).Select(index => new WeatherForecast
        {
            Date = DateTime.Now.AddDays(index),
            TemperatureC = rng.Next(-20, 55),
            Summary = Summaries[rng.Next(Summaries.Length)]
        })
        .ToArray();

        weather = JsonSerializer.Serialize(forecasts);

        await cache.SetStringAsync("weather", weather, new DistributedCacheEntryOptions
        {
            AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(5)
        });
    }

    return weather;
}

This will store the weather data in Redis with an expiration time of 5 seconds.

4. Add a package reference to Microsoft.Extensions.Caching.StackExchangeRedis in the backend project:

cd backend/
dotnet add package Microsoft.Extensions.Caching.StackExchangeRedis
cd ..

5. Modify Startup.ConfigureServices in the backend project to add the redis IDistributedCache implementation.

public void ConfigureServices(IServiceCollection services)
{
    services.AddControllers();
    services.AddStackExchangeRedisCache(o =>
    {
        o.Configuration = Configuration.GetConnectionString("redis");
    });
}

The above configures the redis cache to use the connection string for the redis service injected by the Tye host.

6. Modify tye.yaml to include redis as a dependency.

name: microservice
services:
- name: backend
  project: backend\backend.csproj
- name: frontend
  project: frontend\frontend.csproj
- name: redis
  image: redis
  bindings:
  - port: 6379
    connectionString: "${host}:${port}"
- name: redis-cli
  image: redis
  args: "redis-cli -h redis MONITOR"

We’ve added two services to the tye.yaml file: the redis service itself and a redis-cli service that we will use to watch the data being sent to and retrieved from redis.

The "${host}:${port}" format in the connectionString property will substitute the values of the host and port number to produce a connection string that can be used with StackExchange.Redis.

7. Run the tye command line in the solution root.

Make sure your command-line is in the microservices/ directory. One of the previous steps had you change directories to edit a specific project.

tye run

Navigate to http://localhost:8000 to see the dashboard running. Now you will see both redis and the redis-cli running listed in the dashboard.

Navigate to the frontend application and verify that the data returned is the same after refreshing the page multiple times. New content will be loaded every 5 seconds, so if you wait that long and refresh again, you should see new data. You can also look at the redis-cli logs using the dashboard and see what data is being cached in redis.

The "${host}:${port}" format in the connectionString property will substitute the values of the host and port number to produce a connection string that can be used with StackExchange.Redis.

Deploying to Kubernetes

Tye makes the process of deploying your application to Kubernetes very simple, with minimal knowledge or configuration required.

Tye will use your current credentials for pushing Docker images and accessing Kubernetes clusters. If you have configured kubectl with a context already, that’s what tye deploy is going to use!

Prior to deploying your application, make sure to have the following:

  1. Docker installed for your operating system.
  2. A container registry. Docker by default will create a container registry on DockerHub. You could also use Azure Container Registry (ACR) or another container registry of your choice.
  3. A Kubernetes cluster. There are many different options here.

If you choose a container registry provided by a cloud provider (other than Dockerhub), you will likely have to take some steps to configure your kubernetes cluster to allow access. Follow the instructions provided by your cloud provider.

Deploying Redis

tye deploy will not deploy the redis configuration, so you need to deploy it first by running:

kubectl apply -f https://raw.githubusercontent.com/dotnet/tye/master/docs/tutorials/hello-tye/redis.yaml

This will create a deployment and service for redis.

Tye deploy

You can deploy your application by running the following command:

tye deploy --interactive

Enter the Container Registry (ex: example.azurecr.io for Azure or example for dockerhub):

You will be prompted to enter your container registry. This is needed to tag images, and to push them to a location accessible by kubernetes.

Image tye deploy output

If you are using dockerhub, the registry name will be your dockerhub username. If you are using a standalone container registry (for instance from your cloud provider), the registry name will look like a hostname, eg: example.azurecr.io.

You’ll also be prompted for the connection string for redis.

Image redis connection string

Enter the following to use the instance that you just deployed:

redis:6379

tye deploy will create a Kubernetes secret to store the connection string.

--interactive is needed here to create the secret. This is a one-time configuration step. In a CI/CD scenario you would not want to have to specify connection strings over and over; deployment would rely on the existing configuration in the cluster.

Tye uses Kubernetes secrets to store connection information about dependencies like redis that might live outside the cluster. Tye will automatically generate mappings between service names, binding names, and secret names.

tye deploy does many different things to deploy an application to Kubernetes. It will:

  • Create a docker image for each project in your application.
  • Push each docker image to your container registry.
  • Generate a Kubernetes Deployment and Service for each project.
  • Apply the generated Deployment and Service to your current Kubernetes context.

Image tye deploy building images

You should now see three pods running after deploying.

kubectl get pods

NAME                        READY   STATUS    RESTARTS   AGE
backend-ccfcd756f-xk2q9     1/1     Running   0          85m
frontend-84bbdf4f7d-6r5zp   1/1     Running   0          85m
redis-5f554bd8bd-rv26p      1/1     Running   0          98m

You can visit the frontend application, but you will need to port-forward to access the frontend from outside the cluster:

kubectl port-forward svc/frontend 5000:80

Now navigate to http://localhost:5000 to view the frontend application working on Kubernetes.

Image kubernetes portforward

Currently Tye does not automatically enable TLS within the cluster, so communication takes place over HTTP instead of HTTPS. This is a typical way to deploy services in Kubernetes; we may look to enable TLS as an option or by default in the future.

Adding a registry to tye.yaml

If you want to use tye deploy as part of a CI/CD system, it’s expected that you’ll have a tye.yaml file initialized. You will then need to add a container registry to tye.yaml. Based on what container registry you configured, add the following line in the tye.yaml file:

registry: <registry_name>

Now it’s possible to use tye deploy without --interactive since the registry is stored as part of configuration.

This step may not make much sense if you’re using tye.yaml to store a personal Dockerhub username. A more typical use case would be storing the name of a private registry for use in a CI/CD system.

For a conceptual overview of how Tye behaves when using tye deploy for deployment, check out this document.

Undeploying your application

After deploying and playing around with the application, you may want to remove all resources associated from the Kubernetes cluster. You can remove resources by running:

tye undeploy

This will remove all deployed resources. If you’d like to see what resources would be deleted, you can run:

tye undeploy --what-if

If you want to experiment more with using Tye, we have a variety of different sample applications and tutorials that you can walk through; check them out in the Tye repository on GitHub.

We have been diligently working on adding new capabilities and integrations to continuously improve Tye. Here are some integrations below that we have recently released. There is also information provided on how to get started for each of these:

  • Ingress to expose pods/services created to the public internet.
  • Redis to store data, cache, or act as a message broker.
  • Dapr for integrating a Dapr application with Tye.
  • Zipkin for distributed tracing.
  • Elastic Stack for logging.

While we are excited about the promise Tye holds, it’s an experimental project and not a committed product. During this experimental phase we expect to engage deeply with anyone trying out Tye to hear feedback and suggestions. The point of doing experiments in the open is to help us explore the space as much as we can and use what we learn to determine what we should be building and shipping in the future.

Project Tye is currently committed as an experiment until .NET 5 ships, at which point we will evaluate what we have and all that we’ve learned to decide what we should do in the future.

Our goal is to ship every month, and some new capabilities that we are looking into for Tye include:

  • More deployment targets
  • Sidecar support
  • Connected development
  • Database migrations

We are excited by the potential Tye has to make developing distributed applications easier, and we need your feedback to make sure it reaches that potential. We’d really love for you to try it out and tell us what you think. There is a link to a survey on the Tye dashboard that you can fill out, or you can create issues and talk to us on GitHub. Either way, we’d love to hear what you think.

ASP.NET Core updates in .NET 5 Preview 4

Sourabh

.NET 5 Preview 4 is now available and is ready for evaluation! .NET 5 will be a current release.

Get started

To get started with ASP.NET Core in .NET 5.0 Preview 4, install the .NET 5.0 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest preview of Visual Studio 2019 16.6.

If you’re on macOS, we recommend installing the latest preview of Visual Studio 2019 for Mac 8.6.

Upgrade an existing project

To upgrade an existing ASP.NET Core 5.0 Preview 3 app to ASP.NET Core 5.0 Preview 4:

  • Update all Microsoft.AspNetCore.* package references to 5.0.0-preview.4.*.
  • Update all Microsoft.Extensions.* package references to 5.0.0-preview.4.*.

See the full list of breaking changes in ASP.NET Core 5.0.

That’s it! You should now be all set to use .NET 5 Preview 4.

What’s new?

Performance Improvements to HTTP/2

By adding support for HPack dynamic compression of HTTP/2 response headers in Kestrel, the 5.0.0-preview.4 release improves the performance of HTTP/2. For more information on how HPACK helps save bandwidth and reduce latency, we recommend reading this excellent write-up by the team at Cloudflare.

Reduction in container image sizes

The canonical multi-stage Docker build for ASP.NET Core involves pulling both the SDK image and the ASP.NET Core runtime image. By re-platforming the SDK image on top of the ASP.NET runtime image, we’re sharing layers between the two images. This dramatically reduces the size of the aggregate images that you pull. For more information about the size improvements and other container enhancements, check out the .NET 5 Preview 4 blog post.

See the release notes for additional details and known issues.

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET 5! We are eager to hear about your experiences with this latest .NET 5 release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

ASP.NET Core updates in .NET 5 Preview 3

Sourabh

.NET 5 Preview 3 is now available and is ready for evaluation! .NET 5 will be a current release.

Get started

To get started with ASP.NET Core in .NET 5.0 Preview 3, install the .NET 5.0 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest preview of Visual Studio 2019 16.6.

If you’re on macOS, we recommend installing the latest preview of Visual Studio 2019 for Mac 8.6.

Upgrade an existing project

To upgrade an existing ASP.NET Core 5.0 Preview 2 app to ASP.NET Core 5.0 Preview 3:

  • Change the TFM in your *.csproj file from netcoreapp5.0 to net5.0
  • Update all Microsoft.AspNetCore.* package references to 5.0.0-preview.3.20215.14.
  • Update all Microsoft.Extensions.* package references to 5.0.0-preview.3.20215.2.

See the full list of breaking changes in ASP.NET Core 5.0.

That’s it! You should now be all set to use .NET 5 Preview 3.

What’s new?

Performance Improvements to HTTP/2

By significantly reducing allocations in the HTTP/2 code path and adding support for HPack static compression of HTTP/2 response headers in Kestrel, the 5.0.0-preview.3 release improves the performance of HTTP/2.

We expect to announce additional features in upcoming preview releases.
See the release notes for additional details and known issues.

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET 5! We are eager to hear about your experiences with this latest .NET 5 release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

ASP.NET Core updates in .NET 5 Preview 2

Sourabh

.NET 5 Preview 2 is now available and is ready for evaluation! .NET 5 will be a current release.

Get started

To get started with ASP.NET Core in .NET 5.0 Preview 2, install the .NET 5.0 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest preview of Visual Studio 2019 16.6. If you’re on macOS, we recommend installing the latest preview of Visual Studio 2019 for Mac 8.6.

Upgrade an existing project

To upgrade an existing ASP.NET Core 5.0 Preview 1 app to ASP.NET Core 5.0 Preview 2:

  • Update all Microsoft.AspNetCore.* package references to 5.0.0-preview.2.20167.3.
  • Update all Microsoft.Extensions.* package references to 5.0.0-preview.2.20160.3.

See the full list of breaking changes in ASP.NET Core 5.0.

That’s it! You should now be all set to use .NET 5 Preview 2.

What’s new?

ASP.NET Core in .NET 5 Preview 2 doesn’t include any major new features just yet, but it does include plenty of minor bug fixes. We expect to announce new features in upcoming preview releases.

See the release notes for additional details and known issues.

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET 5! We are eager to hear about your experiences with this latest .NET 5 release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

ASP.NET Core updates in .NET 5 Preview 1

Sourabh

.NET 5 Preview 1 is now available and is ready for evaluation! .NET 5 will be a current release.

Get started

To get started with ASP.NET Core in .NET 5.0, install the .NET 5.0 SDK.

If you’re on Windows using Visual Studio, we recommend installing the latest preview of Visual Studio 2019 16.6.

Upgrade an existing project

To upgrade an existing ASP.NET Core 3.1 app to .NET 5:

  • Update the TargetFramework property to netcoreapp5.0
  • Update all Microsoft.AspNetCore.* package references to 5.0.0-preview.1.20124.5.
  • Update all Microsoft.Extensions.* package references to 5.0.0-preview.1.20120.4.

See the full list of breaking changes in ASP.NET Core 5.0.

That’s it! You should now be all set to use .NET 5.

What’s new?

ASP.NET Core in .NET 5 Preview 1 doesn’t include any major new features just yet, but it does include plenty of minor bug fixes. We expect to announce new features in upcoming preview releases.

See the release notes for additional details and known issues.

Give feedback

We hope you enjoy this release of ASP.NET Core in .NET 5! We are eager to hear about your experiences with this latest .NET 5 release. Let us know what you think by filing issues on GitHub.

Thanks for trying out ASP.NET Core!

Posted on Leave a comment

ASP.NET Core Apps Observability

Francisco Beltrao

Francisco

Thank you to Sergey Kanzhelev for the support and review of this ASP.NET Core Apps Observability article.

Modern software development practices value quick and continuous updates, following processes that minimize the impact of software failures. Identifying bugs early is important, but so is finding out whether changes are improving business value. These practices can only work when a monitoring solution is in place. This article explores options for adding observability to .NET Core apps, collected from interactions with customers using .NET Core in different environments. We will look into the OpenTelemetry and Application Insights SDKs to add observability to a sample distributed application.

Identifying software errors and business impact requires a monitoring solution that can observe and report how the system and its users behave. The collected data must provide the information required to analyze and identify a bad update, answering questions such as:

  • Are we observing more errors than before?
  • Were there new error types?
  • Did the request duration unexpectedly increase compared to previous versions?
  • Has the throughput (req/sec) decreased?
  • Has the CPU and/or Memory usage increased?
  • Were there changes in our KPIs?
  • Is it selling less than before?
  • Did our visitor count decrease?

The impact of a bad system update can be minimized by combining the monitoring information with progressive deployment strategies such as canary, mirroring, rings, and blue/green.

Observability is built on three pillars:

  • Logging: collects information about events happening in the system, helping the team analyze unexpected application behavior. Searching through the logs of suspect services can provide the hint needed to identify the root cause of a problem: a service throwing out-of-memory exceptions, app configuration not reflecting expected values, calls to an external service using an incorrect address, calls to an external service returning unexpected results, or incoming requests with unexpected input.

  • Tracing: collects information to create an end-to-end view of how a transaction is executed in a distributed system. A trace is like a stack trace spanning multiple applications. Once a problem has been recognized, traces are a good starting point for locating its source in distributed operations: calls from service A to B are taking longer than normal, payment service calls are failing, and so on.

  • Metrics: provide a real-time indication of how the system is running. They can be leveraged to build alerts, allowing a proactive reaction to unexpected values. As opposed to logs and traces, the amount of data collected using metrics remains constant as the system load increases. Application problems are often first detected through abnormal metric values: CPU usage higher than before, the payment error count spiking, or the queued item count growing without bound.

Adding Observability to a .NET Core Application

There are many ways to add observability aspects to an application. Dapr, for example, is a runtime for building distributed applications that transparently adds distributed tracing. Another example is the use of service meshes in Kubernetes (Istio, Linkerd).

Built-in and transparent tracing typically covers basic scenarios and answers generic questions, such as observed request duration or CPU trends. Other questions, such as custom KPIs or user behavior, require adding instrumentation to your code.

To illustrate how observability can be added to a .NET Core application, we will be using the following asynchronous distributed transaction example:

Sample Observability Application Overview

  1. Main Api receives a request from a “source”.
  2. Main Api enriches the request body with the current day, obtained from Time Api.
  3. Main Api enqueues the enriched request to a RabbitMQ queue for asynchronous processing.
  4. RabbitMQProcessor dequeues the request.
  5. RabbitMQProcessor, as part of the request processing, calls Time Api to get dbtime.
  6. Time Api calls SQL Server to get the current time.

To run the sample application locally (including dependencies and observability tools), follow this guide. The article will walk through adding each observability pillar (logging, tracing, metrics) to the sample asynchronous distributed transaction.

Note: for information on bootstrapping the OpenTelemetry or Application Insights SDK, please refer to the documentation: OpenTelemetry and Application Insights.

Logging

Logging was redesigned in .NET Core, bringing an integrated and extensible API. Built-in and external logging providers allow the collection of logs in multiple formats and to multiple targets. When deciding on a logging platform, consider the following features:

  • Centralized: allowing the collection/storage of all system logs in a central location.
  • Structured logging: allows you to add searchable metadata to logs.
  • Searchable: allows searching by multiple criteria (app version, date, category, level, text, metadata, etc.)
  • Configurable: allows changing verbosity without code changes, based on log level and/or scope (see the snippet after this list).
  • Integrated: integrates with tracing, facilitating analysis of traces and logs in the same tool.
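A minimal appsettings.json sketch of that configurability with the built-in Microsoft.Extensions.Logging providers (the category name shown is illustrative):

{
  "Logging": {
    "LogLevel": {
      "Default": "Information",
      "Sample.RabbitMQProcessor": "Debug"
    }
  }
}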

The sample application uses the ILogger interface for logging. The snippet below demonstrates an example of structured logging, which captures events using a message template and generates information that is both human- and machine-readable:

var result = await repository.GetTimeFromSqlAsync();
logger.LogInformation("{operation} result is {result}", nameof(repository.GetTimeFromSqlAsync), result);

When using a logging backend that understands structured logs, such as Application Insights, you can search for instances of the example log item where “operation” is equal to “GetTimeFromSqlAsync”:

Observability Application Insights structured log search

Tracing

Tracing collects the information required to observe a transaction as it “walks” through the system. To be effective, it must be implemented in every service taking part in the transaction.

.NET Core provides a common way to create traces through the System.Diagnostics.Activity class. Through this class, dependency implementations (HTTP, SQL, Azure, EF Core, StackExchange.Redis, etc.) can create traces in a neutral way, independent of the monitoring tool used.

It is important to note that those activities will not automatically be available in a monitoring system. Publishing them is the responsibility of the monitoring SDK used. Typically, SDKs have built-in collectors for common activities, transferring them to the destination platform automatically.

In the last quarter of 2019, OpenTelemetry was announced, promising to standardize telemetry instrumentation and collection across languages and tools. Before OpenTelemetry (or its predecessors OpenCensus and OpenTracing), adding observability would often mean adding proprietary SDKs (in)directly to the code base.

The OpenTelemetry .NET SDK is currently in alpha. The Azure Monitor Application Insights team is investing in OpenTelemetry as a next step of Azure Monitor SDKs evolution.

Quick Intro on Tracing with OpenTelemetry

In a nutshell, OpenTelemetry collects traces using spans. A span delimits an operation (HTTP request processing, dependency call) and contains start and end times, among other properties. It has a unique identifier (SpanId, 16 hex characters, 8 bytes) and a trace identifier (TraceId, 32 hex characters, 16 bytes). The trace identifier is used to correlate all spans for a given transaction. A span can contain child spans (like calls in a stack trace). If you are familiar with Azure Application Insights, the following table might be helpful for understanding OpenTelemetry terms:

Application Insights            OpenTelemetry
Request, PageView               Span with span.kind = server
Dependency                      Span with span.kind = client
Id of Request and Dependency    SpanId
Operation_Id                    TraceId
Operation_ParentId              ParentId
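Both identifiers are propagated between services in the W3C traceparent header, where their sizes are easy to see; for example (values are illustrative):

// format: version - trace id (32 hex chars) - parent span id (16 hex chars) - trace flags
traceparent: 00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01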

Adding Tracing to a .NET Core Application

As mentioned previously, an SDK is needed in order to collect and publish distributed traces in a .NET Core application. The Application Insights SDK sends traces to its centralized database, while OpenTelemetry supports multiple exporters (including Application Insights). When configured to use OpenTelemetry, the sample application sends traces to a Jaeger instance.

In the asynchronous distributed transaction scenario, we track the following operations:

HTTP Requests between microservices

HTTP correlation propagation is part of both SDKs; the only requirement is setting the activity ID format to W3C at application start:

public static void Main(string[] args)
{
    Activity.DefaultIdFormat = ActivityIdFormat.W3C;
    Activity.ForceDefaultIdFormat = true;

    // rest is omitted
}

Dependency calls (SQL, RabbitMQ)

Unlike the Application Insights SDK, OpenTelemetry (in early alpha) does not yet have support for SQL Server trace collection. A simple way to track dependencies with OpenTelemetry is to wrap the call, as in the following example:

var span = this.tracer.StartSpan("My external dependency", SpanKind.Client);
try
{
    return CallToMyDependency();
}
catch (Exception ex)
{
    span.Status = Status.Internal.WithDescription(ex.ToString());
    throw;
}
finally
{
    span.End();
}

Asynchronous Processing / Queued Items

There is no built-in trace correlation between publishing and processing a RabbitMQ message. Custom code is required to create the publishing activity (optional) and to reference the parent trace when the item is dequeued.

We previously covered creating traces by wrapping the dependency call. That option allows expressing additional semantic information, such as links between spans for batching and other fan-in patterns. Another option is to use System.Diagnostics.Activity, which is an SDK-independent way to create traces. This option has a limited set of features but is built into .NET.

These two options work well with each other, and the .NET team is working on making the integration between .NET Activity and OpenTelemetry spans better.

Creating an Operation Trace

The snippet below demonstrates how the publish operation trace can be created. It adds the trace information to the enqueued message header, which will later be used to link both operations.

Activity activity = null;

if (diagnosticSource.IsEnabled("Sample.RabbitMQ"))
{
    // Generates the Publishing to RabbitMQ trace
    // Only generated if there is an actual listener
    activity = new Activity("Publish to RabbitMQ");
    diagnosticSource.StartActivity(activity, null);
}

// Add current activity identifier to the RabbitMQ message
basicProperties.Headers.Add("traceparent", Activity.Current.Id);

channel.BasicPublish(...);

if (activity != null)
{
    // Signal the end of the activity
    diagnosticSource.StopActivity(activity, null);
}

A collector, which subscribes to target activities, is required to publish the trace to a backend. Implementing a collector is not a straightforward task and is intended to be done by SDK implementors. The snippet below is taken from the sample application, where a simplified, not production-ready RabbitMQ collector for OpenTelemetry was implemented:

public class RabbitMQListener : ListenerHandler
{
    public override void OnStartActivity(Activity activity, object payload)
    {
        var span = this.Tracer.StartSpanFromActivity(activity.OperationName, activity);
        foreach (var kv in activity.Tags)
            span.SetAttribute(kv.Key, kv.Value);
    }

    public override void OnStopActivity(Activity activity, object payload)
    {
        var span = this.Tracer.CurrentSpan;
        span.End();
        if (span is IDisposable disposableSpan)
        {
            disposableSpan.Dispose();
        }
    }
}

var subscriber = new DiagnosticSourceSubscriber(new RabbitMQListener("Sample.RabbitMQ", tracer), DefaultFilter);
subscriber.Subscribe();

For more information on how to build collectors, please refer to OpenTelemetry/Application Insights built-in collectors as well as this user guide.

Activity

As mentioned, HTTP requests in ASP.NET Core have built-in activity correlation injected by the framework. That is not the case for the RabbitMQ consumer. In order to continue the distributed transaction, we must create the span referencing the parent trace, which was injected into the message by the publisher. The snippet below uses an extension method to build the activity:

public static Activity ExtractActivity(this BasicDeliverEventArgs source, string name)
{
    var activity = new Activity(name ?? Constants.RabbitMQMessageActivityName);

    if (source.BasicProperties.Headers.TryGetValue("traceparent", out var rawTraceParent) &&
        rawTraceParent is byte[] binRawTraceParent)
    {
        activity.SetParentId(Encoding.UTF8.GetString(binRawTraceParent));
    }

    return activity;
}
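On the consuming side, the extension method can then be invoked from the message handler. A hypothetical sketch using RabbitMQ.Client's event-based consumer (the handler body is abbreviated and not the sample's exact code):

consumer.Received += (sender, eventArgs) =>
{
    // Build an Activity parented to the publisher's trace
    var activity = eventArgs.ExtractActivity("Process RabbitMQ message");

    // ... start the activity and create the span, as shown below
};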

The activity is then used to create the concrete trace. In OpenTelemetry, the code looks like this:

// Note: OpenTelemetry requires the activity to be started
activity.Start();
tracer.StartActiveSpanFromActivity(activity.OperationName, activity, SpanKind.Consumer, out span);

The snippet below creates the telemetry using Application Insights SDK:

// Note: Application Insights will start the activity
var operation = telemetryClient.StartOperation<DependencyTelemetry>(activity);
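One detail worth keeping in mind (not shown above): StartOperation returns a disposable operation holder, and the telemetry is only sent when the operation is stopped or disposed. A minimal sketch:

// Stop explicitly once the message has been processed...
telemetryClient.StopOperation(operation);

// ...or let disposal stop it:
using (var op = telemetryClient.StartOperation<DependencyTelemetry>(activity))
{
    // process the message
}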

Using activities gives flexibility in terms of the SDK used, as it is a neutral way to create traces. Once instrumented, the distributed end-to-end transaction in Jaeger looks like this:

Distributed Trace in Jaeger

The same transaction in Application Insights looks like this:

Distributed Trace in Application Insights

When using a single monitoring solution for traces and logs, such as Application Insights, the logs become part of the end-to-end transaction:

Observability Application Insights: traces and logs

Metrics

There are common metrics applicable to most applications, like CPU usage, allocated memory, and request time, as well as business-specific metrics, like visitors, page views, sold items, and sent items. Exposing business metrics in a .NET Core application typically requires using an SDK.

Collecting metrics in .NET Core happens through third-party SDKs, which aggregate values locally before sending them to a backend. Most libraries have built-in collection for common application metrics. However, business-specific metrics need to be built into the application logic, since they are created based on events that occur in the application domain.

In the sample application, we are using metric counters for enqueued items, successfully processed items, and unsuccessfully processed items. The implementation in both SDKs is similar: set up the metric and its dimensions, then track the counter values.

OpenTelemetry supports multiple exporters, and we will be using the Prometheus exporter. Prometheus combined with Grafana, for visualization and alerting, is a popular choice for open-source monitoring. Application Insights supports metrics like any other instrumentation type, requiring no additional SDK or tool.

Defining a metric and tracking values using OpenTelemetry looks like this:

// Create counter
// Note: 'exporter' is the metric exporter (Prometheus in the sample) created during SDK setup
var simpleProcessor = new UngroupedBatcher(exporter, TimeSpan.FromSeconds(5));
var meterFactory = MeterFactory.Create(simpleProcessor);
var meter = meterFactory.GetMeter("Sample App");
var enqueuedCounter = meter.CreateInt64Counter("Enqueued Item");

// Incrementing counter for specific source
var labelSet = new Dictionary<string, string>() { { "Source", source } };
enqueuedCounter.Add(context, 1L, meter.GetLabelSet(labelSet));

The visualization with Grafana is illustrated in the image below:

Metrics with Grafana/Prometheus

The snippet below demonstrates how to define a metric and track its values using the Application Insights SDK:

// Create counter
var enqueuedCounter = telemetryClient.GetMetric(new MetricIdentifier("Sample App", "Enqueued Item", "Source"));

// Incrementing counter for specific source
enqueuedCounter.TrackValue(metricValue, source);

The visualization in Application Insights is illustrated below:

Observability Application Insights custom metrics

Troubleshooting

Now that we have added the three observability pillars to a sample application, let’s use them to troubleshoot a scenario where the application is experiencing problems.

The first signals of an application problem are usually detected through anomalies in metrics. The snapshot below illustrates such a scenario, where the number of failed processed items spikes (red line).

Metrics indicating failure

A possible next step is to look for hints in distributed traces, which should help us identify where the problem is happening. In Jaeger, searching with the tag “error=true” filters the results, listing transactions where at least one error happened.

Jaeger traces with error

In Application Insights, we can search for errors in end-to-end transactions by looking under Failures/Dependencies or Failures/Exceptions.

Search traces with error in Application Insights

Application Insights error details in trace

The problem seems to be related to the Sample.RabbitMQProcessor service. The logs of this service can help us identify the problem. When using the Application Insights logging provider, logs and traces are correlated and displayed in the same view:

Observability Application Insights errors and logs

Looking at the details, we discover that an InvalidEventNameException is being raised. Since we are logging the message payload, details of the failed message are available in the monitoring tool. It appears the message being processed has an eventName value of “error”, which is causing the exception to be raised.

When introducing observability into a .NET Core application, two decisions need to be made:

  • The backend(s) where collected data will be stored and analyzed.
  • How instrumentation will be added to the application code.

Depending on your organization, the monitoring tool might already be selected. However, if you do have the chance to make this decision, consider the following:

  • Centralized: having all data in a single place makes it simple to correlate information, for example logs, distributed traces, and CPU usage. If the data is split, more effort is required.
  • Manageability: how simple is it to manage the monitoring tool? Is it hosted on the same machines/VMs where your application is running? In that case, shared infrastructure unavailability might leave you in the dark: when monitoring is not working, alerts won’t be triggered and metrics won’t be collected.
  • Vendor lock-in: if you need to run the same application in different environments (i.e. on-premises and cloud), a solution that can run everywhere might be favored.
  • Application dependencies: parts of your infrastructure or tooling might require a specific monitoring vendor. For example, Kubernetes scaling and/or progressive deployment based on Prometheus metrics.

Once the monitoring tool has been defined, choosing an SDK comes down to two options: using the one provided by the monitoring vendor, or using a library capable of integrating with multiple backends.

Vendor SDKs typically yield little to no surprise regarding stability and functionality. That is the case with Application Insights, for example: it is stable, with a rich feature set that includes live stream, a feature specific to that monitoring system.

OpenTelemetry

Using the OpenTelemetry SDK gives you more flexibility, offering integration with multiple monitoring backends. You can even mix them: a centralized monitoring solution for all collected data, with a subset sent to Prometheus to fulfill a specific requirement. If you are unsure whether OpenTelemetry is a good fit for your project, consider the following:

  • When is your project going to production? The SDK is currently in alpha, which means breaking changes are expected and it is not yet production-ready.
  • Are you using vendor specific features not yet available through the OpenTelemetry SDK (specific collectors, etc.)?
  • Is your monitoring backend supported by the SDK?
  • Are you replacing a vendor SDK with OpenTelemetry? Plan some time to compare both SDKs; OpenTelemetry exporters might differ in how they collect data compared to the vendor SDK.

Source code with the sample application can be found in this GitHub Repository.

Francisco Beltrao