
.NET Core Workers in Azure Container Instances


In .NET Core 3.0 we are introducing a new type of application template called Worker Service. This template is intended to give you a starting point for writing long-running services in .NET Core. In this walkthrough you’ll learn how to use a Worker with Azure Container Registry and Azure Container Instances to get your Worker running as a microservice in the cloud.

Since the Worker template Glenn blogged about is also available via the dotnet new command line, I can create one on my Mac and edit the code using Visual Studio for Mac or Visual Studio Code (which I’ll be using here to take advantage of the integrated Docker extension).

dotnet new worker

I’ll use the default Worker from the template. Since it writes log messages via ILogger during execution, I’ll be able to tell quickly from the logs whether the Worker is running.

public class Worker : BackgroundService
{
    private readonly ILogger<Worker> _logger;

    public Worker(ILogger<Worker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            _logger.LogInformation("Worker running at: {time}", DateTimeOffset.Now);
            await Task.Delay(1000, stoppingToken);
        }
    }
}

Visual Studio Code’s Docker tools are intelligent enough to figure out this is a .NET Core app, and will suggest the correct Dockerfile via the Command Palette’s Add Docker files to workspace option.

By right-clicking the resulting Dockerfile I can build the Worker into a Docker image in one click.

The Build Image option will package my Worker’s code into a Docker image locally. The second option, ACR Tasks: Build Image, uses Azure Container Registry Tasks to build the image in the cloud rather than on my local machine. This is helpful when the base image is larger than I want to download locally, or when I’m building an application on a Windows base image from Linux or Mac. You can learn more about ACR Tasks in the ACR docs. The Azure CLI makes it easy to log in to the Azure Container Registry, which authenticates my Docker client to the registry in my subscription.

az acr login -n BackgroundWorkerImages

This can be done in the VS Code integrated terminal or in a local terminal, as the login persists across terminal sessions. It can’t be done using the cloud shell, since logging in to the Azure Container Registry requires local shell access so that local Docker images can be accessed. Before I push the container image into my registry, I need to tag the image with my registry’s login server URI. I can easily get that URI from the portal.

I’ll copy the URI of the registry’s login server in the portal so I can paste it when I tag the image later.

By selecting the backgroundworker:latest image in Visual Studio Code’s Docker explorer pane, I can select Tag Image.

I’ll be prompted for the tag, and I can easily paste in the URI I copied from the portal.

Finally, I can right-click the image tag I created and select Push, and the image will be pushed into the registry. Once I have a Docker image in the registry, I can use the CLI or tools to deploy it to Azure Container Instances, Kubernetes, or even Azure App Service.

Now that the worker is containerized and stored in the registry, starting an instance of it is one click away.

Once the container instance starts up, I’ll see some logs indicating the worker is executing, but these are just the basic startup logs and not the information-level logs from my Worker code.

Since I added Information-level logs during the worker’s execution, the configuration in appsettings.json (or the environment variable for the container instance) will need to be updated to see more verbose logs.

{ "Logging": { "LogLevel": { "Default": "Information", "Microsoft.Hosting.Lifetime": "Information" } }
}
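As a side note, the environment-variable route mentioned above maps hierarchical keys with double underscores, so setting Logging__LogLevel__Default=Information on the container instance should have the same effect. The level can also be raised in code; here is a minimal sketch (not from the original walkthrough) using the standard ConfigureLogging hook on the host builder:

using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class Program
{
    public static void Main(string[] args) => CreateHostBuilder(args).Build().Run();

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            // Raise the minimum level so the Worker's Information-level messages appear.
            .ConfigureLogging(logging => logging.SetMinimumLevel(LogLevel.Information))
            .ConfigureServices(services => services.AddHostedService<Worker>());
}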

Once the code is re-packaged into an updated Docker image and pushed into the Azure Container Registry, following a simple Restart…

… more details will be visible in the container instance’s logging output.

The Worker template makes it easy to create long-running background workers that you can run for as long as you need in Azure Container Instances. New container instances can be created using the portal or the Azure command line, or you can opt for more advanced scenarios using Azure DevOps or Logic Apps. With the Worker template making it easy to get started building microservices using your favorite ASP.NET Core idioms, and Azure’s arsenal of container orchestration services, you can get your microservices up and running in minutes.

Brady Gaster

Senior Program Manager, ASP.NET Core



Web and Azure Tool Updates in Visual Studio 2019

Angelos Petropoulos


Hopefully by now you’ve seen that Visual Studio 2019 is generally available. As you would expect, we’ve added improvements for web and Azure development. As a starting point, Visual Studio 2019 comes with a new experience for getting started with your code, and we updated the experience for creating ASP.NET and ASP.NET Core projects to match:

If you are publishing your application to Azure, you can now configure Azure App Service to use Azure Storage and Azure SQL Database instances, right from the publish profile summary page, without leaving Visual Studio. This means that for any existing web application running in App Service, you can add SQL and Storage; it is no longer limited to creation time only.

By clicking the “Add” button you get to select between Azure Storage and Azure SQL Database (more Azure services to be supported in the future):

and then you get to choose between using an existing instance of Azure Storage that you provisioned in the past or provisioning a new one right then and there:

When you configure your Azure App Service through the publish profile as demonstrated above, Visual Studio will update the Azure App Service application settings to include the connection strings you have configured (e.g. in this case azgist). It will also apply hidden tags to the instances in Azure about how they are configured to work together so that this information is not lost and can be re-discovered later by other instances of Visual Studio.
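To make that concrete, here is a rough sketch (not from the original post) of how an app might read one of those connection strings; App Service application settings flow into ASP.NET Core configuration at runtime, so the configured name (azgist in the example above) resolves just like an entry in appsettings.json:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Configuration;

public class HomeController : Controller
{
    private readonly string _connectionString;

    public HomeController(IConfiguration configuration)
    {
        // Connection strings configured on the App Service surface through
        // the standard configuration providers at runtime.
        _connectionString = configuration.GetConnectionString("azgist");
    }
}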

For a 30-minute overview of developing with Azure in Visual Studio, check out the session we gave as part of the launch:

As always, we welcome your feedback. Tell us what you like and what you don’t like, tell us which features you are missing and which parts of the workflow work or don’t work for you. You can do this by submitting issues to Developer Community or contacting us via Twitter.

Angelos Petropoulos


.NET Core Workers as Windows Services


Glenn

In .NET Core 3.0 we are introducing a new type of application template called Worker Service. This template is intended to give you a starting point for writing long-running services in .NET Core. In this walkthrough we will create a worker and run it as a Windows Service.

Create a worker

Preview Note: In our preview releases the worker template is in the same menu as the Web templates. This will change in a future release. We intend to place the Worker Service template directly inside the new project wizard.

Create a Worker in Visual Studio


Create a Worker on the command line

Run dotnet new worker


Run as a Windows Service

In order to run as a Windows Service we need our worker to listen for start and stop signals from ServiceBase, the .NET type that exposes the Windows Service system to .NET applications. To do this we want to:

Add the Microsoft.Extensions.Hosting.WindowsServices NuGet package


Add the UseServiceBaseLifetime call to the HostBuilder in our Program.cs

public class Program
{
    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }

    public static IHostBuilder CreateHostBuilder(string[] args) =>
        Host.CreateDefaultBuilder(args)
            .UseServiceBaseLifetime()
            .ConfigureServices(services =>
            {
                services.AddHostedService<Worker>();
            });
}

This method does a couple of things. First, it checks whether or not the application is actually running as a Windows Service; if it isn’t, it no-ops, which makes the method safe to call when running locally or as a Windows Service. You don’t need to add guard clauses around it and can just run the app normally when it isn’t installed as a Windows Service.

Secondly, it configures your host to use a ServiceBaseLifetime. ServiceBaseLifetime works with ServiceBase to help control the lifetime of your app when run as a Windows Service. This overrides the default ConsoleLifetime that handles signals like CTRL+C.

Install the Worker

Once we have our worker using the ServiceBaseLifetime we then need to install it:

First, let’s publish the application. We will install the Windows Service in place, meaning the exe will be locked whenever the service is running. The publish step is a nice way to make sure all the files needed to run the service are in one place and ready to be installed.

dotnet publish -o c:\code\workerpub

Then we can use the sc utility in an admin command prompt

sc create workertest binPath=c:\code\workerpub\WorkerTest.exe


Security note: This command runs the service as Local System, which isn’t something you will generally want to do. Instead you should create a service account and run the Windows Service as that account. We won’t cover that here, but the ASP.NET Core docs discuss it: https://docs.microsoft.com/en-us/aspnet/core/host-and-deploy/windows-service?view=aspnetcore-2.2

Logging

The logging system has an Event Log provider that can send log messages directly to the Windows Event Log. To log to the Event Log you can add the Microsoft.Extensions.Logging.EventLog package and then modify your Program.cs:

public static IHostBuilder CreateHostBuilder(string[] args) =>
    Host.CreateDefaultBuilder(args)
        .ConfigureLogging(loggerFactory => loggerFactory.AddEventLog())
        .ConfigureServices(services =>
        {
            services.AddHostedService<Worker>();
        });

In upcoming previews we plan to improve the experience of using Workers with Windows Services by:

  1. Renaming UseServiceBaseLifetime to UseWindowsService
  2. Adding automatic and improved integration with the Event Log when running as a Windows Service.

We hope you try out this new template and let us know how it goes. You can file any bugs or suggestions here: https://github.com/aspnet/AspNetCore/issues/new/choose



Re-reading ASP.NET Core request bodies with EnableBuffering()


Jeremy

In some scenarios there’s a need to read the request body multiple times. Some examples include:

  • Logging the raw requests to replay in a load test environment
  • Middleware that reads the request body multiple times to process it

Usually Request.Body does not support rewinding, so it can only be read once. A straightforward solution is to save a copy of the stream in another stream that supports seeking so the content can be read multiple times from the copy.

In the ASP.NET Framework it was possible to read the body of an HTTP request multiple times using the HttpRequest.GetBufferedInputStream method. However, in ASP.NET Core a different approach must be used.

In ASP.NET Core 2.1 we added an extension method, EnableBuffering(), for HttpRequest. This is the suggested way to enable the request body to be read multiple times. Here is an example usage in the InvokeAsync() method of a custom ASP.NET Core middleware:

public async Task InvokeAsync(HttpContext context, RequestDelegate next)
{
    context.Request.EnableBuffering();

    // Leave the body open so the next middleware can read it.
    using (var reader = new StreamReader(
        context.Request.Body,
        encoding: Encoding.UTF8,
        detectEncodingFromByteOrderMarks: false,
        bufferSize: 1024, // any reasonable buffer size works here
        leaveOpen: true))
    {
        var body = await reader.ReadToEndAsync();
        // Do some processing with body…

        // Reset the request body stream position so the next middleware can read it
        context.Request.Body.Position = 0;
    }

    // Call the next delegate/middleware in the pipeline
    await next(context);
}
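The post doesn’t show how this middleware gets wired up. The InvokeAsync(HttpContext, RequestDelegate) signature above matches IMiddleware, so one way to register it (a sketch only, with RequestBufferingMiddleware as an assumed class name for the middleware shown above) looks like this:

using Microsoft.AspNetCore.Builder;
using Microsoft.Extensions.DependencyInjection;

public class Startup
{
    public void ConfigureServices(IServiceCollection services)
    {
        // Factory-based middleware (IMiddleware) is resolved from the container,
        // so the middleware class has to be registered here.
        services.AddTransient<RequestBufferingMiddleware>();
    }

    public void Configure(IApplicationBuilder app)
    {
        // Add it early in the pipeline so everything after it sees a buffered, rewindable body.
        app.UseMiddleware<RequestBufferingMiddleware>();
        // ... the rest of the pipeline (MVC, etc.) goes here ...
    }
}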

The backing FileBufferingReadStream uses a memory stream of a certain size first, then falls back to a temporary file stream. By default the size of the memory stream is 30KB. There are also other EnableBuffering() overloads that allow specifying a different threshold, and/or a limit for the total size:

public static void EnableBuffering(this HttpRequest request, int bufferThreshold)
public static void EnableBuffering(this HttpRequest request, long bufferLimit)
public static void EnableBuffering(this HttpRequest request, int bufferThreshold, long bufferLimit)

For example, a call of

context.Request.EnableBuffering(bufferThreshold: 1024 * 45, bufferLimit: 1024 * 100);

enables a read buffer with a limit of 100KB. Data is buffered in memory until the content exceeds 45KB, then it’s moved to a temporary file. By default there’s no limit on the buffer size, but if one is specified and the content of the request body exceeds it, a System.IOException will be thrown.

These overloads offer flexibility if there’s a need to fine-tune the buffering behaviors. Just keep in mind that:

  • Even though the memory stream is rented from a pool, it still has a memory cost associated with it.
  • Once the amount read exceeds bufferThreshold, performance will be slower since a file stream will be used.
Jeremy Meng

Software Development Engineer



Blazor 0.9.0 experimental release now available

Daniel Roth


Blazor 0.9.0 is now available! This release updates Blazor with the Razor Components improvements in .NET Core 3.0 Preview 3.

New Razor Component improvements now available to Blazor apps:

  • Improved event handling
  • Forms & validation

Check out the ASP.NET Core 3.0 Preview 3 announcement for details on these improvements. See also the Blazor 0.9.0 release notes for additional details.

Note: The Blazor templates have not been updated to use the new .razor file extension for Razor Components in this release. This update will be done in a future release.

Get Blazor 0.9.0

To get started with Blazor 0.9.0 install the following:

  1. .NET Core 3.0 Preview 3 SDK (3.0.100-preview3-010431)
  2. Visual Studio 2019 (Preview 4 or later) with the ASP.NET and web development workload selected.
  3. The latest Blazor extension from the Visual Studio Marketplace.
  4. The Blazor templates on the command-line:

    dotnet new -i Microsoft.AspNetCore.Blazor.Templates::0.9.0-preview3-19154-02
    

You can find getting started instructions, docs, and tutorials for Blazor at https://blazor.net.

Upgrade to Blazor 0.9.0

To upgrade your existing Blazor apps to Blazor 0.9.0 first make sure you’ve installed the prerequisites listed above.

To upgrade a Blazor 0.8.0 project to 0.9.0:

  • Update the Blazor packages and .NET CLI tool references to 0.9.0-preview3-19154-02.
  • Update the remaining Microsoft.AspNetCore.* packages to 3.0.0-preview3-19153-02.
  • Remove any usage of JSRuntime.Current and instead use dependency injection to get the current IJSRuntime instance and pass it through to where it is needed (see the sketch below).
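For example, a component can get IJSRuntime through injection. This is only a rough sketch (the component and the JavaScript call are made up, not part of the release notes):

@using System.Threading.Tasks
@using Microsoft.JSInterop
@inject IJSRuntime JSRuntime

<button onclick="@ConfirmAsync">Confirm</button>

@functions {
    async Task ConfirmAsync()
    {
        // InvokeAsync calls a JavaScript function by name; window.confirm is used here
        // just to show the injected IJSRuntime replacing JSRuntime.Current.
        await JSRuntime.InvokeAsync<bool>("confirm", "Are you sure?");
    }
}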

Give feedback

We hope you enjoy this latest preview release of Blazor. As with previous releases, your feedback is important to us. If you run into issues or have questions while trying out Blazor, file issues on GitHub. You can also chat with us and the Blazor community on Gitter if you get stuck or to share how Blazor is working for you. After you’ve tried out Blazor for a while please let us know what you think by taking our in-product survey. Click the survey link shown on the app home page when running one of the Blazor project templates:

Blazor survey

Thanks for trying out Blazor!

Daniel Roth

Principal Program Manager, ASP.NET



ASP.NET Core updates in .NET Core 3.0 Preview 3

Daniel Roth


.NET Core 3.0 Preview 3 is now available and it includes a bunch of new updates to ASP.NET Core.

Here’s the list of what’s new in this preview:

  • Razor Components improvements:
    • Single project template
    • New .razor extension
    • Endpoint routing integration
    • Prerendering
    • Razor Components in Razor Class Libraries
    • Improved event handling
    • Forms & validation
  • Runtime compilation
  • Worker Service template
  • gRPC template
  • Angular template updated to Angular 7
  • Authentication for Single Page Applications
  • SignalR integration with endpoint routing
  • SignalR Java client support for long polling

Please see the release notes for additional details and known issues.

Get started

To get started with ASP.NET Core in .NET Core 3.0 Preview 3 install the .NET Core 3.0 Preview 3 SDK

If you’re on Windows using Visual Studio, you’ll also want to install the latest preview of Visual Studio 2019.

  • Note: To use this preview of .NET Core 3.0 with the release channel of Visual Studio 2019, you will need to enable the option to use previews of the .NET Core SDK by going to Tools > Options > Projects and Solutions > .NET Core > Use previews of the .NET Core SDK

Upgrade an existing project

To upgrade an existing ASP.NET Core app to .NET Core 3.0 Preview 3, follow the migration steps in the ASP.NET Core docs.

Please also see the full list of breaking changes in ASP.NET Core 3.0.

Razor Components improvements

In the previous preview, we introduced Razor Components as a new way to build interactive client-side web UI with ASP.NET Core. This section covers the various improvements we’ve made to Razor Components in this preview update.

Single project template

The Razor Components project template is now a single project instead of two projects in the same solution. The authored Razor Components live directly in the ASP.NET Core app that hosts them. The same ASP.NET Core project can contain Razor Components, Pages, and Views. The Razor Components template also has HTTPS enabled by default like the other ASP.NET Core web app templates.

New .razor extension

Razor Components are authored using Razor syntax, but are compiled differently than Razor Pages and Views. To clarify which Razor files should be compiled as Razor Components we’ve introduced a new file extension: .razor. In the Razor Components template all component files now use the .razor extension. Razor Pages and Views still use the .cshtml extension.

Razor Components can still be authored using the .cshtml file extension as long as those files are identified as Razor Component files using the _RazorComponentInclude MSBuild property. For example, the Razor Component template in this release specifies that all .cshtml files under the Components folder should be treated as Razor Components.

<_RazorComponentInclude>Components\**\*.cshtml</_RazorComponentInclude>

Please note that there are a number of limitations with .razor files in this release. Please refer to the release notes for the list of known issues and available workarounds.

Endpoint routing integration

Razor Components are now integrated into ASP.NET Core’s new Endpoint Routing system. To enable Razor Components support in your application, use MapComponentHub<TComponent> within your routing configuration. For example,

app.UseRouting(routes =>
{
    routes.MapRazorPages();
    routes.MapComponentHub<App>("app");
});

This configures the application to accept incoming connections for interactive Razor Components, and specifies that the root component App should be rendered within a DOM element matching the selector app.

Prerendering

The Razor Components project template now does server-side prerendering by default. This means that when a user navigates to your application, the server will perform an initial render of your Razor Components and deliver the result to their browser as plain static HTML. Then, the browser will reconnect to the server via SignalR and switch the Razor Components into a fully interactive mode. This two-phase delivery is beneficial because:

  • It improves the perceived performance of the site, because the UI can appear sooner, without waiting to make any WebSocket connections or even running any client-side script. This makes a bigger difference for users on slow connections, such as 2G/3G mobiles.
  • It can make your application easily crawlable by search engines.

For users on faster connections, such as intranet users, this feature has less impact because the UI should appear near-instantly regardless.

To set up prerendering, the Razor Components project template no longer has a static HTML file. Instead a single Razor Page, /Pages/Index.cshtml, is used to prerender the app content using the Html.RenderComponentAsync<TComponent>() HTML helper. The page also references the components.server.js script to set up the SignalR connection after the content has been prerendered and downloaded. And because this is a Razor Page, features like the environment tag helper just work.

Index.cshtml

@page "{*clientPath}"
<!DOCTYPE html>
<html>
<head>
    ...
</head>
<body>
    <app>@(await Html.RenderComponentAsync<App>())</app>
    <script src="_framework/components.server.js"></script>
</body>
</html>

Besides the app loading faster, you can see that prerendering is happening by looking at the downloaded HTML source in the browser dev tools. The Razor Components are fully rendered in the HTML.

Razor Components in Razor Class Libraries

You can now add Razor Components to Razor Class Libraries and reference them from ASP.NET Core projects using Razor Components.

To create reusable Razor Components in a Razor Class Library:

  1. Create a Razor Components app:

    dotnet new razorcomponents -o RazorComponentsApp1
    
  2. Create a new Razor Class Library

    dotnet new razorclasslib -o RazorClassLib1
    
  3. Add a Component1.razor file to the Razor Class Library

Component1.razor

<h1>Component1</h1>

<p>@message</p>

@functions {
    string message = "Hello from a Razor Class Library!";
}
  4. Reference the Razor Class Library from an ASP.NET Core app using Razor Components:

    dotnet add RazorComponentsApp1 reference RazorClassLib1
    
  5. In the Razor Components app, import all components from the Razor Class Library using the @addTagHelper directive and then use Component1 in the app:

Index.razor

@page "/"
@addTagHelper *, RazorClassLib1

<h1>Hello, world!</h1>

Welcome to your new app.

<Component1 />

Note that Razor Class Libraries are not compatible with Blazor apps in this release. Also, Razor Class Libraries do not yet support static assets. To create components in a library that can be shared with Blazor and Razor Components apps you still need to use a Blazor Class Library. This is something we expect to address in a future update.

Improved event handling

The new EventCallback and EventCallback<> types make defining component callbacks much more straightforward. For example consider the following two components:

MyButton.razor

<button onclick="@OnClick">Click here and see what happens!</button>

@functions {
    [Parameter] EventCallback<UIMouseEventArgs> OnClick { get; set; }
}

UsesMyButton.razor

@text
<MyButton OnClick="ShowMessage" />

@functions {
    string text;

    void ShowMessage(UIMouseEventArgs e)
    {
        text = "Hello, world!";
    }
}

The OnClick callback is of type EventCallback<UIMouseEventArgs> (instead of Action<UIMouseEventArgs>), which MyButton passes off directly to the onclick event handler. The compiler handles converting the delegate to an EventCallback, and will do some other things to make sure that the rendering process has enough information to render the correct target component. As a result an explicit call to StateHasChanged in the ShowMessage event handler isn’t necessary.

By using the EventCallback<> type, the OnClick handler can now also be asynchronous without any additional code changes to MyButton:

UsesMyButton.razor

@text
<MyButton OnClick="ShowMessageAsync" />

@functions {
    string text;

    async Task ShowMessageAsync(UIMouseEventArgs e)
    {
        await Task.Yield();
        text = "Hello, world!";
    }
}

We recommend using EventCallback and EventCallback<T> when you define component parameters for event handling and binding. Use EventCallback<> where possible because it’s strongly typed and will provide better feedback to users of your component. Use EventCallback when there’s no value you will pass to the callback.

Note that consumers don’t have to write different code when using a component because of EventCallback. The same event handling code should still work, while removing some thorny issues and improving the usability of your components.

Forms & validation

This preview release adds built-in components and infrastructure for handling forms and validation.

One of the benefits of using .NET for client-side web development is the ability to share the same implementation logic between the client and the server. Validation logic is a prime example. The new forms & validation support in Razor Components includes support for handling validation using data annotations, or you can plug in your preferred validation system.

For example, the following Person type defines validation logic using data annotations:

public class Person
{
    [Required(ErrorMessage = "Enter a name")]
    [StringLength(10, ErrorMessage = "That name is too long")]
    public string Name { get; set; }

    [Range(0, 200, ErrorMessage = "Nobody is that old")]
    public int AgeInYears { get; set; }

    [Required]
    [Range(typeof(bool), "true", "true", ErrorMessage = "Must accept terms")]
    public bool AcceptsTerms { get; set; }
}

Here’s how you can create a validating form based on this Person model:

<EditForm Model="@person" OnValidSubmit="@HandleValidSubmit">
    <DataAnnotationsValidator />
    <ValidationSummary />

    <p class="name">
        Name: <InputText bind-Value="@person.Name" />
    </p>
    <p class="age">
        Age (years): <InputNumber bind-Value="@person.AgeInYears" />
    </p>
    <p class="accepts-terms">
        Accepts terms: <InputCheckbox bind-Value="@person.AcceptsTerms" />
    </p>

    <button type="submit">Submit</button>
</EditForm>

@functions {
    Person person = new Person();

    void HandleValidSubmit()
    {
        Console.WriteLine("OnValidSubmit");
    }
}

If you add this form to your app and run it you will get a basic form that automatically validates the field inputs when they are changed and when the form is submitted.

Validating form

There are quite a few things going on here, so let’s break it down piece by piece:

  • The form is defined using the new EditForm component. The EditForm sets up an EditContext as a cascading value that tracks metadata about the edit process (e.g. what’s been modified, current validation messages, etc.). The EditForm also provides convenient events for valid and invalid submits (OnValidSubmit, OnInvalidSubmit). Or you can use OnSubmit directly if you want to trigger the validation yourself.
  • The DataAnnotationsValidator component attaches validation support using data annotations to the cascaded EditContext. Enabling support for validation using data annotations currently requires this explicit gesture, but we are considering making this the default behavior that you can then override.
  • Each of the form fields is defined using a set of built-in input components (InputText, InputNumber, InputCheckbox, InputSelect, etc.). These components provide default behavior for validating on edit and changing their CSS class to reflect the field state. Some of them have useful parsing logic (e.g., InputDate and InputNumber handle unparseable values gracefully by registering them as validation errors). The relevant ones also support nullability of the target field (e.g., int?).
  • The ValidationMessage component displays validation messages for a specific field.
  • The ValidationSummary component summarizes all validation messages (similar to the validation summary tag helper).

There are some limitations with the built-in input components that we expect to improve in future updates. For example, you can’t currently specify arbitrary attributes on the generated input tags. In the future, we plan to enable components that pass through all extra attributes. For now, you’ll need to build your own component subclasses to handle these cases.

Runtime compilation

Support for runtime compilation was removed from the ASP.NET Core shared framework in .NET Core 3.0, but you can now enable it by adding a package to your app.

To enable runtime compilation:

  1. Add a package reference to Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation

    <PackageReference Include="Microsoft.AspNetCore.Mvc.Razor.RuntimeCompilation" Version="3.0.0-preview3-19153-02" />
    
  2. In Startup.ConfigureServices add a call to AddRazorRuntimeCompilation

    services.AddMvc().AddRazorRuntimeCompilation();
    

Worker Service Template

In 3.0.0-preview3 we are introducing a new template called ‘Worker Service’. This template is designed as a starting point for running long-running background processes like you might run as a Windows Service or Linux Daemon. Examples of this would be producing/consuming messages from a message queue or monitoring a file to process when it appears. It’s intended to provide the productivity features of ASP.NET Core such as Logging, DI, Configuration, etc without carrying any web dependencies.

Worker Service
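To make the file-monitoring example above concrete, a worker along these lines could be used. This is only a sketch (the folder path and class name are invented), following the same BackgroundService pattern as the template:

using System;
using System.IO;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public class FileProcessingWorker : BackgroundService
{
    private readonly ILogger<FileProcessingWorker> _logger;

    public FileProcessingWorker(ILogger<FileProcessingWorker> logger)
    {
        _logger = logger;
    }

    protected override async Task ExecuteAsync(CancellationToken stoppingToken)
    {
        while (!stoppingToken.IsCancellationRequested)
        {
            // Look for files dropped into a watched folder and process each one.
            foreach (var file in Directory.EnumerateFiles("/data/incoming"))
            {
                _logger.LogInformation("Processing {file}", file);
                // ... process the file, then move or delete it ...
            }

            await Task.Delay(TimeSpan.FromSeconds(5), stoppingToken);
        }
    }
}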

In the coming days we will publish a few blog posts giving more walkthroughs on using the Worker template to get started. We will have dedicated posts about publishing as a Windows/Systemd service, running on ACI/AKS, as well as running as a WebJob.

Caveats

Whilst the intent is for the worker template not to have any dependencies on web technologies by default, in preview3 it still uses the Web SDK and is shown after you select ‘ASP.NET Core Web Application’ in Visual Studio. This is temporary and will be changed in a future preview. This means that for preview3 you will see many options in VS that may not make sense, such as publishing your worker as a web app to IIS.

Angular template updated to Angular 7

The Angular template is now updated to Angular 7. We anticipate updating again to Angular 8 before .NET Core 3.0 ships a stable release.

Authentication for the Single Page Application templates

This release introduces support for authentication in our Angular and React templates. In this section we show how to create a new Angular or React template that allows us to authenticate users and access a protected API resource.

Our support for authenticating and authorizing users is powered behind the scenes by IdentityServer, with some extensions we built to simplify the configuration experience for the scenarios we are targeting.

Note: In this post we showcase authentication support for Angular but the React template offers equivalent functionality.

Create a new Angular application

To create a new Angular application with authentication support we invoke the following command:

dotnet new angular -au Individual

This command creates a new ASP.NET Core application with a hosted client Angular application. The ASP.NET Core application includes an Identity Server instance already configured so that your Angular application can authenticate users and send HTTP requests against the protected resources in the ASP.NET Core application.

The authentication and authorization support is built as an Angular module that gets imported into your application and offers a suite of components and services to enhance the functionality of the main application module.

Run the application

To run the application simply execute the command below and open the browser to the URL displayed in the console:

dotnet run
Hosting environment: Development
Content root path: C:\angularapp
Now listening on: https://localhost:5001
Now listening on: http://localhost:5000
Application started. Press Ctrl+C to shut down.

SPA index

When we open the application we see the usual Home, Counter and Fetch data menu options and two new options: Register and Login. If we click on Register we get sent to the default Identity UI where (after running migrations and updating the database) we can register as a new user.

Register a new user

SPA register

After registering as a new user we get redirected back to the application where we can see that we are successfully authenticated

SPA logged in

Call an authenticated API

If we click on Fetch data we can see the weather forecast data table.

SPA fetch data

Protect an existing API

To protect an API on the server we simply need to use the [Authorize] attribute on the controller or action that we want to protect.

[Authorize]
[Route("api/[controller]")]
public class SampleDataController : Controller
{
...
}

Require authentication for a client route

To require the user to be authenticated when visiting a page in the Angular application, we apply the AuthorizeGuard to the route we are configuring.

import { ApiAuthorizationModule } from 'src/api-authorization/api-authorization.module';
import { AuthorizeGuard } from 'src/api-authorization/authorize.guard';
import { AuthorizeInterceptor } from 'src/api-authorization/authorize.interceptor';

@NgModule({
  declarations: [
    AppComponent,
    NavMenuComponent,
    HomeComponent,
    CounterComponent,
    FetchDataComponent
  ],
  imports: [
    BrowserModule.withServerTransition({ appId: 'ng-cli-universal' }),
    HttpClientModule,
    FormsModule,
    ApiAuthorizationModule,
    RouterModule.forRoot([
      { path: '', component: HomeComponent, pathMatch: 'full' },
      { path: 'counter', component: CounterComponent },
      { path: 'fetch-data', component: FetchDataComponent, canActivate: [AuthorizeGuard] },
    ])
  ],
  providers: [
    { provide: HTTP_INTERCEPTORS, useClass: AuthorizeInterceptor, multi: true }
  ],
  bootstrap: [AppComponent]
})
export class AppModule { }

Get more details by visiting the docs page

This is a quick introduction to what our new authentication support for Single Page Applications has to offer. For more details check out the docs.

Endpoint Routing with SignalR Hubs

In 3.0.0-preview3 we are hooking SignalR hubs into the new Endpoint Routing feature recently released. SignalR hub wire-up was previously done explicitly:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("hubs/chat");
});

This meant developers would need to wire up controllers, Razor pages, and hubs in a variety of different places during startup, leading to a series of nearly-identical routing segments:

app.UseSignalR(routes =>
{
    routes.MapHub<ChatHub>("hubs/chat");
});

app.UseRouting(routes =>
{
    routes.MapRazorPages();
});

Now, SignalR hubs can also be routed via endpoint routing, so you’ve got a one-stop place to route nearly everything in ASP.NET Core.

app.UseRouting(routes =>
{
    routes.MapRazorPages();
    routes.MapHub<ChatHub>("hubs/chat");
});

Long Polling for Java SignalR Client

We added long polling support to the Java client which enables it to establish connections even in environments that do not support WebSockets. This also gives you the ability to specifically select the long polling transport in your client apps.

gRPC Template

This preview release introduces a new template for building gRPC services with ASP.NET Core using a new gRPC framework we are building in collaboration with Google.

gRPC is a popular RPC (remote procedure call) framework that offers an opinionated contract-first approach to API development. It uses HTTP/2 for transport, Protocol Buffers as the interface description language, and provides features such as authentication, bidirectional streaming and flow control, and cancellation and timeouts.

gRPC template

The templates create two projects: a gRPC service hosted inside ASP.NET Core, and a console application to test it with.

gRPC solution
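To give a feel for the service side, the generated project centers on a service class that overrides a base class produced from the .proto file. The snippet below is a sketch based on the familiar Greeter example; the exact code generated by this preview may differ slightly:

using System.Threading.Tasks;
using Grpc.Core;

public class GreeterService : Greeter.GreeterBase
{
    // SayHello, HelloRequest, HelloReply, and Greeter.GreeterBase are all generated
    // by the gRPC tooling from the greet.proto contract.
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply { Message = "Hello " + request.Name });
    }
}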

This is the first public preview of gRPC for ASP.NET Core and it doesn’t implement all of gRPC’s features, but we’re working hard to make the best gRPC experience for ASP.NET possible. Please try it out and give us feedback at grpc/grpc-dotnet on GitHub.

Stay tuned for future blog posts discussing gRPC for ASP.NET Core in more detail.

Give feedback

We hope you enjoy the new features in this preview release of ASP.NET Core! Please let us know what you think by filing issues on GitHub.

Daniel Roth

Principal Program Manager, ASP.NET



Azure Hybrid Benefit for SQL Server

Angelos Petropoulos


In my previous blog post I talked about how to migrate data from existing on-prem SQL Server instances to Azure SQL Database. If you haven’t heard, SQL Server 2008 end of support is coming this summer, so it’s a good time to evaluate moving to an Azure SQL Database.

If you decide to try Azure, chances are you will not be able to immediately move 100% of your on-prem databases and relevant applications in one go. You’ll probably come up with a migration plan that spans weeks, months or even years. During the migration phase you will be spinning up new instances of Azure SQL Database and turning off on-prem SQL Server instances, but for a little bit of time there will be overlap.

To help manage the cost during such transitions we offer the Azure Hybrid Benefit. You can convert 1 core of SQL Enterprise edition to get up to 4 vCores of Azure SQL Database at a reduced rate. For example, if you have 4 core licenses of SQL Enterprise edition, you can receive up to 16 vCores of Azure SQL Database.

If you want to learn more check out the Azure Hybrid Benefit FAQ and don’t forget, if you have any questions around migrating your .NET applications to Azure you can always ask for free assistance. If you have any questions about this post just leave us a comment below.

Angelos Petropoulos


Migrating your existing on-prem SQL Server database to Azure SQL DB

If you are in the process of moving an existing .NET application to Azure, it’s likely you’ll have to migrate an existing, on-prem SQL database as well. There are a few different ways you can go about this, so let’s go through them.

Data Migration Assistant (downtime required)

The Data Migration Assistant (download | documentation) is free, easy to use, slick and extremely powerful! It can:

  • Evaluate if your database is ready to migrate and produce a readiness report (command line support included)
  • Provide recommendations for how to remediate migration blocking issues
  • Recommend the minimum Azure SQL Database SKU based on performance counter data of your existing database
  • Perform the actual migration of schema, data and objects (server roles, logins, etc.)

After a successful migration, applications will be able to connect to the target SQL server databases seamlessly. There are currently a couple of limitations, but the majority of databases shouldn’t be impacted. If this sounds interesting, check out the full tutorials on how to migrate to Azure SQL DB and how to migrate to Azure SQL DB Managed Instance.

Azure Data Migration Service (no downtime required)

The Azure Data Migration Service allows you to move your on-prem database to Azure without taking it offline during the migration. Applications can keep on running while the migration is taking place. Once the database in Azure is ready you can switch your applications over immediately.

If this sounds interesting, check out the full tutorials on how to migrate to Azure SQL DB and how to migrate to Azure SQL DB Managed Instance without downtime.

SQL Server Management Studio (downtime required)

You are probably already familiar with SQL Server Management Studio (download | documentation), but if you are not it’s basically an IDE for SQL Server built on top of the Visual Studio shell and it’s free! Unlike the Data Migration Assistant, it cannot produce readiness reports nor can it suggest remediating actions, but it can perform the actual migration two different ways.

The first way is by selecting the command “Deploy Database to Microsoft Azure SQL Database…” which will bring up the migration wizard to take you through the process step by step:

The second way is by exporting the existing, on-prem database as a .bacpac file (docs to help with that) and then importing the .bacpac file into Azure:

Resolving database migration compatibility issues

There are a wide variety of compatibility issues that you might encounter, depending both on the version of SQL Server in the source database and the complexity of the database you are migrating. Use the following resources, in addition to a targeted Internet search using your search engine of choice:

In addition to searching the Internet and using these resources, use the MSDN SQL Server community forums or StackOverflow. If you have any questions or problems just leave us a comment below.


Microsoft’s Developer Blogs are Getting an Update

In the coming days, we’ll be moving our developer blogs to a new platform with a modern, clean design and powerful features that will make it easy for you to discover and share great content. This week, you’ll see the Visual Studio, IoTDev, and Premier Developer blogs move to a new URL, while additional developer blogs will transition over the coming weeks. You’ll continue to receive all the information and news from our teams, including access to all past posts. Your RSS feeds will continue to seamlessly deliver posts and any of your bookmarked links will re-direct to the new URL location.

We’d love your feedback

Our most inspirational ideas have come from this community and we want to make sure our developer blogs are exactly what you need. Share your thoughts about the current blog design by taking our Microsoft Developer Blogs survey and tell us what you think! We would also love your suggestions for future topics that our authors could write about. Please use the survey to share what kinds of information, tutorials, or features you would like to see covered in future posts.

Frequently Asked Questions

For the most-asked questions, see the FAQ below. If you have any additional questions, ideas, or concerns that we have not addressed, please let us know by submitting feedback through the Developer Blogs FAQ page.

Will Saved/Bookmarked URLs Redirect?

Yes. All existing URLs will auto-redirect to the new site. If you discover any broken links, please report them through the FAQ feedback page.

Will My Feed Reader Still Send New Posts?

Yes, your RSS feed will continue to work through your current set-up. If you encounter any issues with your RSS feed, please report it with details about your feed reader.

Which Blogs Are Moving?

This migration involves the majority of our Microsoft developer blogs so expect to see changes to our Visual Studio, IoTDev, and Premier Developer blogs this week. The rest will be migrated in waves over the next few weeks.

We appreciate all of the feedback so far and look forward to showing you what we’ve been working on! For additional information about this blog migration, please refer to our Developer Blogs FAQ.


Announcing an easier way to use latest certificates from Key Vault

Posting on behalf of Prashanth Yerramilli

When we launched Azure Key Vault a few years ago, it solved a major problem: storing sensitive and/or secret information in code or config files in plain text causes multiple problems, including security exposure. Users stored their secrets in a safe store like Key Vault and used a URI to fetch the secret material. This service has been wildly popular and has become a standard for cloud applications. It is used by everyone from fledgling startups to Fortune 500 companies the world over.

Developers use Key Vault to store their ad hoc secrets, certificates and keys used for encryption. To follow best security practices they create secrets that are short-lived. An example of a typical flow could be:

  • Step 1: Developer creates a certificate in Key Vault
  • Step 2: Developer sets the lifetime of the secret to be 30 days. In other words, the developer asks Key Vault to re-create the certificate every 30 days. The developer also chooses to receive an email when a certificate is about to expire
  • Step 3: Developer writes a polling service to check if the certificate has indeed expired

In the above scenario there are a few challenges for the customer. They would have to write a polling service that constantly checks whether the certificate has expired and, if it has, wait for the new certificate and then bind it in Windows Certificate Manager.
Now what if the developer didn’t have to poll, and didn’t have to bind the new certificate in Windows Certificate Manager either? To solve this exact problem we built the Key Vault Virtual Machine Extension.

Azure virtual machine (VM) extensions are small applications that provide post-deployment configuration and automation tasks on Azure VMs. For example, if a virtual machine requires software installation, anti-virus protection, or to run a script inside of it, a VM extension can be used. Azure VM extensions can be run with the Azure CLI, PowerShell, Azure Resource Manager templates, and the Azure portal. Extensions can be bundled with a new VM deployment, or run against any existing system.
To learn more about VM Extensions please click here

The Key Vault VM Extension does just that, as explained in the steps below:

  • Step 1: Create a Key Vault and create an Azure Windows Virtual Machine
  • Step 2: Install the Key Vault VM Extension on the VM
  • Step 3: Configure Key Vault VM Extension to monitor a specific vault by specifying how often it should fetch the certificate

By doing the above steps the latest certificate is bound correctly in Windows Certificate Manager. This feature enables auto-rotation of SSL certificates, without necessitating a re-deployment or binding.

In the lifecycle of secrets management, fetching the latest version of a secret (for the purpose of this article, a certificate) is just as important as storing it securely. To solve this problem on an Azure Virtual Machine, we’ve created a VM Extension for Windows; a Linux version is coming soon.
Once installed, the Key Vault Virtual Machine Extension fetches the latest version of the certificate at a specified interval and automatically binds it in the certificate store on Windows.

Also, before we begin the tutorial, we need to understand a concept called Managed Identities.
Your code needs credentials to authenticate to cloud services, but you want to limit the visibility of those credentials as much as possible. Ideally, they never appear on a developer’s workstation or get checked in to source control. Azure Key Vault can store credentials securely so they aren’t in your code, but to retrieve them you need to authenticate to Azure Key Vault, and to authenticate to Key Vault you need a credential! A classic bootstrap problem. Through the magic of Azure and Azure AD, Managed Identities (MI) provide a “bootstrap identity” that makes it much simpler to get things started.

Here’s how it works: When you enable MI for an Azure resource such as a virtual machine, Azure creates a Service Principal (an identity) for that resource in Azure AD, and injects the credentials (of that identity) into the resource (in this case a virtual machine).

  1. Your code calls a local MI endpoint to get an access token
  2. MI uses the locally injected credentials to get an access token from Azure AD
  3. Your code uses this access token to authenticate to an Azure service

Managed Identities
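To make the flow above concrete, here is a minimal sketch (not part of this tutorial) of application code using a Managed Identity to read a secret from Key Vault, via the Microsoft.Azure.Services.AppAuthentication and Microsoft.Azure.KeyVault packages; the vault and secret names are placeholders:

using System.Threading.Tasks;
using Microsoft.Azure.KeyVault;
using Microsoft.Azure.Services.AppAuthentication;

public class CertificateFetcher
{
    public async Task<string> GetLatestCertificateAsync()
    {
        // AzureServiceTokenProvider uses the locally injected Managed Identity
        // credentials to obtain an access token from Azure AD.
        var tokenProvider = new AzureServiceTokenProvider();
        var keyVaultClient = new KeyVaultClient(
            new KeyVaultClient.AuthenticationCallback(tokenProvider.KeyVaultTokenCallback));

        // The /secrets path returns the full certificate, including the private key.
        var secret = await keyVaultClient.GetSecretAsync(
            "https://myVaultName.vault.azure.net/secrets/myCertName");
        return secret.Value;
    }
}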

Now within Managed Identities there are two types:

  1. System Assigned managed identity is enabled directly on an Azure service instance. When the identity is enabled, Azure creates an identity for the instance in the Azure AD tenant that’s trusted by the subscription. The lifecycle of the identity is managed by Azure and is tied to the Azure service instance.
  2. User Assigned managed identity is created as a standalone Azure resource. Users first create an identity and then assign that identity to one or more Azure resources.

In this tutorial I will demonstrate how to create an Azure Virtual Machine with an ARM template, which also includes creating a Key Vault VM Extension on the VM.

Prerequisites

Step 1

After the prerequisites are complete, create a System Assigned identity by following this tutorial

Step 2

Give the newly created System Assigned identity access to your Key Vault

  • Go to https://portal.azure.com and navigate to your Key Vault
  • Select the Access Policies section and Add New, searching for the identity you created in Step 1

Step 3

Create or update a VM with the following ARM template.
You can view the full ARM template here and the ARM parameters file here.

The most minimal settings in the ARM template are shown below:

 {
   "secretsManagementSettings": {
     "observedCertificates": [
       "<KeyVault URI of a secret to be monitored/retrieved, in versionless format: https://myVaultName.vault.azure.net/secrets/myCertName>",
       "<more entries here>"
     ],
     "pollingIntervalInS": "[parameters('kvvmextPollingInterval')]"
   }
 }

As you can see, we only specify the observedCertificates parameter and the polling interval in seconds


Note: Your observedCertificates URLs should be of the form:

https://myVaultName.vault.azure.net/secrets/myCertName 

and not:

https://myVaultName.vault.azure.net/certificates/myCertName 

The reason is that the /secrets path returns the full certificate, including the private key, while the /certificates path does not.

By following this tutorial you can create a VM with the template specified above.

The above tutorial assumes that you are storing your certificates in Windows Certificate Manager, so the VM Extension pulls down the latest certificates at a specified interval and automatically binds those certificates in your certificate manager.

That’s all folks!

Linux Version: We’re actively working on a VM Extension for Linux and would love to hear any feedback you might have.

We are eager to hear from you about your use cases and how we can evolve the VM Extension to help you. So please reach out to us and add your feature requests to the Azure feedback forum. If you run into issues using the VM extension please reach out to us on StackOverflow.

Prashanth Yerramilli, Senior Program Manager, Azure Key Vault

Prashanth Yerramilli is the Key Vault Program Manager on the Azure Security team. He has over 10 years of Software Engineering experience and brings to the team a love for creating the ultimate development experience.

Prashanth can be reached at:
-Twitter @yvprashanth1
-GitHub https://github.com/yvprashanth