
gRPC-Web for .NET now available


James

gRPC-Web for .NET is now officially released. We announced experimental support in January and since then we’ve been making improvements based on feedback from early adopters.

With this release gRPC-Web graduates to a fully supported component of the grpc-dotnet project and is ready for production. Use gRPC in the browser with gRPC-Web and .NET today.

Getting started

Developers who are brand new to gRPC should check out Create a gRPC client and server in ASP.NET Core. This tutorial walks through creating your first gRPC client and server using .NET.

If you already have a gRPC app then the Use gRPC in browser apps article shows how to add gRPC-Web to a .NET gRPC server.

What are gRPC and gRPC-Web

gRPC is a modern high-performance RPC (Remote Procedure Call) framework. gRPC is based on HTTP/2, Protocol Buffers and other modern standards-based technologies. gRPC is an open standard and is supported by many programming languages, including .NET.

It is currently impossible to implement the gRPC HTTP/2 spec in the browser because there are no browser APIs with enough fine-grained control over requests. gRPC-Web is a standardized protocol that solves this problem and makes gRPC usable in the browser. gRPC-Web brings many of gRPC’s great features, like small binary messages and contract-first APIs, to modern browser apps.

New opportunities with gRPC-Web

gRPC-Web is designed to make gRPC available in more scenarios. These include:

  • Call ASP.NET Core gRPC apps from the browser – Browser APIs can’t call gRPC HTTP/2. gRPC-Web offers a compatible alternative.
    • JavaScript SPAs
    • .NET Blazor WebAssembly apps
  • Host ASP.NET Core gRPC apps in IIS and Azure App Service – Some servers, such as IIS and Azure App Service, currently can’t host gRPC services. While this is actively being worked on, gRPC-Web offers an interesting alternative that works in every environment today.
  • Call gRPC from non-.NET Core platforms – HTTP/2 is not supported by HttpClient on all .NET platforms. gRPC-Web can be used to call gRPC services from Blazor and Xamarin.

gRPC loves Blazor and .NET

Blazor is compatible with gRPC-Web. (gRPC is a registered trademark of The Linux Foundation.)

We’ve worked with the Blazor team to make gRPC-Web a great end-to-end developer experience when used in Blazor WebAssembly apps. Not only will gRPC tooling automatically generate strongly typed clients for you to call gRPC services from your Blazor app, but gRPC offers significant performance benefits over JSON.

A great example of the performance benefits in action is Blazor’s default template app. The data transferred on the fetch data page is halved when gRPC is used instead of JSON. Data size is reduced from 627 bytes down to 309 bytes.

Developer tools showing gRPC-Web transfer size

The performance gain here comes from gRPC’s efficient binary serialization over traditional text-based JSON. gRPC-Web is an exciting opportunity to improve rich browser-based apps.

Try gRPC-Web with ASP.NET Core today

For more information about gRPC-Web, check out the documentation, or try out a sample app that uses gRPC-Web.

gRPC-Web for .NET is out on NuGet now:

We look forward to seeing what you create with .NET, gRPC and now gRPC-Web!


A new experiment: Call .NET gRPC services from the browser with gRPC-Web


James

I’m excited to announce experimental support for gRPC-Web with .NET. gRPC-Web allows gRPC to be called from browser-based apps like JavaScript SPAs or Blazor WebAssembly apps.

gRPC-Web for .NET promises to bring many of gRPC’s great features to browser apps:

  • Strongly-typed code-generated clients
  • Compact Protobuf messages
  • Server streaming

What is gRPC-Web

It is impossible to implement the gRPC HTTP/2 spec in the browser because there is no browser API with enough fine-grained control over HTTP requests. gRPC-Web solves this problem by being compatible with HTTP/1.1 and HTTP/2.

gRPC-Web is not a new technology. There is a stable gRPC-Web JavaScript client, and a proxy for translating between gRPC and gRPC-Web for services. The new experimental packages allow an ASP.NET Core gRPC app to support gRPC-Web without a proxy, and allow the .NET Core gRPC client to call gRPC-Web services. (great for Blazor WebAssembly apps!)

New opportunities with gRPC-Web

  • Call ASP.NET Core gRPC apps from the browser – Browser APIs can’t call gRPC HTTP/2. gRPC-Web offers a compatible alternative.
    • JavaScript SPAs
    • .NET Blazor WebAssembly apps
  • Host ASP.NET Core gRPC apps in IIS and Azure App Service – Some servers, such as IIS and Azure App Service, currently can’t host gRPC services. While this is actively being worked on, gRPC-Web offers an interesting alternative that works in every environment today.
  • Call gRPC from non-.NET Core platforms – HttpClient doesn’t support HTTP/2 on some .NET platforms. gRPC-Web can be used to call gRPC services on these platforms (e.g. Blazor WebAssembly, Xamarin).

Note that there is a small performance cost to gRPC-Web, and two gRPC features are no longer supported: client streaming and bi-directional streaming. (server streaming is still supported!)

Server gRPC-Web instructions

If you are new to gRPC in .NET, there is a simple tutorial to get you started.

gRPC-Web does not require any changes to your services; the only modification is startup configuration. To enable gRPC-Web with an ASP.NET Core gRPC service, add a reference to the Grpc.AspNetCore.Web package, then configure the app by adding the UseGrpcWeb() middleware and calling EnableGrpcWeb() on gRPC endpoints in the startup file:

Startup.cs

public void ConfigureServices(IServiceCollection services)
{
    services.AddGrpc();
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();

    // Add gRPC-Web middleware after routing and before endpoints
    app.UseGrpcWeb();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<GreeterService>().EnableGrpcWeb();
    });
}

Some additional configuration may be required to call gRPC-Web from the browser, such as configuring the app to support CORS.
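The documentation covers that configuration in detail. As a rough sketch (the "AllowAll" policy name below is just an example, not something the framework defines), a gRPC-Web service that accepts browser calls from another origin might configure CORS like this:

public void ConfigureServices(IServiceCollection services)
{
    services.AddGrpc();

    // Allow cross-origin browser calls and expose the gRPC-Web response headers
    // so the JavaScript or Blazor client can read the call status.
    services.AddCors(o => o.AddPolicy("AllowAll", builder =>
    {
        builder.AllowAnyOrigin()
               .AllowAnyMethod()
               .AllowAnyHeader()
               .WithExposedHeaders("Grpc-Status", "Grpc-Message", "Grpc-Encoding", "Grpc-Accept-Encoding");
    }));
}

public void Configure(IApplicationBuilder app)
{
    app.UseRouting();
    app.UseGrpcWeb();
    app.UseCors();

    app.UseEndpoints(endpoints =>
    {
        endpoints.MapGrpcService<GreeterService>()
                 .EnableGrpcWeb()
                 .RequireCors("AllowAll");
    });
}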

Client gRPC-Web instructions

The JavaScript gRPC-Web client has instructions for setting up a gRPC-Web client to use in browser JavaScript SPAs.

Calling gRPC-Web with a .NET client is the same as regular gRPC; the only modification is how the channel is created. To enable gRPC-Web, add a reference to the Grpc.Net.Client.Web package. Configure the channel to use the GrpcWebHandler:

// Configure a channel to use gRPC-Web
var handler = new GrpcWebHandler(GrpcWebMode.GrpcWebText, new HttpClientHandler());
var channel = GrpcChannel.ForAddress("https://localhost:5001",
    new GrpcChannelOptions { HttpClient = new HttpClient(handler) });

var client = new Greeter.GreeterClient(channel);
var response = await client.SayHelloAsync(new HelloRequest { Name = ".NET" });

To see gRPC-Web with .NET in action, take a moment to read a great blog post written by Steve Sanderson that uses gRPC-Web in Blazor WebAssembly.

Try gRPC-Web with ASP.NET Core today

Preview packages are on NuGet:

Documentation for using gRPC-Web with .NET Core can be found here.

gRPC-Web for .NET is an experimental project, not a committed product. We want to test that our approach to implementing gRPC-Web works, and get feedback on whether this approach is useful to .NET developers compared to the traditional way of setting up gRPC-Web via a proxy. Please add your feedback at https://github.com/grpc/grpc-dotnet to ensure we build something that developers like and are productive with.

Thanks!



gRPC vs HTTP APIs


James

ASP.NET Core now enables developers to build gRPC services. gRPC is an opinionated contract-first remote procedure call framework, with a focus on performance and developer productivity. gRPC integrates with ASP.NET Core 3.0, so you can use your existing ASP.NET Core logging, configuration, and authentication patterns to build new gRPC services.

This blog post compares gRPC to JSON HTTP APIs, discusses gRPC’s strengths and weaknesses, and looks at when you could use gRPC to build your apps.

gRPC strengths

Developer productivity

With gRPC services, a client application can directly call methods on a server app on a different machine as if it were a local object. gRPC is based around the idea of defining a service, specifying the methods that can be called remotely with their parameters and return types. The server implements this interface and runs a gRPC server to handle client calls. On the client, a strongly-typed gRPC client is available that provides the same methods as the server.

gRPC is able to achieve this through first-class support for code generation. A core part of gRPC development is the .proto file, which defines the contract of gRPC services and messages using the Protobuf interface definition language (IDL):

Greet.proto

// The greeting service definition.
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name.
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}

Protobuf IDL is a language-neutral syntax, so it can be shared between gRPC services and clients implemented in different languages. gRPC frameworks use the .proto file to generate a service base class, messages, and a complete client. Using the generated strongly-typed Greeter client to call the service:

Program.cs

var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);

var reply = await client.SayHelloAsync(new HelloRequest { Name = "World" });
Console.WriteLine("Greeting: " + reply.Message);

By sharing the .proto file between the server and client, messages and client code can be generated from end to end. Code generation of the client eliminates duplication of messages on the client and server, and creates a strongly-typed client for you. Not having to write a client saves significant development time in applications with many services.

Performance

gRPC messages are serialized using Protobuf, an efficient binary message format. Protobuf serializes very quickly on the server and client. Protobuf serialization results in small message payloads, important in limited bandwidth scenarios like mobile apps.

gRPC requires HTTP/2, a major revision of HTTP that provides significant performance benefits over HTTP 1.x:

  • Binary framing and compression. The HTTP/2 protocol is compact and efficient both in sending and receiving.
  • Multiplexing of multiple HTTP/2 calls over a single TCP connection. Multiplexing eliminates head-of-line blocking at the application layer.

Real-time services

HTTP/2 provides a foundation for long-lived, real-time communication streams. gRPC provides first-class support for streaming through HTTP/2.

A gRPC service supports all streaming combinations:

  • Unary (no streaming)
  • Server to client streaming
  • Client to server streaming
  • Bidirectional streaming
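
As a rough illustration of the server streaming shape in C# (the Ticker service, messages, and Time field below are hypothetical examples, not part of the Greeter service above):

// Hypothetical service generated from a proto containing:
//   rpc Subscribe (TickRequest) returns (stream TickReply);
public class TickerService : Ticker.TickerBase
{
    public override async Task Subscribe(TickRequest request,
        IServerStreamWriter<TickReply> responseStream, ServerCallContext context)
    {
        // Push one message per second until the client cancels or disconnects.
        // Cancellation surfaces through context.CancellationToken (Task.Delay then throws,
        // which gRPC reports as a cancelled call).
        while (!context.CancellationToken.IsCancellationRequested)
        {
            await responseStream.WriteAsync(new TickReply { Time = DateTime.UtcNow.ToString("O") });
            await Task.Delay(TimeSpan.FromSeconds(1), context.CancellationToken);
        }
    }
}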

Note that the concept of broadcasting a message out to multiple connections doesn’t exist natively in gRPC. For example, in a chat room where new chat messages should be sent to all clients in the chat room, each gRPC call is required to individually stream new chat messages to the client. SignalR is a useful framework for this scenario. SignalR has the concept of persistent connections and built-in support for broadcasting messages.

Deadline/timeouts and cancellation

gRPC allows clients to specify how long they are willing to wait for an RPC to complete. The deadline is sent to the server, and the server can decide what action to take if the deadline is exceeded. For example, the server might cancel in-progress gRPC/HTTP/database requests on timeout.

Propagating the deadline and cancellation through child gRPC calls helps enforce resource usage limits.
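For example, with the generated Greeter client from earlier, a deadline can be passed per call; the catch clause below is the standard Grpc.Core pattern for a missed deadline (a minimal sketch):

try
{
    var reply = await client.SayHelloAsync(
        new HelloRequest { Name = "World" },
        deadline: DateTime.UtcNow.AddSeconds(5));
}
catch (RpcException ex) when (ex.StatusCode == StatusCode.DeadlineExceeded)
{
    // The deadline elapsed before the call completed. The server sees the
    // deadline too and can stop any in-progress work for this call.
}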

gRPC weaknesses

Limited browser support

gRPC has excellent cross-platform support! gRPC implementations are available for every programming language in common usage today. However, one place you can’t call a gRPC service from is a browser. gRPC heavily uses HTTP/2 features, and no browser provides the level of control required over web requests to support a gRPC client. For example, browsers do not allow a caller to require that HTTP/2 be used, or provide access to underlying HTTP/2 frames.

gRPC-Web is an additional technology from the gRPC team that provides limited gRPC support in the browser. gRPC-Web consists of two parts: a JavaScript client that supports all modern browsers, and a gRPC-Web proxy on the server. The gRPC-Web client calls the proxy and the proxy will forward on the gRPC requests to the gRPC server.

Not all of gRPC’s features are supported by gRPC-Web. Client and bidirectional streaming aren’t supported, and there is limited support for server streaming.

Not human readable

HTTP API requests using JSON are sent as text and can be read and created by humans.

gRPC messages are encoded with Protobuf by default. While Protobuf is efficient to send and receive, its binary format isn’t human readable. Protobuf requires the message’s interface description, specified in the .proto file, to properly deserialize. Additional tooling is required to analyze Protobuf payloads on the wire and to compose requests by hand.

Features such as server reflection and the gRPC command line tool exist to assist with binary Protobuf messages. Also, Protobuf messages support conversion to and from JSON. The built-in JSON conversion provides an efficient way to convert Protobuf messages to and from human readable form when debugging.
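For example, the HelloReply message from the earlier Greet.proto can be round-tripped through JSON with the Google.Protobuf library (a small sketch):

using Google.Protobuf;

var reply = new HelloReply { Message = "Hello World" };

// Protobuf -> JSON, useful when you want a human readable view of a message
string json = JsonFormatter.Default.Format(reply);

// JSON -> Protobuf
HelloReply parsed = JsonParser.Default.Parse<HelloReply>(json);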

gRPC recommended scenarios

gRPC is well suited to the following scenarios:

  • Microservices – gRPC is designed for low latency and high throughput communication. gRPC is great for lightweight microservices where efficiency is critical.
  • Point-to-point real-time communication – gRPC has excellent support for bidirectional streaming. gRPC services can push messages in real-time without polling.
  • Polyglot environments – gRPC tooling supports all popular development languages, making gRPC a good choice for multi-language environments.
  • Network constrained environments – gRPC messages are serialized with Protobuf, a lightweight message format. A gRPC message is always smaller than an equivalent JSON message.

Conclusion

gRPC is a powerful new tool for ASP.NET Core developers. While gRPC is not a complete replacement for HTTP APIs, it offers improved productivity and performance benefits in some scenarios.

gRPC on ASP.NET Core is available now! If you are interested in learning more about gRPC, check out these resources:



Upcoming SameSite Cookie Changes in ASP.NET and ASP.NET Core


Barry

SameSite is a 2016 extension to HTTP cookies intended to mitigate cross site request forgery (CSRF). The original design was an opt-in feature which could be used by adding a new SameSite property to cookies. It had two values, Lax and Strict. Setting the value to Lax indicated the cookie should be sent on navigation within the same site, or through GET navigation to your site from other sites. A value of Strict limited the cookie to requests which only originated from the same site. Not setting the property at all placed no restrictions on how the cookie flowed in requests. OpenIdConnect authentication operations (e.g. login, logout), and other features that send POST requests from an external site to the site requesting the operation, can use cookies for correlation and/or CSRF protection. These operations would need to opt-out of SameSite, by not setting the property at all, to ensure these cookies will be sent during their specialized request flows.

Google is now updating the standard and implementing its proposed changes in an upcoming version of Chrome. The change adds a new SameSite value, “None”, and changes the default behavior to “Lax”. This breaks OpenIdConnect logins, and potentially other features your web site may rely on; these features will have to use cookies whose SameSite property is set to a value of “None”. However, browsers which adhere to the original standard and are unaware of the new value behave differently from browsers which use the new standard, because the SameSite standard states that if a browser sees a value for SameSite it does not understand, it should treat that value as “Strict”. This means your .NET website will now have to add user agent sniffing to decide whether to send the new None value or not send the attribute at all.

.NET will issue updates to change the behavior of its SameSite attribute behavior in .NET 4.7.2 and in .NET Core 2.1 and above to reflect Google’s introduction of a new value. The updates for the .NET Framework will be available on November 19th as an optional update via Microsoft Update and WSUS if you use the “Check for Update” functionality. On December 10th it will become widely available and appear in Microsoft Update without you having to specifically check for updates. .NET Core updates will be available with .NET Core 3.1 starting with preview 1, in November.

.NET Core 3.1 will contain an updated enum definition, SameSiteMode.Unspecified, which will not set the SameSite property.
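As a rough ASP.NET Core sketch (the cookie name and value here are hypothetical), a cookie that must flow on cross-site requests now needs to opt in explicitly, while leaving the attribute off means using SameSiteMode.Unspecified (or (SameSiteMode)(-1) before 3.1):

// Inside a controller action or Razor page handler
Response.Cookies.Append("correlation", correlationValue, new CookieOptions
{
    SameSite = SameSiteMode.None, // sent on cross-site POSTs by browsers that understand "None"
    Secure = true                 // the new Chrome behavior requires Secure with SameSite=None
});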

The OpenIdConnect middleware for Microsoft.Owin v4.1 and .NET Core will be updated at the same time as the .NET Framework and .NET Core updates; however, we cannot introduce the user agent sniffing code into the framework, so it must be implemented in your site code. The implementation of agent sniffing will vary according to the version of ASP.NET or ASP.NET Core you are using and the browsers you wish to support.

For ASP.NET 4.7.2 with Microsoft.Owin 4.1.0, agent sniffing can be implemented using ICookieManager:

public class SameSiteCookieManager : ICookieManager
{
    private readonly ICookieManager _innerManager;

    public SameSiteCookieManager() : this(new CookieManager())
    {
    }

    public SameSiteCookieManager(ICookieManager innerManager)
    {
        _innerManager = innerManager;
    }

    public void AppendResponseCookie(IOwinContext context, string key, string value, CookieOptions options)
    {
        CheckSameSite(context, options);
        _innerManager.AppendResponseCookie(context, key, value, options);
    }

    public void DeleteCookie(IOwinContext context, string key, CookieOptions options)
    {
        CheckSameSite(context, options);
        _innerManager.DeleteCookie(context, key, options);
    }

    public string GetRequestCookie(IOwinContext context, string key)
    {
        return _innerManager.GetRequestCookie(context, key);
    }

    private void CheckSameSite(IOwinContext context, CookieOptions options)
    {
        if (DisallowsSameSiteNone(context) && options.SameSite == SameSiteMode.None)
        {
            options.SameSite = null;
        }
    }

    public static bool DisallowsSameSiteNone(IOwinContext context)
    {
        // TODO: Use your User Agent library of choice here.
        var userAgent = context.Request.Headers["User-Agent"];
        return userAgent.Contains("BrokenUserAgent") || userAgent.Contains("BrokenUserAgent2");
    }
}

And then configure the OpenIdConnect settings to use the new CookieManager:

app.UseOpenIdConnectAuthentication(new OpenIdConnectAuthenticationOptions
{
    // … Your preexisting options …
    CookieManager = new SameSiteCookieManager(new SystemWebCookieManager())
});

SystemWebCookieManager will need the .NET 4.7.2 or later SameSite patch installed to work correctly.

For ASP.NET Core you should install the patches and then implement the agent sniffing code within a cookie policy. For versions prior to 3.1 replace SameSiteMode.Unspecified with (SameSiteMode)(-1).

private void CheckSameSite(HttpContext httpContext, CookieOptions options)
{
    if (options.SameSite > SameSiteMode.Unspecified)
    {
        var userAgent = httpContext.Request.Headers["User-Agent"].ToString();
        // TODO: Use your User Agent library of choice here.
        if (DisallowsSameSiteNone(userAgent)) // see the user agent checks further below
        {
            // For .NET Core < 3.1 set SameSite = (SameSiteMode)(-1)
            options.SameSite = SameSiteMode.Unspecified;
        }
    }
}

public void ConfigureServices(IServiceCollection services)
{
    services.Configure<CookiePolicyOptions>(options =>
    {
        options.MinimumSameSitePolicy = SameSiteMode.Unspecified;
        options.OnAppendCookie = cookieContext =>
            CheckSameSite(cookieContext.Context, cookieContext.CookieOptions);
        options.OnDeleteCookie = cookieContext =>
            CheckSameSite(cookieContext.Context, cookieContext.CookieOptions);
    });
}

public void Configure(IApplicationBuilder app)
{
    app.UseCookiePolicy(); // Before UseAuthentication or anything else that writes cookies.
    app.UseAuthentication();
    // …
}

In testing with the Azure Active Directory team, we have found that the following checks work for all the common user agents Azure Active Directory sees that don’t understand the new value.

public static bool DisallowsSameSiteNone(string userAgent)
{
    // Cover all iOS based browsers here. This includes:
    // - Safari on iOS 12 for iPhone, iPod Touch, iPad
    // - WkWebview on iOS 12 for iPhone, iPod Touch, iPad
    // - Chrome on iOS 12 for iPhone, iPod Touch, iPad
    // All of which are broken by SameSite=None, because they use the iOS networking stack
    if (userAgent.Contains("CPU iPhone OS 12") || userAgent.Contains("iPad; CPU OS 12"))
    {
        return true;
    }

    // Cover Mac OS X based browsers that use the Mac OS networking stack. This includes:
    // - Safari on Mac OS X.
    // This does not include:
    // - Chrome on Mac OS X
    // Because they do not use the Mac OS networking stack.
    if (userAgent.Contains("Macintosh; Intel Mac OS X 10_14") && userAgent.Contains("Version/") && userAgent.Contains("Safari"))
    {
        return true;
    }

    // Cover Chrome 50-69, because some versions are broken by SameSite=None,
    // and none in this range require it.
    // Note: this covers some pre-Chromium Edge versions,
    // but pre-Chromium Edge does not require SameSite=None.
    if (userAgent.Contains("Chrome/5") || userAgent.Contains("Chrome/6"))
    {
        return true;
    }

    return false;
}

This browser list is by no means canonical and you should validate that the common browsers and other user agents your system supports behave as expected once the update is in place.

Chrome 80 is scheduled to turn on the new behavior in February or March 2020, including a temporary mitigation added in Chrome 79 Beta. If you want to test the new behavior without the mitigation, use Chromium 76. Older versions of Chromium are available for download.

If you cannot update your framework versions by the time Chrome turns on the new behavior in early 2020, you may be able to change your OpenIdConnect flow to a code flow, rather than the default implicit flow that ASP.NET and ASP.NET Core use, but this should be viewed as a temporary measure.
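As a hedged sketch of that temporary mitigation in ASP.NET Core (your existing OpenIdConnect registration and option values will differ):

services.AddAuthentication()
    .AddOpenIdConnect(options =>
    {
        // ... your existing OpenIdConnect options ...
        options.ResponseType = "code"; // use the code flow instead of the default implicit flow
    });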

We strongly encourage you to download the updated .NET Framework and .NET Core versions when they become available in November and start planning your update before Chrome’s changes are rolled out.



Redesigning Configuration Refresh for Azure App Configuration


Overview

Since its inception, the .NET Core configuration provider for Azure App Configuration has provided the capability to monitor changes and sync them to the configuration within a running application. We recently redesigned this functionality to allow for on-demand refresh of the configuration. The new design paves the way for smarter applications that only refresh the configuration when necessary. As a result, inactive applications no longer have to monitor for configuration changes unnecessarily.
 

Initial design: Timer-based watch

In the initial design, configuration was kept in sync with Azure App Configuration using a watch mechanism which ran on a timer. At the time of initialization of the Azure App Configuration provider, users could specify the configuration settings to be updated and an optional polling interval. In case the polling interval was not specified, a default value of 30 seconds was used.

public static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Load settings from Azure App Configuration
            // Set up the provider to listen for changes triggered by a sentinel value
            var settings = config.Build();
            string appConfigurationEndpoint = settings["AzureAppConfigurationEndpoint"];

            config.AddAzureAppConfiguration(options =>
            {
                options.ConnectWithManagedIdentity(appConfigurationEndpoint)
                       .Use(keyFilter: "WebDemo:*")
                       .WatchAndReloadAll(key: "WebDemo:Sentinel", label: LabelFilter.Null);
            });

            settings = config.Build();
        })
        .UseStartup<Startup>()
        .Build();
}

For example, in the above code snippet, Azure App Configuration would be pinged every 30 seconds for changes. These calls would be made irrespective of whether the application was active or not. As a result, there would be unnecessary usage of network and CPU resources within inactive applications. Applications needed a way to trigger a refresh of the configuration on demand in order to be able to limit the refreshes to active applications. Then unnecessary checks for changes could be avoided.

This timer-based watch mechanism had the following fundamental design flaws.

  1. It could not be invoked on-demand.
  2. It continued to run in the background even in applications that could be considered inactive.
  3. It promoted constant polling of configuration rather than a more intelligent approach of updating configuration when applications are active or need to ensure freshness.
     

New design: Activity-based refresh

The new refresh mechanism allows users to keep their configuration updated using a middleware to determine activity. As long as the ASP.NET Core web application continues to receive requests, the configuration settings continue to be updated from the configuration store.

The application can be configured to trigger refresh for each request by adding the Azure App Configuration middleware from package Microsoft.Azure.AppConfiguration.AspNetCore in your application’s startup code.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAzureAppConfiguration();
    app.UseMvc();
}

At the time of initialization of the configuration provider, the user can use the ConfigureRefresh method to register the configuration settings to be updated with an optional cache expiration time. In case the cache expiration time is not specified, a default value of 30 seconds is used.

public static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Load settings from Azure App Configuration
            // Set up the provider to listen for changes triggered by a sentinel value
            var settings = config.Build();
            string appConfigurationEndpoint = settings["AzureAppConfigurationEndpoint"];

            config.AddAzureAppConfiguration(options =>
            {
                options.ConnectWithManagedIdentity(appConfigurationEndpoint)
                       .Use(keyFilter: "WebDemo:*")
                       .ConfigureRefresh((refreshOptions) =>
                       {
                           // Indicates that all settings should be refreshed when the given key has changed
                           refreshOptions.Register(key: "WebDemo:Sentinel", label: LabelFilter.Null, refreshAll: true);
                       });
            });

            settings = config.Build();
        })
        .UseStartup<Startup>()
        .Build();
}

In order to keep the settings updated and avoid unnecessary calls to the configuration store, an internal cache is used for each setting. Until the cached value of a setting has expired, the refresh operation does not update the value, even when the value has changed in the configuration store.
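For example, the cache expiration can be tuned when registering settings for refresh (a sketch using the provider's SetCacheExpiration option; the exact API surface may differ across package versions):

config.AddAzureAppConfiguration(options =>
{
    options.ConnectWithManagedIdentity(appConfigurationEndpoint)
           .Use(keyFilter: "WebDemo:*")
           .ConfigureRefresh(refreshOptions =>
           {
               // Check the store at most once every 5 minutes, and only while requests arrive
               refreshOptions.Register(key: "WebDemo:Sentinel", label: LabelFilter.Null, refreshAll: true)
                             .SetCacheExpiration(TimeSpan.FromMinutes(5));
           });
});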

Try it now!

For more information about Azure App Configuration, check out the following resources. You can find step-by-step tutorials that will help you get started with dynamic configuration using the new refresh mechanism within minutes. Please let us know what you think by filing issues on GitHub.

Overview: Azure App configuration
Tutorial: Use dynamic configuration in an ASP.NET Core app
Tutorial: Use dynamic configuration in a .NET Core app
Related Blog: Configuring a Server-side Blazor app with Azure App Configuration


Software Engineer, Azure App Configuration



Azure SignalR Service now supports ASP.NET!


Zhidi

We’ve just shipped the official version of the SignalR Service SDK for ASP.NET support:

Azure SignalR Service is a fully managed Azure service for real-time messaging. It is the preferred way to scale an ASP.NET Core SignalR application. However, SignalR Service is based on SignalR for ASP.NET Core 2.0, which is not 100% compatible with ASP.NET SignalR. Some code changes and the proper versions of dependent libraries are needed to make an ASP.NET SignalR application work with SignalR Service.

We have received a lot of usage feedback from customers since we announced preview support for ASP.NET at Microsoft Ignite 2018. Today, we are excited to announce that we have released the generally available version 1.0.0 of the ASP.NET support SDK for Azure SignalR Service!

This diagram shows the typical architecture for using Azure SignalR Service with an application server written either in ASP.NET Core or, now, in ASP.NET.


For a self-hosted SignalR application, the application server listens for and serves client connections directly. With SignalR Service, the application server only responds to clients’ negotiate requests and redirects clients to SignalR Service to establish the persistent client-server connections.

Using the ASP.NET support for Azure SignalR Service you will be able to:

  • Keep your SignalR application on ASP.NET, while working with the fully managed, ASP.NET Core based SignalR Service.
  • Change a few lines of SignalR API code to switch to SignalR Service instead of self-hosted SignalR Hubs.
  • Leverage Azure SignalR Service’s built-in features and tools to help operate the SignalR application, with guaranteed SLA.

To receive the full benefit from the new ASP.NET support feature, please download and upgrade your SDKs to the latest supported versions:

  • .NET: 4.6.1+
  • Microsoft.AspNet.SignalR.*: 2.4.1
  • Microsoft.Azure.SignalR.AspNet: 1.0.0

Many factors, including non-technical ones, make migrating a web application from ASP.NET to ASP.NET Core difficult.

The ASP.NET support for Azure SignalR Service enables ASP.NET SignalR application developers to switch to SignalR Service with minimal code changes.
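As a rough sketch of what that switch looks like in an existing OWIN startup class (this assumes the connection string is stored in the Azure:SignalR:ConnectionString app setting, which is the setting name the SDK looks for by default):

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Replace the usual app.MapSignalR() call with MapAzureSignalR
        // from the Microsoft.Azure.SignalR.AspNet package.
        app.MapAzureSignalR(this.GetType().FullName);
    }
}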

Some APIs and features are no longer supported:

  • Automatic reconnects
  • Forever Frame transport
  • HubState
  • PersistentConnection class
  • GlobalHost object
  • HubPipeline module
  • Client side Internet Explorer support before Microsoft Internet Explorer 11

ASP.NET support is focused on compatibility, so not all new ASP.NET Core SignalR features are supported. To name a few, MessagePack and streaming are only available for ASP.NET Core SignalR applications.

SignalR Service can be configured in different service modes: Classic, Default, and Serverless. For ASP.NET support, the Serverless mode is not supported.

For a complete feature comparison between ASP.NET SignalR and ASP.NET Core SignalR, the proper SDK versions to use in each case, and the recommended alternatives for features discontinued in ASP.NET Core SignalR, please refer to the documentation.

We’d like to hear your feedback and comments. You can reach the product team at the GitHub repo, or by email.

Zhidi Shang

Principal Program Manager, Azure SignalR Service



Microsoft’s Developer Blogs are Getting an Update

In the coming days, we’ll be moving our developer blogs to a new platform with a modern, clean design and powerful features that will make it easy for you to discover and share great content. This week, you’ll see the Visual Studio, IoTDev, and Premier Developer blogs move to a new URL  – while additional developer blogs will transition over the coming weeks. You’ll continue to receive all the information and news from our teams, including access to all past posts. Your RSS feeds will continue to seamlessly deliver posts and any of your bookmarked links will re-direct to the new URL location.

We’d love your feedback

Our most inspirational ideas have come from this community and we want to make sure our developer blogs are exactly what you need. Share your thoughts about the current blog design by taking our Microsoft Developer Blogs survey and tell us what you think! We would also love your suggestions for future topics that our authors could write about. Please use the survey to share what kinds of information, tutorials, or features you would like to see covered in future posts.

Frequently Asked Questions

For the most-asked questions, see the FAQ below. If you have any additional questions, ideas, or concerns that we have not addressed, please let us know by submitting feedback through the Developer Blogs FAQ page.

Will Saved/Bookmarked URLs Redirect?

Yes. All existing URLs will auto-redirect to the new site. If you discover any broken links, please report them through the FAQ feedback page.

Will My Feed Reader Still Send New Posts?

Yes, your RSS feed will continue to work through your current set-up. If you encounter any issues with your RSS feed, please report it with details about your feed reader.

Which Blogs Are Moving?

This migration involves the majority of our Microsoft developer blogs so expect to see changes to our Visual Studio, IoTDev, and Premier Developer blogs this week. The rest will be migrated in waves over the next few weeks.

We appreciate all of the feedback so far and look forward to showing you what we’ve been working on! For additional information about this blog migration, please refer to our Developer Blogs FAQ.


When should you right click publish

Some people say ‘friends don’t let friends right click publish’ but is that true? If they mean that there are great benefits to setting up a CI/CD workflow, that’s true and we will talk more about these benefits in just a minute. First, let’s remind ourselves that the goal isn’t always coming up with the best long-term solution.

Technology moves fast, and as developers we are constantly learning and experimenting with new languages, frameworks, and platforms. Sometimes we just need to prototype something rather quickly in order to evaluate its capabilities. That’s a classic scenario where right click publish in Visual Studio provides the right balance between how much time you are going to spend (just a few seconds) and the options that become available to you (quite a few, depending on the project type), such as publishing to IIS, FTP, and Folder (great for xcopy deployments and integration with other tools).

Continuing with the theme of prototyping and experimenting, right click publish is the perfect way for existing Visual Studio customers to evaluate Azure App Service (PAAS). By following the right click publish flow you get the opportunity to provision new instances in Azure and publish your application to them without leaving Visual Studio:

When the right click publish flow has been completed, you immediately have a working application running in the cloud:

Platform evaluations and experiments take time and during that time, right click publish helps you focus on the things that matter. When you are ready and the demand rises for automation, repeatability and traceability that’s when investing into a CI/CD workflow starts making a lot of sense:

  • Automation: builds are kicked off and tests are executed as soon as you check in your code
  • Repeatability: it’s impossible to produce binaries without having the source code checked in
  • Traceability: each build can be traced back to a specific version of the codebase in source control, which can then be compared with another build to figure out the differences

The right time to adopt CI/CD typically coincides with a maturity milestone: either a milestone for the application or for the team that is building it. If you are the only developer working on your application you may feel that setting up CI/CD is overkill, but automation and traceability can be extremely valuable even to a single developer once you start shipping to your customers and you have to support multiple versions in production.

With a CI/CD workflow you are guaranteed that all binaries produced by a build can be linked back to the matching version of the source code. You can go from a customer bug report to looking at the matching source code easily, quickly and with certainty. In addition, the automation aspects of CI/CD save you valuable time performing common tasks like running tests and deploying to testing and pre-production environments, lowering the overhead of good practices that ensure high quality.

As always, we want to see you successful, so if you run into any issues using publish in Visual Studio or setting up your CI/CD workload, let me know in the comment section below and I’ll do my best to get your question answered.


Announcing CosmosDB Table Async OutputCache Provider Release and ASP.NET Providers Connected Service Extension Update

Through the years, the ASP.NET team has been releasing new ASP.NET SessionState and OutputCache providers to help developers make their web applications ready for the cloud environment. Today we are announcing a new OutputCache provider, Microsoft.AspNet.OutputCache.CosmosDBTableAsyncOutputCacheProvider, to enable your applications to store OutputCache data in CosmosDB. It supports both Azure CosmosDB Table and Azure Storage Table.

How to Use CosmosDBTableAsyncOutputCacheProvider

  1. Open the NuGet package manager, search for Microsoft.AspNet.OutputCache.CosmosDBTableAsyncOutputCacheProvider, and install it. Make sure that your application targets .NET Framework 4.6.2 or higher. Download the .NET Framework 4.6.2 Developer Pack if you do not already have it installed.
  2. The package depends on the Microsoft.AspNet.OutputCache.OutputCacheModuleAsync and Microsoft.Azure.CosmosDB.Table NuGet packages. After you install the package, both Microsoft.AspNet.OutputCache.CosmosDBTableAsyncOutputCacheProvider.dll and its dependent assemblies will be copied to the Bin folder.
  3. Open the web.config file; you will see two new configuration sections added, caching and appSettings. The first one configures the OutputCache provider; you may want to update the table name. The provider will create the table if it doesn’t exist on the configured Azure service instance. The second one is the appSettings entry for the storage connection string. Depending on the connection string you use, the provider can work with either Azure CosmosDB Table or Azure Storage Table.

ASP.NET Providers Connected Service Extension Update

Five months ago, we released the ASP.NET Providers Connected Service Visual Studio extension to help developers pick the right ASP.NET provider and configure it properly to work with Azure resources. Today we are releasing an update for this extension which enables you to configure the CosmosDB table OutputCache provider and the Redis cache OutputCache provider.

Summary

The new CosmosDBTableAsyncOutputCacheProvider NuGet package enables your ASP.NET application to leverage Azure CosmosDB and Azure Storage to store OutputCache data. The new ASP.NET Providers Connected Service extension adds support for more Azure-ready ASP.NET providers. Please try it today and let us know your feedback.


Use Hybrid Connections to Incrementally Migrate Applications to the Cloud

As the software industry shifts to running software in the cloud, organizations are looking to migrate existing applications from on-premises to the cloud. Last week at Microsoft’s Ignite conference, Paul Yuknewicz and I delivered a talk focused on how to get started migrating applications to Azure (watch the talk for free), where we walked through the business case for migrating to the cloud and how to choose the right hosting and data services.

If your application is a candidate for running in App Service, one of the most useful pieces of technology that we showed was Hybrid Connections. Hybrid Connections let you host part of your application in Azure App Service while calling back into resources and services not running in Azure (e.g. still on-premises). This enables you to try running a small part of your application in the cloud without having to move your entire application and all of its dependencies at once, which is usually time consuming and extremely difficult to debug when things don’t work. So, in this post I’ll show you how to host an ASP.NET front-end application in the cloud and configure a hybrid connection to connect back to a service on your local machine.

Publishing Our Sample App to the Cloud

For the purposes of this post, I’m going to use the Smart Hotel 360 App sample that uses an ASP.NET front end that calls a WCF service which then accesses a SQL Express LocalDB instance on my machine.

The first thing I need to do is publish the ASP.NET application to App Service. To do this, right click on the “SmartHotel.Registration.Web” project and choose “Publish”


The publish target dialog is already on App Service, and I want to create a new one, so I will just click the “Publish” button.

This will bring up the “Create App Service” dialog.  Next, I will click “Create” and wait for a minute while the resources in the cloud are created and the application is published.


When it’s finished publishing, my web browser will open to my published site. At this point, there will be an error loading the page since it cannot connect to the WCF service. To fix this we’ll add a hybrid connection.


Create the Hybrid Connection

To create the Hybrid Connection, I navigate to the App Service I just created in the Azure Portal. One quick way to do this is to click the “Manage in Cloud Explorer” link on the publish summary page.


Right click the site, and choose “Open in Portal” (You can manually navigate to the page by logging into the Azure portal, click App Services, and choose your site).


To create the hybrid connection:

Click the “Networking” tab in the Settings section on the left side of the App Service page

Click “Configure your hybrid connection endpoints” in the “Hybrid connections” section


Next, click “Add a hybrid connection”

Then click “Create a new hybrid connection”


Fill out the “Create new hybrid connection” form as follows:

  • Hybrid connection Name: any unique name that you want
  • Endpoint Host: This is the machine URL your application is currently using to connect to the on-premises resource. In this case, this is “localhost” (Note: per the documentation, use the hostname rather than a specific IP address if possible as it’s more robust)
  • Endpoint Port: The port the on-premises resource is listening on. In this case, the WCF service on my local machine is listening on 2901
  • Service Bus namespace: If you’ve previously configured hybrid connections you can re-use an existing one; in this case we’ll create a new one and give it a name


Click “OK”. It will take about 30 seconds to create the hybrid connection, when it’s done you’ll see it appear on the Hybrid connections page.

Configure the Hybrid Connection Locally

Now we need to install the Hybrid Connection Manager on the local machine. To do this, click the “Download connection manager” on the Hybrid connections page and install the MSI.


After the connection manager finishes installing, launch the “Hybrid Connections Manager UI”; it should appear in your Windows Start menu if you type “Hybrid Connections”. (If for some reason it doesn’t appear in the Start menu, launch it manually from “C:\Program Files\Microsoft\HybridConnectionManager <version#>”.)

Click the “Add a new Hybrid Connection” button in the Hybrid Connections Manager UI and login with the same credentials you used to publish your application.


Choose the subscription you used to publish your application from the “Subscription” dropdown, choose the hybrid connection you just created in the portal, and click “Save”.


In the overview, you should see the status say “Connected”. Note: If the state won’t change from “Not Connected”, I’ve found that rebooting my machine fixes this (it can take a few minutes to connect after the reboot).


Make sure everything is running correctly on your local machine, and then when we open the site running in App Service we can see that it loads with no error. In fact, we can even put a breakpoint in the GetTodayRegistrations() method of Service.svc.cs, hit F5 in Visual Studio, and when the page loads in App Service the breakpoint on the local machine is hit!


Conclusion

If you are looking to move applications to the cloud, I hope that this quick introduction to Hybrid Connections will enable you to try moving things incrementally. Additionally, you may find these resources helpful:

As always, if you have any questions, or problems let me know via Twitter, or in the comments section below.