
Beyond the printed form: unlocking insights from documents with Form Recognizer

Data extraction from printed forms is by now a tried and true technology. Form Recognizer extracts key-value pairs, tables, and text from documents such as W2 tax statements, oil and gas drilling well reports, completion reports, invoices, and purchase orders. However, real-world businesses often rely on a variety of documents for their day-to-day needs that are not always cleanly printed.

We are excited to announce the addition of handwritten and mixed-mode (printed and handwritten) support. Starting now, handling handwritten and mixed-mode forms is the new norm.

Extracting data from handwritten and mixed-mode content with Form Recognizer

Entire data sets that were inaccessible in the past due to the limitations of extraction technology now become available. The handwritten and mixed-mode capability of Form Recognizer is available in preview and enables you to extract structured data from forms filled in with handwritten text, such as:

  • Medical forms: New patient information, doctor notes.
  • Financial forms: Account opening forms, credit card applications.
  • Insurance: Claim forms, liability forms.
  • Manufacturing forms: Packaging slips, testing forms, quality forms.
  • And more.

Drawing on our vast experience in optical character recognition (OCR) and machine learning for form analysis, our experts created a state-of-the-art solution that goes beyond printed forms. The OCR technology behind the service supports both handwritten and printed text. Expanding the scope of Form Recognizer allows you to tap into previously uncharted territories by making new sources of data available to you. You can extract valuable business information from this newly available data, keeping you ahead of your competition.

Whether you are using Form Recognizer for the first time or have already integrated it into your organization, you now have an opportunity to create new business applications:

  • Expand your available data set: If you are only extracting data from machine printed forms, expand your total data set to mixed-mode forms and historic handwritten forms.
  • Create one application for a mix of documents: If you use a mix of handwritten and printed forms, you can create one application that applies across all your data.
  • Avoid manual digitization of handwritten forms: Original forms may be fed to Form Recognizer without any pre-processing, extracting the same key-value pairs and table data you would get from a machine-printed form, reducing costs, errors, and time (see the sketch below).
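As a rough illustration of what "no pre-processing" means in practice, the sketch below posts a scanned form to the Form Recognizer analyze endpoint and polls for the extracted key-value pairs and tables. The endpoint path, API version, model ID, and file name are illustrative placeholders; check the Form Recognizer documentation for the exact preview URLs and supported content types.

// Minimal sketch: submit a scanned (possibly handwritten) form and poll for the
// extracted key-value pairs and tables. The endpoint path, API version, and model ID
// below are illustrative placeholders; verify them against the service documentation.
using System;
using System.IO;
using System.Linq;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class AnalyzeFormSketch
{
    static async Task Main()
    {
        var endpoint = "https://<your-resource>.cognitiveservices.azure.com"; // placeholder
        var modelId  = "<your-trained-model-id>";                             // placeholder
        var apiKey   = Environment.GetEnvironmentVariable("FORM_RECOGNIZER_KEY");

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", apiKey);

        // 1. Send the raw scan exactly as captured; no pre-processing step is required.
        using var content = new ByteArrayContent(await File.ReadAllBytesAsync("contact-form.pdf"));
        content.Headers.ContentType = new MediaTypeHeaderValue("application/pdf");
        var submit = await client.PostAsync(
            $"{endpoint}/formrecognizer/v2.0-preview/custom/models/{modelId}/analyze", content);
        submit.EnsureSuccessStatusCode();

        // 2. Analysis runs asynchronously; poll the Operation-Location URL until it completes.
        var resultUrl = submit.Headers.GetValues("Operation-Location").First();
        string result;
        do
        {
            await Task.Delay(TimeSpan.FromSeconds(2));
            result = await client.GetStringAsync(resultUrl);
        }
        while (result.Contains("\"notStarted\"") || result.Contains("\"running\""));

        // 3. The JSON result contains the recognized key-value pairs and tables.
        Console.WriteLine(result);
    }
}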

Our customer: Avanade

Avanade values people as their most important asset. They are always on the lookout for talented and passionate professionals to grow their organization. One way they find these people is by attending external events, which may include university career fairs, trade shows, or technical conferences to name a few. 

During these events they often take the details of those interested in finding out more about Avanade, as well as their permission to contact them at a later date. Normally this is completed with a digital form on a set of tablets, but when the stand is particularly busy, they use a short paper form that attendees can fill in with their handwritten details. Unfortunately, these forms then had to be manually entered into the marketing database, requiring a considerable amount of time and resources. With the volume of potential new contacts at each event, multiplied by the number of events Avanade attends, this task can be daunting.

Azure Form Recognizer’s new handwritten support simplifies the process, giving Avanade peace of mind knowing no contact is lost and the information is there for them immediately.

In addition, Avanade integrated Form Recognizer as a skill within their cognitive search solution, enabling them to quickly use the service in their existing platform and follow-up with new leads, while their competitors may be spending time digitizing their handwritten forms.

An image of a handwritten form and the data extracted via Form Recognizer.

“Azure Form Recognizer takes a vast amount of effort out of the process, changing the task from data entry to data validation. By integrating Form Recognizer with Azure Search, we are also immediately able to use the service in our existing platforms. If we need to find and check a form for any reason, for example to check for a valid signature there, we can simply search by any of the fields like name or job title and jump straight to that form. In our initial tests, using Form Recognizer has reduced the time taken to digitize the forms and double check the entries by 35 percent, a number we only expect to get better as we work to optimize our tools to work hand in hand with the service, and add in more automation.” – Fergus Kidd, Emerging Technology Engineer, Avanade

Getting started

To learn more about Form Recognizer and the rest of the Azure AI ecosystem, please visit our website and read the documentation.

Get started by contacting us.

For additional questions please reach out to us at [email protected]


There’s power in the ‘Location of Things’ – find it with Azure Maps

The Internet of Things (IoT) is the beginning of accessing planetary-scale insights. With the mass adoption of IoT and the coming explosion of sensors, connectivity, and computing, humanity is on the cusp of a fully connected, intelligent world. We will be part of the generation that realizes a data-rich, algorithmically informed way of life the world has never seen. The value of this interconnectedness lies in helping people thrive. Bringing all of this information together with spatial intelligence has been challenging, to say the least. Until today.

Today, we’re unveiling a cross-Azure IoT collaboration that simplifies the use of location and spatial intelligence in conjunction with IoT messaging. The result gives customers the means to use Azure IoT services to stay better informed about where their “things” are. Azure IoT customers can now implement IoT spatial analytics using Azure Maps. Providing spatial intelligence to IoT devices means greater insight into not just what’s happening, but where it’s happening.

The map shows four points where the vehicle was outside the geofence, logged at regular time intervals.

Azure Maps provides geographic context for information and, as it pertains to IoT, geographic insights based on IoT data. Customers are using Azure Maps and Azure IoT to monitor the movement of assets and cross-reference the “things” with their location. For example, assume a truck is delivering refrigerated goods from New York City to Washington, D.C. A route is calculated to determine the path and duration the truck should take to deliver the goods. From the route, a geofence can be created and stored in Azure Maps. The black box tracking the vehicle sends telemetry to Azure IoT Hub, which, combined with Azure Maps, can determine whether the truck ever leaves the predetermined path. If it does, this could signal that something is wrong; a detour could be disastrous for refrigerated goods. Notifications of detours could be set up and communicated through Azure Event Grid and sent over email, text, or a myriad of other communication mediums.
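To make the truck example concrete, the sketch below shows roughly how a backend (for instance, code fed by IoT Hub messages) could ask the Azure Maps Spatial Geofence API whether a reported position is still inside a stored geofence. The query parameters, the geofence udid, and the response handling are sketched from the v1.0 Spatial API and should be treated as assumptions to verify against the current documentation.

// Sketch: check a vehicle's reported position against a geofence stored in Azure Maps.
// The udid identifies the uploaded geofence; the query parameters and response handling
// follow the v1.0 Spatial Geofence API and should be verified against current docs.
using System;
using System.Globalization;
using System.Net.Http;
using System.Threading.Tasks;

class GeofenceCheckSketch
{
    static readonly HttpClient Http = new HttpClient();

    static async Task<string> CheckGeofenceAsync(string deviceId, double lat, double lon)
    {
        var key  = Environment.GetEnvironmentVariable("AZURE_MAPS_KEY");
        var udid = "<geofence-udid>"; // placeholder: id of the geofence uploaded to Azure Maps

        var url = "https://atlas.microsoft.com/spatial/geofence/json?api-version=1.0" +
                  $"&subscription-key={key}&deviceId={deviceId}&udid={udid}" +
                  $"&lat={lat.ToString(CultureInfo.InvariantCulture)}" +
                  $"&lon={lon.ToString(CultureInfo.InvariantCulture)}" +
                  "&mode=EnterAndExit";

        // The response reports, per geofence geometry, the distance to its border
        // (negative values typically mean the point is inside). A real implementation
        // would parse the JSON and publish a notification through Event Grid when the
        // vehicle is outside the fence; this sketch just returns the raw payload.
        return await Http.GetStringAsync(url);
    }

    static async Task Main()
    {
        string result = await CheckGeofenceAsync("truck-042", 39.29, -76.61);
        Console.WriteLine(result);
    }
}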

When we talk about Azure IoT, we often talk about data (from sensors) which leads to insights (when computed) which leads to actions (a result of insights). With the Location of Things, we’re now talking about data from sensors which leads to insights which leads to actions, and where they are needed. Knowing where to take action has massive implications for cost efficiency and time management. When you know where you have issues or opportunities, you can make informed decisions about where to deploy resources, where to deploy inventory, or where to withdraw them. Run this over time with enough data and you have artificial intelligence you could deploy at the edge to help with real-time decision making. With enough data coming in fast enough, you could predict future opportunities and issues, and know where to deploy resources before you need them.

Location is a powerful component of providing insights. If you have a means of providing location in your IoT messages, you can start doing so immediately. If you don’t have location natively, you’d be surprised at how easily you can associate location with your sensors and devices. Reverse IP lookup (RevIP), Wi-Fi, and cell tower triangulation all provide a means of getting location into your IoT messages. Get that location data into the cloud and start gaining spatial insights today.


Latency is the new currency of the Cloud: announcing 31 new Azure edge sites

Providing users fast and reliable access to their cloud services, apps, and content is pivotal to a business’ success.

The latency when accessing cloud-based services can be an inhibitor to cloud adoption or migration. In most cases, this is caused by commercial internet connections that aren’t tailored to today’s global cloud needs. Through the deployment and operation of globally and strategically placed edge sites, Microsoft dramatically improves performance and the user experience when you access apps, content, or services such as Azure and Office 365 on the Microsoft global network.

Edge sites optimize network performance by providing local access points to and from the vast Microsoft global network, in many cases delivering a 10x acceleration when accessing and consuming cloud-based content and services from Microsoft.

What is the network edge?

Providing faster network access alone isn’t enough; applications need intelligent services to expedite and simplify how a global audience accesses and experiences their offerings. Edge sites give application development teams increased visibility and higher availability for the services that improve how they deliver global applications.

Edge sites benefit infrastructure and development teams in multiple key areas:

  • Improved optimization for application delivery through Azure Front Door (AFD). Microsoft recently announced AFD, which allows customers to define, manage, accelerate, and monitor global routing for web traffic with customizations for the best performance and instant global failover for application accessibility.
  • An enhanced customer experience via high-bandwidth access to Azure Blob storage, web applications, and live and on-demand video streams. Azure Content Delivery Network delivers high-bandwidth content by caching objects at the point of presence closest to the consumer.
  • Private connectivity and dedicated performance through Azure ExpressRoute. ExpressRoute provides up to 100 gigabits per second of fully redundant bandwidth directly to the Microsoft global network at select peering locations across the globe, making connecting to and through Azure a seamless and integrated experience for customers.

A diagram of an Azure Edge Site.

New edge sites

Today, we’re announcing the addition of 31 new edge sites, bringing the total to over 150 across more than 50 countries. We’re also adding 14 new meet-me sites to Azure ExpressRoute to further enable and expand access to dedicated private connections between customers’ on-premises environments and Azure.

A map showing upcoming and live edges.

More than two decades of building global network infrastructure have given us a keen awareness of globally distributed edge sites and their critical role in a business’ success.

By utilizing the expanding network of edge sites, Microsoft provides the regions generating more than 80 percent of global GDP with sub-30 millisecond latency. We are adding new edges every week, and our ambition is to provide this level of performance to our entire global audience.

This expansion proves its value further when workloads move to the cloud or when Microsoft cloud services such as Azure, Microsoft 365, and Xbox are used. By operating over a dedicated, premium wide area network, our customers avoid transferring data over the public internet, which improves security, optimizes traffic, and increases performance.

New edge sites

Country            City
Colombia           Bogota
Germany            Frankfurt, Munich
India              Hyderabad
Indonesia          Jakarta
Kenya              Nairobi
Netherlands        Amsterdam
New Zealand        Auckland
Nigeria            Lagos
Norway             Stavanger
United Kingdom     London
United States      Boston, Portland
Vietnam            Saigon

Upcoming edge sites

Country            City
Argentina          Buenos Aires
Egypt              Cairo
Germany            Dusseldorf
Israel             Tel Aviv
Italy              Rome
Japan              Tokyo
Norway             Oslo
Switzerland        Geneva
Turkey             Istanbul
United States      Detroit, Jacksonville, Las Vegas, Minneapolis, Nashville, Phoenix, Quincy (WA), San Diego
New ExpressRoute meet-me sites

Country            City
Canada             Vancouver
Colombia           Bogota
Germany            Berlin, Dusseldorf
Indonesia          Jakarta
Italy              Milan
Mexico             Queretaro (Mexico City)
Norway             Oslo, Stavanger
Switzerland        Geneva
Thailand           Bangkok
United States      Minneapolis, Phoenix, Quincy (WA)

With this latest announcement, Microsoft continues to offer cloud customers the fastest and most accessible global network, driving a competitive advantage for organizations accessing the global market and increased satisfaction for consumers.

Explore the Microsoft global network to learn about how it can benefit your organization today.


Redesigning Configuration Refresh for Azure App Configuration


Overview

Since its inception, the .NET Core configuration provider for Azure App Configuration has provided the capability to monitor changes and sync them to the configuration within a running application. We recently redesigned this functionality to allow for on-demand refresh of the configuration. The new design paves the way for smarter applications that only refresh the configuration when necessary. As a result, inactive applications no longer have to monitor for configuration changes unnecessarily.
 

Initial design: Timer-based watch

In the initial design, configuration was kept in sync with Azure App Configuration using a watch mechanism that ran on a timer. At the time of initialization of the Azure App Configuration provider, users could specify the configuration settings to be updated and an optional polling interval. If the polling interval was not specified, a default value of 30 seconds was used.

public static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Load settings from Azure App Configuration
            // Set up the provider to listen for changes triggered by a sentinel value
            var settings = config.Build();
            string appConfigurationEndpoint = settings["AzureAppConfigurationEndpoint"];

            config.AddAzureAppConfiguration(options =>
            {
                options.ConnectWithManagedIdentity(appConfigurationEndpoint)
                    .Use(keyFilter: "WebDemo:*")
                    .WatchAndReloadAll(key: "WebDemo:Sentinel", label: LabelFilter.Null);
            });

            settings = config.Build();
        })
        .UseStartup<Startup>()
        .Build();
}

For example, in the above code snippet, Azure App Configuration would be polled every 30 seconds for changes. These calls would be made regardless of whether the application was active, resulting in unnecessary network and CPU usage in inactive applications. Applications needed a way to trigger a refresh of the configuration on demand so that refreshes could be limited to active applications and unnecessary checks for changes could be avoided.

This timer-based watch mechanism had the following fundamental design flaws.

  1. It could not be invoked on-demand.
  2. It continued to run in the background even in applications that could be considered inactive.
  3. It promoted constant polling of configuration rather than a more intelligent approach of updating configuration when applications are active or need to ensure freshness.
     

New design: Activity-based refresh

The new refresh mechanism allows users to keep their configuration updated using a middleware to determine activity. As long as the ASP.NET Core web application continues to receive requests, the configuration settings continue to be updated from the configuration store.

The application can be configured to trigger refresh for each request by adding the Azure App Configuration middleware from package Microsoft.Azure.AppConfiguration.AspNetCore in your application’s startup code.

public void Configure(IApplicationBuilder app, IHostingEnvironment env)
{
    app.UseAzureAppConfiguration();
    app.UseMvc();
}

At the time of initialization of the configuration provider, the user can call the ConfigureRefresh method to register the configuration settings to be updated, along with an optional cache expiration time. If the cache expiration time is not specified, a default value of 30 seconds is used.

public static IWebHost BuildWebHost(string[] args)
{
    return WebHost.CreateDefaultBuilder(args)
        .ConfigureAppConfiguration((hostingContext, config) =>
        {
            // Load settings from Azure App Configuration
            // Set up the provider to listen for changes triggered by a sentinel value
            var settings = config.Build();
            string appConfigurationEndpoint = settings["AzureAppConfigurationEndpoint"];

            config.AddAzureAppConfiguration(options =>
            {
                options.ConnectWithManagedIdentity(appConfigurationEndpoint)
                    .Use(keyFilter: "WebDemo:*")
                    .ConfigureRefresh((refreshOptions) =>
                    {
                        // Indicates that all settings should be refreshed when the given key has changed
                        refreshOptions.Register(key: "WebDemo:Sentinel", label: LabelFilter.Null, refreshAll: true);
                    });
            });

            settings = config.Build();
        })
        .UseStartup<Startup>()
        .Build();
}

In order to keep the settings updated while avoiding unnecessary calls to the configuration store, an internal cache is used for each setting. Until the cached value of a setting has expired, the refresh operation does not update the value, even when the value has changed in the configuration store.
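If the default 30-second window is too short or too long for a given setting, the cache expiration can be adjusted where the setting is registered. The fragment below is a minimal sketch meant to sit inside the ConfigureAppConfiguration callback shown above; the SetCacheExpiration method name follows the preview provider and should be checked against the package version you use.

// Sketch: extend the cache expiration so the store is queried at most every 5 minutes.
// Intended as a drop-in change to the ConfigureRefresh call shown above; verify the
// exact method names against the Azure App Configuration provider version in use.
config.AddAzureAppConfiguration(options =>
{
    options.ConnectWithManagedIdentity(appConfigurationEndpoint)
        .Use(keyFilter: "WebDemo:*")
        .ConfigureRefresh((refreshOptions) =>
        {
            refreshOptions.Register(key: "WebDemo:Sentinel", label: LabelFilter.Null, refreshAll: true)
                          // Cached values are treated as fresh for 5 minutes; only after this
                          // window elapses will a request trigger a call to the store.
                          .SetCacheExpiration(TimeSpan.FromMinutes(5));
        });
});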

Try it now!

For more information about Azure App Configuration, check out the following resources. You can find step-by-step tutorials that will help you get started with dynamic configuration using the new refresh mechanism within minutes. Please let us know what you think by filing issues on GitHub.

Overview: Azure App configuration
Tutorial: Use dynamic configuration in an ASP.NET Core app
Tutorial: Use dynamic configuration in a .NET Core app
Related Blog: Configuring a Server-side Blazor app with Azure App Configuration



How Lucky Brand and its merchants are uncovering data-driven results with Azure VMware Solutions

Since announcing Azure VMware Solutions at Dell Technologies World this spring, we’ve been energized by the positive feedback we’ve received from our partners and customers who are beginning to move their VMware workloads to Azure. One of these customers is Lucky Brand, a leading retailer that is embracing digital transformation while staying true to its rich heritage. As part of their broader strategy to leverage the innovation possible in the cloud, Lucky Brand is transitioning several VMware workloads to Azure.

“We’re seeing great initial ROI with Azure VMware Solutions. We chose Microsoft Azure as our strategic cloud platform and decided to dramatically reduce our AWS footprint and 3rd Party co-located data centers. We have a significant VMware environment footprint for many of our on-premises business applications.

The strategy has allowed us to become more data driven and allow our merchants and finance analysts the ability to uncover results quickly and rapidly with all the data in a central cloud platform providing great benefits for us in the competitive retail landscape. Utilizing Microsoft Azure and VMware we leverage a scalable cloud architecture and VMware to virtualize and manage the computing resources and applications in Azure in a dynamic business environment.

Since May, we’ve been successfully leveraging these applications on the Azure VMware Solution by CloudSimple platform. We are impressed with the performance, ease of use and the level of support we have received by Microsoft and its partners.” 

Kevin Nehring, CTO, Lucky Brand

Expanding to more regions worldwide and adding new capabilities

Based on customer demand, we are excited to announce that we will expand Azure VMware Solutions to a total of eight regions across the US, Western Europe, and Asia Pacific by the end of the year.

In addition to expanding to more regions, we are continuing to add new capabilities to Azure VMware Solutions and deliver seamless integration with native Azure services. One example is how we’re expanding the supported Azure VMware Solutions storage options to include Azure NetApp Files by the end of the year. This new capability will allow IT organizations to more easily run storage intensive workloads on Azure VMware Solutions. We are committed to continuously innovating and delivering capabilities based on customer feedback.

Broadening the ecosystem

It is amazing to see the market interest in Azure VMware Solutions and the partner ecosystem building tools and capabilities that support Azure VMware Solutions customer scenarios.

RiverMeadow now supports capabilities to accelerate the migration of VMware environments to Azure VMware Solutions.

“I am thrilled about our ongoing collaboration with Microsoft. Azure VMware Solutions enable enterprise customers to get the benefit of cloud while still running their infrastructure and applications in a familiar, tried and trusted VMware environment. Add with the performance and cost benefits of VMware on Azure, you have a complete solution. I fully expect to see substantial enterprise adoption over the short term as we work with Microsoft’s customers to help them migrate even the most complex workloads to Azure.”

Jim Jordan, President and CEO, RiverMeadow

Zerto has integrated its IT Resilience Platform with Azure VMware Solutions, delivering replication and failover capabilities between Azure VMware Solution by CloudSimple, Azure and any other Hyper-V or VMware environments, keeping the same on-premises environment configurations, and reducing the impact of disasters, logical corruptions, and ransomware infections.

“Azure VMware Solution by CloudSimple, brings the familiarity and simplicity of VMware into Azure public cloud. Every customer and IT pro using VMware will be instantly productive with minimal or no Azure competency. With Zerto, VMware customers gain immediate access to simple point and click disaster recovery and migration capabilities between Azure VMware Solutions, the rest of Azure, and on-premises VMware private clouds. Enabled by Zerto, one of Microsoft’s top ISVs and an award-winning industry leader in VMware-based disaster recovery and cloud migration, delivers native support for Azure VMware Solutions. ”

Peter Kerr, Vice President of Global Alliances, Zerto

Veeam Backup & Replication™ software specializes in supporting VMware vSphere environments, and its solutions will help customers meet the backup demands of organizations deploying Azure VMware Solutions.

“As a leading innovator of Cloud Data Management solutions, Veeam makes it easy for our customers to protect their virtual, physical, and cloud-based workloads regardless of where those reside. Veeam’s support for Microsoft Azure VMware Solutions by CloudSimple further enhances that position by enabling interoperability and portability across multi-cloud environments. With Veeam Backup & Replication, customers can easily migrate and protect their VMware workloads in Azure as part of a cloud-first initiative, create an Azure-based DR strategy, or simply create new Azure IaaS instances – all with the same proven Veeam solutions they already use today.”  

Ken Ringdahl, Vice President of Global Alliances Architecture, Veeam Software

Join us at VMworld

If you plan to attend VMworld this week in San Francisco, stop by our booth and witness Azure VMware Solutions in action; or sit down for a few minutes and listen to one of our mini theater presentations addressing a variety of topics such as Windows Virtual Desktop, Windows Server, and SQL Server on Azure in addition to Azure VMware Solutions!

Learn more about Azure VMware Solutions.


Azure Archive Storage expanded capabilities: Faster, simpler, better

Since launching Azure Archive Storage, we have seen unprecedented interest and innovative usage from a variety of industries. Archive Storage is built as a scalable service for cost-effectively storing rarely accessed data for long periods of time. Cold data such as application backups, healthcare records, autonomous driving recordings, etc. that might have been previously deleted could be stored in Azure Storage’s Archive tier in an offline state, then rehydrated to an online tier when needed. Earlier this month, we made Azure Archive Storage even more affordable by reducing prices by up to 50 percent in some regions, as part of our commitment to provide the most cost-effective data storage offering.

We’ve gathered your feedback regarding Azure Archive Storage, and today, we’re happy to share three archive improvements in public preview that make our service even better.

1. Priority retrieval from Azure Archive

To read data stored in Azure Archive Storage, you must first change the tier of the blob to hot or cool. This process is known as rehydration and takes a matter of hours to complete. Today we’re sharing the public preview release of priority retrieval from archive allowing for much faster offline data access. Priority retrieval allows you to flag the rehydration of your data from the offline archive tier back into an online hot or cool tier as a high priority action. By paying a little bit more for the priority rehydration operation, your archive retrieval request is placed in front of other requests and your offline data is expected to be returned in less than one hour.

Priority retrieval is recommended to be used for emergency requests for a subset of an archive dataset. For the majority of use cases, our customers plan for and utilize standard archive retrievals which complete in less than 15 hours. But on rare occasions, a retrieval time of an hour or less is required. Priority retrieval requests can deliver archive data in a fraction of the time of a standard retrieval operation, allowing our customers to quickly resume business as usual. For more information, please see Blob Storage Rehydration.

The archive retrieval options, now provided via the optional rehydrate-priority parameter, are:

  • Standard rehydrate-priority is the new name for what Archive has provided over the past two years and is the default option for archive SetBlobTier and CopyBlob requests, with retrievals taking up to 15 hours.
  • High rehydrate-priority fulfills the need for urgent data access from archive, with retrievals for blobs under ten GB typically taking less than one hour.

Regional priority retrieval demand at the time of request can affect the speed at which your data rehydration is completed. In most scenarios, a high rehydrate-priority request may return your Archive data in under one hour. In the rare scenario where archive receives an exceptionally large amount of concurrent high rehydrate-priority requests, your request will still be prioritized over standard rehydrate-priority but may take one to five hours to return your archive data. In the extremely rare case that any high rehydrate-priority requests take over five hours to return archive blobs under a few GB, you will not be charged the priority retrieval rates.
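At the REST level, a priority rehydration is a Set Blob Tier call with the new x-ms-rehydrate-priority header added. The sketch below assumes a pre-generated SAS token with the required permissions and service version 2019-02-02; the account, container, and blob names are placeholders rather than a definitive implementation.

// Sketch: rehydrate an archived blob to the hot tier with high priority using the
// Set Blob Tier REST operation (x-ms-version 2019-02-02). A pre-generated SAS token
// with the necessary permissions is assumed; names are placeholders.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PriorityRehydrateSketch
{
    static async Task Main()
    {
        var sas = Environment.GetEnvironmentVariable("BLOB_SAS"); // e.g. "?sv=...&sig=..."
        var url = $"https://myaccount.blob.core.windows.net/backups/2017/archive.tar{sas}&comp=tier";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Put, url);
        request.Headers.Add("x-ms-version", "2019-02-02");
        request.Headers.Add("x-ms-access-tier", "Hot");          // rehydrate to the hot tier
        request.Headers.Add("x-ms-rehydrate-priority", "High");  // jump ahead of standard requests
        request.Content = new ByteArrayContent(Array.Empty<byte>()); // Set Blob Tier has an empty body

        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode); // 202 Accepted while the blob rehydrates offline
    }
}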

2. Upload blob direct to access tier of choice (hot, cool, or archive)

Blob-level tiering for general-purpose v2 and blob storage accounts allows you to easily store blobs in the hot, cool, or archive access tiers all within the same container. Previously when you uploaded an object to your container, it would inherit the access tier of your account and the blob’s access tier would show as hot (inferred) or cool (inferred) depending on your account configuration settings. As data usage patterns change, you would change the access tier of the blob manually with the SetBlobTier API or automate the process with blob lifecycle management rules.

Today we’re sharing the public preview release of Upload Blob Direct to Access tier, which allows you to upload your blob using PutBlob or PutBlockList directly to the access tier of your choice using the optional parameter x-ms-access-tier. This allows you to upload your object directly into the hot, cool, or archive tier regardless of your account’s default access tier setting. This new capability makes it simple for customers to upload objects directly to Azure Archive in a single transaction. For more information, please see Blob Storage Access Tiers.
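At the REST level, this is an ordinary Put Blob call with the optional x-ms-access-tier header. The sketch below uploads a block blob straight into the archive tier, again assuming a SAS token and service version 2019-02-02, with placeholder account, container, and file names.

// Sketch: upload a block blob straight into the archive tier with Put Blob and the
// optional x-ms-access-tier header (service version 2019-02-02); SAS token assumed.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class UploadToArchiveSketch
{
    static async Task Main()
    {
        var sas = Environment.GetEnvironmentVariable("BLOB_SAS");
        var url = $"https://myaccount.blob.core.windows.net/backups/2019/august.bak{sas}";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Put, url)
        {
            Content = new ByteArrayContent(await File.ReadAllBytesAsync("august.bak"))
        };
        request.Headers.Add("x-ms-version", "2019-02-02");
        request.Headers.Add("x-ms-blob-type", "BlockBlob");
        request.Headers.Add("x-ms-access-tier", "Archive"); // land directly in the archive tier

        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode); // 201 Created
    }
}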

3. CopyBlob enhanced capabilities

In certain scenarios, you may want to keep your original data untouched but work on a temporary copy of the data. This holds especially true for data in Archive that needs to be read but still kept in Archive. The public preview release of CopyBlob enhanced capabilities builds upon our existing CopyBlob API with added support for the archive access tier, priority retrieval from archive, and direct to access tier of choice.

The CopyBlob API is now able to support the archive access tier; allowing you to copy data into and out of the archive access tier within the same storage account. With our access tier of choice enhancement, you are now able to set the optional parameter x-ms-access-tier to specify which destination access tier you would like your data copy to inherit. If you are copying a blob from the archive tier, you will also be able to specify the x-ms-rehydrate-priority of how quickly you want the copy created in the destination hot or cool tier. Please see Blob Storage Rehydration and the following table for information on the new CopyBlob access tier capabilities.

Destination       Hot tier source    Cool tier source    Archive tier source
Hot tier          Supported          Supported           Supported within the same account; pending rehydrate
Cool tier         Supported          Supported           Supported within the same account; pending rehydrate
Archive tier      Supported          Supported           Unsupported
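Putting the pieces together, the sketch below copies an archived blob to a hot-tier copy within the same account by combining x-ms-copy-source with the tier and rehydrate-priority headers. It relies on the same SAS and service-version assumptions as the earlier sketches and is illustrative rather than a definitive implementation.

// Sketch: create a hot-tier copy of an archived blob within the same storage account
// using Copy Blob with x-ms-access-tier and x-ms-rehydrate-priority (2019-02-02).
using System;
using System.Net.Http;
using System.Threading.Tasks;

class CopyFromArchiveSketch
{
    static async Task Main()
    {
        var sas = Environment.GetEnvironmentVariable("BLOB_SAS");
        // The source must be readable too (shared key, public access, or its own SAS).
        var source = $"https://myaccount.blob.core.windows.net/archive/reports/2017.csv{sas}";
        var dest   = $"https://myaccount.blob.core.windows.net/working/2017-copy.csv{sas}";

        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Put, dest);
        request.Headers.Add("x-ms-version", "2019-02-02");
        request.Headers.Add("x-ms-copy-source", source);            // archived source blob
        request.Headers.Add("x-ms-access-tier", "Hot");             // tier for the new copy
        request.Headers.Add("x-ms-rehydrate-priority", "Standard"); // or "High" for urgent copies
        request.Content = new ByteArrayContent(Array.Empty<byte>());

        var response = await client.SendAsync(request);
        Console.WriteLine(response.StatusCode); // 202 Accepted; copy completes after rehydration
    }
}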

Getting Started

All of the features discussed today (upload blob direct to access tier, priority retrieval from archive, and CopyBlob enhancements) are supported by the most recent releases of the Azure Portal, .NET Client Library, Java Client Library, and Python Client Library. As always, you can also use the Storage Services REST API directly (version 2019-02-02 and greater). In general, we recommend using the latest version regardless of whether you are using these new features.

Build it, use it, and tell us about it!

We will continue to improve our Archive and Blob Storage services and look forward to hearing your feedback about these features through email at [email protected]. As a reminder, we love hearing all of your ideas and suggestions about Azure Storage, which you can post on the Azure Storage feedback forum.

Thanks, from the entire Azure Storage Team!


Azure Ultra Disk Storage: Microsoft’s service for the most I/O demanding workloads

Today, Tad Brockway, Corporate Vice President, Microsoft Azure, announced the general availability of Azure Ultra Disk Storage, an Azure Managed Disks offering that provides massive throughput with sub-millisecond latency for your most I/O demanding workloads. With the introduction of Ultra Disk Storage, Azure includes four types of persistent disks: Ultra Disk Storage, Premium SSD, Standard SSD, and Standard HDD. This portfolio gives you price and performance options tailored to meet the requirements of every workload. Ultra Disk Storage delivers consistent performance and low latency for I/O-intensive workloads like SAP HANA, OLTP databases, NoSQL, and other transaction-heavy workloads. Further, you can reach maximum virtual machine (VM) I/O limits with a single Ultra disk, without having to stripe multiple disks.

Durability of data is essential to business-critical enterprise workloads. To ensure we keep our durability promise, we built Ultra Disk Storage on our existing locally redundant storage (LRS) technology, which stores three copies of data within the same availability zone. Any application that writes to storage will receive an acknowledgement only after it has been durably replicated to our LRS system.

Below is a clip from a presentation I delivered at Microsoft Ignite demonstrating the leading performance of Ultra Disk Storage:


Microsoft Ignite 2018: Azure Ultra Disk Storage demo

Below are some quotes from customers in our preview program:

“With Ultra Disk Storage, we achieved consistent sub-millisecond latency at high IOPS and throughput levels on a wide range of disk sizes. Ultra Disk Storage also allows us to fine tune performance characteristics based on the workload.”

– Amit Patolia, Storage Engineer, DEVON ENERGY

“Ultra Disk Storage provides powerful configuration options that can leverage the full throughput of a VM SKU. The ability to control IOPS and MBps is remarkable.”

– Edward Pantaleone, IT Administrator, Tricore HCM

Inside Ultra Disk Storage

Ultra Disk Storage is our next generation distributed block storage service that provides disk semantics for Azure IaaS VMs and containers. We designed Ultra Disk Storage with the goal of providing consistent performance at high IOPS without compromising our durability promise. Hence, every write operation replicates to the storage in three different racks (fault domains) before being acknowledged to the client. Compared to Azure Premium Storage, Ultra Disk Storage provides its extreme performance without relying on Azure Blob storage cache, our on-server SSD-based cache, and hence it only supports un-cached reads and writes. We also introduced a new simplified client on the compute host that we call virtual disk client (VDC). VDC has full knowledge of virtual disk metadata mappings to disks in the Ultra Disk Storage cluster backing them. That enables the client to talk directly to storage servers, bypassing load balancers and front-end servers used for initial disk connections. This simplified approach minimizes the layers that a read or write operation traverses, reducing latency and delivering performance comparable to enterprise flash disk arrays.

Below is a figure comparing the different layers an operation traverses when issued on an Ultra disk versus a Premium SSD disk. The operation flows from the client to Hyper-V and then to the corresponding driver. For an operation on a Premium SSD disk, the request flows from the Azure Blob storage cache driver to the load balancers, front-end servers, and partition servers, then down to the stream layer servers, as documented in this paper. For an operation on an Ultra disk, the request flows directly from the virtual disk client to the corresponding storage servers.

Client virtual machine diagram

Comparison between the IO flow for Ultra Disk Storage versus Premium SSD Storage

One key benefit of Ultra Disk Storage is that you can dynamically tune disk performance without detaching your disk or restarting your virtual machines. Thus, you can scale performance along with your workload. When you adjust either IOPS or throughput, the new performance settings take effect in less than an hour.

Azure implements two levels of throttles that can cap disk performance: a “leaky bucket” VM-level throttle that is specific to each VM size (described in the documentation), and a new time-based throttle that is applied at the disk level. The disk-level throttle is a key benefit of Ultra Disk Storage, as it models the behavior of a disk for a given IOPS and throughput target more realistically. Hitting a leaky bucket throttle can cause erratic performance, while the new time-based throttle provides consistent performance even at the throttle limit. To take advantage of this smoother performance, set your disk throttles slightly below your VM throttle. We will publish another blog post in the future describing our new throttle system in more detail.

Available regions

Currently, Ultra Disk Storage is available in the following regions:

  • East US 2
  • North Europe
  • Southeast Asia

We will expand the service to more regions soon. Please refer to the FAQ for the latest on supported regions.

Virtual machine sizes

Ultra Disk Storage is supported on DSv3 and ESv3 virtual machine types. Additional virtual machine types will be supported soon. Refer to the FAQ for the latest on supported VM sizes.

Get started today

You can request onboarding to Azure Ultra Disk Storage by submitting an online request or by reaching out to your Microsoft representative. For general availability limitations refer to the documentation.


Azure SignalR Service now supports Event Grid!


Since we GA’ed Azure SignalR Service last September, serverless has become a very popular use case for Azure SignalR Service and is used by many customers. Unlike a traditional SignalR application, which requires a server to host the hub, in the serverless scenario no server is needed; instead, you can send messages directly to clients through REST APIs or our management SDK, which can easily be used in serverless code such as Azure Functions.

Though this saves you the cost of maintaining an app server, the feature set in the serverless scenario is limited. Since there is no real hub, it’s not possible to respond to client activities such as client invocations or connection events. Without client events, serverless use cases are limited, and we have heard many customers asking for this support. Today we’re excited to announce a new feature that enables Azure SignalR Service to publish client events to Azure Event Grid so that you can subscribe and respond to them.

How does it work?

Let’s first revisit how the serverless scenario in Azure SignalR Service works.

  1. In the serverless scenario, even though you don’t have an app server, you still need a negotiate API so the SignalR client can negotiate to get the URL of the SignalR service and a corresponding access token. Usually this is done with an Azure Function.

  2. The client then uses the URL and access token to connect to the SignalR service.

  3. After clients are connected, you can send messages to clients using REST APIs or the service management SDK. If you are using Azure Functions, our SignalR Service binding does the work for you, so you only need to return the messages as an output binding.

This flow is illustrated as step 1-3 in the diagram below:

Serverless workflow
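For step 1, the negotiate API can be an Azure Function as small as the sketch below, which uses the SignalRConnectionInfo input binding to hand the client the service URL and access token. The hub name "chat" and the function name are illustrative placeholders.

// Sketch: a serverless negotiate endpoint. The SignalRConnectionInfo input binding
// returns the service URL and an access token for the client. The hub name "chat"
// and the function name are illustrative placeholders.
using Microsoft.AspNetCore.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Extensions.SignalRService;

public static class NegotiateFunction
{
    [FunctionName("negotiate")]
    public static SignalRConnectionInfo Negotiate(
        [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
        [SignalRConnectionInfo(HubName = "chat")] SignalRConnectionInfo connectionInfo)
    {
        // Simply return the binding output; the client uses Url + AccessToken to connect.
        return connectionInfo;
    }
}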

What’s missing here is that there is no equivalent of OnConnected() and OnDisconnected() in the serverless APIs, so there is no way for the Azure Function to know whether a client has connected or disconnected.

Now with Event Grid you’ll be able to get such events through an Event Grid subscription (steps 4 and 5 in the diagram above):

  1. When a client connects to or disconnects from the SignalR service, the service publishes this event to Event Grid.

  2. In an Azure Function you can use an Event Grid trigger to subscribe to such events; Event Grid then sends those events to the function (through a webhook), as sketched below.
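In C#, such an Event Grid-triggered function might look like the following sketch. The code uses the standard EventGridTrigger binding; the SignalR connection event type names mentioned in the comments are assumptions to verify against the SignalR Service documentation.

// Sketch: an Event Grid-triggered function that receives SignalR Service connection
// events. The event type names in the comments are assumptions to verify against the
// SignalR Service Event Grid documentation.
using Microsoft.Azure.EventGrid.Models;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.EventGrid;
using Microsoft.Extensions.Logging;

public static class OnSignalRConnectionEvent
{
    [FunctionName("OnSignalRConnectionEvent")]
    public static void Run(
        [EventGridTrigger] EventGridEvent eventGridEvent,
        ILogger log)
    {
        // Expected event types (assumed): Microsoft.SignalRService.ClientConnectionConnected
        // and Microsoft.SignalRService.ClientConnectionDisconnected.
        log.LogInformation("SignalR event {type} received: {data}",
            eventGridEvent.EventType, eventGridEvent.Data);
    }
}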

How to use it?

It’s very simple to make your serverless application subscribe to SignalR connection events. Let’s use an Azure Function as an example.

  1. First you need to make sure your SignalR Service instance is in serverless mode. (Create a SignalR Service instance if you haven’t done so.)

    Enable serverless mode

  2. Create an Event Grid trigger in your function app.

    Create Event Grid trigger

  3. In the Event Grid trigger, add an Event Grid subscription.

    Add Event Grid Subscription

    Then select your SignalR Service instance.

    Select SignalR Service instance

Now you’re all set! Your function app is now able to get connection events from SignalR Service.

To test it, you just need to open a SignalR connection to the service. You can use the SignalR client in our sample repo, which contains a simple negotiate API implementation.

  1. Clone AzureSignalR-samples repo.

  2. Start the sample negotiation server.

    cd samples\Management\NegotiationServer
    set Azure__SignalR__ConnectionString=<connection_string>
    dotnet run
    
  3. Run SignalR client.

    cd samples\Management\SignalRClient
    dotnet run
    

    Open the function logs in Azure portal and you’ll see a connected event is sent to the function:

    Azure Function output

    If you stop the client you’ll also see a disconnected event is received.

Try it now!

This feature is now in public preview, so feel free to try it out and let us know your feedback by filing issues on GitHub.

For more information about how to use Event Grid with SignalR Service, you can read this article or try this sample.

Ken Chen

Principal Software Engineering Manager



Azure and Informatica team up to remove barriers for cloud analytics migration

Today, we are announcing the most comprehensive and compelling migration offer available in the industry to help customers simplify their cloud analytics journey.

This collaboration between Microsoft and Informatica provides customers an accelerated path for their digital transformation. As customers modernize their analytics systems, it enables them to truly begin integrating emerging technologies, such as AI and machine learning, into their business. Without migrating analytics workloads to the cloud, it becomes difficult for customers to maximize the potential their data holds.

For customers that have been tuning analytics appliances for years, such as Teradata and Netezza, it can seem overwhelming to start the journey towards the cloud. Customers have invested valuable time, skills, and personnel to achieve optimal performance from their analytics systems, which contain the most sensitive and valuable data for their business. We understand that the idea of migrating these systems to the cloud can seem risky and daunting. This is why we are partnering with Informatica to help customers begin their cloud analytics journey today with an industry-leading offer.

Free evaluation

With this offering, customers can now work with Azure and Informatica to easily understand their current data estate, determine what data is connected to their current data warehouse, and replicate tables without moving any data in order to conduct a robust proof of value.

This enables customers to get an end-to-end view of their data, execute a proof of value without disrupting their existing systems, and quickly see the possibilities of moving to Azure.

Free code conversion

A critical aspect of migrating on-premises appliances to the cloud is converting existing schemas to take advantage of cloud innovation. This conversion can quickly become expensive, even in proofs of value.

With this joint offering from Azure and Informatica, customers receive free code conversion for both the proof of value phase and when fully migrating to the cloud, as well as a SQL Data Warehouse subscription for the duration of the proof of value (up to 30 days).

Hands-on approach

Both Azure and Informatica are dedicating the personnel and resources to have analytics experts on-site helping customers as they begin migrating to Azure.

Customers that qualify for this offering will have full support from Azure SQL Data Warehouse experts, who will help with the initial assessment, execute the proof of value, and provide best-practice guidance during migration.

Everything you need to start your cloud analytics journey

Image of table displaying Azure and Informatica proof of value

Get started today

Analytics in Azure is up to 14 times faster and costs 94 percent less than other cloud providers, and is the leader in both the TPC-H and TPC-DS industry benchmarks. Now with this joint offer, customers can easily get started on their cloud analytics journey.

Button image to sign up for Azure and Informatica joint offer


How NV Play is shaping the future of cricket – and making the global game more approachable

Every summer, Andy Nott spends his Saturdays on a verdant cricket field with his local team from Calne, a picturesque town in southwest England. But instead of batting and fielding balls, Nott is live-scoring his team’s seven-hour matches, often from inside a scorebox that looks like a garden shed. The job, though largely hidden, is a key role in the game, requiring intense focus and meticulous record-keeping.

To score a match, Nott brings colored pens, a paper scorebook, binoculars, water, a fan or heater, and a laptop connected to NV Play, an innovative, cloud-based cricket scoring and analytics solution launched last year. Built by NV Interactive, a digital agency in New Zealand, the platform has enabled more than 2,000 U.K. recreational teams like Calne Cricket Club to produce professional-grade livestreams that make the sport more engaging to follow. Calne doesn’t yet have the budget for video, but its live scores are deeply appreciated by fans.

“We’ve heard from people lying on a beach somewhere in Spain keeping abreast of a game by looking at the internet and seeing what’s happening,” says Nott, who learned to score cricket 15 years ago on paper, a method he still uses while scoring digitally. “The technology and live scoring make the game more appealing to a wider audience.”

screen image of NV Play user interface
NV Play solution. (Image courtesy of NV Interactive)

For NV Interactive, building NV Play was a way to democratize technology in a favorite sport of product director Matt Smith, chairman Geoff Cranko and brothers Matt Pickering, managing director, and Gus Pickering, technical director. Previously, the company had developed elite cricket solutions for 15 years, beginning with a scoring platform in 2005 for ESPNcricinfo, a global cricket news site, still in use today.

NV then went on to build digital tools for first-class and national teams in the U.K. and New Zealand, also still in use today. For NV Play, the company integrated the same advanced capabilities into a single platform that can scale from serving small, recreational teams to the highest levels of professional cricket. Built on Microsoft Azure, the flexible solution features live scoring, live video, video highlights, ball-by-ball statistics, high-performance analytics and predictive insights – all to help teams grow and shape the future of cricket.

“Historically, cricket has been scored on pen and paper,” says Smith, who grew up playing the sport. “Depending on the level of the game, the scores might end up on a spreadsheet that you share around or on a website, if you’re lucky.

“We’re making all the things typically available only to massive cricket clubs with high-performance budgets available to all levels of the game, from grassroots through to the elite.”

Even for famous, professional clubs like Middlesex Cricket in London, NV Play has enabled new ways to serve fans and new opportunities for growth. The platform helps the club deliver high-quality video livestreams and video highlights of key moments, which delight fans who can’t attend the club’s four-day matches. The features also serve Middlesex’s many global fans from Australia to India to South Africa.

“We’ve had a lot of feedback from people who enjoy having an open window on their laptop and being able to duck in and out of a game and get a real feel for what’s going on, without being here in the stadium,” says Rob Lynch, chief operating officer of Middlesex Cricket, which plays at London’s iconic Lord’s Cricket Ground venue.

green cricket field with players and historic building in background
Middlesex Cricket first XI men’s team at Lord’s Cricket Ground in London. (Photo courtesy of Middlesex Cricket)

The platform helped Middlesex livestream video of its professional women’s matches for the first time this year. It has also increased engagement on the club’s website, leading to new monetization opportunities.

“We work hard to keep our website a living, breathing organism and not something that goes dormant and out of date quickly,” Lynch says. “By incorporating NV Play, we’ve seen a significant increase in traffic to our site.”

Building the solution with Azure DevOps, NV Interactive worked with the England and Wales Cricket Board to deliver the solution in the U.K., where it’s branded as Play-Cricket Scorer Pro. An NV Play partnership with New Zealand Cricket soon followed. To date, the platform has scored more than 30,000 matches involving 90,000 players and captured more than 24 terabytes of video.

It has coded metadata for 18 million balls bowled, including details on batter, bowler, type of hit, runs scored, weather and pitch conditions. In any given week, NV Play is livestreaming scores and video of more than 750 simultaneous matches to an audience of over 2 million people. The scalability of Azure is crucial for handling the enormous usage spikes.